This year, TLBleed will be presented at Black Hat USA. TLBleed is a new side-channel attack that exploits the TLB, rather than the CPU caches, to infer the activity of a co-resident hyperthread; we have not yet released the full details.
GLitch, our JavaScript-based Rowhammer exploit that abuses GPU acceleration to trigger bit flips and take control of the Firefox browser on Android, made the news. After honoring the 90-day disclosure policy, we went live on May 3 and released the full details of our attack.
This year, VUSec had two papers accepted at USENIX Security ’18: Malicious Management Unit (how to use the MMU to mount indirect cache attacks and bypass software-based defenses) and TLBleed (how to mount TLB side-channel attacks across threads and leak fine-grained information).
Network infrastructure attacks are a growing threat, and are addressed by a budding VUSec research project.
KPN recently published the fifth European Cyber Security Perspectives – edition 2018. It features an article detailing an early version of an active VUSec research project called Packet Origin Fidelity (POF), a method for detecting network infrastructure attacks.
On the 13th of March, Herbert Bos appeared on RTL Nieuws to summarize these findings. He appears briefly at the 7-second mark, and again at 3m17s (together with Sebastian, Marco, and Sanjay, who did the heavy lifting for the analysis, along with Andrei).
Surprisingly, Minister Ollongren does not think there is a problem, even though we found vulnerabilities as serious as integer overflows that allow attackers to manipulate the overall results, even from a single compromised local polling station.
Several days ago, we released a technical report entitled Benchmarking Crimes: An Emerging Threat in Systems Security. We originally submitted the paper to security conferences, where it was rejected at multiple venues. So that the community can build on our evidence and analysis, we have released the work as a technical report and published it on arXiv.org.
The results are as revealing as they are damning: we formulate 22 different benchmarking crimes, each of which undermines the validity of a benchmark's results to a lesser or greater degree. We survey 50 systems security defense papers, including papers published by our own group. To gauge reliability, two independent readers performed the survey, and their findings are consistent: in this wide study of papers accepted at top systems security venues, every paper committed benchmarking crimes, varying in number and egregiousness.
Most of these are recent papers (2015), but a significant fraction are from 2010. This longitudinal component of the study tells us not only that benchmarking crimes are widespread, but also that the situation is no better in recent papers than in older ones.
This raises the question of how much we can trust the benchmark results reported in research. We hope our work will contribute to improving this situation.
Recently (announcement here), Kaveh and Ben gave a talk at Hardwear.io about whether we can trust the abstractions we rely on when programming applications and kernels. The talk combines two very different projects, Flip Feng Shui (rogue memory writes) and AnC (an ASLR-breaking side channel that leaks secrets), into a single assumptions-challenging narrative.