Many of you will remember our blog post about RIDL, FALLOUT and ZombieLoad back in 2019, hot on the heels of Spectre and Meltdown from 2018; all of these attacks relate to abusing the hardware inside your computer: the CPU.
The latest exploit is called CacheOut, and it is a bit more sophisticated than, but builds on, the previously listed vulnerabilities. It was discovered by Stephan van Schaik, Marina Minkin, Andrew Kwong and Daniel Genkin, all from the University of Michigan, alongside Yuval Yarom of the University of Adelaide. The new attack is more than just a Microarchitectural Data Sampling (MDS) attack: it leverages cache buffers to leak information closer to what is actually coveted, rather than waiting for information to leak naturally. It also shows that the mitigations in place for the other MDS-based attacks are not complete and require a bit more thought (patches for this vulnerability have already been released, so make sure you update your OS and hypervisors). The vulnerability is listed under CVE-2020-0549.
First, let's deal with the burning question: should you be concerned about your computers? Well, possibly, unless you are lucky enough to have a CPU made after Q4 2018. A list of affected products can be found here:
As you can see, this issue does not affect any AMD processors as yet, and although I am sure researchers will be looking closely at ARM and IBM processors with TSX-like feature sets, these are not listed as affected either.
This vulnerability is of particular concern because it allows data to be exfiltrated from systems that are using Intel's Software Guard Extensions (SGX), an isolation feature set that creates a secure data enclave, providing a degree of protection even against compromised operating systems and hypervisors.
How it works is a bit tricky. We try to write our blog posts to be easy to understand whilst giving enough technical information. If you really want to get down and dirty with the technical details, I strongly suggest you read the paper by van Schaik et al. and work through it. For everyone else, here is a high-level overview:
To get maximum performance out of a modern CPU, two features are used:
- Speculative execution: essentially, the CPU performs work it thinks might be needed, so if a process suddenly requires it, the CPU is already halfway there; if the process doesn't require it, the result simply gets dropped
- Out-of-order execution: basically, the CPU performs work out of program order in free cycles and then uses the results later
These two functions are what the original vulnerabilities took advantage of, and they were mostly mitigated by patches in the form of Kernel Page Table Isolation (KPTI). Intel also reworked some of their architecture to avoid the issues.
The newest set of MDS vulnerabilities take advantage of relatively unknown (and mostly undocumented) buffers within the CPU. These buffers can be forced to dump their contents via 'assisting' or 'faulting' loads that bypass the address and permission checks. As Intel already saw the problem coming, they decided to reuse a legacy 32-bit CPU instruction called VERW (Verify a Segment for Writing) to overwrite the contents of the buffers with a constant set of data. But obviously that hasn't fully worked, or we wouldn't be writing about it.
Buffer leaks like this are slow and inefficient: you can't choose what you want to leak, you just have to kinda wait it out and see what good stuff comes your way.
CacheOut allows the attacker to bypass the VERW overwrite mechanism and also to select which cache sets to read. Because the L1 cache is not flushed when the security domain changes, the attack is possible even when Hyper-Threading is disabled, and even against SGX enclaves.
A little background for those that need it. CPUs utilise caches, tiny amounts of very fast memory, for storing and moving data between different areas. CPUs tend to have multiple caches; CacheOut mainly looks at one called the L1-D cache, which stores the data a program is using. There is one L1-D cache for every core in the CPU. Caches are generally set up to form a kind of pathway: the normal flow of data goes from the L1-D cache to the core, then via the Line Fill Buffers to the L2 cache. However, it is possible to bypass the core and go from the L1-D cache straight to the Line Fill Buffers. This pathway exists so that if there is a miss in the L1-D cache, the Line Fill Buffers can quickly fill it with the needed data.
CacheOut works like this... Data is evicted from the L1 cache to the L2 cache via the Line Fill Buffers, where it remains until overwritten; faulting or assisting loads can therefore be used to recover the data. Forcing the L1 cache to move its data to the L2 cache (via the Line Fill Buffers) bypasses the VERW overwriting, and means that CacheOut can also effectively select which data to read. (There are some other helpful bits of data in the Line Fill Buffers that aid this too.) This gives CacheOut control over part of the leaked address (the 12 least significant bits); pretty much surgical precision for data leakage like this.
The good news is that the speed is pretty poor; this blog post, for example, would take roughly half an hour to pull out using the techniques laid out here. The bad news is that it's almost impossible to notice an attack like this being performed.
If you are interested, the paper published by van Schaik et al can be read here.
If you are confused by the title of this blog post, it relates to an old meme from an episode of Dr Phil - catch the significant part here