
Leaderboard

Popular Content

Showing content with the highest reputation on 06/02/19 in all areas

  1. HiddenWasp Malware Stings Targeted Linux Systems
Ignacio Sanmillan | 29.05.19 | 1:36 pm

Overview

• Intezer has discovered a new, sophisticated malware that we have named "HiddenWasp", targeting Linux systems.
• The malware is still active and has a zero-detection rate in all major anti-virus systems.
• Unlike common Linux malware, HiddenWasp is not focused on crypto-mining or DDoS activity. It is a trojan used purely for targeted remote control.
• Evidence shows with high probability that the malware is used in targeted attacks against victims who are already under the attacker's control, or who have gone through heavy reconnaissance.
• HiddenWasp's authors have adopted a large amount of code from various publicly available open-source malware, such as Mirai and the Azazel rootkit. In addition, there are some similarities between this malware and other Chinese malware families; however, the attribution is made with low confidence.
• We have detailed our recommendations for preventing and responding to this threat.

1. Introduction

Although the Linux threat ecosystem is crowded with IoT DDoS botnets and crypto-mining malware, it is not very common to spot trojans or backdoors in the wild. Unlike Windows malware authors, Linux malware authors do not seem to invest much effort in writing their implants. In an open-source ecosystem there is a high ratio of publicly available code that can be copied and adapted by attackers. In addition, anti-virus solutions for Linux tend not to be as resilient as on other platforms. Therefore, threat actors targeting Linux systems are less concerned with implementing excessive evasion techniques, since even when reusing extensive amounts of code, threats can manage to stay relatively under the radar.

Nevertheless, malware with strong evasion techniques does exist for the Linux platform. There is also a high ratio of publicly available open-source malware that utilizes strong evasion techniques and can be easily adapted by attackers. We believe this fact is alarming for the security community, since many implants today have very low detection rates, making these threats difficult to detect and respond to.

We have discovered further undetected Linux malware that appears to enforce advanced evasion techniques, using rootkits to leverage trojan-based implants. In this blog we will present a technical analysis of each of the different components that this new malware, HiddenWasp, is composed of. We will also highlight interesting code-reuse connections that we have observed to several open-source malware projects. The following images are screenshots from VirusTotal of the newer undetected malware samples discovered:

2. Technical Analysis

When we came across these samples we noticed that the majority of their code was unique. Similar to the recent Winnti Linux variants reported by Chronicle, the infrastructure of this malware is composed of a user-mode rootkit, a trojan and an initial deployment script. We will cover each of the three components in this post, analyzing them and their interactions with one another.

2.1 Initial Deployment Script:

When we spotted these undetected files in VirusTotal, it seemed that among the uploaded artifacts there was a bash script along with a trojan implant binary. We observed that these files were uploaded to VirusTotal using a path containing the name of a Chinese-based forensics company known as Shen Zhou Wang Yun Information Technology Co., Ltd.
Furthermore, the malware implants seem to be hosted on servers belonging to a physical server hosting company known as ThinkDream, located in Hong Kong. Among the uploaded files, we observed that one of them was a bash script meant to deploy the malware itself onto a given compromised system, although it appears to be for testing purposes:

Thanks to this file we were able to download further artifacts related to this campaign that are not present in VirusTotal. The script starts by defining a set of variables that are used throughout the script. Among these variables we can spot the credentials of a user named 'sftp', including its hardcoded password. This user seems to be created as a means of providing initial persistence on the compromised system:

Furthermore, after the system's user account has been created, the script proceeds to clean the system in order to update older variants if the system was already compromised:

The script then downloads a tar-compressed archive from a download server, according to the architecture of the compromised system. This tarball contains all of the malware's components: the rootkit, the trojan and an initial deployment script:

After the malware components have been installed, the script proceeds to execute the trojan:

We can see that the main trojan binary is executed, the rootkit is added to the LD_PRELOAD path, and a series of other environment variables are set, such as 'I_AM_HIDDEN'. We will cover the role of this environment variable throughout this post. To finalize, the script attempts to install reboot persistence for the trojan binary by adding it to /etc/rc.local.

Within this script we were able to observe that the main implants were downloaded in the form of tarballs. As previously mentioned, each tarball contains the main trojan, the rootkit and a deployment script, for x86 and x86_64 builds accordingly. The deployment script offers interesting insights into further features that the malware implements, such as the introduction of a new environment variable, 'HIDE_THIS_SHELL':

We found some of these environment variables used in an open-source rootkit known as Azazel. It seems that this actor changed Azazel's default environment variable, swapping HIDE_THIS_SHELL for I_AM_HIDDEN. We base this conclusion on the fact that the environment variable HIDE_THIS_SHELL was not used throughout the rest of the malware's components and appears to be a residual remnant of Azazel's original code. The majority of the code in the rootkit implants involved in this malware's infrastructure is noticeably different from the original Azazel project. Winnti Linux variants are also known to have reused code from this open-source project.

2.2 The Rootkit:

The rootkit is a user-space rootkit enforced via the LD_PRELOAD Linux mechanism. It is delivered in the form of a stripped ET_DYN ELF binary. This shared object has a DT_INIT dynamic entry. The value held by this entry is an address that will be executed once the shared object gets loaded by a given process:

Within this function we can see that control flow eventually falls into a function in charge of resolving a set of dynamic imports (the functions it will later hook), along with decoding a series of strings needed for the rootkit's operations. For each string it allocates a new dynamic buffer, copies the string into it, and then decodes it.
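To make the LD_PRELOAD hooking technique described here more concrete, below is a minimal, hypothetical sketch of how a user-space rootkit of this kind typically wraps a libc function. This is not HiddenWasp's actual code; the environment-variable name and the filtered path come from the analysis above, everything else is illustrative:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pointer to the real fopen, resolved once when the object is loaded. */
static FILE *(*real_fopen)(const char *path, const char *mode);

/* Constructor (the shared object's DT_INIT entry): runs as soon as the
   library is loaded into a process via LD_PRELOAD. */
__attribute__((constructor))
static void init_hooks(void)
{
    real_fopen = (FILE *(*)(const char *, const char *))dlsym(RTLD_NEXT, "fopen");
}

/* Exported hook: every fopen() call made by the host process lands here. */
FILE *fopen(const char *path, const char *mode)
{
    /* Only apply evasion when the trojan's session marker is present. */
    if (getenv("I_AM_HIDDEN") != NULL && strcmp(path, "/proc/net/tcp") == 0) {
        /* A real rootkit would hand back a filtered copy of this file,
           dropping entries for its C2 port (61061 in this campaign).
           This sketch simply falls through to the real function. */
    }
    return real_fopen(path, mode);
}
```

Built as a shared object (for example with gcc -shared -fPIC -ldl) and forced into processes via LD_PRELOAD or a patched preload path, a wrapper like this runs inside every dynamically linked program, which is exactly the property the deployment script relies on.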
It seems that the implementation of dynamic import resolution varies slightly from the one used in the Azazel rootkit. When we wrote a script to simulate the cipher implemented by the string-decoding function, we observed the following algorithm:

We recognized that a similar algorithm was used in the past by Mirai, implying that the authors behind this rootkit may have ported and modified some code from Mirai.

After the rootkit's main object has been loaded into the address space of a given process and has decrypted its strings, it exports the functions that are intended to be hooked. We can see these exports to be the following:

For every given export, the rootkit will hook and implement a specific operation accordingly, although they all have a similar layout. Before the original hooked function is called, it checks whether the environment variable 'I_AM_HIDDEN' is set:

We can see an example of how the rootkit hooks the function fopen in the following screenshot:

We have observed that after checking whether the 'I_AM_HIDDEN' environment variable is set, it runs a function to hide all of the rootkit's and trojan's artifacts. In addition, specifically for the fopen function, it also checks whether the file being opened is '/proc/net/tcp'; if it is, it attempts to hide the malware's connection to the CNC by scanning every entry for the destination or source port used to communicate with the CNC, in this case 61061. This is also the default port in the Azazel rootkit. The rootkit primarily implements artifact-hiding mechanisms as well as TCP connection hiding, as previously mentioned. The overall functionality of the rootkit can be illustrated in the following diagram:

2.3 The Trojan:

The trojan comes in the form of a statically linked ELF binary, linked with libstdc++. We noticed that the trojan has code connections with ChinaZ's Elknot implant, owing to a common MD5 implementation in one of the libraries it was statically linked with:

In addition, we also see a high rate of shared strings with other known ChinaZ malware, reinforcing the possibility that the actors behind HiddenWasp may have integrated and modified some MD5 implementation from Elknot that could have been shared on Chinese hacking forums:

When we analyzed the main function we noticed that the first action the trojan takes is to retrieve its configuration:

The malware configuration is appended at the end of the file and has the following structure:

The malware will try to load itself from disk and parse this blob in order to retrieve the static encrypted configuration. Once the encrypted configuration has been successfully retrieved, it is decoded and then parsed as JSON. The cipher used to encode and decode the configuration is the following:

This cipher appears to be an RC4-like algorithm with an already-computed, PRGA-generated key-stream. It is important to note that this same cipher is also used later in the network communication protocol between trojan clients and their CNCs.
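As a rough illustration of how a cipher with a pre-computed key-stream decodes data, here is a hedged sketch. The key-stream bytes are placeholders, not the table recovered from the samples, and the offset parameter simply models starting the XOR at an arbitrary position in that table:

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholder key-stream: in the real samples this is a long, hardcoded
   table that was generated in advance by an RC4-style PRGA. */
static const uint8_t keystream[] = {
    0x3a, 0x91, 0x5c, 0x07, 0xee, 0x42, 0xb8, 0x1d
};

/* XOR `len` bytes in place against the key-stream, starting at `offset`.
   Because XOR is its own inverse, the same routine both encodes and
   decodes, which is why one cipher can serve the configuration blob and
   the network traffic alike. */
static void ks_crypt(uint8_t *buf, size_t len, size_t offset)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= keystream[(offset + i) % sizeof(keystream)];
}
```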
After the configuration is decoded, the following JSON is retrieved:

Moreover, if the file is running as root, the malware will attempt to change the default location of the dynamic linker's LD_PRELOAD path. This location is usually /etc/ld.so.preload; however, it is always possible to patch the dynamic linker binary to change this path. The Patch_ld function will scan for any existing /lib paths. The scanned paths are the following:

The malware will attempt to find the dynamic linker binary within these paths. The dynamic linker's filename is usually prefixed with ld-<version number>. Once the dynamic linker is located, the malware will find the offset where the /etc/ld.so.preload string is located within the binary and overwrite it with the new target preload path, /sbin/.ifup-local. To achieve this patching it executes the following formatted string using the xxd hex-editor utility, having previously hex-encoded the path of the rootkit:

Once it has changed the default LD_PRELOAD path in the dynamic linker, it deploys a thread to enforce that the rootkit is successfully installed using the new LD_PRELOAD path. In addition, the trojan communicates with the rootkit via the environment variable 'I_AM_HIDDEN' to signal the trojan's session, so that the rootkit applies its evasion mechanisms to any other sessions. Having seen the rootkit's functionality, we can understand that the rootkit and trojan work together to help each other remain persistent on the system, with the rootkit attempting to hide the trojan and the trojan enforcing that the rootkit remains operational. The following diagram illustrates this relationship:

Continuing with the execution flow of the trojan, a series of functions are executed to enforce evasion of some artifacts. These artifacts are the following:

By performing some OSINT on these artifact names, we found that they belong to a Chinese open-source rootkit for Linux known as Adore-ng, hosted on GitHub:

The fact that these artifacts are being searched for suggests that the Linux systems targeted by these implants may have already been compromised with some variant of this open-source rootkit as an additional artifact in this malware's infrastructure. Although those paths are searched for in order to hide their presence on the system, it is important to note that none of the analyzed artifacts related to this malware are installed in such paths. This finding may imply that the systems this malware aims to intrude are already-known compromised targets of the same group, or of a third party that may be collaborating toward the same end goal of this particular campaign.

Moreover, the trojan communicates over a simple TCP-based network protocol. We can see that when a connection is established to the Master or Stand-By servers, there is a handshake mechanism involved in order to identify the client. With the help of this function we were able to understand the structure of the communication protocol employed. We can illustrate the structure of this communication protocol by looking at a pcap of the initial handshake between the server and the client:

While analyzing this protocol we noticed that the Reserved and Method fields are always constant, 0 and 1 respectively. The cipher table offset represents the offset into the hardcoded key-stream with which the encrypted payload was encoded. The following is the fixed key-stream this field refers to:

After decrypting the traffic and analyzing some of the network-related functions of the trojan, we noticed that the communication protocol is also implemented in JSON format.
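The handshake layout described above can be summarized with a small, hypothetical header sketch. The field names come from the description (and from the pcap shown in the original post); the field widths are guesses, not values taken from the binary:

```c
#include <stdint.h>

/* Hypothetical message header, inferred from the analysis above:
   Reserved is always 0, Method is always 1, and the cipher table offset
   tells the receiver where in the fixed key-stream to start decoding the
   encrypted JSON payload that follows. Field widths are assumptions. */
struct hiddenwasp_msg_hdr {
    uint8_t  reserved;            /* observed constant: 0 */
    uint8_t  method;              /* observed constant: 1 */
    uint16_t cipher_table_offset; /* offset into the hardcoded key-stream */
    /* encrypted JSON payload follows */
};
```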
The following image shows the decrypted handshake packets between the CNC and the trojan, confirming the JSON format:

After the handshake is completed, the trojan will proceed to handle CNC requests. Depending on the given request, the malware performs different operations accordingly. An overview of the trojan's functionalities performed through request handling is shown below:

3. Prevention and Response

Prevention: Block the Command-and-Control IP addresses detailed in the IOCs section.

Response: We have provided a YARA rule intended to be run against in-memory artifacts in order to detect these implants. In addition, to check whether your system is infected, you can search for "ld.so" files -- if any of these files do not contain the string '/etc/ld.so.preload', your system may be compromised. This is because the trojan implant will attempt to patch instances of ld.so in order to enforce the LD_PRELOAD mechanism from arbitrary locations.

4. Summary

We analyzed every component of HiddenWasp, explaining how the rootkit and trojan implants work in parallel with each other to enforce persistence on the system. We have also covered how the different components of HiddenWasp have adapted pieces of code from various open-source projects. Nevertheless, these implants managed to remain undetected.

Linux malware may introduce new challenges for the security community that we have not yet seen on other platforms. The fact that this malware manages to stay under the radar should be a wake-up call for the security industry to allocate greater efforts or resources to detect these threats. Linux malware will continue to become more complex over time; currently even common threats do not have high detection rates, while more sophisticated threats have even lower visibility.

IOCs

103.206.123[.]13
103.206.122[.]245
http://103.206.123[.]13:8080/system.tar.gz
http://103.206.123[.]13:8080/configUpdate.tar.gz
http://103.206.123[.]13:8080/configUpdate-32.tar.gz
e9e2e84ed423bfc8e82eb434cede5c9568ab44e7af410a85e5d5eb24b1e622e3
f321685342fa373c33eb9479176a086a1c56c90a1826a0aef3450809ffc01e5d
d66bbbccd19587e67632585d0ac944e34e4d5fa2b9f3bb3f900f517c7bbf518b
0fe1248ecab199bee383cef69f2de77d33b269ad1664127b366a4e745b1199c8
2ea291aeb0905c31716fe5e39ff111724a3c461e3029830d2bfa77c1b3656fc0
d596acc70426a16760a2b2cc78ca2cc65c5a23bb79316627c0b2e16489bf86c0
609bbf4ccc2cb0fcbe0d5891eea7d97a05a0b29431c468bf3badd83fc4414578
8e3b92e49447a67ed32b3afadbc24c51975ff22acbd0cf8090b078c0a4a7b53d
f38ab11c28e944536e00ca14954df5f4d08c1222811fef49baded5009bbbc9a2
8914fd1cfade5059e626be90f18972ec963bbed75101c7fbf4a88a6da2bc671b

By Ignacio Sanmillan

Nacho is a security researcher specializing in reverse engineering and malware analysis. Nacho plays a key role in Intezer's malware hunting and investigation operations, analyzing and documenting new undetected threats. Some of his latest research involves detecting new Linux malware and finding links between different threat actors. Nacho is an adept ELF researcher, having written numerous papers and conducted projects implementing state-of-the-art obfuscation and anti-analysis techniques in the ELF file format.

Sursa: https://www.intezer.com/blog-hiddenwasp-malware-targeting-linux-systems/
    1 point
  2. How WhatsApp was Hacked by Exploiting a Buffer Overflow Security Flaw

Try it Yourself

WhatsApp has been in the news lately following the discovery of a buffer overflow flaw. Read on to experience just how it happened and try out hacking one yourself.

10 MINUTE READ

WhatsApp entered the news early last week following the discovery of an alarming targeted security attack, according to the Financial Times. WhatsApp, famously acquired by Facebook for $19 billion in 2014, is the world's most popular messaging app, with 1.5 billion monthly users from 180 countries, and has always prided itself on being secure. Below, we'll explain what went wrong technically, and teach you how you could hack a similar memory corruption vulnerability. Try out the hack

First, what's up with WhatsApp security?

WhatsApp has been a popular communication platform for human rights activists and other groups seeking privacy from government surveillance, due to the company's early stance on providing strong end-to-end encryption for all of its users. This means, in theory, that only the WhatsApp users involved in a chat are able to decrypt those communications, even if someone were to hack into the systems running at WhatsApp Inc. (a property called forward secrecy). An independent audit by academics in the UK and Canada found no major design flaws in the underlying Signal messaging protocol deployed by WhatsApp. We suspect that the company's security eminence and focus on baking in privacy comes from the strong security mindset of WhatsApp founder Jan Koum, who grew up as a hacker in the w00w00 hacker clan in the 1990s.

But WhatsApp was then used for … surveillance?

WhatsApp's reputation as a secure messaging app and its popularity amongst activists made the report of a third-party company stealthily offering turn-key targeted surveillance against WhatsApp's Android and iPhone users all the more disconcerting. The company in question, the notorious and secretive Israeli firm NSO Group, is likely an offshoot of Unit 8200, which was allegedly responsible for the Stuxnet cyberattack against the Iranian nuclear enrichment program. NSO has recently been under fire for licensing its advanced Pegasus spyware to foreign governments, and for allegedly aiding the Saudi regime in spying on the journalist Jamal Khashoggi. The severe accusations prompted the NSO co-founder and CEO to give a rare interview with 60 Minutes about the company and its policies. Facebook is now considering legal options against NSO.

The initial fear was that the end-to-end encryption of WhatsApp had been broken, but this turned out not to be the case.

So what went wrong?

Instead of attacking the encryption protocols used by WhatsApp, the NSO Group attacked the mobile application code itself. Following the adage that a chain is never stronger than its weakest link, reasonable attackers avoid spending resources on decrypting the communications of their target if they can instead simply hack the device and grab the private encryption keys themselves. In fact, hacking an endpoint device reveals all the chats and dialogs of the target and provides a perfect vantage point for surveillance. This strategy is well known: already in 2014, the exiled NSA whistleblower Edward Snowden hinted at the tactic of governments hacking endpoints rather than focusing on the encrypted messages.

According to a brief security advisory issued by Facebook, the attack against WhatsApp exploited a previously unknown (0-day) vulnerability in the mobile app.
A malicious user could initiate a phone call against any WhatsApp user logged into the system. A few days after the Financial Times broke the news of the WhatsApp security breach, researchers at CheckPoint reverse engineered the security patch issued by Facebook to narrow down what code might have contained the vulnerability. Their best guess is that the WhatsApp application code contained what's called a buffer overflow memory corruption vulnerability, due to insufficient checking of the length of data.

I've heard the term buffer overflow. But I don't really know what it is.

To explain buffer overflows, it helps to think about how the C and C++ programming languages approach memory. Unlike most modern programming languages, where the memory for objects is allocated and released as needed, a C/C++ program sees the world as a continuum of 1-byte memory cells. Let's imagine this memory as a vast row of labeled boxes, numbered sequentially from 0. (Photo by Samuel Zeller on Unsplash)

Suppose some program, through dynamic memory allocation, opts to store the name of the current user ("mom") as the three characters "m", "o" and "m" in boxes 17000 to 17002. But other data might live in boxes 17003 and onwards. A crucial design decision in C and C++ is that it is entirely the responsibility of the programmer to ensure that data winds up in the correct memory cells -- the right set of boxes. Thus if the programmer accidentally puts some part of "mom" inside box 17003, neither the compiler nor the runtime will complain. Perhaps they typed in "mommy". The program will happily place the extra two characters into boxes 17003 and 17004 without any advance warning, overwriting whatever other, potentially important, data lives there.

But how does this relate to security?

Of course, if whatever memory corruption bug the programmer introduced always puts the data erroneously into the extra two boxes 17003 and 17004, with the control flow of the program always impacted, then it's highly likely that the programmer has already discovered their mistake when testing the program -- the program is bound to fail each time, after all. But when problems arise only in response to certain unusual inputs, the issues are far more likely to have slipped past the sniff test and persisted in the code base.

Where such overwriting behavior gets interesting for hackers is when the data in box 17003 is of material importance in determining how the program should continue to run. The formal way to say this is that the overwritten data might affect the control flow of the application. For example, what if boxes 17003 and 17004 contain information about which function in the program should be called when the user logs in? (In C, this might be represented by a function pointer; in C++, this might be a class member function.) Suddenly, the path of the program execution can be influenced by the user. It's like you could tell somebody else's program, "Hey, you should do X, Y and Z", and it will abide. If you were the hacker, what would you do with that opportunity? Think about it for a second. What would you do?

I would … tell the program to … take over the computer?

You would likely choose to steer the program into a place that would let you get further access, so that you could do some more interactive hacking. Perhaps you could make it somehow run code that would provide remote access to the computer (or phone) on which the program is running.
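As a concrete (and deliberately contrived) illustration of the "mommy" scenario above, here is a hedged C sketch in which a small name buffer sits right next to a function pointer. The struct layout is forced purely for demonstration; real heap and stack layouts are messier, and everything here (names, sizes) is illustrative rather than taken from any real application:

```c
#include <stdio.h>
#include <string.h>

static void normal_login(void) { puts("normal login path"); }
static void hidden_debug(void) { puts("path the user was never meant to reach"); }

/* Contrived layout: a small name buffer immediately followed by the
   pointer that decides what happens when the user logs in. */
struct session {
    char name[8];
    void (*on_login)(void);
};

int main(void)
{
    struct session s;
    s.on_login = normal_login;

    /* Attacker-supplied "name": 8 filler bytes followed by the address of
       a function of the attacker's choosing. */
    unsigned char evil[8 + sizeof(void (*)(void))];
    void (*target)(void) = hidden_debug;
    memset(evil, 'A', 8);
    memcpy(evil + 8, &target, sizeof target);

    /* The unchecked copy (the deliberate bug): sizeof evil is larger than
       sizeof s.name, so the extra bytes spill over and overwrite s.on_login. */
    memcpy(s.name, evil, sizeof evil);

    s.on_login();   /* control flow is now steered by the overflowed data */
    return 0;
}
```

In a real attack the overwritten pointer would not aim at a friendly debug function but at attacker-chosen code, which is exactly the "payload" decision discussed next.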
This choice of a payload is the craft of writing shellcode (code that boots up a remote UNIX shell interface for the hacker -- get it?).

Two key ideas make such attacks possible. The first is that, in the view of a computer, there is no fundamental difference between data and code. Both are represented as a series of bits. Thus it may be possible to inject data into the program -- say, instead of the string "mommy" -- that would then be viewed and executed as code! This is indeed how buffer overflows were first exploited by hackers: first hypothetically in 1972, then practically by MIT's Robert T. Morris, whose Morris worm swept the internet in 1988, and by Aleph One's 1996 "Smashing the Stack for Fun and Profit" article in the underground hacker magazine Phrack.

The second idea, which crystallized after a series of defenses made it difficult to execute data introduced by an attacker as code, is to direct the program to execute a sequence of instructions that are already contained within the program, in a chosen order, without directly introducing any new instructions. It can be imagined as a ransom note composed of letter cutouts from newspapers, without the author needing to provide any handwriting. (Image generated with Ransomizer.com)

Such code-reuse attacks, the most prominent being return-oriented programming (ROP), are the state of the art in binary exploitation and the reason why buffer overflows are still a recurring security problem. Among the reported vulnerabilities in the CVE repository, buffer overflows and related memory corruption vulnerabilities still accounted for 14% of the nearly 19,000 vulnerabilities reported in 2018.

Ahh… but what happened with WhatsApp?

What the researchers at CheckPoint found by dissecting the WhatsApp security patch were the following highlighted changes to the machine code in the Android app. The code is in the real-time video transmission part of the program, specifically code that pertains to the exchange of information about how well the video of a video call is being received (the RTP Control Protocol (RTCP) feedback channel for the Real-time Transport Protocol). (Image credit: CheckPoint Research)

The odd choices for variable names and structure are artifacts of the reverse engineering process: the source code for the protocol is proprietary. The C++ code might be a heavily modified version of the open-source PJSIP routine that tries to assemble a response to signal a picture loss (PLI) (code is illustrative):

int length_argument = /* from incoming RTCP packet */;
qmemcpy(&outgoing_rtcp_payload[offset], incoming_rtcp_packet, length_argument);
/* Continue building RTCP PLI packet and send */

But if the remaining size of the payload buffer (after offset) is less than length_argument, a number supplied by the hacker, information from the incoming packet will be shamelessly copied by memcpy over whatever data surrounds outgoing_rtcp_payload! Just like the buffer overflow situation before, the overwritten data could include data that later directs the control flow of the program, like an overwritten function pointer.
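The fix for this class of bug is conceptually simple: validate the attacker-supplied length against the space that actually remains. Below is a hedged sketch of what such a check might look like; the names mirror the illustrative snippet above and this is not WhatsApp's actual patch:

```c
#include <stddef.h>
#include <string.h>

/* Copy an incoming RTCP chunk into the outgoing payload only if it fits.
   Returns the number of bytes copied, or 0 if the request was oversized. */
static size_t safe_append(unsigned char *outgoing_rtcp_payload,
                          size_t payload_capacity,
                          size_t offset,
                          const unsigned char *incoming_rtcp_packet,
                          size_t length_argument)
{
    /* Reject anything that would run past the end of the buffer; the
       overflow exists precisely because a comparison like this is missing. */
    if (offset > payload_capacity ||
        length_argument > payload_capacity - offset)
        return 0;

    memcpy(outgoing_rtcp_payload + offset, incoming_rtcp_packet, length_argument);
    return length_argument;
}
```

Note that the subtraction happens only after the offset itself is checked, so the length comparison cannot wrap around.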
In summary (coupled with speculation), a hacker would initiate a video call against an unsuspecting WhatsApp user. As the video channel is being set up, the hacker manipulates the video frames being sent to the victim to force the RTCP code in their app to signal a picture loss (PLI), but only after specially crafting the sent frame so that the lengths in the incoming packet cause the net size of the RTCP response payload to be exceeded. The control flow of the program is then directed towards executing malicious code to seize control of the app, install an implant on the phone, and then allow the app to continue running.

Try it yourself?

Buffer overflows are technical flaws, and understanding them builds on an understanding of how computers execute code. Given how prevalent and important they are -- as illustrated by the WhatsApp attack -- we believe we should all better understand how such bugs are exploited, to help us avoid them in the future. In response, we have created a free online lab that puts you in the shoes of the hacker and illustrates how memory and buffer overflows work when you boil them down to their essence.

Do you like this kind of thing? Go read about how Facebook got hacked last year and try out hacking it yourself. Learn more about our security training platform for developers at adversary.io

Sursa: https://blog.adversary.io/whatsapp-hack/
    1 point
  3. Analysis of CVE-2019-0708 (BlueKeep)

By: MalwareTech | May 31, 2019

I held back this write-up until a proof of concept (PoC) was publicly available, so as not to cause any harm. Now that there are multiple denial-of-service PoCs on GitHub, I'm posting my analysis.

Binary Diffing

As always, I started with a BinDiff of the binaries modified by the patch (in this case there is only one: TermDD.sys). Below we can see the results.

A BinDiff of TermDD.sys pre and post patch.

Most of the changes turned out to be pretty mundane, except for "_IcaBindVirtualChannels" and "_IcaRebindVirtualChannels". Both functions contained the same change, so I focused on the former, as a bind would likely occur before a rebind.

Original IcaBindVirtualChannels is on the left, the patched version is on the right.

New logic has been added, changing how _IcaBindChannel is called. If the compared string is equal to "MS_T120", then parameter three of _IcaBindChannel is set to 31. Based on the fact that the change only takes place if v4+88 is "MS_T120", we can assume that to trigger the bug this condition must be true. So, my first question is: what is "v4+88"? Looking at the logic inside IcaFindChannelByName, I quickly found my answer.

Inside of IcaFindChannelByName

Using advanced knowledge of the English language, we can decipher that IcaFindChannelByName finds a channel, by its name. The function seems to iterate the channel table, looking for a specific channel. On line 17 there is a string comparison between a3 and v6+88, which returns v6 if both strings are equal. Therefore, we can assume a3 is the channel name to find, v6 is the channel structure, and v6+88 is the channel name within the channel structure. Using all of the above, I came to the conclusion that "MS_T120" is the name of a channel.

Next I needed to figure out how to call this function, and how to set the channel name to MS_T120. I set a breakpoint on IcaBindVirtualChannels, right where IcaFindChannelByName is called. Afterwards, I connected to RDP with a legitimate RDP client. Each time the breakpoint triggered, I inspected the channel name and call stack.

The call stack and channel name upon the first call to IcaBindVirtualChannels

The very first call to IcaBindVirtualChannels is for the channel I want, MS_T120. The subsequent channel names are "CTXTW", "rdpdr", "rdpsnd", and "drdynvc". Unfortunately, the vulnerable code path is only reached if FindChannelByName succeeds (i.e. the channel already exists). In this case, the function fails and leads to the MS_T120 channel being created. To trigger the bug, I'd need to call IcaBindVirtualChannels a second time with MS_T120 as the channel name. So my task now was to figure out how to call IcaBindVirtualChannels.

In the call stack is IcaStackConnectionAccept, so the channel is likely created upon connect. I just needed to find a way to open arbitrary channels post-connect… Maybe sniffing a legitimate RDP connection would provide some insight.

A capture of the RDP connection sequence

The channel array, as seen by the Wireshark RDP parser

The second packet sent contains four of the six channel names I saw passed to IcaBindVirtualChannels (missing MS_T120 and CTXTW). The channels are opened in the order they appear in the packet, so I think this is just what I need. Seeing as MS_T120 and CTXTW are not specified anywhere but are opened prior to the rest of the channels, I guess they must be opened automatically. Now, I wonder what happens if I implement the protocol, then add MS_T120 to the array of channels.
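For context, the channel list announced by the client is just an array of short, null-padded names sent during the connection sequence. A hedged sketch of what appending MS_T120 to that list might look like in a test client (structure simplified; the real MCS/GCC encoding also carries per-channel option flags, and the channel names here are only the ones named in the analysis above):

```c
/* Virtual channel names a test client could announce during the RDP
   connection sequence. The first three are among those seen in the
   legitimate capture; MS_T120 is the internal channel a client is never
   expected to request. On the wire each name occupies 8 bytes, null-padded. */
static const char *requested_channels[] = {
    "rdpdr",     /* device redirection  */
    "rdpsnd",    /* audio               */
    "drdynvc",   /* dynamic channels    */
    "MS_T120",   /* the interesting one */
};
```

The server-side bind calls happen in the order the names appear, which is what allows a second bind of MS_T120 to reach the code path changed by the patch.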
After moving my breakpoint to some code only hit if FindChannelByName succeeds, I ran my test.

Breakpoint is hit after adding MS_T120 to the channel array

Awesome! Now that the vulnerable code path is hit, I just needed to figure out what could be done… To learn more about what the channel does, I decided to find what created it. I set a breakpoint on IcaCreateChannel, then started a new RDP connection.

The call stack when the IcaCreateChannel breakpoint is hit

Following the call stack downwards, we can see the transition from user to kernel mode at ntdll!NtCreateFile. Ntdll just provides a thunk for the kernel, so that's not of interest. Below it is ICAAPI, which is the user-mode counterpart of TermDD.sys. The call starts out in ICAAPI at IcaChannelOpen, so this is probably the user-mode equivalent of IcaCreateChannel. Because IcaChannelOpen is a generic function used for opening all channels, we'll go down another level, to rdpwsx!MCSCreateDomain.

The code for rdpwsx!MCSCreateDomain

This function is really promising for a couple of reasons. Firstly, it calls IcaChannelOpen with the hard-coded name "MS_T120". Secondly, it creates an I/O completion port with the returned channel handle (completion ports are used for asynchronous I/O). The variable named "CompletionPort" is the completion port handle. By looking at xrefs to the handle, we can probably find the function which handles I/O to the port.

All references to "CompletionPort"

Well, MCSInitialize is probably a good place to start. Initialization code is always a good place to start.

The code contained within MCSInitialize

OK, so a thread is created for the completion port, and the entry point is IoThreadFunc. Let's look there.

The completion port message handler

GetQueuedCompletionStatus is used to retrieve data sent to the completion port (i.e. the channel). If data is successfully received, it's passed to MCSPortData. To confirm my understanding, I wrote a basic RDP client with the capability of sending data on RDP channels. I opened the MS_T120 channel using the method previously explained. Once opened, I set a breakpoint on MCSPortData; then, I sent the string "MalwareTech" to the channel.

Breakpoint hit on MCSPortData once data is sent to the channel.

So that confirms it: I can read/write to the MS_T120 channel. Now, let's look at what MCSPortData does with the channel data…

MCSPortData buffer handling code

ReadFile tells us the data buffer starts at channel_ptr+116. Near the top of the function, a check is performed on channel_ptr+120 (offset 4 into the data buffer). If the dword is set to 2, then the function calls HandleDisconnectProviderIndication and MCSCloseChannel. Well, that's interesting. The code looks like some kind of handler for dealing with channel connect/disconnect events. After looking into what would normally trigger this function, I realized MS_T120 is an internal channel and not normally exposed externally. I don't think we're supposed to be here…

Being a little curious, I sent the data required to trigger the call to MCSCloseChannel. Surely prematurely closing an internal channel couldn't lead to any issues, could it? Oh, no. We crashed the kernel! Whoops!

Let's take a look at the bugcheck to get a better idea of what happened. It seems that when my client disconnected, the system tried to close the MS_T120 channel, which I'd already closed (leading to a double free). Due to some mitigations added in Windows Vista, double-free vulnerabilities are often difficult to exploit. However, there is something better.
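Before looking at why this is more than a simple double free, a generic user-mode C sketch of the underlying pattern may help: when one object is reachable through two independent references, freeing it through one of them leaves the other dangling. This is purely an illustration of the concept, not the TermDD.sys code:

```c
#include <stdlib.h>
#include <string.h>

struct channel {
    char name[8];
    /* ... */
};

int main(void)
{
    /* One object reachable through two independent references --
       analogous to one channel bound under two different IDs. */
    struct channel *table[2];
    struct channel *chan = malloc(sizeof *chan);
    if (chan == NULL)
        return 1;
    strcpy(chan->name, "MS_T120");
    table[0] = chan;   /* the system's own binding     */
    table[1] = chan;   /* the extra binding we created */

    /* Closing the channel via one reference frees the object. */
    free(table[0]);
    table[0] = NULL;

    /* The second reference still points at the freed allocation. Using it
       is a use-after-free; freeing it again is a double free. Both are
       undefined behavior, so they stay commented out here:
       table[1]->name[0] = 'X';
       free(table[1]);
    */
    return 0;
}
```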
Internals of the channel cleanup code run when the connection is broken

Internally, the system creates the MS_T120 channel and binds it with ID 31. However, when it is bound using the vulnerable IcaBindVirtualChannels code, it is bound with another ID.

The difference in code pre and post patch

Essentially, the MS_T120 channel gets bound twice (once internally, then once by us). Because the channel is bound under two different IDs, we get two separate references to it. When one reference is used to close the channel, the reference is deleted, as is the channel; however, the other reference remains (a situation known as a use-after-free). With the remaining reference, it is now possible to write to kernel memory which no longer belongs to us.

Sursa: https://www.malwaretech.com/2019/05/analysis-of-cve-2019-0708-bluekeep.html
    1 point
  4. KeySteal

KeySteal is a macOS <= 10.13.3 Keychain exploit that allows you to access passwords inside the Keychain without a user prompt. KeySteal consists of two parts:

KeySteal Daemon: A daemon that exploits securityd to get a session that is allowed to access the Keychain without a password prompt.
KeySteal Client: A library that can be injected into apps. It automatically applies a patch that forces the Security framework to use the session of the KeySteal daemon.

Building and Running

1. Open the KeySteal Xcode project
2. Build the keystealDaemon and keystealClient
3. Open the directory which contains the built daemon and client (right click on keystealDaemon -> Open in Finder)
4. Run dump-keychain.sh

TODO

Add a link to my talk about this vulnerability at Objective by the Sea

License

For most files, see LICENSE.txt. The following files were taken (or generated) from Security-58286.220.15 and are under the Apple Public Source License:

handletypes.h
ss_types.h
ucsp_types.h
ucsp.hpp
ucspUser.cpp

A copy of the Apple Public Source License can be found here.

Sursa: https://github.com/LinusHenze/Keysteal
    1 point
  5. https://imgur.com/52QK8Fo
    -3 points